## The question
**Prj05**
Consider the Vasicek model
$$d r_t = \alpha (b - r_t) dt + \sigma dW_t$$
with the following parameters:
$$r_0 = 0.005, \quad \alpha = 2.11, \quad b = 0.02, \quad \sigma = 0.033.$$
**Todo**
1. Implement Euler simulation and draw a plot of $\mathbb E[ r_t ]$ on $t\in [0, 10]$.
2. Find explicit form of $\mathbb E [r_t]$ and $\lim_{t\to \infty} \mathbb E [r_t]$.
3. Zero bond price has the formula
$$P(0, T) = \mathbb E[\exp\{-\int_0^T r(s) ds\}].$$
Find the exact value of $P(0,1)$.
4. Run Euler, Milstein, and exact simulation for $P(0,1)$ with different step sizes, and find the convergence rate of each using linear regression. (Hint: approximate the integral with a finite sum.)
__1. Implement Euler simulation and draw a plot of $\mathbb E[ r_t ]$ on $t\in [0, 10]$.__
__Solution:__
We use the Euler method to generate paths of $r_t$ and then calculate the mean of $r_t$ for $t\in [0, 10]$.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
%matplotlib inline

## define the class of SDE
class SDE:
    def __init__(self, Mu, Sigma, InitState):
        self.Mu = Mu
        self.Sigma = Sigma
        self.InitState = InitState

    def PrtCoef(self, r, t):
        print('At state r = ' + str(r) + ' time t = ' + str(t) + '\n')
        print('Mu = ' + str(self.Mu(r, t)) + '\n')
        print('Sigma = ' + str(self.Sigma(r, t)) + '\n')

    def PrtInitState(self):
        print('Initial state is ' + str(self.InitState) + '\n')

    def Euler(self, T, N):
        r0 = self.InitState
        Mu = self.Mu
        Sigma = self.Sigma
        t = np.linspace(0, T, N + 1)
        Wh = np.zeros(N + 1)         # init BM
        rh = r0 + np.zeros(N + 1)    # init rh
        for i in range(N):           # run the Euler-Maruyama recursion
            DeltaT = t[i + 1] - t[i]
            DeltaW = np.sqrt(DeltaT) * np.random.normal()
            Wh[i + 1] = Wh[i] + DeltaW
            rh[i + 1] = rh[i] + Mu(rh[i], t[i]) * DeltaT + \
                Sigma(rh[i], t[i]) * DeltaW
        return t, rh, Wh

# define the functions Mu and Sigma and set the parameters of the problem
alp = 2.11
b = 0.02
sig = 0.033
T = 10
Mu = lambda r, t: alp * (b - r)
Sigma = lambda r, t: sig
r0 = 0.005  # initial state

## create an object iSDE
iSDE = SDE(Mu, Sigma, r0)
iSDE.PrtInitState()
MeshL = .005
MeshN = int(T / MeshL)
NumSimu = 100
R_mean = np.zeros(MeshN)  # simulated mean of r at time t_i

## Implement Euler simulation and draw a plot of r_t, t from 0 to 10
for i in range(NumSimu):
    [t, R, W] = iSDE.Euler(T, MeshN)
    plt.plot(t, R)  # the path of r_t

## running mean of r_t along the last path, t from 0 to 10
R_append = list()
for i in range(MeshN):
    R_append.append(R[i])
    R_mean[i] = np.mean(R_append)
print("One path of the mean of r_t:")
plt.plot(range(MeshN), R_mean, label='plot of the mean of r_t')
plt.axhline(b, color='r', ls='dashed', lw=1.5, label='benchmark')
plt.ylim(-0.01, 0.03)
plt.legend()

## build a matrix to store the running mean of each simulation
R_matrix = np.zeros(shape=(NumSimu, MeshN))
for i in range(NumSimu):
    [t, R, W] = iSDE.Euler(T, MeshN)
    R_append = list()
    for j in range(MeshN):
        R_append.append(R[j])
        R_mean[j] = np.mean(R_append)
    R_matrix[i, ] = R_mean
R_mean2 = np.mean(R_matrix, axis=0)  # average the running means over all simulations
print("Using all of the paths to calculate the mean of r_t:")
plt.plot(range(MeshN), R_mean2, label='plot of the mean of r_t')
plt.axhline(b, color='r', ls='dashed', lw=1.5, label='benchmark')
plt.ylim(0, 0.03)
plt.legend()
```
__2. Find explicit form of $\mathbb E [r_t]$ and $\lim_{t\to \infty} \mathbb E [r_t]$.__
__Solution1:__
We define $Y_t = F(t, r_t) = e^{\alpha t} r_t$, and we have
$$ \frac{\partial Y}{\partial t} = \alpha e^{\alpha t} r_t, \quad \frac{\partial Y}{\partial r_t} = e^{\alpha t}, \quad \frac{\partial^2 Y}{\partial r_t^2} = 0.$$
By the Ito lemma, we have
$$d Y_t = [\alpha e^{\alpha t} r_t + \alpha (b - r_t) e^{\alpha t} +0] d t + \sigma e^{\alpha t} d W_t. $$
Simplifying the above formula, we can get
$$d Y_t = \alpha b e^{\alpha t} d t + \sigma e^{\alpha t} d W_t ,$$
and doing integration on both sides, we have
$$ Y_t - Y_0 = \int_0^t \alpha b e^{\alpha s} ds + \int_0^t \sigma e^{\alpha s} d W_s,$$
where $Y_0 = e^{0} r_0 = r_0.$
So we have
$$
\begin{align}
e^{\alpha t} r_t &= r_0 + b e^{\alpha t} - b + \int_0^t \sigma e^{\alpha s} \, d W_s \\
&= b e^{\alpha t} + (r_0 - b) + \int_0^t \sigma e^{\alpha s} \, d W_s.
\end{align}
$$
Dividing both sides by $e^{\alpha t}$, we obtain the solution of the Vasicek model:
$$ r_t = b + e^{-\alpha t} (r_0-b) + \int_0^t \sigma e^{\alpha (s-t)} d W_s.$$
Next we calculate $\mathbb E[r_t]$ and $\lim_{t\to \infty} \mathbb E [r_t]$. Since the Itô integral has mean zero,
$$
\mathbb E[r_t] = \mathbb E[b + e^{-\alpha t} (r_0-b) + \int_0^t \sigma e^{\alpha (s-t)} d W_s] = b + e^{-\alpha t} (r_0-b).
$$
So we have
$$\lim_{t \to \infty} \mathbb E [r_t] = \lim_{t \to \infty} (b + e^{-\alpha t} (r_0-b)) =b. $$
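The closed form above can be checked against the Euler scheme from Part 1. The following standalone sketch (parameter values from the problem statement; the vectorised loop is an illustrative rewrite, not the class defined earlier) compares the Monte Carlo mean at $t = 1$ with $b + e^{-\alpha t}(r_0 - b)$:

```python
import numpy as np

# Simulate many Euler paths of the Vasicek SDE at once and compare the
# Monte Carlo mean at t = 1 with the analytic mean b + exp(-alpha*t)*(r0 - b).
alpha, b, sigma, r0 = 2.11, 0.02, 0.033, 0.005
T, N, n_paths = 1.0, 200, 20000
dt = T / N

rng = np.random.default_rng(0)
r = np.full(n_paths, r0)
for _ in range(N):                      # one vectorised Euler step per iteration
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    r += alpha * (b - r) * dt + sigma * dW

analytic = b + np.exp(-alpha * T) * (r0 - b)
print(r.mean(), analytic)               # the two values should be close
```

With these parameters the analytic mean is already close to the long-run level $b = 0.02$, since $e^{-2.11} \approx 0.12$.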
__Solution2:__
We can also obtain the mean of $r_t$ without solving the SDE explicitly. Since
$$r_t = r_0 + \int_0^t \alpha (b-r_s) ds + \int_0^t \sigma d W_s,$$
so we have
$$ \mu_t = \mathbb E[r_t] = r_0 + \int_0^t \alpha (b- \mathbb E[r_s]) ds .$$
Differentiating both sides, we get
$$ \frac{d}{d t} \mu_t = \alpha (b- \mu_t),$$
which is a linear ordinary differential equation. Consequently, using the integrating factor $e^{\alpha t}$ we have
$$\mu_t = \mathbb E[r_t] = b + e^{-\alpha t} (r_0-b), $$
and then
$$\lim_{t \to \infty} \mathbb E [r_t] = \lim_{t \to \infty} (b + e^{-\alpha t} (r_0-b)) =b. $$
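The ODE and its limit can also be verified symbolically; this sketch assumes `sympy` is available (it is not used elsewhere in this notebook):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
alpha, b, r0 = sp.symbols('alpha b r_0', positive=True)
mu = sp.Function('mu')

# Solve d mu/dt = alpha*(b - mu) with the initial condition mu(0) = r0
sol = sp.dsolve(sp.Eq(mu(t).diff(t), alpha * (b - mu(t))), mu(t), ics={mu(0): r0})
expected = b + (r0 - b) * sp.exp(-alpha * t)
print(sp.simplify(sol.rhs - expected))   # 0: matches the closed form
print(sp.limit(sol.rhs, t, sp.oo))       # b: the long-run mean
```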
__3. Zero bond price has the formula
$$P(0, T) = \mathbb E[\exp\{-\int_0^T r(s) ds\}].$$
Find the exact value of $P(0,1)$.__
__Solution:__
We know that for the Vasicek model, we have
$$
\begin{align}
P(t,T,r_t) &= \mathbb E\left[\exp\Big\{-\int_t^T r(s) \, ds\Big\} \,\Big|\, r_t \right] \\
&= \exp\{- B(t,T) \, r_t + A(t,T) \},
\end{align}
$$
where
$$B(t,T) = \frac{1-e^{- \alpha (T-t)}}{\alpha},$$
$$A(t,T) = (b - \frac{\sigma^2}{2 \alpha^2}) [B(t,T) - (T-t)] - \frac{\sigma^2}{4 \alpha} B^2(t,T).$$
Setting $t = 0$ and $T = 1$, we have
$$P(0,1) = \exp\{- B(0,1) r_0 + A(0,1) \}, $$
where
$$B(0,1) = \frac{1-e^{- \alpha}}{\alpha},$$
$$A(0,1) = (b - \frac{\sigma^2}{2 \alpha^2}) [B(0,1) - 1] - \frac{\sigma^2}{4 \alpha} B^2(0,1).$$
```
## solve the exact value of the zero coupon bond
B = (1 - np.exp(- alp))/alp
A = (b - 0.5 * sig**2 / alp**2) * (B-1) - sig**2 /4/alp * B**2
P_01 = np.exp(- B * r0 + A)
print(f"The exact value of the zero coupon bond is: {P_01}")
```
__4. Run Euler, Milstein, and exact simulation for $P(0,1)$ with different step sizes, and find the convergence rate of each using linear regression. (Hint: approximate the integral with a finite sum.)__
Since the zero bond price has the formula $P(0, T) = \mathbb E[\exp\{-\int_0^T r(s) ds\}]$, we can split the integral as
$$\int_t^T r_s d s = \int_{t_0}^{t_1} r_s d s + \int_{t_1}^{t_2} r_s d s + \cdots + \int_{t_{n-1}}^{t_n} r_s d s,$$
where $t_0 = t$ and $t_n = T.$ So we can first simulate $r_t$ and then approximate the integral by the left-endpoint sum
$$\int_t^T r_s \, d s \approx r_{t_0}(t_1 -t_0) + r_{t_1}(t_2 -t_1) + \cdots + r_{t_{n-1}}(t_n -t_{n-1}),$$
provided $\max_{1 \leq i \leq n} (t_i - t_{i-1})$ is sufficiently small.
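The quadrature error of this left-endpoint sum can be seen in isolation with a deterministic integrand (a hypothetical stand-in for a sampled short-rate path):

```python
import numpy as np

# Left Riemann sums of a known integral, int_0^1 e^{-s} ds = 1 - e^{-1},
# to see how the quadrature error alone shrinks linearly with the step size.
exact = 1 - np.exp(-1.0)
errors = []
for n in (10, 100, 1000):
    s = np.linspace(0.0, 1.0, n + 1)
    left_sum = np.sum(np.exp(-s[:-1])) / n   # sum of f(t_i) * (t_{i+1} - t_i)
    errors.append(abs(left_sum - exact))
print(errors)  # each 10x refinement shrinks the error by roughly 10x
```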
For the exact simulation of $P(0,1)$, note that the Vasicek model is the special case of the Hull-White model $d r_t = (g(t) + h(t) r_t) \, dt + \sigma(t) \, d W_t$ with $g(t) = \alpha b$, $h(t) = - \alpha$ and $\sigma(t) = \sigma$. The exact transition of the Hull-White model is
$$r_{t_{i+1}} = e^{H_{t_i, t_{i+1}}} r_{t_i} + \int_{t_i}^{t_{i+1}} e^{H_{s, t_{i+1}}} g(s) \, ds + \int_{t_i}^{t_{i+1}} e^{H_{s, t_{i+1}}} \sigma(s) \, d W_s, $$
where
$$H_{s,t} = \int_s^t h(u) d u .$$
So the exact simulation of the $r_t$ based on the Vasicek model is
$$r_{t_{i+1}} = e^{-\alpha \delta} r_{t_i} + b (1-e^{-\alpha \delta}) + \sqrt{\frac{\sigma^2}{2 \alpha} (1- e^{-2 \alpha \delta})} Z_i,$$
where $\delta = t_{i+1} -t_{i}$ and the $Z_i \sim N(0,1)$ are i.i.d. for $i = 0, 1, \ldots, n-1$.
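As a quick sanity check of this transition (a standalone sketch, not part of the class defined below), we can sample one exact step and compare the empirical mean and variance against the formulas above:

```python
import numpy as np

# One exact Vasicek transition over a step delta: the sample mean/variance
# should match b + e^{-alpha*delta}(r0 - b) and sigma^2(1 - e^{-2 alpha delta})/(2 alpha).
alpha, b, sigma, r0, delta = 2.11, 0.02, 0.033, 0.005, 0.5
rng = np.random.default_rng(1)
Z = rng.normal(size=200000)
r1 = np.exp(-alpha * delta) * r0 + b * (1 - np.exp(-alpha * delta)) \
     + np.sqrt(sigma**2 * (1 - np.exp(-2 * alpha * delta)) / (2 * alpha)) * Z

mean_th = b + np.exp(-alpha * delta) * (r0 - b)
var_th = sigma**2 * (1 - np.exp(-2 * alpha * delta)) / (2 * alpha)
print(r1.mean(), mean_th)
print(r1.var(), var_th)
```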
By the Euler method, we can estimate the $r_t$ by the formula
$$r_{t_{i+1}} = r_{t_i} + \alpha (b - r_{t_i}) \delta + \sigma \sqrt{\delta} Z_i ,$$
and by the Milstein method, we can estimate $r_t$ by the formula
$$r_{t_{i+1}} = r_{t_i} + \alpha (b - r_{t_i}) \delta + \sigma \sqrt{\delta} Z_i + \tfrac{1}{2} \sigma^{'}(r_{t_i}) \sigma (r_{t_i}) \left( (\Delta W_i)^{2} - \delta \right), \qquad \Delta W_i = \sqrt{\delta} Z_i.$$
Since $\sigma$ is constant here, $\sigma^{'} = 0$ and the Milstein scheme coincides with the Euler scheme.
The code is as follows:
```
## define a new class including the three methods described above
class SDE_v2:
    def __init__(self, Mu, Sigma, InitState):
        self.Mu = Mu
        self.Sigma = Sigma
        self.InitState = InitState
        # first-order derivative of Sigma, used by Milstein;
        # initialized to 0 since sigma is a constant here
        self.SigmaP = lambda r, t: 0

    ## Euler method
    def Euler(self, T, N):
        r0 = self.InitState
        Mu = self.Mu
        Sigma = self.Sigma
        t = np.linspace(0, T, N + 1)
        DeltaT = T / N
        Wh = np.zeros(N + 1)         # init BM
        rh = r0 + np.zeros(N + 1)    # init rh
        for i in range(N):
            DeltaW = np.sqrt(DeltaT) * np.random.normal()
            Wh[i + 1] = Wh[i] + DeltaW
            rh[i + 1] = rh[i] + Mu(rh[i], t[i]) * DeltaT + Sigma(rh[i], t[i]) * DeltaW
        return t, rh, Wh

    ## Milstein method
    def Milstein(self, T, N):
        r0 = self.InitState
        Mu = self.Mu
        Sigma = self.Sigma
        SigmaP = self.SigmaP
        t = np.linspace(0, T, N + 1)
        DeltaT = T / N
        Wh = np.zeros(N + 1)
        rh = r0 + np.zeros(N + 1)
        for i in range(N):
            DeltaW = np.sqrt(DeltaT) * np.random.normal()
            Wh[i + 1] = Wh[i] + DeltaW
            rh[i + 1] = rh[i] + Mu(rh[i], t[i]) * DeltaT + Sigma(rh[i], t[i]) * DeltaW  # Euler part
            rh[i + 1] = rh[i + 1] + \
                0.5 * Sigma(rh[i], t[i]) * SigmaP(rh[i], t[i]) * (DeltaW**2 - DeltaT)
        return t, rh, Wh

    ## Exact simulation
    def Exact(self, a, b, T, sigma, N):
        r0 = self.InitState
        t = np.linspace(0, T, N + 1)
        delta = T / N
        rh = r0 + np.zeros(N + 1)
        sigma_hat = sigma**2 * (1 - np.exp(-2 * a * delta)) / 2 / a
        for i in range(N):
            rh[i + 1] = rh[i] * np.exp(-a * delta) + b * (1 - np.exp(-a * delta)) \
                + np.sqrt(sigma_hat) * np.random.normal()
        return t, rh

# define the functions Mu and Sigma and set the parameters of the problem
alp = 2.11
b = 0.02
sig = 0.033
T = 1
Mu = lambda r, t: alp * (b - r)
Sigma = lambda r, t: sig
r0 = 0.005  # initial state
P = P_01    # the exact value of the zero coupon bond

## create an object oSDE
oSDE = SDE_v2(Mu, Sigma, r0)
NumSimu = 100
ArrLog2Steps = np.arange(6)
NumMinLog2Steps = 4
ArrErr_Euler = np.zeros(ArrLog2Steps.size)
ArrErr_Milstein = np.zeros(ArrLog2Steps.size)
ArrErr_Exact = np.zeros(ArrLog2Steps.size)
for n in ArrLog2Steps:
    NumMesh = np.power(2, n + NumMinLog2Steps)
    deltaT = T / NumMesh  # recompute the step size for the current mesh
    errsum_Euler = 0
    errsum_Milstein = 0
    errsum_Exact = 0
    for i in range(NumSimu):
        ## Euler method
        [t, rh, Wh] = oSDE.Euler(T, NumMesh)
        PhT = np.exp(-deltaT * np.sum(rh[:-1]))  # left Riemann sum of the path
        errsum_Euler = errsum_Euler + np.abs(PhT - P)
        ## Milstein method
        [t, rh, Wh] = oSDE.Milstein(T, NumMesh)
        PhT = np.exp(-deltaT * np.sum(rh[:-1]))
        errsum_Milstein = errsum_Milstein + np.abs(PhT - P)
        ## Exact simulation
        [t, rh] = oSDE.Exact(alp, b, T, sig, NumMesh)
        PhT = np.exp(-deltaT * np.sum(rh[:-1]))
        errsum_Exact = errsum_Exact + np.abs(PhT - P)
    ArrErr_Euler[n] = errsum_Euler / NumSimu
    ArrErr_Milstein[n] = errsum_Milstein / NumSimu
    ArrErr_Exact[n] = errsum_Exact / NumSimu

## estimate the convergence rate by linear regression on the log2-errors,
## so the slope against log2(N) gives the rate directly
x_coordinate = ArrLog2Steps + NumMinLog2Steps
y_coordinate_Euler = np.log2(ArrErr_Euler)
y_coordinate_Milstein = np.log2(ArrErr_Milstein)
y_coordinate_Exact = np.log2(ArrErr_Exact)
plt.plot(x_coordinate, y_coordinate_Euler, label='Euler')
plt.plot(x_coordinate, y_coordinate_Milstein, label='Milstein')
plt.plot(x_coordinate, y_coordinate_Exact, label='Exact simulation')
plt.legend()
lg0 = stats.linregress(x_coordinate, y_coordinate_Euler)
lg1 = stats.linregress(x_coordinate, y_coordinate_Milstein)
lg2 = stats.linregress(x_coordinate, y_coordinate_Exact)
rate0 = -lg0[0]
rate1 = -lg1[0]
rate2 = -lg2[0]
print('rate for Euler is ' + str(rate0))
print('rate for Milstein is ' + str(rate1))
print('rate for Exact is ' + str(rate2))
```
```
from __future__ import division, print_function
import os
import numpy as np
from collections import OrderedDict
import logging
import pandas
from astropy.io import fits
import astropy.wcs
from astropy import table
import sep
import warnings
from astropy.utils.exceptions import AstropyWarning
warnings.simplefilter('ignore', category=AstropyWarning)
import astropyp
print('astropyp version', astropyp.version.version)
from IPython.display import display
logger = logging.getLogger('ipy_session')
logger.setLevel(logging.INFO)
img_path = '/media/data-beta/users/fmooleka/decam/std_resamp/'
conv_filter = np.array([
[0.030531, 0.065238, 0.112208, 0.155356, 0.173152, 0.155356, 0.112208, 0.065238, 0.030531],
[0.065238, 0.139399, 0.239763, 0.331961, 0.369987, 0.331961, 0.239763, 0.139399, 0.065238],
[0.112208, 0.239763, 0.412386, 0.570963, 0.636368, 0.570963, 0.412386, 0.239763, 0.112208],
[0.155356, 0.331961, 0.570963, 0.790520, 0.881075, 0.790520, 0.570963, 0.331961, 0.155356],
[0.173152, 0.369987, 0.636368, 0.881075, 0.982004, 0.881075, 0.636368, 0.369987, 0.173152],
[0.155356, 0.331961, 0.570963, 0.790520, 0.881075, 0.790520, 0.570963, 0.331961, 0.155356],
[0.112208, 0.239763, 0.412386, 0.570963, 0.636368, 0.570963, 0.412386, 0.239763, 0.112208],
[0.065238, 0.139399, 0.239763, 0.331961, 0.369987, 0.331961, 0.239763, 0.139399, 0.065238],
[0.030531, 0.065238, 0.112208, 0.155356, 0.173152, 0.155356, 0.112208, 0.065238, 0.030531]
])
# SExtractor 'extract' detection parameters
sex_params = {
'extract': {
#'thresh': 1.5,# *bkg.globalrms,
#'err':,
#'minarea': 5, # default
'conv': conv_filter,
#'deblend_nthresh': 32, #default
'deblend_cont': 0.001,
#'clean': True, #default
#'clean_param': 1 #default
},
'kron_k': 2.5,
'kron_min_radius': 3.5,
'filter': conv_filter,
'thresh': 1.5
}
idx_connect = 'sqlite:////media/data-beta/users/fmooleka/decam/decam.db'
exp_connect = 'sqlite:////media/data-beta/users/fmooleka/2016decam/{0}.{1}.db'
aper_radius = 8
gain=4.
min_flux = 1000
min_amplitude = 500
subsampling = 5
good_amplitude = 100
calibrate_amplitude = 200
max_offset=3
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.axes3d import Axes3D
import sep
conv_filter = np.array([
[0.030531, 0.065238, 0.112208, 0.155356, 0.173152, 0.155356, 0.112208, 0.065238, 0.030531],
[0.065238, 0.139399, 0.239763, 0.331961, 0.369987, 0.331961, 0.239763, 0.139399, 0.065238],
[0.112208, 0.239763, 0.412386, 0.570963, 0.636368, 0.570963, 0.412386, 0.239763, 0.112208],
[0.155356, 0.331961, 0.570963, 0.790520, 0.881075, 0.790520, 0.570963, 0.331961, 0.155356],
[0.173152, 0.369987, 0.636368, 0.881075, 0.982004, 0.881075, 0.636368, 0.369987, 0.173152],
[0.155356, 0.331961, 0.570963, 0.790520, 0.881075, 0.790520, 0.570963, 0.331961, 0.155356],
[0.112208, 0.239763, 0.412386, 0.570963, 0.636368, 0.570963, 0.412386, 0.239763, 0.112208],
[0.065238, 0.139399, 0.239763, 0.331961, 0.369987, 0.331961, 0.239763, 0.139399, 0.065238],
[0.030531, 0.065238, 0.112208, 0.155356, 0.173152, 0.155356, 0.112208, 0.065238, 0.030531]
])
# SExtractor 'extract' detection parameters
sex_params = {
'extract': {
#'thresh': 50, #*bkg.globalrms,
#'err':,
#'minarea': 5, # default
'conv': conv_filter,
#'deblend_nthresh': 32, #default
'deblend_cont': 0.001,
#'clean': True, #default
#'clean_param': 1 #default
},
'kron_k': 2.5,
'kron_min_radius': 3.5,
'filter': conv_filter,
'thresh': 1.5
}
expnum = 442493
sql = 'select * from decam_obs where EXPNUM={0}'.format(expnum)
exp_info = astropyp.db_utils.index.query(sql, idx_connect)
filename = exp_info[exp_info['PRODTYPE']=='image']['filename'][0]
img = fits.open(filename)
filename = exp_info[exp_info['PRODTYPE']=='dqmask']['filename'][0]
dqmask = fits.open(filename)
img_data = img[1].data
dqmask_data = dqmask[1].data
ccd = astropyp.phot.phot.SingleImage(img=img_data, dqmask=dqmask_data, gain=4., exptime=30, aper_radius=8)
ccd.detect_sources(sex_params, subtract_bkg=True)
ccd.select_psf_sources(min_flux, min_amplitude, badpix_flags=[], edge_dist=aper_radius+max_offset)
psf_array = ccd.create_psf()
#ccd.show_psf()
good_idx = ccd.catalog.sources['peak']>calibrate_amplitude
good_idx = good_idx & (ccd.catalog.sources['pipeline_flags']==0)
result = ccd.perform_psf_photometry(indices=good_idx)
print('good sources', np.sum(good_idx))
print('good psf_err', np.sum(np.isfinite(ccd.catalog.sources['psf_mag_err'])))
good_idx = ccd.catalog.sources['peak']>calibrate_amplitude
good_idx = good_idx & (ccd.catalog.sources['pipeline_flags']==0)
good_idx = good_idx & np.isfinite(ccd.catalog.sources['psf_mag'])
good_sources = ccd.catalog.sources[good_idx]
print('rms', np.sqrt(np.sum(good_sources['psf_mag_err']**2/len(good_sources))))
print('mean', np.mean(good_sources['psf_mag_err']))
print('median', np.median(good_sources['psf_mag_err']))
print('stddev', np.std(good_sources['psf_mag_err']))
bad_count = np.sum(good_sources['psf_mag_err']>.05)
print('bad psf error: {0}, or {1}%'.format(bad_count, bad_count/len(good_sources)*100))
print('Better than 5%: {0} of {1}'.format(np.sum(good_sources['psf_mag_err']<=.05), len(good_sources)))
print('Better than 2%: {0} of {1}'.format(np.sum(good_sources['psf_mag_err']<=.02), len(good_sources)))
good_sources['aper_flux','psf_flux','peak','psf_mag_err'][good_sources['psf_mag_err']>.05]
flux_diff = (good_sources['aper_flux']-good_sources['psf_flux'])/good_sources['aper_flux']
plt.scatter(good_sources['psf_mag'],flux_diff)
plt.xlabel('psf mag')
plt.ylabel('(aper-psf)/aper flux ratio')
plt.show()
plt.scatter(flux_diff, good_sources['psf_mag_err'],)
plt.xlabel('(aper-psf)/aper flux ratio')
plt.ylabel('psf mag error')
plt.show()
# We see that all sources with high psf error are actually bad detections
# or sources with potential unresolved companions
from astropy.nddata.utils import extract_array
bad_idx = good_sources['psf_mag_err']>.05
for src in good_sources[bad_idx]:
subset = extract_array(img_data, (20,20), (src['y'],src['x']))
plt.imshow(subset, interpolation='none')
plt.title(src['psf_mag_err'])
plt.show()
good_idx = ((ccd.catalog.sources['peak']>calibrate_amplitude)
& (ccd.catalog.sources['pipeline_flags']==0)
& np.isfinite(ccd.catalog.sources['psf_mag'])
& (ccd.catalog.sources['psf_mag_err']<.05))
good_sources = ccd.catalog.sources[good_idx]
print('rms', np.sqrt(np.sum(good_sources['psf_mag_err']**2/len(good_sources))))
print('mean', np.mean(good_sources['psf_mag_err']))
print('median', np.median(good_sources['psf_mag_err']))
print('stddev', np.std(good_sources['psf_mag_err']))
```

# Append Columns and Rows
Copyright (c) Microsoft Corporation. All rights reserved.<br>
Licensed under the MIT License.<br>
Often the data we want does not come in a single dataset: it may come from different locations, have its features split apart, or simply not be homogeneous. Unsurprisingly, we typically want to work with a single dataset at a time.
Azure ML Data Prep allows the concatenation of two or more dataflows by means of column and row appends.
We will demonstrate this by defining a single dataflow that will pull data from multiple datasets.
## Table of Contents
[append_columns(dataflows)](#append_columns)<br>
[append_rows(dataflows)](#append_rows)
<a id="append_columns"></a>
## `append_columns(dataflows)`
We can append data width-wise, which will change some or all existing rows and potentially add new rows (based on the assumption that the data in the two datasets are aligned on row number).
However, we cannot do this if a reference dataflow's schema clashes with the target dataflow's schema. Observe:
```
from azureml.dataprep import auto_read_file
dflow = auto_read_file(path='../data/crime-dirty.csv')
dflow.head(5)
dflow_chicago = auto_read_file(path='../data/chicago-aldermen-2015.csv')
dflow_chicago.head(5)
from azureml.dataprep import ExecutionError
try:
dflow_combined_by_column = dflow.append_columns([dflow_chicago])
dflow_combined_by_column.head(5)
except ExecutionError:
print('Cannot append_columns with schema clash!')
```
As expected, we cannot call `append_columns` with target dataflows that have clashing schema.
We can make the call once we rename or drop the offending columns. In more complex scenarios, we could opt to skip or filter to make rows align before appending columns. Here we will choose to simply drop the clashing column.
```
dflow_combined_by_column = dflow.append_columns([dflow_chicago.drop_columns(['Ward'])])
dflow_combined_by_column.head(5)
```
Notice that the resultant schema has more columns in the first N records (N being the number of records in `dataflow` and the extra columns being the width of the schema of our reference dataflow, chicago, minus the `Ward` column). From the N+1th record onwards, we will only have a schema width matching that of the `Ward`-less chicago set.
Why is this? As much as possible, the data from the reference dataflow(s) will be attached to existing rows in the target dataflow. If there are not enough rows in the target dataflow to attach to, we simply append them as new rows.
Note that these are appends, not joins (for joins please reference [Join](join.ipynb)), so the append may not be logically correct, but will take effect as long as there are no schema clashes.
```
# Ward-less data after we skip the first N rows
dflow_len = dflow.row_count
dflow_combined_by_column.skip(dflow_len).head(5)
```
<a id="append_rows"></a>
## `append_rows(dataflows)`
We can append data length-wise, which will only have the effect of adding new rows. No existing data will be changed.
```
from azureml.dataprep import auto_read_file
dflow = auto_read_file(path='../data/crime-dirty.csv')
dflow.head(5)
dflow_spring = auto_read_file(path='../data/crime-spring.csv')
dflow_spring.head(5)
dflow_chicago = auto_read_file(path='../data/chicago-aldermen-2015.csv')
dflow_chicago.head(5)
dflow_combined_by_row = dflow.append_rows([dflow_chicago, dflow_spring])
dflow_combined_by_row.head(5)
```
Notice that neither schema nor data has changed for the target dataflow.
If we skip ahead, we will see our target dataflows' data.
```
# chicago data
dflow_len = dflow.row_count
dflow_combined_by_row.skip(dflow_len).head(5)
# crimes spring data
dflow_chicago_len = dflow_chicago.row_count
dflow_combined_by_row.skip(dflow_len + dflow_chicago_len).head(5)
```
```
import numpy as np
import pandas as pd
pd.set_option('display.float_format', lambda x: '%.3f' % x)
pd.options.mode.chained_assignment = None
%matplotlib inline
import matplotlib
#matplotlib.use('agg')
matplotlib.style.use('ggplot')
from matplotlib import pyplot as plt
from functools import reduce
import pickle as pkl
compounds=pd.read_csv("/data/dharp/compounding/datasets/compounds_reduced.csv",sep="\t",index_col=0)
compounds=compounds.query('decade != 2000')
compounds=compounds.reindex()
compounds=compounds.groupby(['modifier','head','context'])['count'].sum().to_frame()
compounds.reset_index(inplace=True)
compounds
heads=pd.read_csv("/data/dharp/compounding/datasets/heads_reduced.csv",sep="\t")
heads=heads.query('decade != 2000')
heads=heads.reindex()
heads=heads.groupby(['head','context'])['count'].sum().to_frame()
heads.reset_index(inplace=True)
heads
modifiers=pd.read_csv("/data/dharp/compounding/datasets/modifiers_reduced.csv",sep="\t")
modifiers=modifiers.query('decade != 2000')
modifiers=modifiers.reindex()
modifiers=modifiers.groupby(['modifier','context'])['count'].sum().to_frame()
modifiers.reset_index(inplace=True)
modifiers
XY=compounds.groupby(['modifier','head'])['count'].sum().to_frame()
XY.columns=['a']
X_star=compounds.groupby(['modifier'])['count'].sum().to_frame()
X_star.columns=['x_star']
Y_star=compounds.groupby(['head'])['count'].sum().to_frame()
Y_star.columns=['star_y']
merge1=pd.merge(XY.reset_index(),X_star.reset_index(),on=['modifier'])
information_feat=pd.merge(merge1,Y_star.reset_index(),on=['head'])
information_feat['b']=information_feat['x_star']-information_feat['a']
information_feat['c']=information_feat['star_y']-information_feat['a']
information_feat['N']=np.sum(compounds['count'])
#information_feat=pd.merge(information_feat,compound_decade_counts.reset_index(),on=['decade'])
information_feat['d']=information_feat['N']-(information_feat['a']+information_feat['b']+information_feat['c'])
information_feat['x_bar_star']=information_feat['N']-information_feat['x_star']
information_feat['star_y_bar']=information_feat['N']-information_feat['star_y']
#information_feat['LR']=-2*np.sum(information_feat['a']*np.log2((information_feat['a']*information_feat['N'])/(information_feat['x_star']*information_feat['star_y'])))
information_feat.set_index(['modifier','head'],inplace=True)
information_feat.replace(0,0.001,inplace=True)
information_feat['log_ratio']=2*(information_feat['a']*np.log((information_feat['a']*information_feat['N'])/(information_feat['x_star']*information_feat['star_y']))+\
information_feat['b']*np.log((information_feat['b']*information_feat['N'])/(information_feat['x_star']*information_feat['star_y_bar']))+\
information_feat['c']*np.log((information_feat['c']*information_feat['N'])/(information_feat['x_bar_star']*information_feat['star_y']))+\
information_feat['d']*np.log((information_feat['d']*information_feat['N'])/(information_feat['x_bar_star']*information_feat['star_y_bar'])))
information_feat['ppmi']=np.log2((information_feat['a']*information_feat['N'])/(information_feat['x_star']*information_feat['star_y']))
information_feat['local_mi']=information_feat['a']*information_feat['ppmi']
information_feat.ppmi.loc[information_feat.ppmi<=0]=0
information_feat.drop(['a','x_star','star_y','b','c','d','N','d','x_bar_star','star_y_bar'],axis=1,inplace=True)
information_feat
modifier_denom=modifiers.groupby(['modifier'])['count'].agg(lambda x: np.sqrt(np.sum(np.square(x)))).to_frame()
modifier_denom.columns=['modifier_denom']
modifier_denom
head_denom=heads.groupby(['head'])['count'].agg(lambda x: np.sqrt(np.sum(np.square(x)))).to_frame()
head_denom.columns=['head_denom']
head_denom
compound_denom=compounds.groupby(['modifier','head'])['count'].agg(lambda x: np.sqrt(np.sum(np.square(x)))).to_frame()
compound_denom.columns=['compound_denom']
compound_denom
mod_cols=modifiers.columns.tolist()
mod_cols[-1]="mod_count"
modifiers.columns=mod_cols
#compounds.drop(['comp_count'],axis=1,inplace=True)
comp_cols=compounds.columns.tolist()
comp_cols[-1]="comp_count"
compounds.columns=comp_cols
compound_modifier_sim=pd.merge(compounds,modifiers,on=["modifier","context"])
compound_modifier_sim['numerator']=compound_modifier_sim['comp_count']*compound_modifier_sim['mod_count']
compound_modifier_sim=compound_modifier_sim.groupby(['modifier','head'])['numerator'].sum().to_frame()
compound_modifier_sim=pd.merge(compound_modifier_sim.reset_index(),compound_denom.reset_index(),on=["modifier","head"])
compound_modifier_sim=pd.merge(compound_modifier_sim,modifier_denom.reset_index(),on=['modifier'])
compound_modifier_sim['sim_with_modifier']=compound_modifier_sim['numerator']/(compound_modifier_sim['compound_denom']*compound_modifier_sim['modifier_denom'])
compound_modifier_sim.set_index(['modifier','head'],inplace=True)
compound_modifier_sim.drop(['numerator','compound_denom'],axis=1,inplace=True)
compound_modifier_sim
compound_modifier_sim.sim_with_modifier.describe()
_=compound_modifier_sim.hist(column ='sim_with_modifier', figsize=(10, 10),bins=100,sharex=True,sharey=True,density=True,range=(-0.1,1.1))
head_cols=heads.columns.tolist()
head_cols[-1]="head_count"
heads.columns=head_cols
compound_head_sim=pd.merge(compounds,heads,on=["head","context"])
compound_head_sim['numerator']=compound_head_sim['comp_count']*compound_head_sim['head_count']
compound_head_sim=compound_head_sim.groupby(['modifier','head'])['numerator'].sum().to_frame()
compound_head_sim=pd.merge(compound_head_sim.reset_index(),compound_denom.reset_index(),on=["modifier","head"])
compound_head_sim=pd.merge(compound_head_sim,head_denom.reset_index(),on=['head'])
compound_head_sim['sim_with_head']=compound_head_sim['numerator']/(compound_head_sim['compound_denom']*compound_head_sim['head_denom'])
compound_head_sim.set_index(['modifier','head'],inplace=True)
compound_head_sim.drop(['numerator','compound_denom'],axis=1,inplace=True)
compound_head_sim
compound_head_sim.sim_with_head.describe()
_=compound_head_sim.hist(column ='sim_with_head', figsize=(10, 10),bins=100,sharex=True,sharey=True,density=True,range=(-0.1,1.1))
constituent_sim=pd.merge(heads,compounds,on=["head","context"])
#constituent_sim.drop('comp_count',axis=1,inplace=True)
constituent_sim=pd.merge(constituent_sim,modifiers,on=["modifier","context"])
constituent_sim['numerator']=constituent_sim['head_count']*constituent_sim['mod_count']
constituent_sim=constituent_sim.groupby(['modifier','head'])['numerator'].sum().to_frame()
constituent_sim=pd.merge(constituent_sim.reset_index(),head_denom.reset_index(),on=["head"])
constituent_sim=pd.merge(constituent_sim,modifier_denom.reset_index(),on=["modifier"])
constituent_sim['sim_bw_constituents']=constituent_sim['numerator']/(constituent_sim['head_denom']*constituent_sim['modifier_denom'])
constituent_sim.set_index(['modifier','head'],inplace=True)
constituent_sim.drop(['numerator','modifier_denom','head_denom'],axis=1,inplace=True)
constituent_sim
constituent_sim.sim_bw_constituents.describe()
_=constituent_sim.hist(column ='sim_bw_constituents', figsize=(10, 10),bins=100,sharex=True,sharey=True,density=True,range=(-0.1,1.1))
dfs = [constituent_sim.reset_index(), compound_head_sim.reset_index(), compound_modifier_sim.reset_index(), information_feat.reset_index()]
compounds_final = reduce(lambda left,right: pd.merge(left,right,on=['modifier','head']), dfs)
compounds_final.drop(['head_denom','modifier_denom'],axis=1,inplace=True)
compounds_final.set_index(['modifier','head'],inplace=True)
compounds_final.fillna(0,inplace=True)
compounds_final -= compounds_final.min()
compounds_final /= compounds_final.max()
compounds_final
compounds_final.to_csv("/data/dharp/compounding/datasets/DFM_Contextual_Non_Temporal.csv",sep='\t')
pkl.dump( compounds_final.index.tolist(), open( '/data/dharp/compounding/datasets/total_compounds_list.pkl', "wb" ) )
```
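The association scores computed above (`ppmi`, `local_mi`, `log_ratio`) reduce, for a single (modifier, head) pair, to functions of a 2×2 contingency table. A toy example with made-up counts (not taken from the corpus) makes the arithmetic explicit:

```python
import numpy as np

# Toy 2x2 contingency check of the association measures: a = joint count,
# x_star = modifier marginal, star_y = head marginal, N = grand total.
# The numbers are illustrative only.
a, x_star, star_y, N = 40.0, 100.0, 80.0, 10000.0
b_ = x_star - a          # modifier without this head
c = star_y - a           # head without this modifier
d = N - a - b_ - c       # neither

pmi = np.log2(a * N / (x_star * star_y))
ppmi = max(pmi, 0.0)                 # PPMI clips negative associations to 0
local_mi = a * ppmi
llr = 2 * (a * np.log(a * N / (x_star * star_y))
           + b_ * np.log(b_ * N / (x_star * (N - star_y)))
           + c * np.log(c * N / ((N - x_star) * star_y))
           + d * np.log(d * N / ((N - x_star) * (N - star_y))))
print(round(pmi, 3), round(llr, 3))
```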
```
import numpy as np
from collections import Counter
from graphviz import Digraph
class Node:
    def __init__(self, frequency, letter=None):
        self.left = None
        self.right = None
        self.parent = None
        self.frequency = frequency
        self.letter = letter

    def find_top(self):
        # follow parent links until the root of the tree is reached
        node = self
        while node.parent is not None:
            node = node.parent
        return node
    def visualize_tree(self):
        def add_nodes_edges(node, dot=None):
            # Create the Digraph object and add the subtree root on first call
            if dot is None:
                dot = Digraph()
                dot.node(name=str(node), label=str(node.frequency))
            # Add child nodes and edges, recursing into each subtree
            if node.left:
                if node.left.letter is None:
                    dot.node(name=str(node.left), label=str(node.left.frequency))
                else:
                    dot.node(name=str(node.left), label=str(node.left.letter))
                dot.edge(str(node), str(node.left))
                dot = add_nodes_edges(node.left, dot=dot)
            if node.right:
                if node.right.letter is None:
                    dot.node(name=str(node.right), label=str(node.right.frequency))
                else:
                    dot.node(name=str(node.right), label=str(node.right.letter))
                dot.edge(str(node), str(node.right))
                dot = add_nodes_edges(node.right, dot=dot)
            return dot

        dot = add_nodes_edges(self)
        # Visualize the graph
        display(dot)
        return dot
    def display(self):
        lines, *_ = self._display_aux()
        for line in lines:
            print(line)

    def _display_aux(self):
        """Returns list of strings, width, height, and horizontal coordinate of the root."""
        # No child.
        if self.right is None and self.left is None:
            line = '%s' % self.frequency
            width = len(line)
            height = 1
            middle = width // 2
            return [line], width, height, middle
        # Only left child.
        if self.right is None:
            lines, n, p, x = self.left._display_aux()
            s = '%s' % self.frequency
            u = len(s)
            first_line = (x + 1) * ' ' + (n - x - 1) * '_' + s
            second_line = x * ' ' + '/' + (n - x - 1 + u) * ' '
            shifted_lines = [line + u * ' ' for line in lines]
            return [first_line, second_line] + shifted_lines, n + u, p + 2, n + u // 2
        # Only right child.
        if self.left is None:
            lines, n, p, x = self.right._display_aux()
            s = '%s' % self.frequency
            u = len(s)
            first_line = s + x * '_' + (n - x) * ' '
            second_line = (u + x) * ' ' + '\\' + (n - x - 1) * ' '
            shifted_lines = [u * ' ' + line for line in lines]
            return [first_line, second_line] + shifted_lines, n + u, p + 2, u // 2
        # Two children.
        left, n, p, x = self.left._display_aux()
        right, m, q, y = self.right._display_aux()
        s = '%s' % self.frequency
        u = len(s)
        first_line = (x + 1) * ' ' + (n - x - 1) * '_' + s + y * '_' + (m - y) * ' '
        second_line = x * ' ' + '/' + (n - x - 1 + u + y) * ' ' + '\\' + (m - y - 1) * ' '
        if p < q:
            left += [n * ' '] * (q - p)
        elif q < p:
            right += [m * ' '] * (p - q)
        zipped_lines = zip(left, right)
        lines = [first_line, second_line] + [a + u * ' ' + b for a, b in zipped_lines]
        return lines, n + m + u, max(p, q) + 2, n + u // 2
'''
notes: the root index is zero
for every node with index i, the left child is at index 2*i + 1
and the right child is at index 2*i + 2
'''
class MinHeap:
def __init__(self,start_size):
self.array = np.empty([start_size], dtype='object')
self.size = 0
    def __len__(self):
        return self.size
def __str__(self):
mylist =[]
for i in range(self.size):
freq = self.array[i].frequency
            if self.array[i].letter is not None:
letter = self.array[i].letter
mylist.append((freq,letter))
else:
mylist.append((freq,None))
return 'pairlist of frequency&letter: {0}'.format( mylist)
def resize(self):
new_size = 2*len(self.array)
new_array = np.empty([new_size],dtype='object')
for index in range(self.size):
new_array[index] = self.array[index]
self.array = new_array.copy()
def swap(self, index1, index2):
temp_node = self.array[index1]
self.array[index1] = self.array[index2]
self.array[index2] = temp_node
def insert_node(self, node): # inserts at the bottom
if self.size >= len(self.array):
self.resize()
index = self.size
self.array[index] = node
self.size +=1
while (index > 0 ):
if self.array[index].frequency < self.array[int((index-1)/2)].frequency :
                self.swap(index , int((index-1)/2 )) # switching with parent
index = int((index - 1)/2)
else:
break
    def pop_node(self): # removes and returns the root; the last leaf is moved to the root and sifted down
root = self.array[0]
index = 0
self.array[0] = self.array[self.size - 1] # move last non-None node to root
self.array[self.size-1] = None # toss out last node
self.size -= 1
while (index < self.size ):
if (2*index + 1) < self.size and (2*index + 2) < self.size: # check if left and right indices not out of range
if self.array[(2*index + 1)].frequency < self.array[(2*index + 2)].frequency \
and self.array[(2*index + 1)].frequency < self.array[(index)].frequency :
self.swap((2*index + 1) , index ) # switching left child with parent
index = (2*index + 1)
elif self.array[(2*index + 1)].frequency > self.array[(2*index + 2)].frequency \
and self.array[(2*index + 2)].frequency < self.array[(index)].frequency :
self.swap((2*index + 2) , index ) # switching right child with parent
index = (2*index + 2)
else:
break
elif (2*index + 1) < self.size :
if self.array[(2*index + 1)].frequency < self.array[(index)].frequency :
self.swap((2*index + 1), index ) # switching left child with parent
index = (2*index + 1)
else:
break
elif (2*index + 2) < self.size :
if self.array[(2*index + 2)].frequency < self.array[(index)].frequency :
self.swap((2*index + 2), index ) # switching right child with parent
index = (2*index + 2)
else:
break
else:
break
return root
    # inserts nodes using values from the dictionary
def heapify(self, new_dictionary):
for letter, frequency in new_dictionary.items():
node = Node(frequency, letter)
self.insert_node(node)
#self.array = self.array[self.array is not np.array(None)]
class Huffman:
def __init__(self, letter_list=None):
        self.letter_list = letter_list
self.bin_dict = {}
self.bin_tree_list = []
def encoding(self,root,bits):
if root is None:
return
if root.letter is not None:
self.bin_dict.update({root.letter: bits})
self.encoding(root.left, bits + '0')
self.encoding(root.right, bits + '1')
def bin_tree(self,root):
if root is None:
return
if root.letter is not None:
self.bin_tree_list.append('1')
self.bin_tree_list.append(root.letter)
elif root.letter is None:
self.bin_tree_list.append('0')
self.bin_tree(root.left)
self.bin_tree(root.right)
def huffing(self):
frequency_dict = Counter(self.letter_list)
print(frequency_dict)
heap = MinHeap(len(frequency_dict))
heap.heapify(frequency_dict)
merge_heap = MinHeap(len(frequency_dict))
for count in range(heap.size-1):
if heap.size > 1:
lowest = heap.pop_node()
next_lowest = heap.pop_node()
node = Node(lowest.frequency + next_lowest.frequency)
node.left = lowest
node.right = next_lowest
merge_heap.insert_node(node)
else:
if heap.size > 0:
lowest = heap.pop_node()
merge_heap.insert_node(lowest)
for next_count in range(merge_heap.size-1):
lowest = merge_heap.pop_node()
next_lowest = merge_heap.pop_node()
print(lowest.frequency)
node = Node(lowest.frequency + next_lowest.frequency)
node.left = lowest
node.right = next_lowest
merge_heap.insert_node(node)
dot = merge_heap.array[0].visualize_tree()
self.bin_tree(merge_heap.array[0])
self.encoding(merge_heap.array[0],'')
def write_code(self):
with open('huffman.txt', 'w') as writer:
for value in self.bin_tree_list:
writer.write(value)
writer.write('|')
for letter in self.letter_list:
code = self.bin_dict[letter]
writer.write(code)
self.bin_tree_list = []
self.bin_dict = {}
def decode(self):
root = Node(42)
iter_code = iter(self.bin_tree_list)
value = next(iter_code)
try:
for i in range(len(self.bin_tree_list)):
value = next(iter_code)
if root.left is not None and root.right is not None:
root = root.parent
if value == '0':
node = Node(1)
if root.left is None:
root.left = node
else:
root.right = node
node.parent = root
root = node
elif value == '1':
value = next(iter_code)
node = Node(2,value)
if root.left is None:
root.left = node
else:
root.right = node
node.parent = root
except StopIteration:
pass
root = root.find_top()
return root
def read_code(self):
tree = []
bin = []
letters = []
with open('huffman.txt', 'r') as reader:
contents = list(reader.read())
copy_contents = contents.copy()
for tree_value in contents:
if (tree_value) == '|':
copy_contents.pop(0)
break
else:
self.bin_tree_list.append(tree_value)
copy_contents.pop(0)
new_tree = self.decode()
#new_tree.display()
for bit in copy_contents:
if bit == '0' :
new_tree = new_tree.left
                if new_tree.left is None and new_tree.right is None:
letters.append(new_tree.letter)
new_tree = new_tree.find_top()
elif bit == '1':
new_tree = new_tree.right
                if new_tree.left is None and new_tree.right is None:
letters.append(new_tree.letter)
new_tree = new_tree.find_top()
print(letters)
return letters
#main
with open('message.txt', 'r') as reader:
contents = list(reader.read())
#print(contents)
huffman = Huffman(contents)
huffman.huffing()
print(huffman.bin_dict)
huffman.write_code()
letters = huffman.read_code()
with open('decoded.txt', 'w') as writer:
for value in letters:
writer.write(value)
```
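As an independent sanity check on the pipeline above, the expected Huffman code lengths can be reproduced with a compact sketch built on Python's standard `heapq`. This mirrors the greedy merge step, not the notebook's own classes:

```
import heapq
from collections import Counter

def huffman_code_lengths(text):
    """Greedy Huffman build; returns {symbol: code length in bits}."""
    freq = Counter(text)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): 1}
    # heap entries: (frequency, unique tiebreak, {symbol: current depth})
    heap = [(f, i, {ch: 0}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)     # two lowest-frequency subtrees
        f2, _, d2 = heapq.heappop(heap)
        # every symbol in the merged subtree moves one level deeper
        merged = {ch: depth + 1 for ch, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

lengths = huffman_code_lengths('abracadabra')
# 'a' occurs 5 times out of 11, so it must receive the shortest code
assert lengths['a'] == min(lengths.values())
```

Running `huffing()` on the same text should yield codes whose lengths match this sketch; the code bits themselves may differ, since Huffman codes are not unique.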
| github_jupyter |
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
<!--NAVIGATION-->
< [Density and Contour Plots](04.04-Density-and-Contour-Plots.ipynb) | [Contents](Index.ipynb) | [Customizing Plot Legends](04.06-Customizing-Legends.ipynb) >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.05-Histograms-and-Binnings.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
# Histograms, Binnings, and Density
A simple histogram can be a great first step in understanding a dataset.
Earlier, we saw a preview of Matplotlib's histogram function (see [Comparisons, Masks, and Boolean Logic](02.06-Boolean-Arrays-and-Masks.ipynb)), which creates a basic histogram in one line, once the normal boiler-plate imports are done:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
data = np.random.randn(1000)
plt.hist(data);
```
The ``hist()`` function has many options to tune both the calculation and the display;
here's an example of a more customized histogram:
```
plt.hist(data, bins=30, density=True, alpha=0.5,
histtype='stepfilled', color='steelblue',
edgecolor='none');
```
The ``plt.hist`` docstring has more information on other customization options available.
I find this combination of ``histtype='stepfilled'`` along with some transparency ``alpha`` to be very useful when comparing histograms of several distributions:
```
x1 = np.random.normal(0, 0.8, 1000)
x2 = np.random.normal(-2, 1, 1000)
x3 = np.random.normal(3, 2, 1000)
kwargs = dict(histtype='stepfilled', alpha=0.3, density=True, bins=40)
plt.hist(x1, **kwargs)
plt.hist(x2, **kwargs)
plt.hist(x3, **kwargs);
```
If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the ``np.histogram()`` function is available:
```
counts, bin_edges = np.histogram(data, bins=5)
print(counts)
```
## Two-Dimensional Histograms and Binnings
Just as we create histograms in one dimension by dividing the number-line into bins, we can also create histograms in two-dimensions by dividing points among two-dimensional bins.
We'll take a brief look at several ways to do this here.
We'll start by defining some data—an ``x`` and ``y`` array drawn from a multivariate Gaussian distribution:
```
mean = [0, 0]
cov = [[1, 1], [1, 2]]
x, y = np.random.multivariate_normal(mean, cov, 10000).T
```
### ``plt.hist2d``: Two-dimensional histogram
One straightforward way to plot a two-dimensional histogram is to use Matplotlib's ``plt.hist2d`` function:
```
plt.hist2d(x, y, bins=30, cmap='Blues')
cb = plt.colorbar()
cb.set_label('counts in bin')
```
Just as with ``plt.hist``, ``plt.hist2d`` has a number of extra options to fine-tune the plot and the binning, which are nicely outlined in the function docstring.
Further, just as ``plt.hist`` has a counterpart in ``np.histogram``, ``plt.hist2d`` has a counterpart in ``np.histogram2d``, which can be used as follows:
```
counts, xedges, yedges = np.histogram2d(x, y, bins=30)
```
For the generalization of this histogram binning in dimensions higher than two, see the ``np.histogramdd`` function.
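For instance, a minimal sketch of ``np.histogramdd`` on three-dimensional data (array sizes chosen arbitrarily here):

```
import numpy as np

rng = np.random.default_rng(0)
# 1000 points in 3 dimensions, binned on a 5x5x5 grid
points = rng.normal(size=(1000, 3))
H, edges = np.histogramdd(points, bins=5)

assert H.shape == (5, 5, 5)   # one count array with 5 bins per dimension
assert len(edges) == 3        # one array of bin edges per dimension
assert H.sum() == 1000        # every point falls in some bin
```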
### ``plt.hexbin``: Hexagonal binnings
The two-dimensional histogram creates a tessellation of squares across the axes.
Another natural shape for such a tessellation is the regular hexagon.
For this purpose, Matplotlib provides the ``plt.hexbin`` routine, which represents a two-dimensional dataset binned within a grid of hexagons:
```
plt.hexbin(x, y, gridsize=30, cmap='Blues')
cb = plt.colorbar(label='count in bin')
```
``plt.hexbin`` has a number of interesting options, including the ability to specify weights for each point, and to change the output in each bin to any NumPy aggregate (mean of weights, standard deviation of weights, etc.).
### Kernel density estimation
Another common method of evaluating densities in multiple dimensions is *kernel density estimation* (KDE).
This will be discussed more fully in [In-Depth: Kernel Density Estimation](05.13-Kernel-Density-Estimation.ipynb), but for now we'll simply mention that KDE can be thought of as a way to "smear out" the points in space and add up the result to obtain a smooth function.
One extremely quick and simple KDE implementation exists in the ``scipy.stats`` package.
Here is a quick example of using the KDE on this data:
```
from scipy.stats import gaussian_kde
# fit an array of size [Ndim, Nsamples]
data = np.vstack([x, y])
kde = gaussian_kde(data)
# evaluate on a regular grid
xgrid = np.linspace(-3.5, 3.5, 40)
ygrid = np.linspace(-6, 6, 40)
Xgrid, Ygrid = np.meshgrid(xgrid, ygrid)
Z = kde.evaluate(np.vstack([Xgrid.ravel(), Ygrid.ravel()]))
# Plot the result as an image
plt.imshow(Z.reshape(Xgrid.shape),
origin='lower', aspect='auto',
extent=[-3.5, 3.5, -6, 6],
cmap='Blues')
cb = plt.colorbar()
cb.set_label("density")
```
KDE has a smoothing length that effectively slides the knob between detail and smoothness (one example of the ubiquitous bias–variance trade-off).
The literature on choosing an appropriate smoothing length is vast: ``gaussian_kde`` uses a rule-of-thumb to attempt to find a nearly optimal smoothing length for the input data.
Other KDE implementations are available within the SciPy ecosystem, each with its own strengths and weaknesses; see, for example, ``sklearn.neighbors.KernelDensity`` and ``statsmodels.nonparametric.kernel_density.KDEMultivariate``.
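As a concrete sketch of that trade-off, ``gaussian_kde`` exposes the smoothing length through its ``bw_method`` argument; a scalar value overrides the rule-of-thumb factor directly:

```
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
samples = rng.normal(size=500)

# bw_method=None uses Scott's rule of thumb; a scalar sets the factor directly
kde_narrow = gaussian_kde(samples, bw_method=0.05)  # small factor -> more detail
kde_wide = gaussian_kde(samples, bw_method=1.0)     # large factor -> smoother

assert np.isclose(kde_narrow.factor, 0.05)
assert np.isclose(kde_wide.factor, 1.0)
# the narrow-bandwidth estimate is spikier near the data mean than the wide one
grid = np.linspace(-3, 3, 101)
assert kde_narrow(grid).max() >= kde_wide(grid).max()
```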
For visualizations based on KDE, using Matplotlib tends to be overly verbose.
The Seaborn library, discussed in [Visualization With Seaborn](04.14-Visualization-With-Seaborn.ipynb), provides a much more terse API for creating KDE-based visualizations.
| github_jupyter |
# QGS model: MAOOAM
## Coupled ocean-atmosphere channel model version
This model version is a 2-layer channel QG atmosphere truncated at wavenumber 2, coupled through both friction and heat exchange to a shallow-water **channel** ocean, also truncated at wavenumber 2.
More details can be found in the articles:
* Vannitsem, S., Solé‐Pomies, R. and De Cruz, L. (2019). *Routes to long‐term atmospheric predictability in reduced‐order coupled ocean–atmosphere systems ‐ Impact of the ocean basin boundary conditions.* Quarterly Journal of the Royal Meteorological Society, **145**: 2791– 2805. [doi.org/10.1002/qj.3594](https://doi.org/10.1002/qj.3594)
* Vannitsem, S., Demaeyer, J., De Cruz, L., & Ghil, M. (2015). *Low-frequency variability and heat transport in a low-order nonlinear coupled ocean–atmosphere model*. Physica D: Nonlinear Phenomena, **309**, 71-85. [doi:10.1016/j.physd.2015.07.006](https://doi.org/10.1016/j.physd.2015.07.006)
* De Cruz, L., Demaeyer, J. and Vannitsem, S. (2016). *The Modular Arbitrary-Order Ocean-Atmosphere Model: MAOOAM v1.0*, Geosci. Model Dev., **9**, 2793-2808. [doi:10.5194/gmd-9-2793-2016](https://doi.org/10.5194/gmd-9-2793-2016)
or in the documentation and on [readthedocs](https://qgs.readthedocs.io/en/latest/files/model/maooam_model.html).
## Modules import
First, setting the path and loading of some modules
```
import sys, os
sys.path.extend([os.path.abspath('../')])
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
```
Initializing the random number generator (for reproducibility). -- Disable if needed.
```
np.random.seed(210217)
```
Importing the model's modules
```
from qgs.params.params import QgParams
from qgs.basis.fourier import contiguous_channel_basis
from qgs.integrators.integrator import RungeKuttaIntegrator
from qgs.functions.tendencies import create_tendencies
```
## Systems definition
General parameters
```
# Time parameters
dt = 0.1
# Saving the model state every n steps
write_steps = 100
number_of_trajectories = 1
```
Setting some model parameters
```
# Model parameters instantiation with some non-default specs
model_parameters = QgParams({'n': 1.5})
# Mode truncation at the wavenumber 2 in both x and y spatial
# coordinates for the atmosphere
model_parameters.set_atmospheric_channel_fourier_modes(2, 2, mode="symbolic")
# Mode truncation at the wavenumber 2 in both the x and y
# spatial coordinates for the ocean
ocean_basis = contiguous_channel_basis(2, 2, 1.5)
model_parameters.set_oceanic_modes(ocean_basis)
# Setting MAOOAM parameters according to the publication linked above
model_parameters.set_params({'phi0_npi': 0.3056, 'kd': 0.026778245344758034, 'kdp': 0.026778245344758034, 'r': 1.e-8,
'h': 1000.0, 'd': 1.6e-8, 'f0': 1.195e-4, 'sigma': 0.14916, 'n':1.7})
model_parameters.atemperature_params.set_params({'eps': 0.76, 'T0': 270.,
'hlambda': 16.064})
model_parameters.gotemperature_params.set_params({'gamma': 4e9, 'T0': 285.})
```
Setting the short-wave radiation component as in the publication above: $C_{\text{a},1}$ and $C_{\text{o},1}$
```
model_parameters.atemperature_params.set_insolation(350/3., 0)
model_parameters.gotemperature_params.set_insolation(350, 0)
```
Printing the model's parameters
```
model_parameters.print_params()
```
Creating the tendencies function
```
%%time
## Might take several minutes, depending on the number of cpus you have.
f, Df = create_tendencies(model_parameters)
```
## Time integration
Defining an integrator
```
integrator = RungeKuttaIntegrator()
integrator.set_func(f)
```
Start from an initial condition on the attractors obtained after a long transient integration time
```
ic = np.array([ 2.34980646e-02, -5.91652353e-03, 3.20923307e-03, -1.08916714e-03,
-1.13188144e-03, -5.14454554e-03, 1.50294902e-02, -2.20518843e-04,
4.55325496e-03, -1.18748859e-03, 2.27043688e-02, 4.29437410e-04,
3.74041445e-03, -1.78681895e-03, -1.71853500e-03, 3.68921542e-04,
-6.42748591e-04, -2.81188015e-03, -2.14109639e-03, -1.41736652e-03,
3.24489725e-09, 3.97502699e-05, -7.47489713e-05, 9.89194512e-06,
5.52902699e-06, 6.43875197e-05, -6.95990073e-05, 1.21618381e-04,
7.08494425e-05, -1.11255308e-04, 4.13406579e-02, -7.90716982e-03,
1.33752621e-02, 1.66742520e-02, 6.29900201e-03, 1.76761107e-02,
-5.40207169e-02, 1.29814807e-02, -4.35142923e-02, -7.62511906e-03])
```
Now integrate to obtain a trajectory on the attractor
```
%%time
integrator.integrate(0., 1000000., dt, ic=ic, write_steps=write_steps)
time, traj = integrator.get_trajectories()
```
Plotting the result in 3D and 2D
```
varx = 20
vary = 30
varz = 0
fig = plt.figure(figsize=(10, 8))
axi = fig.add_subplot(projection='3d')
axi.scatter(traj[varx], traj[vary], traj[varz], s=0.2);
axi.set_xlabel('$'+model_parameters.latex_var_string[varx]+'$')
axi.set_ylabel('$'+model_parameters.latex_var_string[vary]+'$')
axi.set_zlabel('$'+model_parameters.latex_var_string[varz]+'$');
varx = 30
vary = 10
plt.figure(figsize=(10, 8))
plt.plot(traj[varx], traj[vary], marker='o', ms=0.1, ls='')
plt.xlabel('$'+model_parameters.latex_var_string[varx]+'$')
plt.ylabel('$'+model_parameters.latex_var_string[vary]+'$');
var = 30
plt.figure(figsize=(10, 8))
plt.plot(model_parameters.dimensional_time*time, traj[var])
plt.xlabel('time (days)')
plt.ylabel('$'+model_parameters.latex_var_string[var]+'$');
var = 10
plt.figure(figsize=(10, 8))
plt.plot(model_parameters.dimensional_time*time, traj[var])
plt.xlabel('time (days)')
plt.ylabel('$'+model_parameters.latex_var_string[var]+'$');
var = 20
plt.figure(figsize=(10, 8))
plt.plot(model_parameters.dimensional_time*time, traj[var])
plt.xlabel('time (days)')
plt.ylabel('$'+model_parameters.latex_var_string[var]+'$');
```
| github_jupyter |
```
## Here I am removing all the protected features to see the difference from the baseline results
import pandas as pd
import random,time
import numpy as np
import math,copy
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from sklearn import tree
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
import sklearn.metrics as metrics
from Measure import measure_final_score,calculate_recall,calculate_far,calculate_precision,calculate_accuracy
## Load dataset
dataset_orig = pd.read_csv('dataset/bank.csv')
## Drop categorical features
dataset_orig = dataset_orig.drop(['job', 'marital', 'default',
'housing', 'loan', 'contact', 'month', 'day',
'poutcome'],axis=1)
## Drop NULL values
dataset_orig = dataset_orig.dropna()
# mean = dataset_orig.loc[:,"age"].mean()
# dataset_orig['age'] = np.where(dataset_orig['age'] >= mean, 1, 0)
dataset_orig['age'] = np.where(dataset_orig['age'] >= 25, 1, 0)
dataset_orig['Probability'] = np.where(dataset_orig['Probability'] == 'yes', 1, 0)
## Change symbolic column to numeric
from sklearn.preprocessing import LabelEncoder
gle = LabelEncoder()
genre_labels = gle.fit_transform(dataset_orig['education'])
genre_mappings = {index: label for index, label in enumerate(gle.classes_)}
dataset_orig['education'] = genre_labels
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
dataset_orig = pd.DataFrame(scaler.fit_transform(dataset_orig),columns = dataset_orig.columns)
# dataset_orig
```
# Results with protected attributes
```
np.random.seed(0)
## Divide into train,validation,test
dataset_orig_train, dataset_orig_test = train_test_split(dataset_orig, test_size=0.2, random_state=0,shuffle = True)
X_train, y_train = dataset_orig_train.loc[:, dataset_orig_train.columns != 'Probability'], dataset_orig_train['Probability']
X_test , y_test = dataset_orig_test.loc[:, dataset_orig_test.columns != 'Probability'], dataset_orig_test['Probability']
# --- LSR
clf = LogisticRegression(C=1.0, penalty='l2', solver='liblinear', max_iter=100)
# --- CART
# clf = tree.DecisionTreeClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
cnf_matrix_test = confusion_matrix(y_test,y_pred)
print(cnf_matrix_test)
TN, FP, FN, TP = confusion_matrix(y_test,y_pred).ravel()
print("recall :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'age', 'recall'))
print("far :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'age', 'far'))
print("precision :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'age', 'precision'))
print("accuracy :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'age', 'accuracy'))
print("aod age:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'age', 'aod'))
print("eod age:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'age', 'eod'))
print("Precision", metrics.precision_score(y_test,y_pred))
print("Recall", metrics.recall_score(y_test,y_pred))
print(X_train.columns)
print(clf.coef_)
import matplotlib.pyplot as plt
y = np.arange(len(dataset_orig.columns)-1)
plt.barh(y,clf.coef_[0])
plt.yticks(y,dataset_orig_train.columns)
plt.show()
```
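The `aod` and `eod` values above come from the project's `Measure` module. As a generic sketch (using the standard definitions from the fairness literature, not that module's code), average odds difference and equal opportunity difference can be computed from per-group confusion-matrix counts:

```
def rates(tp, fp, fn, tn):
    """True-positive and false-positive rates for one demographic group."""
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

def average_odds_difference(unpriv, priv):
    """AOD = mean of the FPR gap and TPR gap between the two groups."""
    tpr_u, fpr_u = rates(*unpriv)
    tpr_p, fpr_p = rates(*priv)
    return 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p))

def equal_opportunity_difference(unpriv, priv):
    """EOD = TPR gap between unprivileged and privileged groups."""
    return rates(*unpriv)[0] - rates(*priv)[0]

# hypothetical confusion-matrix counts (TP, FP, FN, TN) per group
unpriv = (20, 10, 30, 40)   # TPR 0.4, FPR 0.2
priv = (40, 10, 10, 40)     # TPR 0.8, FPR 0.2

aod = average_odds_difference(unpriv, priv)       # 0.5 * (0.0 + (0.4 - 0.8)) = -0.2
eod = equal_opportunity_difference(unpriv, priv)  # 0.4 - 0.8 = -0.4
assert abs(aod + 0.2) < 1e-9 and abs(eod + 0.4) < 1e-9
```

Values near 0 indicate similar treatment of both groups; both metrics here are negative because the hypothetical unprivileged group has a lower true-positive rate.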
# Results without protected attributes
```
## Drop the protected attribute (age)
dataset_orig = dataset_orig.drop(['age'],axis=1)
## Divide into train,validation,test
np.random.seed(0)
## Divide into train,validation,test
dataset_orig_train, dataset_orig_test = train_test_split(dataset_orig, test_size=0.2, random_state=0,shuffle = True)
X_train, y_train = dataset_orig_train.loc[:, dataset_orig_train.columns != 'Probability'], dataset_orig_train['Probability']
X_test , y_test = dataset_orig_test.loc[:, dataset_orig_test.columns != 'Probability'], dataset_orig_test['Probability']
# --- LSR
clf = LogisticRegression(C=1.0, penalty='l2', solver='liblinear', max_iter=100)
# --- CART
# clf = tree.DecisionTreeClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
cnf_matrix_test = confusion_matrix(y_test,y_pred)
print(cnf_matrix_test)
TN, FP, FN, TP = confusion_matrix(y_test,y_pred).ravel()
print("recall :", calculate_recall(TP,FP,FN,TN))
print("far :",calculate_far(TP,FP,FN,TN))
print("precision :", calculate_precision(TP,FP,FN,TN))
print("accuracy :",calculate_accuracy(TP,FP,FN,TN))
print(X_train.columns)
print(clf.coef_)
```
| github_jupyter |
<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/48_folium_legend.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
Uncomment the following line to install [geemap](https://geemap.org) if needed.
```
# !pip install geemap
```
# How to add a draggable legend to the map
```
import ee
import geemap.foliumap as geemap
# geemap.update_package()
```
## Add a builtin legend to the map
```
legends = geemap.builtin_legends
for legend in legends:
print(legend)
```
### Available Land Cover Datasets in Earth Engine
https://developers.google.com/earth-engine/datasets/tags/landcover
### National Land Cover Database (NLCD)
https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD
```
Map = geemap.Map()
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(builtin_legend='NLCD')
Map.addLayerControl()
Map
```
### MODIS Land Cover Type Yearly Global 500m
https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MCD12Q1
```
Map = geemap.Map()
landcover = ee.Image('MODIS/006/MCD12Q1/2013_01_01').select('LC_Type1')
igbpLandCoverVis = {
'min': 1.0,
'max': 17.0,
'palette': [
'05450a',
'086a10',
'54a708',
'78d203',
'009900',
'c6b044',
'dcd159',
'dade48',
'fbff13',
'b6ff05',
'27ff87',
'c24f44',
'a5a5a5',
'ff6d4c',
'69fff8',
'f9ffa4',
'1c0dff',
],
}
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, igbpLandCoverVis, 'MODIS Land Cover')
Map.add_legend(builtin_legend='MODIS/006/MCD12Q1')
Map.addLayerControl()
Map
```
## Add customized legends for Earth Engine data
There are three ways you can add customized legends for Earth Engine data
1. Define legend labels and colors
2. Define legend dictionary
3. Convert Earth Engine class table to legend dictionary
### Define legend keys and colors
```
Map = geemap.Map()
labels = ['One', 'Two', 'Three', 'Four', 'etc']
# color can be defined using either hex code or RGB (0-255, 0-255, 0-255)
colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# colors = [(255, 0, 0), (127, 255, 0), (127, 18, 25), (36, 70, 180), (96, 68, 123)]
Map.add_legend(title='Legend', labels=labels, colors=colors)
Map
```
### Define a legend dictionary
```
Map = geemap.Map()
legend_dict = {
'11 Open Water': '466b9f',
'12 Perennial Ice/Snow': 'd1def8',
'21 Developed, Open Space': 'dec5c5',
'22 Developed, Low Intensity': 'd99282',
'23 Developed, Medium Intensity': 'eb0000',
'24 Developed High Intensity': 'ab0000',
'31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
'41 Deciduous Forest': '68ab5f',
'42 Evergreen Forest': '1c5f2c',
'43 Mixed Forest': 'b5c58f',
'51 Dwarf Scrub': 'af963c',
'52 Shrub/Scrub': 'ccb879',
'71 Grassland/Herbaceous': 'dfdfc2',
'72 Sedge/Herbaceous': 'd1d182',
'73 Lichens': 'a3cc51',
'74 Moss': '82ba9e',
'81 Pasture/Hay': 'dcd939',
'82 Cultivated Crops': 'ab6c28',
'90 Woody Wetlands': 'b8d9eb',
'95 Emergent Herbaceous Wetlands': '6c9fb8',
}
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(title="NLCD Land Cover Classification", legend_dict=legend_dict)
Map.addLayerControl()
Map
```
### Convert an Earth Engine class table to legend
For example: MCD12Q1 Land Cover Type Yearly Global 500m
https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MCD12Q1
```
Map = geemap.Map()
ee_class_table = """
Value Color Description
0 1c0dff Water
1 05450a Evergreen needleleaf forest
2 086a10 Evergreen broadleaf forest
3 54a708 Deciduous needleleaf forest
4 78d203 Deciduous broadleaf forest
5 009900 Mixed forest
6 c6b044 Closed shrublands
7 dcd159 Open shrublands
8 dade48 Woody savannas
9 fbff13 Savannas
10 b6ff05 Grasslands
11 27ff87 Permanent wetlands
12 c24f44 Croplands
13 a5a5a5 Urban and built-up
14 ff6d4c Cropland/natural vegetation mosaic
15 69fff8 Snow and ice
16 f9ffa4 Barren or sparsely vegetated
254 ffffff Unclassified
"""
landcover = ee.Image('MODIS/006/MCD12Q1/2013_01_01').select('LC_Type1')
igbpLandCoverVis = {
'min': 1.0,
'max': 17.0,
'palette': [
'05450a',
'086a10',
'54a708',
'78d203',
'009900',
'c6b044',
'dcd159',
'dade48',
'fbff13',
'b6ff05',
'27ff87',
'c24f44',
'a5a5a5',
'ff6d4c',
'69fff8',
'f9ffa4',
'1c0dff',
],
}
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, igbpLandCoverVis, 'MODIS Land Cover')
legend_dict = geemap.legend_from_ee(ee_class_table)
Map.add_legend(title="MODIS Global Land Cover", legend_dict=legend_dict)
Map.addLayerControl()
Map
```
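The conversion performed by `geemap.legend_from_ee` can be approximated with a small, dependency-free parser for the whitespace-delimited `Value Color Description` format shown above. This is a sketch of the idea, not geemap's actual implementation, and the exact key format geemap produces may differ:

```
def class_table_to_legend(table):
    """Parse 'Value Color Description' rows into {'<value> <description>': '<color>'}."""
    legend = {}
    for line in table.strip().splitlines()[1:]:  # skip the header row
        value, color, *description = line.split()
        legend[f"{value} {' '.join(description)}"] = color
    return legend

ee_class_table = """
Value Color Description
0 1c0dff Water
1 05450a Evergreen needleleaf forest
254 ffffff Unclassified
"""

legend = class_table_to_legend(ee_class_table)
assert legend["0 Water"] == "1c0dff"
assert legend["1 Evergreen needleleaf forest"] == "05450a"
```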
| github_jupyter |
## Logistic Regression in plain Python
In logistic regression, we are trying to model the outcome of a **binary variable** given a **linear combination of input features**. For example, we could try to predict the outcome of an election (win/lose) using information about how much money a candidate spent campaigning, how much time she/he spent campaigning, etc.
### Model
Logistic regression works as follows.
**Given:**
- dataset $\{(\boldsymbol{x}^{(1)}, y^{(1)}), ..., (\boldsymbol{x}^{(m)}, y^{(m)})\}$
- with $\boldsymbol{x}^{(i)}$ being a $d-$dimensional vector $\boldsymbol{x}^{(i)} = (x^{(i)}_1, ..., x^{(i)}_d)$
- $y^{(i)}$ being a binary target variable, $y^{(i)} \in \{0,1\}$
The logistic regression model can be interpreted as a very **simple neural network:**
- it has a real-valued weight vector $\boldsymbol{w}= (w^{(1)}, ..., w^{(d)})$
- it has a real-valued bias $b$
- it uses a sigmoid function as its activation function

### Training
Unlike [linear regression](linear_regression.ipynb), logistic regression has no closed-form solution. But the cost function is convex, so we can train the model using gradient descent. In fact, **gradient descent** (or any other convex optimization algorithm) is guaranteed to find the global minimum (if the learning rate is small enough and enough training iterations are used).
Training a logistic regression model has different steps. In the beginning (step 0) the parameters are initialized. The other steps are repeated for a specified number of training iterations or until convergence of the parameters.
* * *
**Step 0:** Initialize the weight vector and bias with zeros (or small random values).
* * *
**Step 1:** Compute a linear combination of the input features and weights. This can be done in one step for all training examples, using vectorization and broadcasting:
$\boldsymbol{a} = \boldsymbol{X} \cdot \boldsymbol{w} + b $
where $\boldsymbol{X}$ is a matrix of shape $(n_{samples}, n_{features})$ that holds all training examples, and $\cdot$ denotes the dot product.
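A minimal NumPy sketch of the shapes involved in this vectorized step:

```
import numpy as np

m, d = 4, 3                      # 4 training examples, 3 features
X = np.arange(m * d).reshape(m, d).astype(float)
w = np.ones((d, 1))              # weight vector as a (d, 1) column
b = 0.5                          # scalar bias, broadcast over all rows

a = np.dot(X, w) + b             # shape (m, 1): one linear score per example
assert a.shape == (m, 1)
assert a[0, 0] == 0 + 1 + 2 + 0.5  # first row: x . w + b = 3.5
```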
* * *
**Step 2:** Apply the sigmoid activation function, which returns values between 0 and 1:
$\boldsymbol{\hat{y}} = \sigma(\boldsymbol{a}) = \frac{1}{1 + \exp(-\boldsymbol{a})}$
* * *
**Step 3:** Compute the cost over the whole training set. We want to model the probability of the target values being 0 or 1. So during training we want to adapt our parameters such that our model outputs high values for examples with a positive label (true label being 1) and small values for examples with a negative label (true label being 0). This is reflected in the cost function:
$J(\boldsymbol{w},b) = - \frac{1}{m} \sum_{i=1}^m \Big[ y^{(i)} \log(\hat{y}^{(i)}) + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)}) \Big]$
* * *
**Step 4:** Compute the gradient of the cost function with respect to the weight vector and bias. A detailed explanation of this derivation can be found [here](https://stats.stackexchange.com/questions/278771/how-is-the-cost-function-from-logistic-regression-derivated).
The general formula is given by:
$ \frac{\partial J}{\partial w_j} = \frac{1}{m}\sum_{i=1}^m\left[\hat{y}^{(i)}-y^{(i)}\right]\,x_j^{(i)}$
For the bias, the corresponding input $x_j^{(i)}$ is simply set to 1.
* * *
**Step 5:** Update the weights and bias:
$\boldsymbol{w} = \boldsymbol{w} - \eta \, \nabla_w J$
$b = b - \eta \, \nabla_b J$
where $\eta$ is the learning rate.
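Before building the full class below, the five steps can be sketched as a single vectorized gradient-descent update in NumPy. The tiny dataset here is made up purely for illustration; on a convex cost, one small step along the negative gradient should reduce the cost.

```python
import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

# Illustrative toy data: 4 examples, 2 features
X = np.array([[0.5, 1.5], [1.0, 1.0], [-1.0, -2.0], [-0.5, -1.5]])
y = np.array([[1], [1], [0], [0]])
m = X.shape[0]

w = np.zeros((2, 1))  # Step 0: initialize parameters
b = 0.0
eta = 0.1             # learning rate

def cost(w, b):
    y_hat = sigmoid(X @ w + b)  # Steps 1-2: linear combination + sigmoid
    # Step 3: cross-entropy cost averaged over the training set
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

before = cost(w, b)
y_hat = sigmoid(X @ w + b)
dw = X.T @ (y_hat - y) / m      # Step 4: gradient w.r.t. weights
db = np.mean(y_hat - y)         # Step 4: gradient w.r.t. bias
w, b = w - eta * dw, b - eta * db  # Step 5: parameter update
after = cost(w, b)
print(before, after)  # the cost decreases after the update
```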
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
np.random.seed(123)
%matplotlib inline
```
## Dataset
```
# We will perform logistic regression using a simple toy dataset of two classes
X, y_true = make_blobs(n_samples= 1000, centers=2)
fig = plt.figure(figsize=(8,6))
plt.scatter(X[:,0], X[:,1], c=y_true)
plt.title("Dataset")
plt.xlabel("First feature")
plt.ylabel("Second feature")
plt.show()
# Reshape targets to get column vector with shape (n_samples, 1)
y_true = y_true[:, np.newaxis]
# Split the data into a training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y_true)
print(f'Shape X_train: {X_train.shape}')
print(f'Shape y_train: {y_train.shape}')
print(f'Shape X_test: {X_test.shape}')
print(f'Shape y_test: {y_test.shape}')
```
## Logistic regression class
```
class LogisticRegression:
def __init__(self):
pass
def sigmoid(self, a):
return 1 / (1 + np.exp(-a))
def train(self, X, y_true, n_iters, learning_rate):
"""
Trains the logistic regression model on given data X and targets y
"""
# Step 0: Initialize the parameters
n_samples, n_features = X.shape
self.weights = np.zeros((n_features, 1))
self.bias = 0
costs = []
for i in range(n_iters):
# Step 1 and 2: Compute a linear combination of the input features and weights,
# apply the sigmoid activation function
y_predict = self.sigmoid(np.dot(X, self.weights) + self.bias)
# Step 3: Compute the cost over the whole training set.
cost = (- 1 / n_samples) * np.sum(y_true * np.log(y_predict) + (1 - y_true) * (np.log(1 - y_predict)))
# Step 4: Compute the gradients
dw = (1 / n_samples) * np.dot(X.T, (y_predict - y_true))
db = (1 / n_samples) * np.sum(y_predict - y_true)
# Step 5: Update the parameters
self.weights = self.weights - learning_rate * dw
self.bias = self.bias - learning_rate * db
costs.append(cost)
if i % 100 == 0:
print(f"Cost after iteration {i}: {cost}")
return self.weights, self.bias, costs
def predict(self, X):
"""
Predicts binary labels for a set of examples X.
"""
y_predict = self.sigmoid(np.dot(X, self.weights) + self.bias)
y_predict_labels = [1 if elem > 0.5 else 0 for elem in y_predict]
return np.array(y_predict_labels)[:, np.newaxis]
```
## Initializing and training the model
```
regressor = LogisticRegression()
w_trained, b_trained, costs = regressor.train(X_train, y_train, n_iters=600, learning_rate=0.009)
fig = plt.figure(figsize=(8,6))
plt.plot(np.arange(600), costs)
plt.title("Development of cost over training")
plt.xlabel("Number of iterations")
plt.ylabel("Cost")
plt.show()
```
## Testing the model
```
y_p_train = regressor.predict(X_train)
y_p_test = regressor.predict(X_test)
print(f"train accuracy: {100 - np.mean(np.abs(y_p_train - y_train)) * 100}%")
print(f"test accuracy: {100 - np.mean(np.abs(y_p_test - y_test)) * 100}%")
```
```
%pylab inline
rc("image", cmap="gray", interpolation="nearest")
figsize(7, 7)
```
# PyTorch
"Tensors and Dynamic neural networks in Python with strong GPU acceleration"
- like Matlab or Numpy, but with GPU support
- automatic, dynamic differentiation and gradient descent
- some frameworks for neural networks
# SIMPLE COMPUTATIONS
```
# simple "tensors" (multidimensional numerical arrays) in Torch
import torch
data = torch.rand(256, 256)
print(data)
# a pointless GPU computation
if torch.cuda.is_available():
data = torch.rand(256, 256).cuda()
out = data
for i in range(100):
out = torch.mm(data, out)
out /= out.norm()
imshow(out.cpu().numpy())
else:
print("CUDA not available")
```
# PyTorch
- standard set of numerical operations
- similar primitives to Numpy, Matlab
- operations run on CPU and GPU
- GPU operations use CUDA, cuDNN, some third party kernels
- direct binding: you call a numerical function, it calls the kernel
# AUTOMATIC DIFFERENTIATION
```
def tshow(image, **kw):
if image.ndimension()==2:
imshow(image.detach().cpu().numpy(), **kw)
elif image.ndimension()==3:
if image.shape[0]==1:
tshow(image[0])
else:
imshow(image.detach().cpu().permute(1, 2, 0).numpy(), **kw)
elif image.ndimension()==4:
tshow(image[0])
def generate_pair():
image = (torch.rand((128, 128)) > 0.99).type(torch.float)
import scipy.ndimage as ndi
target = torch.tensor(ndi.gaussian_filter(image.numpy(), 2.0))
return image, target
image, target = generate_pair()
subplot(121); tshow(image); subplot(122); tshow(target)
```
# 2D Convolutions
```
import torch.nn.functional as F
x = image[None, None, :, :]
y = target[None, None, :, :]
w = torch.randn(1, 1, 7, 7)
w.requires_grad = True
x.requires_grad = True
y_predicted = F.conv2d(x, w, padding=3)
tshow(y_predicted)
w
w.grad
```
# Computing the Error
```
err = ((y_predicted - y)**2).sum()
print(err)
err.backward(retain_graph=True)
print(w.grad)
from graphviz import Digraph
def make_dot(var, params):
""" Produces Graphviz representation of PyTorch autograd graph
Blue nodes are the Variables that require grad, orange are Tensors
saved for backward in torch.autograd.Function
Args:
var: output Variable
params: dict of (name, Variable) to add names to node that
require grad (TODO: make optional)
"""
param_map = {id(v): k for k, v in params.items()}
print(param_map)
node_attr = dict(style='filled',
shape='box',
align='left',
fontsize='9',
ranksep='0.1',
height='0.2')
dot = Digraph(node_attr=node_attr, graph_attr=dict(size="12,12", rankdir="LR"))
seen = set()
def size_to_str(size):
return '('+(', ').join(['%d'% v for v in size])+')'
def add_nodes(var):
if var not in seen:
if torch.is_tensor(var):
dot.node(str(id(var)), size_to_str(var.size()), fillcolor='orange')
elif hasattr(var, 'variable'):
u = var.variable
node_name = '%s\n %s' % (param_map.get(id(u)), size_to_str(u.size()))
dot.node(str(id(var)), node_name, fillcolor='lightblue')
else:
dot.node(str(id(var)), str(type(var).__name__))
seen.add(var)
if hasattr(var, 'next_functions'):
for u in var.next_functions:
if u[0] is not None:
dot.edge(str(id(u[0])), str(id(var)))
add_nodes(u[0])
if hasattr(var, 'saved_tensors'):
for t in var.saved_tensors:
dot.edge(str(id(t)), str(id(var)))
add_nodes(t)
add_nodes(var.grad_fn)
return dot
```
# Computation Graph
```
make_dot(err, dict(w=w, x=x))
```
# PyTorch Autograd Summary
- tensors track gradients when created with `requires_grad=True` (the old Variable API has been merged into Tensor)
- PyTorch keeps track of derivatives
- computation graphs can be completely dynamic
- propagate derivatives backwards using `x.backward()`
- access gradients using `x.grad`
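The mechanism behind `backward()` and `.grad` can be illustrated with a toy scalar autograd in plain Python. This is only a sketch of the idea, not PyTorch's actual implementation: each operation records its inputs together with the local derivatives, and `backward()` walks the graph applying the chain rule.

```python
# Toy reverse-mode autodiff: each op records (parent, local_gradient) pairs.
class Scalar:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Scalar(self.value * other.value,
                      parents=[(self, other.value), (other, self.value)])

    def __add__(self, other):
        # d(a+b)/da = d(a+b)/db = 1
        return Scalar(self.value + other.value,
                      parents=[(self, 1.0), (other, 1.0)])

    def backward(self, upstream=1.0):
        # chain rule: accumulate upstream * local gradient into each parent
        self.grad += upstream
        for node, local in self._parents:
            node.backward(upstream * local)

x = Scalar(3.0)
y = x * x + x      # y = x^2 + x, so dy/dx = 2x + 1 = 7 at x = 3
y.backward()
print(x.grad)      # 7.0
```

In real PyTorch the same pattern appears as `y.backward()` followed by reading `x.grad`, with the graph built dynamically as operations execute.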
# SIMPLE LEARNING
```
from torch import optim
import torch.nn.functional as F
x, y = generate_pair()
w = torch.randn(1, 1, 7, 7)
w.requires_grad = True
for i in range(5000):
if w.grad is not None: w.grad.fill_(0)
y_predicted = F.conv2d(x[None, None, :, :], w, padding=3)
err = ((y_predicted - y[None, None, :, :])**2).sum()
err.backward()
w.data -= 1e-5 * w.grad
if i%1000==0: print(f"{i:6d} {err.item():.3f}")
# input, desired output, learned output via gradient descent
subplot(131); tshow(x); subplot(132); tshow(y); subplot(133); tshow(y_predicted)
subplot(121); tshow(w); subplot(122); plot(w[0, 0, 3, :].detach().numpy())
```
# Learning a Linear Filter with Layers
- above example used gradient descent using completely functional computations
- a `Conv2d` layer is the same as a linear filter
- let's see whether we can learn this using PyTorch
- this uses all the components we need to train more complex models in PyTorch
```
from torch import nn, optim
# the "model" is just a single convolutional layer
model = nn.Conv2d(1, 1, (17, 17), padding=8)
# the loss is MSE loss
criterion = nn.MSELoss()
x, y = generate_pair()
for i in range(5000):
optimizer = optim.SGD(model.parameters(), lr=0.1)
optimizer.zero_grad()
y_predicted = model(x[None, None, :, :])
loss = criterion(y[None, None, :, :], y_predicted)
if i%1000==0: print(i, loss.item())
loss.backward()
optimizer.step()
# display the learned kernel
parameters = list(model.parameters())
imshow(parameters[0].data[0,0].cpu().numpy())
```
# Torch "Modules" / "Layers"
```
import torch.nn.functional as F
from torch.nn import Parameter

class Linear(nn.Module):
    def __init__(self, ninput, noutput):
        super().__init__()  # required so the Parameter gets registered
        self.weights = Parameter(torch.randn(noutput, ninput))
    def forward(self, x):
        return F.linear(x, self.weights)
# NB: no "backward" method needed -- autograd derives it
```
# Composition of Torch Layers
```
model = nn.Sequential(
nn.Conv2d(1, 16, 3, padding=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(2, 2),
nn.Conv2d(16, 16, 3, padding=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(2, 2)
)
model
```
# SHAPE INFERENCE, INPUT, REORDER
# Manual Sizes
Keeping track of depths can be complicated in the presence of padding and reshaping.
```
model = nn.Sequential(
nn.Conv2d(1, 8, 3),
nn.ReLU(),
nn.Conv2d(8, 16, 3),
nn.ReLU(),
nn.Flatten(),
nn.Linear(2304, 10)
)
model(torch.randn(1, 1, 16, 16)).shape
```
# Shape Inference
Shape inference simplifies this.
```
from torchmore import layers, flex
model = nn.Sequential(
flex.Conv2d(8, 3),
nn.ReLU(),
flex.Conv2d(16, 3),
nn.ReLU(),
nn.Flatten(),
flex.Linear(10)
)
model(torch.randn(1, 1, 16, 16)).shape
```
# How `flex` works
`flex` simply delays allocation of layers until the first forward pass.
```
model = nn.Sequential(
flex.Conv2d(16, 3),
nn.ReLU(),
nn.Flatten(),
flex.Linear(10)
)
model
with torch.no_grad():
model.forward(torch.randn(1, 1, 64, 64))
model
```
# Converting to Standard Models
`flex.freeze` turns a flex model into a standard model.
```
flex.freeze(model)
model
```
# Generic `Flex` Modules
```
model = nn.Sequential(flex.Flex(lambda x: nn.Conv2d(x.size(1), 10, 3)))
print(model)
model.forward(torch.rand(1, 1, 8, 8))
print(model)
flex.freeze(model)
print(model)
```
# Building Abstractions
Clever use of Python syntax makes it easy to build complex models out of building blocks.
```
def conv2d(d, r=3, mp=None):
result = [
flex.Conv2d(16, r, padding=r//2),
flex.BatchNorm2d(),
nn.ReLU()
]
if mp is not None: result += [nn.MaxPool2d(mp)]
return result
model = nn.Sequential(
*conv2d(16),
*conv2d(32, mp=2)
)
flex.shape_inference(model, (1, 1, 64, 64))
model
```
# Input Layers
Inputs to models need to have a particular order of the axes, size requirements, device requirements, and value ranges. The `Input` layer helps keeping track of these and enforcing them.
```
model = nn.Sequential(
layers.Input(assume="BDHW", sizes=((1, 256), 3, None, None), dtype=torch.float32),
*conv2d(16, 7, mp=2),
*conv2d(32, mp=2)
)
flex.shape_inference(model, (1, 3, 64, 64))
model
```
# `Fun` and `Reorder`
- `Fun` allows us to put simple functional expressions inside a network
- `Reorder` allows axis permutation with more symbolic names
```
model = nn.Sequential(
layers.Input("BDHW", sizes=((1, 256), 3, None, None), dtype=torch.float32),
*conv2d(16, 7, mp=2),
*conv2d(32, mp=2),
layers.Fun("lambda x: x.sum(1)"),
layers.Reorder("BDL", "LBD")
)
flex.shape_inference(model, (1, 3, 64, 64))
model
```
# Wrapper-Style Modules
Some common modules "wrap around" others; these occur in Resnet, U-net, etc.
```
model = nn.Sequential(
layers.Parallel(
flex.Conv2d(8, 11, padding=5),
flex.Conv2d(64, 3, padding=1)
),
nn.ReLU(),
flex.Conv2d(32, 3, padding=1)
)
flex.shape_inference(model, (1, 1, 64, 64))
model
```
# Wrapper Style Modules
- `Parallel(*args)` -- run modules in parallel and concatenate results
- `Additive(*args)` -- run modules in parallel and add results
- `KeepSize(*args)` -- run contained module and resize output to input size
- `UnetLayer(...,sub=...)` -- run `MaxPool2d` down, down module, `TransposeConv2d` up
# Summary of Torch "Modules" and Training
- training uses modules, criteria, and optimizers
- modules (nn.Module) keep track of parameters and compute in the forward method
- criteria compute the differences between two outputs and return a scalar loss
- optimizers initiate gradient computation and then update the model parameters
- `torchmore` provides shape inference and some convenience modules
# PYTORCH VS OTHERS
# Common Deep Learning Frameworks
Primary:
- TensorFlow
- PyTorch (old: Torch)
- Caffe 2 (old: Caffe)
- mxnet
- Chainer
Derived:
- Keras
- Theano
# TensorFlow
- superficially like PyTorch, but internally very different
- core is a dataflow language, completely separate from Python
Issues:
- tons of code between you and the kernels
- many custom written Google kernels
- memory hungry
- RNN support is worse
# PyTorch
Issues:
- Python multithreading is poor, therefore...
- limited ability to write high performance multi-GPU code
Potential Solutions:
- port to IronPython (C#, .NET)
- future, better Python JIT/compilers
# Future?
- both PyTorch and Tensorflow have serious limitations
- either they will change substantially, or new frameworks will come along
# Introduction to JumpStart - Image Classification
---
Welcome to Amazon [SageMaker JumpStart](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html)! You can use JumpStart to solve many Machine Learning tasks through one-click in SageMaker Studio, or through [SageMaker JumpStart API](https://sagemaker.readthedocs.io/en/stable/overview.html#use-prebuilt-models-with-sagemaker-jumpstart).
In this demo notebook, we demonstrate how to use the JumpStart API for Image Classification. Image Classification refers to classifying an image to one of the class labels of the training dataset. We demonstrate two use cases of Image Classification models:
* How to use a model pre-trained on ImageNet dataset to classify an image. [ImageNetLabels](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt).
* How to fine-tune a pre-trained model to a custom dataset, and then run inference on the fine-tuned model.
Note: This notebook was tested on an ml.t3.medium instance in Amazon SageMaker Studio with the Python 3 (Data Science) kernel, and in an Amazon SageMaker Notebook instance with the conda_python3 kernel.
---
1. [Set Up](#1.-Set-Up)
2. [Select a pre-trained model](#2.-Select-a-pre-trained-model)
3. [Run inference on the pre-trained model](#3.-Run-inference-on-the-pre-trained-model)
* [Retrieve JumpStart Artifacts & Deploy an Endpoint](#3.1.-Retrieve-JumpStart-Artifacts-&-Deploy-an-Endpoint)
* [Download example images for inference](#3.2.-Download-example-images-for-inference)
* [Query endpoint and parse response](#3.3.-Query-endpoint-and-parse-response)
* [Clean up the endpoint](#3.4.-Clean-up-the-endpoint)
4. [Fine-tune the pre-trained model on a custom dataset](#4.-Fine-tune-the-pre-trained-model-on-a-custom-dataset)
* [Retrieve JumpStart Training artifacts](#4.1.-Retrieve-JumpStart-Training-artifacts)
* [Set Training parameters](#4.2.-Set-Training-parameters)
* [Start Training](#4.3.-Start-Training)
* [Deploy & run Inference on the fine-tuned model](#4.4.-Deploy-&-run-Inference-on-the-fine-tuned-model)
## 1. Set Up
***
Before executing the notebook, there are some initial steps required for setup. This notebook requires latest version of sagemaker and ipywidgets.
***
```
!pip install sagemaker ipywidgets --upgrade --quiet
```
---
To train and host on Amazon Sagemaker, we need to setup and authenticate the use of AWS services. Here, we use the execution role associated with the current notebook instance as the AWS account role with SageMaker access. It has necessary permissions, including access to your data in S3.
---
```
import sagemaker, boto3, json
from sagemaker import get_execution_role
aws_role = get_execution_role()
aws_region = boto3.Session().region_name
sess = sagemaker.Session()
```
## 2. Select a pre-trained model
***
You can continue with the default model, or can choose a different model from the dropdown generated upon running the next cell. A complete list of JumpStart models can also be accessed at [JumpStart Models](https://sagemaker.readthedocs.io/en/stable/doc_utils/jumpstart.html#).
***
```
model_id, model_version = (
"pytorch-ic-mobilenet-v2",
"*",
)
```
***
[Optional] Select a different JumpStart model. Here, we download jumpstart model_manifest file from the jumpstart s3 bucket, filter-out all the Image Classification models and select a model for inference.
***
```
import IPython
from ipywidgets import Dropdown
# download JumpStart model_manifest file.
boto3.client("s3").download_file(
f"jumpstart-cache-prod-{aws_region}", "models_manifest.json", "models_manifest.json"
)
with open("models_manifest.json", "rb") as json_file:
model_list = json.load(json_file)
# filter-out all the Image Classification models from the manifest list.
ic_models_all_versions = [
    model["model_id"] for model in model_list if "-ic-" in model["model_id"]
]
# de-duplicate while preserving order
ic_models = list(dict.fromkeys(ic_models_all_versions))
# display the model-ids in a dropdown, for user to select a model.
dropdown = Dropdown(
options=ic_models,
value=model_id,
description="JumpStart Image Classification Models:",
style={"description_width": "initial"},
layout={"width": "max-content"},
)
display(IPython.display.Markdown("## Select a JumpStart pre-trained model from the dropdown below"))
display(dropdown)
```
## 3. Run inference on the pre-trained model
***
Using JumpStart, we can perform inference on the pre-trained model, even without fine-tuning it first on a custom dataset. For this example, that means on an input image, predicting the class label from one of the 1000 classes of the ImageNet dataset.
[ImageNetLabels](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt).
***
### 3.1. Retrieve JumpStart Artifacts & Deploy an Endpoint
***
We retrieve the deploy_image_uri, deploy_source_uri, and base_model_uri for the pre-trained model. To host the pre-trained base-model, we create an instance of [`sagemaker.model.Model`](https://sagemaker.readthedocs.io/en/stable/api/inference/model.html) and deploy it.
***
```
from sagemaker import image_uris, model_uris, script_uris
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.utils import name_from_base
# model_version="*" fetches the latest version of the model.
infer_model_id, infer_model_version = dropdown.value, "*"
endpoint_name = name_from_base(f"jumpstart-example-{infer_model_id}")
inference_instance_type = "ml.m5.xlarge"
# Retrieve the inference docker container uri.
deploy_image_uri = image_uris.retrieve(
region=None,
framework=None,
image_scope="inference",
model_id=infer_model_id,
model_version=infer_model_version,
instance_type=inference_instance_type,
)
# Retrieve the inference script uri.
deploy_source_uri = script_uris.retrieve(
model_id=infer_model_id, model_version=infer_model_version, script_scope="inference"
)
# Retrieve the base model uri.
base_model_uri = model_uris.retrieve(
model_id=infer_model_id, model_version=infer_model_version, model_scope="inference"
)
# Create the SageMaker model instance. Note that we need to pass Predictor class when we deploy model through Model class,
# for being able to run inference through the sagemaker API.
model = Model(
image_uri=deploy_image_uri,
source_dir=deploy_source_uri,
model_data=base_model_uri,
entry_point="inference.py",
role=aws_role,
predictor_cls=Predictor,
name=endpoint_name,
)
# deploy the Model.
base_model_predictor = model.deploy(
initial_instance_count=1,
instance_type=inference_instance_type,
endpoint_name=endpoint_name,
)
```
### 3.2. Download example images for inference
***
We download example images from the JumpStart S3 bucket.
***
```
s3_bucket = f"jumpstart-cache-prod-{aws_region}"
key_prefix = "inference-notebook-assets"
def download_from_s3(images):
for filename, image_key in images.items():
boto3.client("s3").download_file(s3_bucket, f"{key_prefix}/{image_key}", filename)
images = {"img1.jpg": "cat.jpg", "img2.jpg": "dog.jpg"}
download_from_s3(images)
```
### 3.3. Query endpoint and parse response
***
Input to the endpoint is a single image in binary format. Response from the endpoint is a dictionary containing the top-1 predicted class label, and a list of class probabilities.
***
```
from IPython.core.display import HTML
def predict_top_k_labels(probabilities, labels, k):
topk_prediction_ids = sorted(
range(len(probabilities)), key=lambda index: probabilities[index], reverse=True
)[:k]
topk_class_labels = ", ".join([labels[id] for id in topk_prediction_ids])
return topk_class_labels
for image_filename in images.keys():
with open(image_filename, "rb") as file:
img = file.read()
query_response = base_model_predictor.predict(
img, {"ContentType": "application/x-image", "Accept": "application/json;verbose"}
)
model_predictions = json.loads(query_response)
labels, probabilities = model_predictions["labels"], model_predictions["probabilities"]
top5_class_labels = predict_top_k_labels(probabilities, labels, 5)
display(
HTML(
f'<img src={image_filename} alt={image_filename} align="left" style="width: 250px;"/>'
f"<figcaption>Top-5 predictions: {top5_class_labels} </figcaption>"
)
)
```
### 3.4. Clean up the endpoint
```
# Delete the SageMaker endpoint and the attached resources
base_model_predictor.delete_model()
base_model_predictor.delete_endpoint()
```
## 4. Fine-tune the pre-trained model on a custom dataset
***
Previously, we saw how to run inference on a pre-trained model. Next, we discuss how a model can be fine-tuned on a custom dataset with any number of classes.
The model available for fine-tuning attaches a classification layer to the corresponding feature extractor model available on TensorFlow/PyTorch hub, and initializes the layer parameters to random values. The output dimension of the classification layer
is determined based on the number of classes in the input data. The fine-tuning step fine-tunes the model parameters. The objective is to minimize classification error on the input data. The model returned by fine-tuning can be further deployed for inference. Below are the instructions for how the training data should be formatted for input to the model.
- **Input:** A directory with as many sub-directories as the number of classes.
- Each sub-directory should have images belonging to that class in .jpg format.
- **Output:** A trained model that can be deployed for inference.
- A label mapping file is saved along with the trained model file on the s3 bucket.
The input directory should look like below if
the training data contains images from two classes: roses and dandelion. The s3 path should look like
`s3://bucket_name/input_directory/`. Note the trailing `/` is required. The folder names ('roses', 'dandelion') and the .jpg filenames
can be anything. The label mapping file that is saved along with the trained model on the s3 bucket maps the
folder names 'roses' and 'dandelion' to the indices in the list of class probabilities the model outputs.
The mapping follows alphabetical ordering of the folder names. In the example below, index 0 in the model output list
would correspond to 'dandelion' and index 1 would correspond to 'roses'.
input_directory
|--roses
|--abc.jpg
|--def.jpg
|--dandelion
|--ghi.jpg
|--jkl.jpg
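The alphabetical ordering of the label mapping described above can be sketched in a few lines (the folder names are the illustrative ones from the example):

```python
# Class indices follow the alphabetical order of the folder names.
folders = ["roses", "dandelion"]
label_map = {name: index for index, name in enumerate(sorted(folders))}
print(label_map)  # 'dandelion' sorts first, so it gets index 0
```

So index 0 in the model's output probability list corresponds to 'dandelion' and index 1 to 'roses'.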
We provide tf_flowers dataset as a default dataset for fine-tuning the model.
tf_flower comprises images of five types of flowers.
The dataset has been downloaded from [TensorFlow](https://www.tensorflow.org/datasets/catalog/tf_flowers).
[Apache 2.0 License](https://jumpstart-cache-prod-us-west-2.s3-us-west-2.amazonaws.com/licenses/Apache-License/LICENSE-2.0.txt).
Citation:
<sub><sup>
@ONLINE {tfflowers,
author = "The TensorFlow Team",
title = "Flowers",
month = "jan",
year = "2019",
url = "http://download.tensorflow.org/example_images/flower_photos.tgz" }
</sup></sub> source: [TensorFlow Hub](model_url).
***
### 4.1. Retrieve JumpStart Training artifacts
***
Here, for the selected model, we retrieve the training docker container, the training algorithm source, the pre-trained base model, and a python dictionary of the training hyper-parameters that the algorithm accepts with their default values. Note that model_version="*" fetches the latest model. Also, we need to specify the training_instance_type to fetch train_image_uri.
***
```
from sagemaker import image_uris, model_uris, script_uris, hyperparameters
model_id, model_version = dropdown.value, "*"
training_instance_type = "ml.g4dn.xlarge"
# Retrieve the docker image
train_image_uri = image_uris.retrieve(
region=None,
framework=None,
model_id=model_id,
model_version=model_version,
image_scope="training",
instance_type=training_instance_type,
)
# Retrieve the training script
train_source_uri = script_uris.retrieve(
model_id=model_id, model_version=model_version, script_scope="training"
)
# Retrieve the pre-trained model tarball to further fine-tune
train_model_uri = model_uris.retrieve(
model_id=model_id, model_version=model_version, model_scope="training"
)
```
### 4.2. Set Training parameters
***
Now that we are done with all the setup that is needed, we are ready to fine-tune our Image Classification model. To begin, let us create a [``sageMaker.estimator.Estimator``](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) object. This estimator will launch the training job.
There are two kinds of parameters that need to be set for training.
The first set are the parameters for the training job: (i) the training data path, the S3 folder in which the input data is stored; (ii) the output path, the S3 folder in which the training output is stored; (iii) the training instance type, the type of machine on which to run the training (typically a GPU instance). We defined the training instance type above in order to fetch the correct train_image_uri.
The second set are the algorithm-specific training hyper-parameters.
***
```
# Sample training data is available in this bucket
training_data_bucket = f"jumpstart-cache-prod-{aws_region}"
training_data_prefix = "training-datasets/tf_flowers/"
training_dataset_s3_path = f"s3://{training_data_bucket}/{training_data_prefix}"
output_bucket = sess.default_bucket()
output_prefix = "jumpstart-example-ic-training"
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"
```
***
For algorithm specific hyper-parameters, we start by fetching python dictionary of the training hyper-parameters that the algorithm accepts with their default values. This can then be overridden to custom values.
***
```
from sagemaker import hyperparameters
# Retrieve the default hyper-parameters for fine-tuning the model
hyperparameters = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version)
# [Optional] Override default hyperparameters with custom values
hyperparameters["epochs"] = "5"
print(hyperparameters)
```
### 4.3. Start Training
***
We start by creating the estimator object with all the required assets and then launch the training job.
***
```
from sagemaker.estimator import Estimator
from sagemaker.utils import name_from_base
training_job_name = name_from_base(f"jumpstart-example-{model_id}-transfer-learning")
# Create SageMaker Estimator instance
ic_estimator = Estimator(
role=aws_role,
image_uri=train_image_uri,
source_dir=train_source_uri,
model_uri=train_model_uri,
entry_point="transfer_learning.py",
instance_count=1,
instance_type=training_instance_type,
max_run=360000,
hyperparameters=hyperparameters,
output_path=s3_output_location,
)
# Launch a SageMaker Training job by passing s3 path of the training data
ic_estimator.fit({"training": training_dataset_s3_path}, logs=True)
```
### 4.4. Deploy & run Inference on the fine-tuned model
***
A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class label of an image. We follow the same steps as in [3. Run inference on the pre-trained model](#3.-Run-inference-on-the-pre-trained-model). We start by retrieving the JumpStart artifacts for deploying an endpoint. However, instead of deploying a base model as we did for `base_model_predictor`, we deploy the `ic_estimator` that we fine-tuned.
***
```
inference_instance_type = "ml.m5.xlarge"
# Retrieve the inference docker container uri
deploy_image_uri = image_uris.retrieve(
region=None,
framework=None,
image_scope="inference",
model_id=model_id,
model_version=model_version,
instance_type=inference_instance_type,
)
# Retrieve the inference script uri
deploy_source_uri = script_uris.retrieve(
model_id=model_id, model_version=model_version, script_scope="inference"
)
endpoint_name = name_from_base(f"jumpstart-example-FT-{model_id}-")
# Use the estimator from the previous step to deploy to a SageMaker endpoint
finetuned_predictor = ic_estimator.deploy(
initial_instance_count=1,
instance_type=inference_instance_type,
entry_point="inference.py",
image_uri=deploy_image_uri,
source_dir=deploy_source_uri,
endpoint_name=endpoint_name,
)
```
---
Next, we download example images of a rose and a sunflower from the S3 bucket for inference.
---
```
s3_bucket = f"jumpstart-cache-prod-{aws_region}"
key_prefix = "training-datasets/tf_flowers"
def download_from_s3(images):
for filename, image_key in images.items():
boto3.client("s3").download_file(s3_bucket, f"{key_prefix}/{image_key}", filename)
flower_images = {
"img1.jpg": "roses/10503217854_e66a804309.jpg",
"img2.jpg": "sunflowers/1008566138_6927679c8a.jpg",
}
download_from_s3(flower_images)
```
---
Next, we query the finetuned model, parse the response and display the predictions.
---
```
from IPython.core.display import HTML
for image_filename in flower_images.keys():
with open(image_filename, "rb") as file:
img = file.read()
query_response = finetuned_predictor.predict(
img, {"ContentType": "application/x-image", "Accept": "application/json;verbose"}
)
model_predictions = json.loads(query_response)
predicted_label = model_predictions["predicted_label"]
display(
HTML(
f'<img src={image_filename} alt={image_filename} align="left" style="width: 250px;"/>'
f"<figcaption>Predicted Label: {predicted_label}</figcaption>"
)
)
```
---
Next, we clean up the deployed endpoint.
---
```
# Delete the SageMaker endpoint and the attached resources
finetuned_predictor.delete_model()
finetuned_predictor.delete_endpoint()
```
# Object Oriented Programming
Object Oriented Programming (OOP) tends to be one of the major obstacles for beginners when they are first starting to learn Python.
There are many, many tutorials and lessons covering OOP so feel free to Google search other lessons, and I have also put some links to other useful tutorials online at the bottom of this Notebook.
For this lesson we will construct our knowledge of OOP in Python by building on the following topics:
* Objects
* Using the *class* keyword
* Creating class attributes
* Creating methods in a class
* Learning about Inheritance
* Learning about Polymorphism
* Learning about Special Methods for classes
Let's start the lesson by reviewing basic Python objects. For example:
```
lst = [1,2,3]
```
Remember how we could call methods on a list?
```
lst.count(2)
```
What we will basically be doing in this lecture is exploring how we could create an Object type like a list. We've already learned about how to create functions. So let's explore Objects in general:
## Objects
In Python, *everything is an object*. Remember from previous lectures we can use type() to check the type of object something is:
```
print(type(1))
print(type([]))
print(type(()))
print(type({}))
```
So we know all these things are objects, so how can we create our own Object types? That is where the <code>class</code> keyword comes in.
## class
User defined objects are created using the <code>class</code> keyword. The class is a blueprint that defines the nature of a future object. From classes we can construct instances. An instance is a specific object created from a particular class. For example, above we created the object <code>lst</code> which was an instance of a list object.
Let's see how we can use <code>class</code>:
```
# Create a new object type called Sample
class Sample:
pass
# Instance of Sample
x = Sample()
print(type(x))
```
By convention we give classes a name that starts with a capital letter. Note how <code>x</code> is now the reference to our new instance of a Sample class. In other words, we **instantiate** the Sample class.
Inside of the class we currently just have pass. But we can define class attributes and methods.
An **attribute** is a characteristic of an object.
A **method** is an operation we can perform with the object.
For example, we can create a class called Dog. An attribute of a dog may be its breed or its name, while a method of a dog may be defined by a .bark() method which returns a sound.
Let's get a better understanding of attributes through an example.
## Attributes
The syntax for creating an attribute is:
self.attribute = something
There is a special method called:
__init__()
This method is used to initialize the attributes of an object. For example:
```
class Dog:
def __init__(self,breed):
self.breed = breed
sam = Dog(breed='Lab')
frank = Dog(breed='Huskie')
```
Let's break down what we have above. The special method
__init__()
is called automatically right after the object has been created:
def __init__(self, breed):
Each attribute in a class definition begins with a reference to the instance object. It is by convention named self. The breed is the argument. The value is passed during the class instantiation.
self.breed = breed
Now we have created two instances of the Dog class. With two breed types, we can then access these attributes like this:
```
sam.breed
frank.breed
```
Note how we don't have any parentheses after breed; this is because it is an attribute and doesn't take any arguments.
In Python there are also *class object attributes*. These Class Object Attributes are the same for any instance of the class. For example, we could create the attribute *species* for the Dog class. Dogs, regardless of their breed, name, or other attributes, will always be mammals. We apply this logic in the following manner:
```
class Dog:
# Class Object Attribute
species = 'mammal'
def __init__(self,breed,name):
self.breed = breed
self.name = name
sam = Dog('Lab','Sam')
sam.name
```
Note that the Class Object Attribute is defined outside of any methods in the class. Also by convention, we place them first before the init.
```
sam.species
```
## Methods
Methods are functions defined inside the body of a class. They are used to perform operations with the attributes of our objects. Methods are a key concept of the OOP paradigm. They are essential to dividing responsibilities in programming, especially in large applications.
You can basically think of methods as functions acting on an Object that take the Object itself into account through its *self* argument.
Let's go through an example of creating a Circle class:
```
class Circle:
pi = 3.14
# Circle gets instantiated with a radius (default is 1)
def __init__(self, radius=1):
self.radius = radius
self.area = radius * radius * Circle.pi
# Method for resetting Radius
def setRadius(self, new_radius):
self.radius = new_radius
self.area = new_radius * new_radius * self.pi
# Method for getting Circumference
def getCircumference(self):
return self.radius * self.pi * 2
c = Circle(3)
print('Radius is: ',c.radius)
print('Area is: ',c.area)
print('Circumference is: ',c.getCircumference())
```
In the \__init__ method above, in order to calculate the area attribute, we had to call Circle.pi. This is because the object does not yet have its own .pi attribute, so we call the Class Object Attribute pi instead.<br>
In the setRadius method, however, we'll be working with an existing Circle object that does have its own pi attribute. Here we can use either Circle.pi or self.pi.<br><br>
Now let's change the radius and see how that affects our Circle object:
```
c.setRadius(2)
print('Radius is: ',c.radius)
print('Area is: ',c.area)
print('Circumference is: ',c.getCircumference())
```
Great! Notice how we used self. notation to reference attributes of the class within the method calls. Review how the code above works and try creating your own method.
## Inheritance
Inheritance is a way to form new classes using classes that have already been defined. The newly formed classes are called derived classes, the classes that we derive from are called base classes. Important benefits of inheritance are code reuse and reduction of complexity of a program. The derived classes (descendants) override or extend the functionality of base classes (ancestors).
Let's see an example by incorporating our previous work on the Dog class:
```
class Animal:
def __init__(self):
print("Animal created")
def whoAmI(self):
print("Animal")
def eat(self):
print("Eating")
class Dog(Animal):
def __init__(self):
Animal.__init__(self)
print("Dog created")
def whoAmI(self):
print("Dog")
def bark(self):
print("Woof!")
d = Dog()
d.whoAmI()
d.eat()
d.bark()
```
In this example, we have two classes: Animal and Dog. The Animal is the base class, the Dog is the derived class.
The derived class inherits the functionality of the base class.
* It is shown by the eat() method.
The derived class modifies existing behavior of the base class.
* shown by the whoAmI() method.
Finally, the derived class extends the functionality of the base class, by defining a new bark() method.
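These relationships can also be checked at runtime with Python's built-in introspection functions. A minimal sketch, using a stripped-down version of the classes above:

```python
class Animal:
    def eat(self):
        return "Eating"

class Dog(Animal):
    def bark(self):
        return "Woof!"

d = Dog()

# A Dog instance is also an Animal, and Dog is a subclass of Animal
print(isinstance(d, Dog))       # True
print(isinstance(d, Animal))    # True
print(issubclass(Dog, Animal))  # True

# The inherited eat() method is available on the derived instance
print(d.eat())                  # Eating
```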
## Polymorphism
We've learned that while functions can take in different arguments, methods belong to the objects they act on. In Python, *polymorphism* refers to the way in which different object classes can share the same method name, and those methods can be called from the same place even though a variety of different objects might be passed in. The best way to explain this is by example:
```
class Dog:
def __init__(self, name):
self.name = name
def speak(self):
return self.name+' says Woof!'
class Cat:
def __init__(self, name):
self.name = name
def speak(self):
return self.name+' says Meow!'
niko = Dog('Niko')
felix = Cat('Felix')
print(niko.speak())
print(felix.speak())
```
Here we have a Dog class and a Cat class, and each has a `.speak()` method. When called, each object's `.speak()` method returns a result unique to the object.
There are a few different ways to demonstrate polymorphism. First, with a for loop:
```
for pet in [niko,felix]:
print(pet.speak())
```
Another is with functions:
```
def pet_speak(pet):
print(pet.speak())
pet_speak(niko)
pet_speak(felix)
```
In both cases we were able to pass in different object types, and we obtained object-specific results from the same mechanism.
A more common practice is to use abstract classes and inheritance. An abstract class is one that never expects to be instantiated. For example, we will never have an Animal object, only Dog and Cat objects, although Dogs and Cats are derived from Animals:
```
class Animal:
def __init__(self, name): # Constructor of the class
self.name = name
def speak(self): # Abstract method, defined by convention only
raise NotImplementedError("Subclass must implement abstract method")
class Dog(Animal):
def speak(self):
return self.name+' says Woof!'
class Cat(Animal):
def speak(self):
return self.name+' says Meow!'
fido = Dog('Fido')
isis = Cat('Isis')
print(fido.speak())
print(isis.speak())
```
Real life examples of polymorphism include:
* opening different file types - different tools are needed to display Word, pdf and Excel files
* adding different objects - the `+` operator performs arithmetic and concatenation
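The `+` example is visible directly in plain Python: the same operator dispatches to different behavior depending on the operand types.

```python
# One operator, three behaviors, chosen by the operand types
print(1 + 2)          # arithmetic addition -> 3
print('a' + 'b')      # string concatenation -> 'ab'
print([1, 2] + [3])   # list concatenation -> [1, 2, 3]
```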
## Special Methods
Finally let's go over special methods. Classes in Python can implement certain operations with special method names. These methods are not actually called directly but by Python specific language syntax. For example let's create a Book class:
```
class Book:
def __init__(self, title, author, pages):
print("A book is created")
self.title = title
self.author = author
self.pages = pages
def __str__(self):
return "Title: %s, author: %s, pages: %s" %(self.title, self.author, self.pages)
def __len__(self):
return self.pages
def __del__(self):
print("A book is destroyed")
book = Book("Python Rocks!", "Jose Portilla", 159)
#Special Methods
print(book)
print(len(book))
del book
```
The `__init__()`, `__str__()`, `__len__()` and `__del__()` methods
These special methods are defined by their use of underscores. They allow us to use Python specific functions on objects created through our class.
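As another illustration, special methods also let our own objects participate in operators like `+` and `==`. This is a sketch with a made-up `Vector` class, not one of the lecture's examples:

```python
class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # Called by the + operator
        return Vector(self.x + other.x, self.y + other.y)

    def __eq__(self, other):
        # Called by the == operator
        return self.x == other.x and self.y == other.y

    def __str__(self):
        return f"Vector({self.x}, {self.y})"

v = Vector(1, 2) + Vector(3, 4)
print(v)                   # Vector(4, 6)
print(v == Vector(4, 6))   # True
```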
**Great! After this lecture you should have a basic understanding of how to create your own objects with class in Python. You will be utilizing this heavily in your next milestone project!**
For more great resources on this topic, check out:
[Jeff Knupp's Post](https://jeffknupp.com/blog/2014/06/18/improve-your-python-python-classes-and-object-oriented-programming/)
[Mozilla's Post](https://developer.mozilla.org/en-US/Learn/Python/Quickly_Learn_Object_Oriented_Programming)
[Tutorial's Point](http://www.tutorialspoint.com/python/python_classes_objects.htm)
[Official Documentation](https://docs.python.org/3/tutorial/classes.html)
<img src="https://ucfai.org/groups/supplementary/sp20/02-06-stats-intro/stats-intro/banner.png">
<div class="col-12">
<h1> Introduction to Statistics, Featuring Datascience </h1>
<hr>
</div>
<div style="line-height: 2em;">
<p>by:
<strong> None</strong>
(<a href="https://github.com/calvinyong">@calvinyong</a>)
<strong> None</strong>
(<a href="https://github.com/jordanstarkey95">@jordanstarkey95</a>)
on 2020-02-06</p>
</div>
## Purpose
The goal of this workshop is to provide the essential statistical knowledge required for data science.
To demonstrate these essentials, we'll look at the Boston Housing dataset.
This workshop assumes you have reviewed the supplementary [Python3 workshop](https://ucfai.org/supplementary/sp20/math-primer-python-bootcamp) and core [Linear Regression workshop](https://ucfai.org/core/sp20/linear-regression).
## Introduction
Lets look at how statistical methods are used in an applied machine learning project:
* Problem Framing: Requires the use of exploratory data analysis and data mining.
* Data Understanding: Requires the use of summary statistics and data visualization.
* Data Cleaning: Requires the use of outlier detection, imputation and more.
* Data Selection: Requires the use of data sampling and feature selection methods.
* Data Preparation: Requires the use of data transforms, scaling, encoding and much more.
* Model Evaluation: Requires experimental design and resampling methods.
* Model Configuration: Requires the use of statistical hypothesis tests and estimation statistics.
* Model Selection: Requires the use of statistical hypothesis tests and estimation statistics.
* Model Presentation: Requires the use of estimation statistics such as confidence intervals.
* Model Predictions: Requires the use of estimation statistics such as prediction intervals
[Source: https://machinelearningmastery.com/statistics_for_machine_learning/]
## Descriptive and Inferential Statistics
**Descriptive statistics** identify patterns in the data, but they don't allow for making hypotheses about the data.
Within descriptive statistics, there are three kinds of measures used to describe the data: *central tendency*, *variability*, and *correlation*.
* Central tendency tells you about the centers of the data. Useful measures include the mean, median, and mode.
* Variability tells you about the spread of the data. Useful measures include variance and standard deviation.
* Correlation or joint variability tells you about the relation between a pair of variables in a dataset. Useful measures include covariance and the correlation coefficient.
**Inferential statistics** allow us to make hypotheses (or inferences) about a population based on a sample drawn from it.
In statistics, the **population** is a set of all elements or items that you’re interested in. Populations are often vast, which makes them inappropriate for collecting and analyzing data. That’s why statisticians usually try to make some conclusions about a population by choosing and examining a representative subset of that population.
This subset of a population is called a **sample**. Ideally, the sample should preserve the essential statistical features of the population to a satisfactory extent. That way, you’ll be able to use the sample to glean conclusions about the population.
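A quick numerical sketch of the population/sample relationship, using made-up data and NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is an entire population we could never measure exhaustively
population = rng.normal(loc=50, scale=10, size=1_000_000)

# A random sample should preserve the population's essential statistics
sample = rng.choice(population, size=1_000, replace=False)

print(population.mean(), population.std())
print(sample.mean(), sample.std())  # close to the population values, not identical
```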
```
import pandas as pd
import numpy as np
from sklearn.datasets import load_boston
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
## Load the Boston dataset into a variable called boston
boston = load_boston()
## Separate the features from the target
x = boston.data
y = boston.target
```
To view the dataset in a standard tabular format with the all the feature names, you will convert this into a pandas dataframe.
```
## Take the columns separately in a variable
columns = boston.feature_names
## Create the Pandas dataframe from the sklearn dataset
boston_df = pd.DataFrame(boston.data)
boston_df.columns = columns
```
## Descriptive Statistics
This portion serves as a very basic primer on Descriptive statistics and will explain concepts which are fundamental to understanding Inferential Statistics, its tools and techniques. We will be using Boston House Price dataset:
https://www.kaggle.com/c/boston-housing
Here is the Dataset description:
* crim
* per capita crime rate by town.
* zn
* proportion of residential land zoned for lots over 25,000 sq.ft.
* indus
* proportion of non-retail business acres per town.
* chas
* Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
* nox
* nitrogen oxides concentration (parts per 10 million).
* rm
* average number of rooms per dwelling.
* age
* proportion of owner-occupied units built prior to 1940.
* dis
* weighted mean of distances to five Boston employment centres.
* rad
* index of accessibility to radial highways.
* tax
* full-value property-tax rate per \$10,000.
* ptratio
* pupil-teacher ratio by town.
* black
* 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
* lstat
* lower status of the population (percent).
* medv
* median value of owner-occupied homes in \$1000s.
### Summary Statistics
To begin learning about the sample, we use pandas' `describe` method, as seen below. The column headers in bold text represent the variables we will be exploring. Each row header represents a descriptive statistic about the corresponding column.
```
boston_df.describe()
```
`describe` isn't particularly enlightening on the distributions of our data, but it can help us figure out how to approach our visualization techniques. Before we explore essential graphs for exploring our data, let's use a few more important pandas methods to aid in our exploratory data analysis task.
```
print ("Rows : " , boston_df.shape[0])
print ("Columns : " , boston_df.shape[1])
print ("\nFeatures : \n" , boston_df.columns.tolist())
print ("\nMissing values : ", boston_df.isnull().sum().values.sum())
print ("\nUnique values : \n",boston_df.nunique())
print('\n')
print(boston_df.head())
```
We first show the shape of our dataset. We have 506 rows for our 13 features (columns). This is a relatively nice dataset in that there aren't many missing values. A future supplementary lecture on preprocessing will cover techniques for dealing with missing values.
We can see that there is a feature (CHAS) which has only 2 unique values. This could indicate that it is a categorical variable. There are three types of statistical data we may be dealing with:
* Numerical (Quantitative) data have meaning as a measurement, such as a person’s height, weight, IQ, or blood pressure; or they’re a count, such as the number of stock shares a person owns or how many teeth a dog has. Numerical data can be further broken into two types: discrete and continuous.
* Discrete data represent items that can be counted; they take on possible values that can be listed out. The list of possible values may be fixed (also called finite); or it may go from 0, 1, 2, on to infinity (making it countably infinite). For example, the number of heads in 100 coin flips takes on values from 0 through 100 (finite case), but the number of flips needed to get 100 heads takes on values from 100 (the fastest scenario) on up to infinity (if you never get to that 100th heads).
* Continuous data represent measurements; their possible values cannot be counted and can only be described using intervals on the real number line. For example, the exact amount of gas purchased at the pump for cars with 20-gallon tanks would be continuous data from 0 gallons to 20 gallons, represented by the interval [0, 20], inclusive. Continuous data can be thought of as being uncountably infinite.
* Categorical (Qualitative) data represent characteristics such as a person’s gender, marital status, hometown, or the types of movies they like. Categorical data can take on numerical values (such as “1” indicating married and “2” indicating unmarried), but those numbers don’t have mathematical meaning. The process of giving these mathematical meaning for our model to understand is variable encoding. This will be covered in the preprocessing supplementary lecture.
* Ordinal data mixes numerical and categorical data. The data fall into categories, but the numbers placed on the categories have meaning. For example, rating a restaurant on a scale from 0 (lowest) to 4 (highest) stars gives ordinal data. Ordinal data are often treated as categorical, where the groups are ordered when graphs and charts are made. However, unlike categorical data, the numbers do have mathematical meaning. For example, if you survey 100 people and ask them to rate a restaurant on a scale from 0 to 4, taking the average of the 100 responses will have meaning. This would not be the case with categorical data.
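These distinctions map onto pandas dtypes. A small sketch with made-up columns of each kind (the column names here are illustrative, not from the Boston dataset):

```python
import pandas as pd

df = pd.DataFrame({
    'teeth':  [42, 40, 41],                             # numerical, discrete
    'weight': [20.5, 31.2, 8.9],                        # numerical, continuous
    'breed':  ['Lab', 'Huskie', 'Pug'],                 # categorical
    'rating': pd.Categorical([1, 3, 2], ordered=True),  # ordinal
})

print(df.dtypes)
# A low unique-value count (like CHAS above) often hints at a categorical column
print(df['breed'].nunique())
```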
### Central Tendencies
The central tendencies are values which represent the central or 'typical' value of the given distribution. The three most popular central tendency estimates are the mean, median and mode. Typically, in most cases, we resort to using mean (for normal distributions) and median (for skewed distributions) to report central tendency values.
A good rule of thumb is to use mean when outliers don't affect its value and median when it does (Bill Gates joke, anyone?).
Calculating the mean and median are extremely trivial with Pandas. In the following cell, we have calculated the mean and median of the average number of rooms per dwelling. As we can see below, the mean and the median are almost equal.
```
rooms = boston_df['RM']
rooms.mean(), rooms.median()
```
If the mean, median, and mode of a set of numbers are equal, the distribution is symmetric. The more skewed the distribution, the greater the difference between the median and the mean, and the greater the emphasis we should place on using the median rather than the mean.
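A tiny made-up example of how a single outlier drags the mean while barely moving the median:

```python
import numpy as np

incomes = np.array([30, 35, 40, 45, 50])
print(np.mean(incomes), np.median(incomes))      # 40.0 40.0

# Add one Bill-Gates-sized outlier
incomes_outlier = np.append(incomes, 10_000)
print(np.mean(incomes_outlier))    # 1700.0 -- dragged far upward
print(np.median(incomes_outlier))  # 42.5   -- barely moves
```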
### Measures of Spread
Apart from the central or typical value of the data, we are also interested in knowing how much the data spreads. That is, how far from the mean do values tend to go. Statistics equips us with two measures to quantitatively represent the spread: the variance and the standard deviation. They are dependent quantities, with the standard deviation being defined as the square root of variance.
```
rooms.std(), rooms.var()
```
The mean and the standard deviation are often the best quantities to summarize the data for distributions with symmetrical histograms without too many outliers. As we can see from the histogram below, this is indeed the case for the RM feature. Therefore, the mean and the standard deviation are sufficient to summarize the data, and other tendencies such as the median add little extra information.
```
sns.distplot(rooms)
```
This is an example of a normal (Gaussian) distribution. It is ideal that our continuous variables folllow this distribution because of the central limit theorem. See [here](https://towardsdatascience.com/why-data-scientists-love-gaussian-6e7a7b726859) for an explanation on why the Gaussian is ideal for machine learning models.
```
stats.normaltest(rooms)
```
`normaltest` returns a 2-tuple of the chi-squared statistic, and the associated p-value. Given the null hypothesis that x came from a normal distribution, the p-value represents the probability that a chi-squared statistic that large (or larger) would be seen. If the p-val is very small, it means it is unlikely that the data came from a normal distribution.
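To see the test in action on data whose origin we control, here is a synthetic sketch comparing a normal sample against a heavily skewed one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
normal_data = rng.normal(size=5000)        # drawn from a normal distribution
skewed_data = rng.exponential(size=5000)   # strongly right-skewed

_, p_normal = stats.normaltest(normal_data)
_, p_skewed = stats.normaltest(skewed_data)

print(p_normal)  # typically large: consistent with normality
print(p_skewed)  # tiny: normality is rejected
```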
Here is an example of a skewed distribution and how to transform it so that it better fits a normal distribution.
```
age = boston_df['AGE']
print(age.std(), age.mean())
sns.distplot(age)
```
There are many ways to transform skewed data so that it better fits a normal distribution. A simple approach that works in most cases is applying the natural logarithm. You can also try the Box-Cox transformation, which calculates the power transformation of the data that best reduces skewness.
```
log_age = np.log(age)
print(log_age.std(), log_age.mean())
sns.distplot(log_age)
```
Although there is a long left tail, the log transformation reduces the deviation of the data. Can we measure normality? Yes! Rather than reading it off a histogram, we can perform the normal test. This comes in the SciPy package and lets us calculate the probability that the distribution is normal by chance.
### Univariate Analysis
It is a common practice to start with univariate outlier analysis where you consider just one feature at a time. Often, a simple box-plot of a particular feature can give you good starting point. You will make a box-plot using `seaborn` and you will use the `DIS` feature.
```
sns.boxplot(x=boston_df['DIS'])
plt.show()
```
A box-and-whisker plot is helpful for visualizing the distribution of the data. Understanding the distribution allows us to see how far spread out the data is from its center. Check out [how to read and use a Box-and-Whisker plot](https://flowingdata.com/2008/02/15/how-to-read-and-use-a-box-and-whisker-plot/).
The above plot shows three points between 10 and 12; these are **outliers**, as they are not included in the box of the other observations. Here you analyzed a univariate outlier, i.e., you used the DIS feature only to check for outliers.
An outlier is considered an observation that appears to deviate from other observations in the sample. We can spot outliers in plots like this or scatterplots.
Many machine learning algorithms are sensitive to the range and distribution of attribute values in the input data. Outliers in input data can skew and mislead the training process of machine learning algorithms resulting in longer training times and less accurate models.
A more robust way of statistically identifying outliers is by using the Z-Score.
The Z-score is the signed number of standard deviations by which the value of an observation or data point is above the mean value of what is being observed or measured. [*Source definition*](https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/z-score/).
The idea behind Z-score is to describe any data point regarding their relationship with the Standard Deviation and Mean for the group of data points. Z-score is about finding the distribution of data where the mean is 0, and the standard deviation is 1, i.e., normal distribution.
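The Z-score formula is simple enough to compute by hand. A sketch on made-up numbers:

```python
import numpy as np

data = np.array([10, 12, 11, 13, 12, 95])   # 95 is an obvious outlier

# z = (x - mean) / standard deviation
z = (data - data.mean()) / data.std()
print(z)

# Only the outlier sits more than 2 standard deviations from the mean
print(np.abs(z) > 2)
```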
```
z = np.abs(stats.zscore(boston_df))
print(z)
threshold = 3
## The first array contains the list of row numbers and the second array contains their respective column numbers.
print(np.where(z > threshold))
```
You could use Z-Score and set its threshold to detect potential outliers in the data. With this, we can remove the outliers from our dataframe. For example:
```
print(boston_df.shape)
boston_df = boston_df[(np.abs(stats.zscore(boston_df)) < 3).all(axis=1)]
print(boston_df.shape)
```
For each column, first it computes the Z-score of each value in the column, relative to the column mean and standard deviation.
Then it takes the absolute value of the Z-score, because the direction does not matter, only whether it exceeds the threshold.
`all(axis=1)` ensures that, for each row, all columns satisfy the constraint.
Finally, the result of this condition is used to index the dataframe.
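The role of `.all(axis=1)` is easiest to see on a tiny boolean frame (a made-up sketch):

```python
import pandas as pd

within_threshold = pd.DataFrame({'a': [True, True, False],
                                 'b': [True, True, True]})

# A row survives only if every one of its columns passes the check
keep = within_threshold.all(axis=1)
print(keep.tolist())  # [True, True, False]
```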
## References
* https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781784390150/2
* https://www.learndatasci.com/tutorials/data-science-statistics-using-python/
* https://www.datacamp.com/community/tutorials/demystifying-crucial-statistics-python
# This notebook gives a first baseline estimate for entity matching using a random forest, framed as multi-class classification
```
import os
import pandas as pd
import gzip
import json
import numpy as np
import nltk
from nltk.corpus import stopwords
import string
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
trainPath = r'../../../src/data/LocalBusiness/Splitting_12.20/Train_Test/train tables' + '/'
testPath = r'../../../src/data/LocalBusiness/Splitting_12.20/Train_Test/test tables' + '/'
trainTables = os.listdir(trainPath)
testTables = os.listdir(testPath)
LBData = []
for table in trainTables:
if table != '.ipynb_checkpoints':
with gzip.open(trainPath + table, 'r') as dataFile:
for line in dataFile:
lineData = json.loads(line.decode('utf-8'))
lineData['origin'] = table
LBData.append(lineData)
trainData = pd.DataFrame(LBData)
LBData = []
for table in testTables:
if table != '.ipynb_checkpoints':
with gzip.open(testPath + table, 'r') as dataFile:
for line in dataFile:
lineData = json.loads(line.decode('utf-8'))
lineData['origin'] = table
LBData.append(lineData)
testData = pd.DataFrame(LBData)
columns = ['name']
trainData['concat'] = trainData[columns].astype(str).agg(' '.join, axis=1)
testData['concat'] = testData[columns].astype(str).agg(' '.join, axis=1)
trainData = trainData[['concat', 'cluster_id', 'origin']]
testData = testData[['concat', 'cluster_id', 'origin']]
trainData = trainData.loc[trainData['cluster_id'] > -1]
testData = testData.loc[testData['cluster_id'] > -1]
frames = [trainData, testData]
allData = pd.concat(frames)
allData['cluster_id'].nunique()
frames = [trainData, testData]
allData = pd.concat(frames)
noisy_clusters = [6426, 6050, 3978, 6430, 5022, 6051, 6428, 4164, 6424, 4217, 6425, 6421, 6429, 320, 6427, 4846, 5615, 2232, 5311, 6432, 4771, 1742, 5104, 3, 7, 8 ]
noisy_clusters.extend([35, 44, 42, 52, 63, 78, 66, 67, 102, 108, 120,129, 133, 143, 153, 154, 162, 169, 174, 180, 200, 202, 203, 206, 227, 228, 229, 231, 242, 244, 255, 263, 272, 278, 280, 324, 336, 352, 353, 372, 375, 408, 413, 444, 451, 453, 454, 458, 470, 476, 488, 489, 493, 496, 499, 504 ])
noisy_clusters.extend([26, 320, 436, 437])
noisy_clusters.extend([80, 95])
noisy_clusters.extend([6432,6430, 6429, 6428, 6427, 6426, 6425, 6424, 6423, 6422, 6421, 6397, 6396, 6393, 6381, 6365, 6360, 6351, 6097, 6094, 6092, 6086, 6078, 6049, 6048, 6004, 5998, 5909, 5822, 5616, 5615, 5571, 5564, 5563, 5562, 5508, 5410, 5311, 5289, 5279, 5278, 5269, 5418, 5417, 5104, 5022, 4925, 4846, 4843, 4790, 4789, 4779, 4774, 4771, 4710, 4234, 4230, 4217, 4198, 4164, 4048, 4033])
noisy_clusters.extend([514, 524, 526, 530, 532, 537, 547, 552, 557, 559, 583, 597, 601, 627, 628, 630, 632, 653, 664, 685, 690, 695, 707, 711, 743, 745, 751, 758, 775, 800, 802, 807, 808, 812, 866, 868, 872, 878, 881, 888, 911, 924, 931, 933, 935, 968, 998, 1002, 1006, 1028, 1066, 1071, 1075, 1082, 1094, 1097, 1099, 1109, 1128, 1133, 1142, 1143, 1144, 1159, 1160, 1174, 1181, 1184, 1189, 1193, 1195, 1207, 1211, 1214, 1218, 1221, 1244, 1245, 1259, 1261, 1266, 1274, 1276, 1277, 1286,1290, 1291, 1292, 1311, 1312, 1313, 1321, 1330, 1333, 1334, 1337, 1342, 1362, 1371, 1385, 1388, 1391, 1392, 1399, 1407, 1416, 1424, 1437, 1438, 1441, 1460, 1463, 1469, 1471, 1490, 1495, 1498, 1500, 1501, 1511, 1515, 1520, 1525, 1529, 1538, 1544, 1550, 1557, 1560, 1562, 1568, 1572, 1580, 1585, 1605, 1607, 1617, 1618, 1622, 1634, 1636, 1709, 1785, 1789, 1803, 1807, 1816, 1827, 1834, 1840, 1846, 1849, 1855, 1856, 1862, 1864, 1865, 1867, 1886, 1889, 1922, 1926, 1933, 1936, 1951, 1958, 1959, 1960, 1968, 1972, 1985, 1986, 1991, 1994, 1996, 2003, 2004, 769, 773, 823, 835, 839, 950, 658, 897, 1735, 1736, 1742, 1766, 1783, 774])
noisy_clusters.extend([2009, 2016, 2068, 2070, 2071, 2121, 2123, 2132, 2144, 2148, 2162, 2174, 2188, 2193, 2199, 2202, 2232, 2233, 2247, 2254, 2255, 2256, 2257, 2269, 2274, 2277, 2287, 2289, 2293, 2309,2324, 2332, 2354, 2403, 2416, 2417, 2431, 2434, 2444, 2446, 2485, 2489, 2500, 2506, 2599, 2600, 2605, 2611, 2616, 2618, 2637, 2654, 2669, 2678, 2679, 2681, 2687, 2709, 2710, 2713, 2717, 2729, 2737, 2740, 2751, 2754, 2786, 2787, 2789, 2791, 2802, 2807, 2808, 2809, 2821, 2826, 2846, 2875, 2876, 2878, 2880, 2887, 2920, 2922, 2925, 2930, 2965, 2973, 2976, 2980, 2986, 3004])
noisy_clusters.extend([3979, 3972, 3970, 3967, 3965, 3952, 3951, 3938, 3924, 3843, 3795, 3787, 3708, 3698, 3640, 3606, 3580, 3549, 3548, 3533, 3528, 3517, 3516, 3515, 3514, 3511, 3498, 3495, 3493, 3491, 3490, 3488, 3487, 3482, 3481, 3477, 3476, 3472, 3462, 3459, 3458, 3445, 3440, 3429, 3419, 3413, 3391, 3374, 3361, 3352, 3349, 3348, 3347, 3332, 3317, 3314, 3311, 3308, 3306, 3304, 3301, 3298, 3289, 3288, 3281, 3280, 3274, 3272, 3259, 3230, 3221, 3215, 3175, 3163, 3139, 3133, 3110, 3100, 3091, 3081, 3080, 3061, 3059, 3058, 3056, 3053, 3051, 3047, 3042, 3041, 3035, 3034, 3032, 3031, 3029, 3022, 3004])
allData = allData.loc[~allData['cluster_id'].isin(noisy_clusters)]
allData['cluster_id_mapped'] = allData.groupby('cluster_id').ngroup()
#trainData = allData.loc[allData['origin'].isin(trainData['origin'])]
#testData = allData.loc[~allData['origin'].isin(trainData['origin'])]
len(noisy_clusters)
allData['cluster_id_mapped'].nunique()
allData
trainData.to_csv(r'../../../src/data/LocalBusiness/Splitting_12.20/Train_Test/train.csv')
testData.to_csv(r'../../../src/data/LocalBusiness/Splitting_12.20/Train_Test/test.csv')
def remove_stopwords(token_vector, stopwords_list):
return token_vector.apply(lambda token_list: [word for word in token_list if word not in stopwords_list])
def remove_punctuation(token_vector):
return token_vector.apply(lambda token_list: [word for word in token_list if word not in string.punctuation])
#clusters = data.groupby(['telephoneNorm']).size().reset_index(name='counts').sort_values('counts', ascending=False)
#clusters = clusters.loc[clusters['counts'] > 1]
#clusteredData = data[data['telephoneNorm'].isin(clusters['telephoneNorm'])]
#clusteredData['ClusterID'] = clusteredData.groupby('telephoneNorm').ngroup()
#columns = ['name', 'addressregion', 'streetaddress', 'addresslocality', 'addresscountry', 'longitude', 'latitude']
#clusteredData['concat'] = clusteredData[columns].astype(str).agg(' '.join, axis=1)
allData = allData.rename(columns={'origin': 'originalSource'})
clusteredData = allData.sample(3000)
clusteredData
```
### Combine tf-idf and tf vector based features
```
#clean concated description column to use tf-idf
clusteredData['concat'] = clusteredData['concat'].apply(lambda row: row.lower())
clusteredData['tokens'] = clusteredData['concat'].apply(lambda row: word_tokenize(row))
clusteredData['tokens'] = remove_stopwords(clusteredData['tokens'],stopwords.words())
clusteredData['tokens'] = remove_punctuation (clusteredData['tokens'])
clusteredData.drop(columns=['concat'],inplace=True)
clusteredData = clusteredData[['tokens','cluster_id_mapped', 'originalSource']]
clusteredData
#define vectorizer to accept preprocessed tokens for term frequency
def dummy(doc):
return doc
vectorizer = CountVectorizer(
tokenizer=dummy,
preprocessor=dummy,
max_features=5000)
tf_value = vectorizer.fit_transform(clusteredData['tokens'])
#define vectorizer to match preprocessed tokens
def dummy_fun(doc):
return doc
tfidf = TfidfVectorizer(
analyzer='word',
tokenizer=dummy_fun,
preprocessor=dummy_fun,
token_pattern=None,
max_features=5000)
tfidf_value = tfidf.fit_transform(clusteredData['tokens'])
df_tf = pd.DataFrame(tf_value.toarray(), columns=vectorizer.get_feature_names())
df_tfidf = pd.DataFrame(tfidf_value.toarray(), columns=tfidf.get_feature_names())
df_prepared = pd.concat([clusteredData.reset_index(), df_tfidf], axis=1)
df_prepared
y = df_prepared[['cluster_id_mapped', 'originalSource']]
df_prepared.drop(columns=['tokens','cluster_id_mapped'], inplace=True)
y
y_train = y.loc[y['originalSource'].isin(trainData['origin'])]
y_test = y.loc[~y['originalSource'].isin(trainData['origin'])]
y_train = y_train['cluster_id_mapped']
y_test = y_test['cluster_id_mapped']
x_train = df_prepared.loc[df_prepared['originalSource'].isin(trainData['origin'])]
x_test = df_prepared.loc[~df_prepared['originalSource'].isin(trainData['origin'])]
x_train= x_train.drop(columns=['index', 'originalSource'])
x_test= x_test.drop(columns=['index', 'originalSource'])
y_train
# Baseline random forest
rf = RandomForestClassifier()
rf.fit(x_train,y_train)
prediction = rf.predict(x_test)
f1_mic = f1_score(y_test,prediction,average='micro')
f1_mac = f1_score(y_test,prediction,average='macro')
accuracy = accuracy_score(y_test,prediction)
precision = precision_score(y_test,prediction,average='micro')
recall = recall_score(y_test,prediction,average='micro')
precision_mac = precision_score(y_test,prediction,average='macro')
recall_mac = recall_score(y_test,prediction,average='macro')
print("The F1-Score micro on test set: {:.4f}".format(f1_mic))
print("The F1-Score macro on test set: {:.4f}".format(f1_mac))
print("The Precision on test set: {:.4f}".format(precision))
print("The Recall on test set: {:.4f}".format(recall))
print("The Precision macro on test set: {:.4f}".format(precision_mac))
print("The Recall macro on test set: {:.4f}".format(recall_mac))
print("The Accuracy-Score on test set: {:.4f}".format(accuracy))
```
| github_jupyter |
```
from google.colab import drive
drive.mount('/content/drive')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
url = '/content/drive/My Drive/Colab Notebooks/Unit 2/223 Data Modeling/london_merged.csv'
df = pd.read_csv(url)
print(df.shape)
df.head()
```
Lambda School Data Science
*Unit 2, Sprint 3, Module 1*
---
# Define ML problems
You will use your portfolio project dataset for all assignments this sprint.
## Assignment
Complete these tasks for your project, and document your decisions.
- [ ] Choose your target. Which column in your tabular dataset will you predict?
- [ ] Is your problem regression or classification?
- [ ] How is your target distributed?
- Classification: How many classes? Are the classes imbalanced?
- Regression: Is the target right-skewed? If so, you may want to log transform the target.
- [ ] Choose your evaluation metric(s).
- Classification: Is your majority class frequency >= 50% and < 70% ? If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading. What evaluation metric will you choose, in addition to or instead of accuracy?
- Regression: Will you use mean absolute error, root mean squared error, R^2, or other regression metrics?
- [ ] Choose which observations you will use to train, validate, and test your model.
- Are some observations outliers? Will you exclude them?
- Will you do a random split or a time-based split?
- [ ] Begin to clean and explore your data.
- [ ] Begin to choose which features, if any, to exclude. Would some features "leak" future information?
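To make the target-distribution check concrete, here is a minimal sketch (the `target` column and its values are placeholders, not from any particular dataset):

```python
import pandas as pd

# Hypothetical dataset: 'target' is a placeholder column name
df = pd.DataFrame({'target': ['a', 'a', 'a', 'b', 'a', 'b', 'a', 'a', 'b', 'a']})

# How is the target distributed?
distribution = df['target'].value_counts(normalize=True)
print(distribution)

# Majority-class frequency tells you whether plain accuracy is informative
majority_freq = distribution.iloc[0]
print('Majority class frequency: {:.0%}'.format(majority_freq))
```

If the majority class is outside the 50%–70% range, reach for F1, ROC AUC, or another metric that is not dominated by the majority class.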
If you haven't found a dataset yet, do that today. [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2) and choose your dataset.
Some students worry, ***what if my model isn't “good”?*** Then, [produce a detailed tribute to your wrongness. That is science!](https://twitter.com/nathanwpyle/status/1176860147223867393)
```
# Target is count
# Problem is regression
# Sporadic data
df['timestamp'] = pd.to_datetime(df['timestamp'], infer_datetime_format = True)
df['Date'] = df['timestamp'].dt.date
df['Time'] = df['timestamp'].dt.time
df.head()
px.scatter(df, x='Date',y='cnt')
```
Lambda School Data Science
*Unit 2, Sprint 3, Module 1*
---
# Wrangle ML datasets
- [ ] Continue to clean and explore your data.
- [ ] For the evaluation metric you chose, what score would you get just by guessing?
- [ ] Can you make a fast, first model that beats guessing?
**We recommend that you use your portfolio project dataset for all assignments this sprint.**
**But if you aren't ready yet, or you want more practice, then use the New York City property sales dataset for today's assignment.** Follow the instructions below, to just keep a subset for the Tribeca neighborhood, and remove outliers or dirty data. [Here's a video walkthrough](https://youtu.be/pPWFw8UtBVg?t=584) you can refer to if you get stuck or want hints!
- Data Source: [NYC OpenData: NYC Citywide Rolling Calendar Sales](https://data.cityofnewyork.us/dataset/NYC-Citywide-Rolling-Calendar-Sales/usep-8jbt)
- Glossary: [NYC Department of Finance: Rolling Sales Data](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page)
# Self-Driving Car Engineer Nanodegree
## Traffic Light Detection
## Dependencies
```
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import pickle
import numpy as np
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mplimg
import glob
%matplotlib inline
```
## First, I tested and debugged a couple of functions to properly detect circles so I could create a class later.
```
img = plt.imread("trafficgreen.jpeg",0)
#img = plt.imread("trafficred.jpeg",0)
#img = plt.imread("trafficorange.jpeg",0)
#img = plt.imread("camerared.jpeg",0)
#img = plt.imread("trafficgreen.jpeg",0)
#img = plt.imread("3light.jpg",0)
original = img
#
#img = cv2.medianBlur(img,9)
#cimg = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
#image brightness
def decrease_brightness(img, value=30):
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
lim = 255 - value
v[v > lim] = 255
v[v <= lim] -= value
final_hsv = cv2.merge((h, s, v))
img = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR)
return img
#kernel = np.ones((5,5),np.float32)/22  # /22 for camera images, /15 for plain images
#kernel = np.ones((4,4),np.float32)/6  # works for everything else, just not for camera images
#img = cv2.filter2D(img,-1,kernel)
img = cv2.medianBlur(img,7)
#img = decrease_brightness(img,47)
#img = cv2.bitwise_not(img)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
output = img.copy()
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT,1,5,param1=170,param2=10,minRadius=4,maxRadius=14)
'''
#well working
kernel = np.ones((6,6),np.float32)/15
gray = cv2.filter2D(gray,-1,kernel)
'''
#plt.imshow(gray, cmap='gray')
# ensure at least some circles were found
def orange(rgb):
return rgb[0]>225 and rgb[1]>100 and rgb[2]<160
def green(rgb):
r=int(rgb[0])
g=int(rgb[1])
b=int(rgb[2])
return (g-r > 40) and(g-b > 40)
def red(rgb):
r=int(rgb[0])
g=int(rgb[1])
b=int(rgb[2])
return (r-g > 40) and (r-b > 40)
h,w = img.shape[:2]
images=[]
if circles is not None:
# convert the (x, y) coordinates and radius of the circles to integers
circles = np.round(circles[0, :]).astype("int")
# loop over the (x, y) coordinates and radius of the circles
index = 0
for (x, y, r) in circles:
rectX = (x - r)
rectY = (y - r)
#print(img[x+30,y+30])
#print(crop_img[7,7])
#print(index)
h-=1
w-=1
if x-r+3>=w or y-r+3>=h:
continue
#a = 3
rgb = original[y-r+1,x-r+3]
#rgb = original[h-((y+r)%h),w-((x+r)%w)]#simulator
rgb2 = original[y,x]
#rgb = img[y+10,x+10] #on all
#rgb = img[y+7,x-7] #on all
#rgb2 = img[y-1,x-7]
#print (rgb)
crSize = 20
if orange(rgb) or orange(rgb2):
#cv2.rectangle(output, (x+r-1, y-r+3), (x+r-1, y-r+3), (0, 0, 0), -1)
cv2.rectangle(output, (x, y), (x, y), (0, 0, 0), -1)
crop_img = output[y-crSize:y+crSize, x-crSize:x+crSize]
#crop_img = img[y:(y+1*(r)), x:(x+1*(r))]
#ax1[index].set_title("orange")
#ax1[index].imshow(crop_img)
images.append([crop_img,"orange"])
#print(rgb)
#print(r,"radius")
#print("orange")
cv2.circle(output, (x, y), r+10, (150, 150, 150), 4)
index+=1
elif green(rgb) or green(rgb2):
cv2.rectangle(output, (x, y), (x, y), (0, 0, 0), -1)
crop_img = output[y-crSize:y+crSize, x-crSize:x+crSize]
images.append([crop_img,"green"])
#print(original[y,x])
#ax1[index].set_title("green")
#ax1[index].imshow(crop_img)
#print("green")
#print(rgb)
cv2.circle(output, (x, y), r+10, (150, 150, 150), 4)
index+=1
elif red(rgb) or red(rgb2):
cv2.rectangle(output, (x, y), (x, y), (0, 0, 0), -1)
crop_img = output[y-crSize:y+crSize, x-crSize:x+crSize]
images.append([crop_img,"red"])
#print(original[y,x])
#crop_img = img[y:(y+1*(r)), x:(x+1*(r))]
#print (rgb)
#ax1[index].set_title("red")
#ax1[index].imshow(crop_img)
#print(r,"radius")
#print("red")
#print(original[y,x])
cv2.circle(output, (x, y), r+10, (150, 150, 150), 4)
index+=1
else:
a=0
#print(rgb)
# draw the circle in the output image, then draw a rectangle
# corresponding to the center of the circle
cv2.rectangle(output, (x, y), (x, y), (0, 0, 0), -1)
#num = 1
#cv2.rectangle(output, (x, y), (x, y), (0, 128, 255), -1)
# show the output image
if len (images)<1:
print("None")
elif(True):
#ax1[index].imshow(output)
#'''
s = 3
if len(images) > 1:
s = len(images)
#print(s)
f, ax1 = plt.subplots(s, 1, figsize=(20, 20))
f.tight_layout()
ind=0
for i in images:
ax1[ind].imshow(i[0])
ax1[ind].set_title(i[1])
ind+=1
plt.imshow(output)
```
## The final version of the Traffic Light detector
```
import time
class LightDetector():
def yellow(self,rgb):
return rgb[0]>225 and rgb[1]>100 and rgb[2]<160
def green(self,rgb):
r=int(rgb[0])
g=int(rgb[1])
b=int(rgb[2])
return (g-r > 40) and(g-b > 40)
def red(self,rgb):
r=int(rgb[0])
g=int(rgb[1])
b=int(rgb[2])
return (r-g > 40) and (r-b > 40)
def getColor(self,img):
output = img.copy()
def getCircles(self,img,camera):
original=img
kernel = None  # /22 for camera images, /15 for plain images
#kernel = np.ones((4,4),np.float32)/6  # works for everything else, just not for camera images
if camera:
#img = self.decrease_brightness(img,60)
img = cv2.medianBlur(img,7)
#kernel = np.ones((4,4),np.float32)/16
else:
img = cv2.medianBlur(img,7)
#kernel = np.ones((5,5),np.float32)/22
#img = cv2.bitwise_not(img)
#img = cv2.filter2D(img,-1,kernel)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
if camera:
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT,1,5,param1=170,param2=10,minRadius=4,maxRadius=14)
else:
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT,1,5,param1=170,param2=18,minRadius=0,maxRadius=30)
return circles,img,original
def printImages(self,images):
if len (images)<1:
print("None")
else:
s = len(images)
s+=1
f, ax1 = plt.subplots(s, 1, figsize=(20, 20))
f.tight_layout()
ind=0
for i in images:
ax1[ind].imshow(i[0])
ax1[ind].set_title(i[1])
ind+=1
def decrease_brightness(self,img, value=30):
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
lim = 255 - value
v[v > lim] = 255
v[v <= lim] -= value
final_hsv = cv2.merge((h, s, v))
img = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR)
return img
def drawCirclesAndGetImages(self,img,camera=False,withImage=False):
circles,gray,original = self.getCircles(img,camera)
output=None
if withImage:
output = gray.copy()
h,w = img.shape[:2]
images=[]
if circles is not None:
# convert the (x, y) coordinates and radius of the circles to integers
circles = np.round(circles[0, :]).astype("int")
# loop over the (x, y) coordinates and radius of the circles
for (x, y, r) in circles:
if(y>=h or x>=w):
continue
rgb = original[y,x]
#rgb2 = original[y,x] optional
crSize = 20
if self.yellow(rgb):# or self.orange(rgb2):
#cv2.rectangle(output, (x, y), (x, y), (0, 0, 0), -1)
if(withImage):
crop_img = img[y-crSize:y+crSize, x-crSize:x+crSize]
images.append([crop_img,"yellow"])
else:
images.append([None,"yellow"])
#cv2.circle(output, (x, y), r+10, (150, 150, 150), 4)
elif self.green(rgb):# or self.green(rgb2):
#cv2.rectangle(output, (x, y), (x, y), (0, 0, 0), -1)
if(withImage):
crop_img = img[y-crSize:y+crSize, x-crSize:x+crSize]
images.append([crop_img,"green"])
else:
images.append([None,"green"])
#cv2.circle(output, (x, y), r+10, (150, 150, 150), 4)
elif self.red(rgb):# or self.red(rgb2):
#cv2.rectangle(output, (x, y), (x, y), (0, 0, 0), -1)
if(withImage):
crop_img = img[y-crSize:y+crSize, x-crSize:x+crSize]
images.append([crop_img,"red"])
else:
images.append([None,"red"])
break  # one red light found is enough
# draw the circle in the output image, then draw a grey rectangle
#cv2.circle(output, (x, y), r+10, (150, 150, 150), 4)
#corresponding to the center of the circle
#cv2.rectangle(output, (x, y), (x, y), (0, 0, 0), -1)
return images,output
def getLightColor(self,images):
result = "green"
for (image,name) in images:
if name=="red":
return "red"
elif name=="yellow":
result="yellow"
return result
ld=LightDetector()
img = plt.imread("trafficred.jpeg",0)
startTime = time.time()
images,output = ld.drawCirclesAndGetImages(img,True)
'''
img2=images[1][0]
img2 = cv2.medianBlur(img2,5)
grey = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
c = cv2.HoughCircles(grey, cv2.HOUGH_GRADIENT,1,5,param1=170,param2=17,minRadius=0,maxRadius=30)
if c is not None:
print(len(c))
c = np.round(c[0, :]).astype("int")
for (x, y, r) in c:
cv2.circle(img2, (x, y), r+4, (150, 150, 150), 1)
plt.imshow(img2)
'''
print("Detected Light color is :",ld.getLightColor(images))
elapsedTime = time.time() - startTime
print("Elapsed Time between detection start and end:",elapsedTime, "seconds.")
print("Reference Image below")
plt.imshow(img)
```
### Test detection on every color of traffic light.
```
ld=LightDetector()
test = ['red','yellow','green']
result = []
testImages = []
testImages.append(plt.imread("trafficred.jpeg",0))
testImages.append(plt.imread("trafficorange.jpeg",0))
testImages.append(plt.imread("trafficgreen.jpeg",0))
for img in testImages:
images,output = ld.drawCirclesAndGetImages(img,True)
result.append(ld.getLightColor(images))
for i in range(len(test)):
print("Detected:",result[i],", Expected:",test[i])
```
# Another option would have been to create a neural network to locate and classify lamps.
### Process data to Image Detector and Classifier Neural Network
First, download Bosch's Small Traffic Lights dataset from: https://hci.iwr.uni-heidelberg.de/content/bosch-small-traffic-lights-dataset
```
'''
#function to draw points on an image
def markerDotHelper(points,image):
#https://stackoverflow.com/questions/55545400/how-to-draw-a-point-in-an-image-using-given-coordinate
plt.imshow(image)
#plt.plot(640, 570, "og", markersize=10) # og:shorthand for green circle
plt.scatter(points[:, 0], points[:, 1], marker="o", color="yellow", s=10)
plt.show()
fileP = open('imagesGraySaled.p', 'wb')
fileNames = []
data=[]
with open('data/day/frameAnnotationsBOX.csv', 'r') as f:
for line in f.readlines()[1:]:
d = line.strip().split(';')
fileNames.append(d)
for currentFile in fileNames:
path, file = os.path.split(currentFile[0])
img = plt.imread("data/day/frames/"+file)
x_min = int(currentFile[2])
y_min = int(currentFile[3])
x_max = int(currentFile[4])
y_max = int(currentFile[5])
croppedImg = img[y_min:y_max, x_min:x_max]
resized = cv2.resize(croppedImg, (32,32), interpolation = cv2.INTER_AREA)
h,w = croppedImg.shape[:2]
if(currentFile[1]=='go'):
data.append([croppedImg,"green"])
else:
data.append([croppedImg,"red"])
# dump information to that file
pickle.dump(data, fileP)
# close the file
fileP.close()
#Read the written data
file2 = open('imagesGraySaled.p', 'rb')
f, ax1 = plt.subplots(10, 1, figsize=(24, 9))
f.tight_layout()
image2 = pickle.load(file2)
#ax1[0].imshow(image2[0][0])
#first 10 images
for i in range(0,10):
ax1[i].set_title(image2[i+600][1])
ax1[i].imshow(image2[i+600][0])
file2.close()
print("finished")
'''
```
# Why You Should Hedge Beta and Sector Exposures (Part I)
by Jonathan Larkin and Maxwell Margenot
Part of the Quantopian Lecture Series:
* [www.quantopian.com/lectures](https://www.quantopian.com/lectures)
* [github.com/quantopian/research_public](https://github.com/quantopian/research_public)
---
Whenever we have a trading strategy of any sort, we need to be considering the impact of systematic risk. There needs to be some risk involved in a strategy in order for there to be a return above the risk-free rate, but systematic risk poisons the well, so to speak. By its nature, systematic risk provides a commonality between the many securities in the market that cannot be diversified away. As such, we need to construct a hedge to get rid of it.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.covariance import LedoitWolf
import seaborn as sns
import statsmodels.api as sm
```
# The Fundamental Law of Asset Management
The primary driver of the value of any strategy is whether or not it provides a compelling risk-adjusted return, i.e., the Sharpe Ratio. As expressed in [The Foundation of Algo Success](https://blog.quantopian.com/the-foundation-of-algo-success/) and "The Fundamental Law of Active Management", by Richard Grinold, Sharpe Ratio can be decomposed into two components, skill and breadth, as:
$$IR = IC \sqrt{BR}$$
Technically, this is the definition of the Information Ratio (IR), but for our purposes it is equivalent to the Sharpe Ratio. The IR is the ratio of the excess return of a portfolio over its benchmark per unit active risk, i.e., the excess return of a long-only portfolio less its benchmark per unit tracking error. In the time of Grinold’s publication, however, long/short investing was a rarity. Today, in the world of hedge funds and long/short investing, there is no benchmark. We seek absolute returns so, in this case, the IR is equivalent to the Sharpe ratio.
In this equation, skill is measured by IC (Information Coefficient), calculated with [Alphalens](https://github.com/quantopian/alphalens). The IC is essentially the Spearman rank correlation, used to correlate your prediction and its realization. Breadth is measured as the number of **independent** bets in the period. The takeaway from this "law" is that, with any strategy, we need to:
1. Bet well (high IC),
2. Bet often (high number of bets), *and*
3. **Make independent bets**
If the bets are completely independent, then breadth is the total number of bets we have made for every individual asset, the number of assets times the number of periods. If the bets are not independent then the **effective breadth** can be much much less than the number of assets. Let's see precisely what beta exposure and sector exposure do to **effective breadth**.
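As a quick numeric illustration of the law (the IC and breadth figures below are made-up examples, not from the lecture):

```python
from math import sqrt

def information_ratio(ic, breadth):
    # Fundamental law of active management: IR = IC * sqrt(BR)
    return ic * sqrt(breadth)

# Same modest skill (IC = 0.05), very different breadth
print(information_ratio(0.05, 4))    # few independent bets  -> IR near 0.1
print(information_ratio(0.05, 400))  # many independent bets -> IR near 1.0
```

The same skill applied across 100x the independent bets yields 10x the risk-adjusted return, which is why effective breadth matters so much.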
<div class="alert alert-warning">
<b>TL;DR:</b> Beta exposure and sector exposure lead to a significant increase in correlation among bets. Portfolios with beta and sector bets have very low effective breadth. In order to have high Sharpe then, these portfolios must have very high IC. It is easier to increase effective breadth by hedging beta and sector exposure than it is to increase your IC.
</div>
# Forecasts and Bet Correlation
We define a bet as the forecast of the *residual* of a security return. This forecast can be implicit -- i.e., we buy a stock and thus implicity we forecast that the stock will go up. What though do we mean by *residual*? Without any fancy math, this simply means the return **less a hedge**. Let's work through three examples. We use the Ledoit-Wolf covariance estimator to assess our covariance in all cases. For more information on why we use Ledoit-Wolf instead of typical sample covariance, check out [Estimating Covariance Matrices](https://www.quantopian.com/lectures/estimating-covariance-matrices).
### Example 1: No Hedge!
If we go long on a set of securities, but do not hold any short positions, there is no hedge! So the *residual* is the stock return itself.
$$r_{resid,i} = r_i$$
Let's see what the correlation of our bets are in this case.
```
tickers = ['WFC', 'JPM', 'USB', 'XOM', 'BHI', 'SLB'] # The securities we want to go long on
historical_prices = get_pricing(tickers, start_date='2015-01-01',end_date='2017-02-22') # Obtain prices
rets = historical_prices['close_price'].pct_change().fillna(0) # Calculate returns
lw_cov = LedoitWolf().fit(rets).covariance_ # Calculate Ledoit-Wolf estimator
def extract_corr_from_cov(cov_matrix):
# Linear algebra result:
# https://math.stackexchange.com/questions/186959/correlation-matrix-from-covariance-matrix
d = np.linalg.inv(np.diag(np.sqrt(np.diag(cov_matrix))))
corr = d.dot(cov_matrix).dot(d)
return corr
fig, (ax1, ax2) = plt.subplots(ncols=2)
fig.tight_layout()
corr = extract_corr_from_cov(lw_cov)
# Plot prices
left = historical_prices['close_price'].plot(ax=ax1)
# Plot covariance as a heat map
right = sns.heatmap(corr, ax=ax2, fmt='d', vmin=-1, vmax=1, xticklabels=tickers, yticklabels=tickers)
average_corr = np.mean(corr[np.triu_indices_from(corr, k=1)])
print('Average pairwise correlation: %.4f' % average_corr)
```
The result here is that we have six bets and they are all very highly correlated.
### Example 2: Beta Hedge
In this case, we will assume that each bet is hedged against the market (SPY). In this case, the residual is calculated as:
$$ r_{resid,i} = r_i - \beta_i r_M $$
where $\beta_i$ is the beta to the market of security $i$ calculated with the [CAPM](https://www.quantopian.com/lectures/the-capital-asset-pricing-model-and-arbitrage-pricing-theory), $r_i$ is the return of security $i$, and $r_M$ is the return of the market (SPY).
```
tickers = ['WFC', 'JPM', 'USB', 'SPY', 'XOM', 'BHI', 'SLB' ] # The securities we want to go long on plus SPY
historical_prices = get_pricing(tickers, start_date='2015-01-01',end_date='2017-02-22') # Obtain prices
rets = historical_prices['close_price'].pct_change().fillna(0) # Calculate returns
market = rets[symbols(['SPY'])]
stock_rets = rets.drop(symbols(['SPY']), axis=1)
residuals = stock_rets.copy()*0
for stock in stock_rets.columns:
model = sm.OLS(stock_rets[stock], market.values)
results = model.fit()
residuals[stock] = results.resid
lw_cov = LedoitWolf().fit(residuals).covariance_ # Calculate Ledoit-Wolf Estimator
fig, (ax1, ax2) = plt.subplots(ncols=2)
fig.tight_layout()
corr = extract_corr_from_cov(lw_cov)
left = (1+residuals).cumprod().plot(ax=ax1)
right = sns.heatmap(corr, ax=ax2, fmt='d', vmin=-1, vmax=1, xticklabels=stock_rets.columns, yticklabels=stock_rets.columns)
average_corr = np.mean(corr[np.triu_indices_from(corr, k=1)])
print('Average pairwise correlation: %.4f' % average_corr)
```
The beta hedge has brought down the average correlation significantly. Theoretically, this should improve our breadth. It is obvious, however, that we are left with two highly correlated clusters. Let's see what happens when we hedge the sector risk.
### Example 3: Sector Hedge
The sector return and the market return are themselves highly correlated. As such, you cannot do a multivariate regression due to multicollinearity, a classic [violation of regression assumptions](https://www.quantopian.com/lectures/violations-of-regression-models). To hedge against both the market and a given security's sector, you first estimate the market beta residuals and then calculate the sector beta on *those* residuals.
$$
r_{resid,i} = r_i - \beta_i r_M \\
r_{resid_{SECTOR},i}= r_{resid,i} - \beta_{SECTOR,i} \, r_{resid,SECTOR}
$$
Here, $r_{resid, i}$ is the residual between the security return and a market beta hedge, $r_{resid,SECTOR}$ is the market-hedged residual of the relevant sector benchmark, and $r_{resid_{SECTOR}, i}$ is the residual between the security's market-hedged residual and a hedge of that residual against the sector.
```
tickers = ['WFC', 'JPM', 'USB', 'XLF', 'SPY', 'XOM', 'BHI', 'SLB', 'XLE']
historical_prices = get_pricing(tickers, start_date='2015-01-01',end_date='2017-02-22')
rets = historical_prices['close_price'].pct_change().fillna(0)
# Get market hedge ticker
mkt = symbols(['SPY'])
# Get sector hedge tickers
sector_1_hedge = symbols(['XLF'])
sector_2_hedge = symbols(['XLE'])
# Identify securities for each sector
sector_1_stocks = symbols(['WFC', 'JPM', 'USB'])
sector_2_stocks = symbols(['XOM', 'BHI', 'SLB'])
market_rets = rets[mkt]
sector_1_rets = rets[sector_1_hedge]
sector_2_rets = rets[sector_2_hedge]
stock_rets = rets.drop(symbols(['XLF', 'SPY', 'XLE']), axis=1)
residuals_market = stock_rets.copy()*0
residuals = stock_rets.copy()*0
# Calculate market beta of sector 1 benchmark
model = sm.OLS(sector_1_rets.values, market_rets.values)
results = model.fit()
sector_1_excess = results.resid
# Calculate market beta of sector 2 benchmark
model = sm.OLS(sector_2_rets.values, market_rets.values)
results = model.fit()
sector_2_excess = results.resid
for stock in sector_1_stocks:
# Calculate market betas for sector 1 stocks
model = sm.OLS(stock_rets[stock], market_rets.values)
results = model.fit()
# Calculate residual of security + market hedge
residuals_market[stock] = results.resid
# Calculate sector beta for previous residuals
model = sm.OLS(residuals_market[stock], sector_1_excess)
results = model.fit()
# Get final residual
residuals[stock] = results.resid
for stock in sector_2_stocks:
# Calculate market betas for sector 2 stocks
model = sm.OLS(stock_rets[stock], market_rets.values)
results = model.fit()
# Calculate residual of security + market hedge
residuals_market[stock] = results.resid
# Calculate sector beta for previous residuals
model = sm.OLS(residuals_market[stock], sector_2_excess)
results = model.fit()
# Get final residual
residuals[stock] = results.resid
# Get covariance of residuals
lw_cov = LedoitWolf().fit(residuals).covariance_
fig, (ax1, ax2) = plt.subplots(ncols=2)
fig.tight_layout()
corr = extract_corr_from_cov(lw_cov)
left = (1+residuals).cumprod().plot(ax=ax1)
right = sns.heatmap(corr, ax=ax2, fmt='d', vmin=-1, vmax=1, xticklabels=stock_rets.columns, yticklabels=stock_rets.columns)
average_corr = np.mean(corr[np.triu_indices_from(corr, k=1)])
print('Average pairwise correlation: %.4f' % average_corr)
```
There we go! The sector hedge brought down the correlation between our bets to close to zero.
## Calculating Effective Breadth
This section is based on "How to calculate breadth: An evolution of the fundamental law of active portfolio management", by David Buckle; Vol. 4, 6, 393-405, 2003, _Journal of Asset Management_. Buckle derives the "semi-generalised fundamental law of active management" under several weak assumptions. The key result of this paper (for us) is a closed-form calculation of effective breadth as a function of the correlation between bets. Buckle shows that breadth, $BR$, can be modeled as
$$BR = \frac{N}{1 + \rho(N -1)}$$
where N is the number of stocks in the portfolio and $\rho$ is the assumed single correlation of the expected variation around the forecast.
```
def buckle_BR_const(N, rho):
return N/(1 + rho*(N - 1))
corr = np.linspace(start=0, stop=1.0, num=500)
plt.plot(corr, buckle_BR_const(6, corr))
plt.title('Effective Breadth as a function of Forecast Correlation (6 Stocks)')
plt.ylabel('Effective Breadth (Number of Bets)')
plt.xlabel('Forecast Correlation');
```
Here we see that in the case of the long-only portfolio, where the average correlation is 0.56, we are *effectively making only approximately 2 bets*. When we hedge beta, with a resulting average correlation of 0.22, things get a little better, *three effective bets*. When we add the sector hedge, we get close to zero correlation, and in this case the number of bets equals the number of assets, 6.
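Plugging the three average correlations from the examples above into Buckle's formula confirms these counts (the formula is re-defined here so the snippet is self-contained):

```python
def buckle_BR_const(N, rho):
    # Effective breadth as a function of bet correlation (Buckle, 2003)
    return N / (1.0 + rho * (N - 1))

# Average pairwise correlations from the three examples: no hedge,
# beta hedge, and beta + sector hedge
for label, rho in [('no hedge', 0.56), ('beta hedge', 0.22), ('beta + sector hedge', 0.0)]:
    print('%s (rho=%.2f): %.2f effective bets' % (label, rho, buckle_BR_const(6, rho)))
```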
**More independent bets with the same IC leads to higher Sharpe ratio.**
## Using this in Practice
Trading costs money due to market impact and commissions. As such, the post hoc implementation of a hedge is almost always suboptimal. In that case, you are trading purely to hedge risk. It is preferable to think about your sector and market exposure *throughout the model development process*. Sector and market risk is naturally hedged in a pairs-style strategy; in a cross-sectional strategy, consider de-meaning the alpha vector by the sector average; with an event-driven strategy, consider adding additional alphas so you can find offsetting bets in the same sector. As a last resort, hedge with a well chosen sector ETF.
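For instance, de-meaning an alpha vector by sector average can be sketched as follows (the alpha values are hypothetical; the tickers and sector groupings mirror the examples above):

```python
import pandas as pd

# Hypothetical alpha vector with sector labels
alpha = pd.DataFrame({
    'alpha':  [0.8, 0.2, -0.1, 0.5, -0.3, -0.4],
    'sector': ['fin', 'fin', 'fin', 'energy', 'energy', 'energy'],
}, index=['WFC', 'JPM', 'USB', 'XOM', 'BHI', 'SLB'])

# Subtract each sector's mean alpha so every sector nets to zero
alpha['alpha_demeaned'] = alpha['alpha'] - alpha.groupby('sector')['alpha'].transform('mean')

print(alpha)
print(alpha.groupby('sector')['alpha_demeaned'].sum())  # ~0 per sector
```

A portfolio built proportional to the de-meaned alphas is long and short in equal measure within each sector, which hedges the sector bet by construction.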
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
```
# Imports
import numpy as np
from skmultiflow.trees import HoeffdingTree
from skmultiflow.data.file_stream import FileStream
val_actual_class_labels=[] #Valence Actual class labels
val_predicted_class_labels=[] #Valence Predicted class labels
aro_actual_class_labels =[] #Arousal Actual class labels
aro_predicted_class_labels=[] #Arousal Predicted class labels
#===================================================
# Valence Classification From EEG Stream
#===================================================
stream = FileStream('2021_Valence_Class_emo.csv')
stream.prepare_for_use()
# Setup Hoeffding Tree estimator
ht = HoeffdingTree()
# Setup variables to control loop and track performance
n_samples = 0
correct_cnt = 0
max_samples = 1280
val_act_prdt_class_labels =[]
# Train the estimator with the samples provided by the data stream
while n_samples < max_samples and stream.has_more_samples():
X, y = stream.next_sample()
y_pred = ht.predict(X)
val_actual_class_labels.append(y)
val_predicted_class_labels.append(y_pred)
val_act_prdt_class_labels.append([y,y_pred])
if y[0] == y_pred[0]:
correct_cnt += 1
ht = ht.partial_fit(X, y)
n_samples += 1
# # Display results
print('{} valence samples analyzed.'.format(n_samples))
#===================================================
# Arousal Classification From EEG Stream
#===================================================
stream = FileStream('2021_Arousal_Class_emo.csv')
stream.prepare_for_use()
# Setup Hoeffding Tree estimator
ht = HoeffdingTree()
# Setup variables to control loop and track performance
n_samples = 0
correct_cnt = 0
max_samples = 1280
aro_act_prdt_class_labels =[]
# Train the estimator with the samples provided by the data stream
while n_samples < max_samples and stream.has_more_samples():
X, y = stream.next_sample()
y_pred = ht.predict(X)
aro_actual_class_labels.append(y)
aro_predicted_class_labels.append(y_pred)
aro_act_prdt_class_labels.append([y,y_pred])
if y[0] == y_pred[0]:
correct_cnt += 1
ht = ht.partial_fit(X, y)
n_samples += 1
# # Display results
print('{} arousal samples analyzed.'.format(n_samples))
#Reshaping 3D array to 2D array
#================================
# Valence
#================================
val_act = np.array(val_act_prdt_class_labels)
a,b,c = val_act.shape
newList_val = val_act.reshape(a,-1)
#================================
# Arousal
#================================
aro_act = np.array(aro_act_prdt_class_labels)
a,b,c = aro_act.shape
newList_aro = aro_act.reshape(a,-1)
classifier = 'Hoeffding_Tree'
fname_val = '09_JAN_2020_Valence_ALL_person'+'_' +classifier+'_results.csv'
np.savetxt(fname_val,newList_val, delimiter ="\t", fmt =['%d', '%d'],
header='y_act_discrete_emotion, y_pred_discrete_emotion')
fname_aro = '09_JAN_2020_Arousal_ALL_person'+'_' +classifier+'_results.csv'
np.savetxt(fname_aro, newList_aro, delimiter="\t", fmt=['%d', '%d'],
           header='y_act_discrete_emotion, y_pred_discrete_emotion')
#============================================
# Valence Classification Report
#============================================
from sklearn.metrics import confusion_matrix
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import classification_report
y_true = val_actual_class_labels #Actual class labels
y_pred = val_predicted_class_labels #Predicted Class labels
cm = confusion_matrix(y_true, y_pred) # Confusion Matrix
target_names = ['Low','High'] # Class names
c_report = classification_report(y_true, y_pred, target_names=target_names) #Classification report
acc_score = balanced_accuracy_score(y_true, y_pred) #Balanced accuracy Score calculation
print('Confusion matrix')
print(cm)
# print('Accuracy score', acc_score)
print('Classification Report')
print(c_report)
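# Hedged note on what balanced_accuracy_score computes above: it is the
# unweighted mean of per-class recall, which is why it is preferred over
# plain accuracy on possibly imbalanced Low/High labels. A minimal
# dependency-free equivalent:
def balanced_accuracy(y_true_list, y_pred_list):
    recalls = []
    for c in set(y_true_list):
        tp = sum(1 for t, p in zip(y_true_list, y_pred_list) if t == c and p == c)
        support = sum(1 for t in y_true_list if t == c)
        recalls.append(tp / support)
    return sum(recalls) / len(recalls)

print(balanced_accuracy([0, 0, 0, 1], [0, 0, 1, 1]))  # (2/3 + 1) / 2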
from mlxtend.plotting import plot_confusion_matrix
import matplotlib.pyplot as plt
class_names = target_names
## Plot Confusion matrix Valence
## ================================
fig1, ax1 = plot_confusion_matrix(conf_mat=cm, show_absolute=True,
show_normed=True,
colorbar=True,
class_names=class_names)
plt.figure(1)
# plt.show()
fname = 'Hoeffding Tree valence.jpeg'
plt.savefig(fname, bbox_inches='tight')
#============================================
# Arousal Classification Report
#============================================
from sklearn.metrics import confusion_matrix
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import classification_report
y_true = aro_actual_class_labels #Actual class labels
y_pred = aro_predicted_class_labels #Predicted Class labels
cm = confusion_matrix(y_true, y_pred) # Confusion Matrix
target_names = ['Low','High'] # Class names
c_report = classification_report(y_true, y_pred, target_names=target_names) #Classification report
acc_score = balanced_accuracy_score(y_true, y_pred) #Balanced accuracy Score calculation
print('Confusion matrix')
print(cm)
# print('Accuracy score', acc_score)
print('Classification Report')
print(c_report)
from mlxtend.plotting import plot_confusion_matrix
import matplotlib.pyplot as plt
class_names = target_names
## Plot Confusion matrix Arousal
## ================================
fig1, ax1 = plot_confusion_matrix(conf_mat=cm, show_absolute=True,
show_normed=True,
colorbar=True,
class_names=class_names)
plt.figure(1)
# plt.show()
fname = 'Hoeffding Tree arousal.jpeg'
plt.savefig(fname, bbox_inches='tight')
```
| github_jupyter |
```
import sklearn
import requests
import json
import spotipy#authentication
import spotipy.util as util#authentication
from spotipy.oauth2 import SpotifyClientCredentials#authentication
# Make sure to fill in your spotify client_secret information
cid = "049ade7215e54c63a2b628f3784dc407"
secret = "171ef0fc408745e88dd5b99b83291146"
redirect_uri = 'http://google.com/'
username = 'xxx'
#End points
sp_tracks = 'https://api.spotify.com/v1/me/tracks?limit=50'
sp_profile = 'https://api.spotify.com/v1/me'
scope = 'user-library-read'
token = util.prompt_for_user_token(username, scope, client_id=cid, client_secret=secret, redirect_uri=redirect_uri)
if token:
sp = spotipy.Spotify(auth=token)
else:
print("Can't get token for", username)
def sp_req(next_url):
try:
resp = requests.get(url=next_url,
headers={'Authorization': 'Bearer ' + token})
resp.raise_for_status()
except requests.exceptions.HTTPError as err:
print(err)
response = resp.json()
return response
class SSP:
def __init__(self):
self.all_song_info = {}
def get_user_profile():
response = sp_req(sp_profile)
key_dict = {
'uri': response['uri']
}
return key_dict
def get_user_tracks():
def get_tracks(next_url):
response = sp_req(next_url)
track_limit = (response['limit'])-1
for x in range(track_limit):
try:
track_uri = response['items'][x]['track']['uri'].split(':')
track_name = response['items'][x]['track']['name']
tracks.append({'track_uri':track_uri[2],'track_name':track_name})
except IndexError as error:
continue
try:
if (next_url is not None):
get_tracks(response['next'])
else:
print('hi')
except:
return
tracks = []
b = get_tracks(sp_tracks)
return tracks
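# Hedged alternative (illustration only) to the recursive paging in
# get_tracks above: Spotify paging objects carry a 'next' URL that is None
# on the last page, so the same traversal can be written iteratively and
# stop without relying on an exception. 'fetch' is a stand-in for sp_req,
# not the live API.
def collect_pages(fetch, first_url):
    items, url = [], first_url
    while url is not None:
        page = fetch(url)
        items.extend(page['items'])
        url = page['next']
    return items

_demo_pages = {
    'page1': {'items': [1, 2], 'next': 'page2'},
    'page2': {'items': [3], 'next': None},
}
print(collect_pages(_demo_pages.get, 'page1'))  # [1, 2, 3]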
a = SSP
b = a.get_user_profile()
b
import pandas as pd
c = a.get_user_tracks()
user_tracks = pd.DataFrame(c)
user_tracks
import boto3
from s3 import get_file
def data(data):
df = pd.read_csv(data,sep='|')
return df
s3 = boto3.resource('s3')
bucket = 's3ssp'
db_tracks = data(get_file(s3,bucket,download_file='NLP_Data/master_lyrics_audio_features.csv',rename_file = 'master_train_playlist.csv'))
dataset = user_tracks.merge(db_tracks,left_on ='track_uri',right_on='track_uri')
dataset
class Model:
def LDA(n_components):
return LDA
def get_playlists(lda_model):
# Create Document - Topic Matrix
# column names
topicnames = ["Topic" + str(i) for i in range(lda_model.n_components)]
# index names
docnames = [dataset['track_uri'].iloc[i] for i in range(len(dataset['lyrics']))]
# Make the pandas dataframe
df_document_topic = pd.DataFrame(np.array(lda_output), columns=topicnames, index=docnames)
# Get dominant topic for each document
dominant_topic = np.argmax(df_document_topic.values, axis=1)
df_document_topic['dominant_topic'] = dominant_topic
return df_document_topic
def get_playlist_elements(df_document_topic):
df_ssp = []
for col in df_document_topic.columns:
if col != 'dominant_topic':
topic_length= df_document_topic[df_document_topic[col]>=.60].nlargest(n=20, columns=col)
chosen_topic = topic_length[col]
if len(chosen_topic)>=1:
for track_uri in chosen_topic.index:
df_ssp.append({'playlist':col,'track_uri':track_uri})
topic_groupings = pd.DataFrame(df_ssp)
playlists = topic_groupings.merge(dataset,on='track_uri',how='left')
playlist_agg = playlists.groupby('playlist').median()
return [playlists,playlist_agg]
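# Hedged illustration of the two selection steps inside Model above, on toy
# data: argmax across topic columns gives the dominant topic per track, and
# the >= 0.60 threshold plus nlargest caps each candidate playlist.
import numpy as np
import pandas as pd

demo = pd.DataFrame({'Topic0': [0.9, 0.2, 0.7, 0.65],
                     'Topic1': [0.1, 0.8, 0.3, 0.35]},
                    index=['a', 'b', 'c', 'd'])
dominant = np.argmax(demo.values, axis=1)
print(dominant.tolist())  # [0, 1, 0, 0]
chosen = demo[demo['Topic0'] >= 0.60].nlargest(2, 'Topic0')
print(list(chosen.index))  # ['a', 'c']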
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from yellowbrick.text import FreqDistVisualizer
import numpy as np
LDA = LatentDirichletAllocation(n_components=30, learning_method="batch",
max_iter=2, random_state=0)
vectorizer = CountVectorizer(analyzer='word',
min_df=.2, max_df=.5, # minimum reqd occurences of a word
stop_words='english', # remove stop words
lowercase=True, # convert all words to lowercase
token_pattern='[a-zA-Z0-9]{3,}', # num chars > 3
max_features=500) # max number of uniq words
data_vectorized = vectorizer.fit_transform(dataset['lyrics'].values.astype('U'))
LDA.fit(data_vectorized)
lda_output = LDA.transform(data_vectorized)
df_playlists = Model.get_playlists(LDA)
df_send = Model.get_playlist_elements(df_playlists)
df_base = df_send[0]
df_predict = df_send[1].reset_index()
df_predict_class = df_send[1].reset_index()
df_predict_class
```
## Classification
```
train_data = data(get_file(s3,bucket,download_file='Analysis_Data/master_train_playlist.csv',
rename_file = 'master_train_playlist.csv'))
test_data = data(get_file(s3,bucket,download_file='Analysis_Data/test_ssp.csv',rename_file = 'test.csv'))
df_predict_class = df_predict_class.reindex(sorted(df_predict_class.columns), axis=1)
df_train = train_data.reindex(sorted(train_data.columns), axis=1)
df_test = test_data.reindex(sorted(test_data.columns), axis=1)
df_predict_class
df_predict_class = df_predict_class.drop(columns=['playlist','valence','danceability',
'energy','acousticness',
])
df_train = df_train.drop(columns=['playlist','valence','danceability',
'energy','acousticness',
])
df_test = df_test.drop(columns=['playlist','valence','danceability',
'energy','acousticness',
])
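# Hedged note on the reindex(sorted(...)) calls above: sorting the columns
# alphabetically guarantees the train, test, and prediction frames present
# the same features in the same order before fitting. Toy frame:
import pandas as pd
demo_df = pd.DataFrame([[1, 2, 3]], columns=['energy', 'acousticness', 'valence'])
demo_df = demo_df.reindex(sorted(demo_df.columns), axis=1)
print(list(demo_df.columns))  # ['acousticness', 'energy', 'valence']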
def model(df_train,df_test,df_ssp):
#Structure
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split as tts
from sklearn.model_selection import cross_val_score as cvs
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
#Reports
from yellowbrick.classifier import confusion_matrix
#Metrics
from sklearn.metrics import accuracy_score, precision_score, recall_score,f1_score
X = df_train[[col for col in df_train.columns if col != 'target']]
y = df_train['target']
X_train, X_test, y_train, y_test = tts(X,y, test_size=0.2)
models = [
#Standard Scaler,QuantileTransformer random_state=0
Pipeline([
('std',StandardScaler()),
('reg',LogisticRegression())
])
]
a = []
for model in models:
model.fit(X_train, y_train)
#y_pred = model.predict(df_agg_ssp)
_ = confusion_matrix(model, X_test, y_test,is_fitted=True)
y_pred = model.predict(df_predict_class)
print({'Model':model[1],'Transformer':model[0],'Model Score':cvs(model,X_test,y_test)[3],
#'F1 Score':f1_score(X_test,y_test),'Precision Score':precision_score(X_test,y_test),
#'Recall Score':recall_score(X_test,y_test)
})
return y_pred
whoopy = model(df_train,df_test,df_predict_class)
whoopy
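# Hedged, minimal version of the scale-then-classify pipeline inside
# model() above, fit on a tiny synthetic dataset rather than the Spotify
# audio features:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

_X = [[0.0], [1.0], [10.0], [11.0]]
_y = [0, 0, 1, 1]
_clf = Pipeline([('std', StandardScaler()), ('reg', LogisticRegression())])
_clf.fit(_X, _y)
print(_clf.predict([[0.5], [10.5]]))  # [0 1]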
import random
chosen_playlist = random.choice([index for index,a in enumerate(whoopy) if a == 1])
playlist_value = df_predict.iloc[chosen_playlist]
chosen_topic = playlist_value['playlist']
#chosen_topic = df_predict[df_predict['playlist']==playlist_value['playlist']]
ssp_deliverable = df_base[df_base['playlist']==chosen_topic]
ssp = ssp_deliverable.sort_values(by='valence',ascending=False)
ssp.plot.line(x='valence',y='energy')
import requests
import json
import pandas as pd
import spotipy#authentication
import spotipy.util as util#authentication
from spotipy.oauth2 import SpotifyClientCredentials#authentication
cid = '049ade7215e54c63a2b628f3784dc407'
secret = '171ef0fc408745e88dd5b99b83291146'
redirect_uri = 'http://google.com/'
username = 'name'
#Authentication
scope = 'playlist-modify-private'
token_playlist = util.prompt_for_user_token(username, scope, client_id=cid, client_secret=secret, redirect_uri=redirect_uri)
if token_playlist:
sp_playlist = spotipy.Spotify(auth=token_playlist)
else:
print("Can't get token for", username)
#Authentication
scope = 'user-read-private'
token_user = util.prompt_for_user_token(username, scope, client_id=cid, client_secret=secret, redirect_uri=redirect_uri)
if token_user:
sp_user = spotipy.Spotify(auth=token_user)
else:
print("Can't get token for", username)
def get_user_id(url):
try:
resp = requests.get(url,headers={'Authorization': 'Bearer ' + token_user},
#data={"name": "SSP"}
)
resp.raise_for_status()
except requests.exceptions.HTTPError as err:
print(err)
response = resp.json()
userid = response['id']
return userid
user_id = get_user_id('https://api.spotify.com/v1/me')
identification = user_id
identification
playlist = sp_playlist.user_playlist_create(identification,'Adam_SSP_Ideal', public=False, description="Ideal SSP")
sp_playlist.user_playlist_add_tracks(identification,playlist['id'],ssp['track_uri'], position=None)
```
| github_jupyter |
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# In this example we replicate the following workflow
# * Define temporal and spatial characteristics of a given event
# e.g. Hurricane Florence which was a very recent major hurricane
# to impact Southeastern US
# * Search for all available (whole orbit) granules for that event and
# download them from PO.DAAC Drive to the local machine
# * View files in the Panoply data viewer
# more information on Panoply can be found at
# https://www.giss.nasa.gov/tools/panoply/
# * Streamline the above data acquisition task using PO.DAAC's
# Level 2 Subsetting software to download a regional data subset for the
# event at the given space and time.
# * Finally, view the subsetted granule again in Panoply
# In this example we have chosen the MetOp-A ASCAT Level 2 Ocean
# Surface Wind Vectors Optimized for Coastal Ocean collection as our
# subject matter.
# This dataset contains operational near-real-time Level 2 coastal
# ocean surface wind vector retrievals from the Advanced Scatterometer
# (ASCAT) on MetOp-A at 12.5 km sampling resolution
# (note: the effective resolution is 25 km). It is a product of
# the European Organization for the Exploitation of Meteorological
# Satellites (EUMETSAT) Ocean and Sea Ice Satellite Application
# Facility (OSI SAF) provided through the Royal Netherlands
# Meteorological Institute (KNMI)
# More information on the MetOp mission specification can
# be found at https://podaac.jpl.nasa.gov/MetOp
from IPython.display import Image
Image(filename='ASCAT_geometry.jpg')
#First lets import the libraries we require
from pprint import pprint
from podaac import podaac as podaac
from podaac import podaac_utils as utils
from podaac import drive as drive
#Then we can create instances of the classes we will use
p = podaac.Podaac()
u = utils.PodaacUtils()
d = drive.Drive('podaac.ini', None, None)
# Let's discover PO.DAAC Wind data relating to Hurricane Florence, which
# was a very recent major hurricane to impact Southeastern US
# https://en.wikipedia.org/wiki/Hurricane_Florence
# Using specific parameters to confine the discovery space, we opt for the full
# metadata record in atom format
ds_result = p.dataset_search(keyword='ASCAT',
start_time='2018-09-12T00:00:01Z',
end_time='2018-09-14T11:59:59Z',
short_name='ASCATA-L2-Coastal',
process_level='2',
bbox='-81,28,-67,40',
pretty='True',
_format='atom',
full='True')
print(ds_result)
#Because we requested the Full response, we can actually extract the
# PO.DAAC Drive URL for all granules contained within this dataset.
search_str = 'https://podaac-tools.jpl.nasa.gov/drive/files/'
drive_path = [ str(i) for i in ds_result.strip().split() if search_str in i ][0]
print(drive_path[5:])
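# Hedged illustration of the token-filter trick above: split the atom
# response on whitespace and keep the first token containing the Drive
# prefix. Synthetic response string, not a live PO.DAAC reply:
_search_str = 'https://podaac-tools.jpl.nasa.gov/drive/files/'
_fake_resp = 'foo ' + _search_str + 'allData/ascat bar'
_path = [t for t in _fake_resp.strip().split() if _search_str in t][0]
print(_path)  # https://podaac-tools.jpl.nasa.gov/drive/files/allData/ascat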
#Next, lets search for Granules of interest relating to the above discovery operation
#Lets execute a search for specific granules from the following dataset
# MetOp-A ASCAT Level 2 Ocean Surface Wind Vectors Optimized for Coastal Ocean
# https://podaac.jpl.nasa.gov/dataset/ASCATA-L2-Coastal
# ...based upon temporal (start and end) and spatial constraints.
result = p.granule_search(dataset_id='PODAAC-ASOP2-12C01',
start_time='2018-09-12T00:00:01Z',
end_time='2018-09-14T11:59:59Z',
bbox='-81,28,-67,40',
sort_by='timeAsc',
items_per_page='400',
_format='atom')
#print(result)
searchStr = 'totalResults'
numResultsStr = [ str(i) for i in result.strip().split() if searchStr in i ]
print(numResultsStr)
#Here's the actual granule names
pprint(u.mine_granules_from_granule_search(granule_search_response=str(result)))
#Now we simply need to reproduce the Drive URL's for the above granules.
granules = d.mine_drive_urls_from_granule_search(granule_search_response=(str(result)))
pprint(granules)
#Let's retrieve these granules from PO.DAAC Drive.
#Note that the download_granules function actually decompresses
#and removes the compressed archive files locally for us.
d.download_granules(granule_collection=granules, path='./dummy')
#Let's merge the files together
import glob, os, subprocess, shlex
nc_files = []
for file in glob.glob("*.nc"):
nc_files.append(os.path.abspath(file))
str_nc_files = ' '.join(nc_files)
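# Hedged sketch of the file-gathering step above, using a temporary
# directory instead of the downloaded granules:
import glob, os, tempfile
_d = tempfile.mkdtemp()
for _name in ('a.nc', 'b.nc', 'c.txt'):
    open(os.path.join(_d, _name), 'w').close()
_nc = sorted(os.path.basename(p) for p in glob.glob(os.path.join(_d, '*.nc')))
print(_nc)  # ['a.nc', 'b.nc']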
# Let's open the granules within Panoply - https://www.giss.nasa.gov/tools/panoply/
# which is a netCDF, HDF and GRIB Data Viewer
# developed by NASA's Goddard Institute for Space Studies
args = shlex.split('/Applications/Panoply.app/Contents/MacOS/Panoply ' + str_nc_files)
subprocess.Popen(args)
#Finally, let's subset the granule using L2SS
#and download only the area of interest.
from podaac import l2ss as l2ss
l = l2ss.L2SS()
granule_id = 'ascat_20180913_134800_metopa_61756_eps_o_coa_2401_ovw.l2.nc'
query = {
"email": "your_email@here.com",
"query":
[
{
"compact": "true",
"datasetId": "PODAAC-ASOP2-12C01",
"bbox": "-81,28,-67,40",
"variables": ["lat", "lon", "time", "wind_speed"],
"granuleIds": ["ascat_20180913_134800_metopa_61756_eps_o_coa_2401_ovw.l2.nc"]
}
]
}
l.granule_download(query_string=query)
ss_granule = os.path.abspath('subsetted-' + granule_id)
print(ss_granule)
# Finally let's make a call to Panoply to open the subsetted granule.
args = shlex.split('/Applications/Panoply.app/Contents/MacOS/Panoply ' + ss_granule)
subprocess.Popen(args)
# A final comment and some food for thought, if you were
# to write the above python script from scratch, you would
# have to write around 400 or so lines of code.
# Less the print statements, we've achieved it above in less
# than 30 lines of code!
# What is more, the code we have used has been tested by users,
# as well as by our rich unit testing suite. Every function
# in Podaacpy has an accompanying test!
# Please report any issues with the above notebook at
# https://github.com/nasa/podaacpy/issues
```
| github_jupyter |
# Table of Contents
* [1) Large Margin Classification](#1%29-Large-Margin-Classification)
* [1) Optimization Objective](#1%29-Optimization-Objective)
* [2) Large Margin Intuition](#2%29-Large-Margin-Intuition)
* [3) Mathematics Behind Large Margin Classification](#3%29-Mathematics-Behind-Large-Margin-Classification)
* [2) Kernels](#2%29-Kernels)
* [1) Kernels I](#1%29-Kernels-I)
* [2) Kernels II](#2%29-Kernels-II)
* [3) SVMs in Practice](#3%29-SVMs-in-Practice)
* [1) Using an SVM](#1%29-Using-an-SVM)
# 1) Large Margin Classification
## 1) Optimization Objective
<img src="images/lec12_pic01.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sHfVT/optimization-objective) 1:00*
<!--TEASER_END-->
<img src="images/lec12_pic02.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sHfVT/optimization-objective) 3:00*
<!--TEASER_END-->
<img src="images/lec12_pic03.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sHfVT/optimization-objective) 9:00*
<!--TEASER_END-->
<img src="images/lec12_pic04.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sHfVT/optimization-objective) 13:54*
<!--TEASER_END-->
<img src="images/lec12_pic05.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sHfVT/optimization-objective) 13:54*
<!--TEASER_END-->
## 2) Large Margin Intuition
<img src="images/lec12_pic06.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/wrjaS/large-margin-intuition) 1:00*
<!--TEASER_END-->
<img src="images/lec12_pic07.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/wrjaS/large-margin-intuition) 2:35*
<!--TEASER_END-->
<img src="images/lec12_pic08.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/wrjaS/large-margin-intuition) 4:46*
<!--TEASER_END-->
<img src="images/lec12_pic09.png">
<img src="images/lec12_pic10.png">
<img src="images/lec12_pic11.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/wrjaS/large-margin-intuition) 7:12*
<!--TEASER_END-->
<img src="images/lec12_pic12.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/wrjaS/large-margin-intuition) 7:20*
<!--TEASER_END-->
## 3) Mathematics Behind Large Margin Classification
<img src="images/lec12_pic13.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/3eNnh/mathematics-behind-large-margin-classification) 1:00*
<!--TEASER_END-->
<img src="images/lec12_pic14.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/3eNnh/mathematics-behind-large-margin-classification) 6:00*
<!--TEASER_END-->
<img src="images/lec12_pic15.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/3eNnh/mathematics-behind-large-margin-classification) 11:25*
<!--TEASER_END-->
**Discussion**
- Why Decision Boundary is 90 degree to theta vector?
https://www.coursera.org/learn/machine-learning/lecture/3eNnh/mathematics-behind-large-margin-classification/discussions/wEa6evufEeS16yIACyoj1Q
<img src="images/lec12_pic16.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/3eNnh/mathematics-behind-large-margin-classification) 19:26*
<!--TEASER_END-->
**Answer**
- https://www.coursera.org/learn/machine-learning/lecture/3eNnh/mathematics-behind-large-margin-classification/discussions/KCrp1fTgEeSkXCIAC4tJTg
We have: $p^{(1)} = 2$, $p^{(2)} = -2$, so $||p^{(1)}|| = ||p^{(2)}|| = 2$
- Consider the constraints: $||\theta||\cdot p \geq 1$ and $||\theta||\cdot p \leq -1$.
In our case we have $||p||=2$. So to satisfy both constraints it is necessary for $||\theta||$ to equal $1/2$. This is true for $\theta = [1/2,\ 0]$.
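The argument can be checked numerically. The sketch below uses hypothetical sample points chosen so that $p^{(1)} = 2$ and $p^{(2)} = -2$; with $\theta = [1/2, 0]$ the products $||\theta|| \cdot p$ then hit the constraint boundaries exactly:

```
import math

theta = (0.5, 0.0)                      # theta = [1/2, 0]
norm_theta = math.hypot(*theta)         # ||theta|| = 1/2
x1, x2 = (2.0, 2.0), (-2.0, 2.0)        # chosen so p = 2 and p = -2
p1 = (x1[0]*theta[0] + x1[1]*theta[1]) / norm_theta
p2 = (x2[0]*theta[0] + x2[1]*theta[1]) / norm_theta
print(norm_theta * p1, norm_theta * p2)  # 1.0 -1.0
```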
# 2) Kernels
## 1) Kernels I
<img src="images/lec12_pic17.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/YOMHn/kernels-i) 0:30*
<!--TEASER_END-->
<img src="images/lec12_pic18.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/YOMHn/kernels-i) 2:30*
<!--TEASER_END-->
<img src="images/lec12_pic19.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/YOMHn/kernels-i) 6:00*
<!--TEASER_END-->
<img src="images/lec12_pic20.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/YOMHn/kernels-i) 8:30*
<!--TEASER_END-->
<img src="images/lec12_pic21.png">
<img src="images/lec12_pic22.png">
<img src="images/lec12_pic23.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/YOMHn/kernels-i) 11:06*
<!--TEASER_END-->
<img src="images/lec12_pic24.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/YOMHn/kernels-i) 11:09*
<!--TEASER_END-->
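The similarity feature from this lecture can be sketched directly (a minimal illustration, not course-provided code): $f = \exp(-||x - l||^2 / (2\sigma^2))$, which equals 1 when $x$ sits on the landmark $l$ and decays toward 0 far from it.

```
import math

def gaussian_kernel(x, l, sigma):
    # f = exp(-||x - l||^2 / (2 sigma^2)): 1 at the landmark, -> 0 far away
    sq_dist = sum((xi - li) ** 2 for xi, li in zip(x, l))
    return math.exp(-sq_dist / (2 * sigma ** 2))

print(gaussian_kernel([3, 5], [3, 5], 1.0))         # 1.0 at the landmark
print(gaussian_kernel([0, 0], [3, 4], 1.0) < 1e-5)  # True far from it
```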
## 2) Kernels II
<img src="images/lec12_pic25.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/hxdcH/kernels-ii) 0:30*
<!--TEASER_END-->
<img src="images/lec12_pic26.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/hxdcH/kernels-ii) 2:15*
<!--TEASER_END-->
<img src="images/lec12_pic27.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/hxdcH/kernels-ii) 5:30*
<!--TEASER_END-->
<img src="images/lec12_pic28.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/hxdcH/kernels-ii) 12:45*
<!--TEASER_END-->
<img src="images/lec12_pic29.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/hxdcH/kernels-ii) 15:31*
<!--TEASER_END-->
# 3) SVMs in Practice
## 1) Using an SVM
<img src="images/lec12_pic30.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sKQoJ/using-an-svm) 0:20*
<!--TEASER_END-->
<img src="images/lec12_pic31.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sKQoJ/using-an-svm) 5:20*
<!--TEASER_END-->
<img src="images/lec12_pic32.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sKQoJ/using-an-svm) 8:30*
<!--TEASER_END-->
<img src="images/lec12_pic33.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sKQoJ/using-an-svm) 12:50*
<!--TEASER_END-->
<img src="images/lec12_pic34.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sKQoJ/using-an-svm) 12:52*
<!--TEASER_END-->
<img src="images/lec12_pic35.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/machine-learning/lecture/sKQoJ/using-an-svm) 14:30*
<!--TEASER_END-->
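As the lecture advises, in practice one calls a well-tested library rather than writing the SVM solver by hand. A minimal scikit-learn sketch with an RBF kernel on toy data (illustration only, not course-provided code):

```
from sklearn.svm import SVC

X = [[0, 0], [0, 1], [3, 3], [3, 4]]
y = [0, 0, 1, 1]
clf = SVC(kernel='rbf', C=1.0, gamma='scale')
clf.fit(X, y)
print(clf.predict([[0, 0.5], [3, 3.5]]))  # [0 1]
```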
| github_jupyter |
```
from google.colab import drive
drive.mount("/content/drive")
!unzip '/content/drive/My Drive/Colab_Dataset/Dataset2.zip'
pip install np_utils
import matplotlib.pyplot as plt
import tensorflow as tf
import PIL
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import Input, InputLayer, Dense, Flatten, Conv2D,Activation, BatchNormalization, Conv2D, Dropout,GlobalAveragePooling2D
from tensorflow.keras.layers import *
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from keras.utils.np_utils import to_categorical
import pandas as pd
import numpy as np
import os, cv2
import random
import scipy
import torch
from torch.utils.data import Dataset, DataLoader
from torch import nn
import seaborn as sns
import math
color = sns.color_palette()
image_name = []
image_label = []
for file in os.listdir('COVID'):
filename, fileextension = os.path.splitext(file)
if(fileextension == '.png'):
file_path = 'COVID' + '/' + file
image_name.append(file_path)
image_label.append(1)
for file in os.listdir('non-COVID'):
filename, fileextension = os.path.splitext(file)
if(fileextension == '.png'):
file_path = 'non-COVID' + '/' + file
image_name.append(file_path)
image_label.append(0)
WIDTH = 224
HEIGHT = 224
def process_image():
#Return two array. One of resized images and other of array of labels
x = [] # array of images
y = [] # array of labels
for i in range(0,len(image_name)):
#Read and resize image
full_size_image = cv2.imread(image_name[i])
x.append(cv2.resize(full_size_image,(WIDTH, HEIGHT),interpolation=cv2.INTER_CUBIC));
# Labels
y.append(image_label[i])
return x,y
x,y = process_image()
x = np.asarray(x)
y = np.asarray(y)
y = to_categorical(y, 2)
print('Shape of x: ',x.shape, ' Shape of y: ', y.shape)
print('Dimension of x: ', x.ndim, ' Dimension of y: ', y.ndim)
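# Hedged note on to_categorical above: for integer labels {0, 1} it is
# equivalent to indexing a 2x2 identity matrix, e.g.:
import numpy as np
_labels = np.array([1, 0, 1])
_one_hot = np.eye(2)[_labels]
print(_one_hot.tolist())  # [[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]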
X_train, X_test, y_train, y_test = train_test_split(x,y, test_size = 0.2, random_state = 42)
print('Shape of X_train: ',X_train.shape, ' Shape of y_train: ', y_train.shape)
print('Shape of X_test: ',X_test.shape, ' Shape of y_test: ', y_test.shape)
#FEEDING DATA INTO THE MODEL
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale = 1.0/255.0,
#zoom_range = 0.2,
horizontal_flip = True,
fill_mode="nearest"
)
test_datagen = ImageDataGenerator(
rescale = 1.0/255.0)
BATCH_SIZE = 32
train_generator = train_datagen.flow(X_train,y_train, batch_size = BATCH_SIZE)
test_generator = test_datagen.flow(X_test,y_test, batch_size = BATCH_SIZE)
image_width=224
image_height=224
no_of_channels=3
input_shape=(image_width,image_height,no_of_channels)
#VGG-16 MODEL NO. 1
#from tensorflow.keras.applications import VGG16
tmodel_base = VGG16(input_shape = input_shape,
include_top = False,
weights = 'imagenet')
for layer in tmodel_base.layers:
layer.trainable = False
#Getting desired layer output
last_layer = tmodel_base.get_layer('block5_pool')
last = last_layer.output
x = Flatten()(last)
x = Dense(1028, activation = 'relu')(x)
x = Dropout(rate = 0.15)(x)
x = Dense(512, activation = 'relu')(x)
x = Dropout(rate = 0.2)(x)
x = Dense(2, activation = 'softmax')(x)
# Modification of pretrained mode
#Compiling model
model1 = Model(inputs = tmodel_base.input, outputs = x, name = 'Predict')
opt1 = Adam(lr=5e-5, beta_1=0.9, beta_2=0.999)
#opt2 = RMSprop(learning_rate = 0.001)
model1.compile(optimizer = opt1 , loss = 'categorical_crossentropy', metrics = ['accuracy'])
model1.summary()
history1 = model1.fit_generator(train_generator, epochs = 50, validation_data = test_generator) #, callbacks=[lrate])
# "Accuracy"
plt.plot(history1.history['accuracy'])
plt.plot(history1.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# "Loss"
plt.plot(history1.history['loss'])
plt.plot(history1.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
accu = history1.history['accuracy']
print(max(accu), "at epoch number", accu.index(max(accu)) + 1)
model1.evaluate(test_generator)
#Xception MODEL NO. 1
#from tensorflow.keras.applications import Xception
tmodel_base = Xception(input_shape = input_shape,
include_top = False,
weights = 'imagenet')
for layer in tmodel_base.layers:
layer.trainable = False
#Getting desired layer output
# Modification of pretrained model
last_layer = tmodel_base.get_layer('block14_sepconv2_act')
last_output = last_layer.output
#x = GlobalMaxPooling2D()(last_output)
x = MaxPooling2D(strides=(2,2))(last_output)
x = Flatten()(x)
#x = Dense(1024,activation='relu')(x)
#x = Dropout(0.15)(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.15)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.15)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.25)(x)
x = layers.Dense(2, activation='softmax')(x)
#Compiling model
model2 = Model(inputs = tmodel_base.input, outputs = x, name = 'Our_Xception')
model2.summary()
opt1 = Adam(lr=5e-5, beta_1=0.85, beta_2=0.999)
#opt2 = RMSprop(learning_rate = 0.001)
model2.compile(optimizer = opt1 , loss = 'categorical_crossentropy', metrics = ['accuracy'])
history2 = model2.fit_generator(train_generator, epochs=50, validation_data = test_generator)
# "Accuracy"
plt.plot(history2.history['accuracy'])
plt.plot(history2.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# "Loss"
plt.plot(history2.history['loss'])
plt.plot(history2.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
model2.evaluate(test_generator)
accu = history2.history['accuracy']
print(max(accu), "at epoch number", accu.index(max(accu)) + 1)
#ResNet50 MODEL NO. 3
from tensorflow.keras.applications.resnet50 import ResNet50
num_classes = 2
tmodel_base = ResNet50(input_shape = input_shape,
include_top = False,
weights = 'imagenet')
for layer in tmodel_base.layers:
layer.trainable = False
#last_layer = tmodel_base.get_layer('flatten_13')
last = tmodel_base.output
x = Conv2D(1024,(2,2),strides=(1,1))(last)
#x = layers.GlobalMaxPooling2D()(last)
x = Flatten()(x)
x = Dense(512, activation = 'relu')(x)
x = Dropout(rate = 0.15)(x)
x = Dense(256, activation = 'relu')(x)
x = Dropout(rate = 0.25)(x)
x = Dense(2, activation = 'softmax')(x)
#Compiling model
model3 = Model(inputs = tmodel_base.input, outputs = x, name = 'Predict')
model3.summary()
opt1 = Adam(lr=1e-4, beta_1=0.89, beta_2=0.999)
opt2 = RMSprop(learning_rate = 0.001)
model3.compile(optimizer = opt1 , loss = 'categorical_crossentropy', metrics = ['accuracy'])
history3 = model3.fit_generator(train_generator, epochs=50, validation_data = test_generator)
# "Accuracy"
plt.plot(history3.history['accuracy'])
plt.plot(history3.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# "Loss"
plt.plot(history3.history['loss'])
plt.plot(history3.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
model3.evaluate_generator(test_generator)
#InceptionV3 MODEL NO. 4
from tensorflow.keras.applications.inception_v3 import InceptionV3
num_classes = 2
tmodel_base = InceptionV3(input_shape = input_shape,
include_top = False,
weights = 'imagenet')
for layer in tmodel_base.layers:
    layer.trainable = False
last = tmodel_base.output
#x = Conv2D(600, (1,1), activation = 'relu')(last)
x = Flatten()(last)
#x = Dense(1024, activation = 'relu',kernel_regularizer=tf.keras.regularizers.l1(0.01))(x)
#x = Dropout(rate = 0.1)(x)
x = Dense(512, activation = 'relu')(x)
x = Dropout(rate = 0.1)(x)
x = Dense(256, activation = 'relu')(x)
x = Dropout(rate = 0.25)(x)
x = Dense(2, activation = 'softmax')(x)
#Compiling model
model4 = Model(inputs = tmodel_base.input, outputs = x, name = 'Predict')
model4.summary()
opt1 = Adam(learning_rate=5e-6, beta_1=0.9, beta_2=0.999)
opt2 = RMSprop(learning_rate = 0.001)
model4.compile(optimizer = opt1 , loss = 'categorical_crossentropy', metrics = ['accuracy'])
history4 = model4.fit(train_generator, epochs=50, validation_data = test_generator)
# "Accuracy"
plt.plot(history4.history['accuracy'])
plt.plot(history4.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# "Loss"
plt.plot(history4.history['loss'])
plt.plot(history4.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
model4.evaluate(test_generator)
# Save the pre-trained models
model1.save("/content/drive/My Drive/Colab Notebooks/our_vgg16.h5")
model2.save("/content/drive/My Drive/Colab Notebooks/our_xception.h5")
model3.save("/content/drive/My Drive/Colab Notebooks/our_resnet50.h5")
model4.save("/content/drive/My Drive/Colab Notebooks/our_inceptionv3.h5")
```
| github_jupyter |
## Basic usage of matplotlib
Matplotlib is the module of choice whenever you want to make a nice plot.
```
# the following two lines are required inside a python script to be run on binder. They are not needed inside the notebook.
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt # this is the only line required to make plots with matplotlib
# this line allows you to see the plot inside the notebook
%matplotlib inline
# This is a simple example to make a scatter plot
plt.figure() # Create an empty figure
x = [3,4,5]
y = [-1,2,2]
plt.scatter(x,y)
# This is another plot where we save the figure at the end as png file.
x = np.linspace(0,5,10)
y = x**2
plt.figure() # creates an empty figure
plt.scatter(x,y) # scatter plot
plt.xlabel("x")
plt.ylabel("y")
plt.title("Simple scatter plot")
plt.savefig("thefig.png") # Saves the figure
plt.figure()
x = np.linspace(-5.0,5.0,100)
y = 1./(1.+x**2)
plt.plot(x,y) # This time a line connect the points
plt.title(r"$\lambda$") # you can use LaTeX
# After initializing plt.figure() you can make multiple plots
plt.figure()
plt.plot(range(10),2*np.arange(10))
plt.plot(range(10),3*np.arange(10))
plt.scatter(range(10),3*np.arange(10))
# Now let's make some simple figures
# In this case: circles
theta=np.linspace(0, 2*np.pi,50) # 50 points between 0 and 2pi
plt.figure()
plt.axis("equal")# this gives the same apparent size to the x and y axis
plt.axis("off") # this removes the axis
for i in range(5):
    # choose the circle's center
    x = 2*np.random.random()-1 # random value for the center in x
    y = 2*np.random.random()-1 # random value for the center in y
    r = np.random.random() # random value for the radius
    plt.plot(r * np.cos(theta) + x, r * np.sin(theta) + y)
```
## Subplots
You can produce multiple plots inside the same figure.
```
plt.figure(figsize=(10,5))
a = np.linspace(0,2*np.pi,100)
plt.subplot(2,2,1)
plt.plot(a,np.sin(a))
plt.title("sin")
plt.subplot(2,2,2)
plt.plot(a,np.cos(a))
plt.title(u"cos")
plt.subplot(2,2,3)
plt.plot(a,np.tan(a))
plt.ylim(-2,2)
plt.title(u"tan")
plt.subplot(2,2,4)
plt.plot(a,np.cos(a)**2)
plt.title(r"$\cos^2$")
plt.subplots_adjust(hspace=.5) # adjusts the vertical space between the subplots
# this an almost empty figure, only plotting text, so that you can see what is the ordering when you call plt.subplot(5,5,k)
plt.figure(figsize=(10,10))
k = 1
for n in range(1,6):
    for d in range(1,6):
        plt.subplot(5,5,k)
        plt.plot()
        plt.text(0,0,"k="+str(k))
        plt.axis("off") # this removes the axes
        plt.axis("equal")
        k = k + 1
# Drawing some roses
# http://en.wikipedia.org/wiki/Rose_%28mathematics%29
plt.figure(figsize=(10,10))
k = 1
for n in range(1,6):
    for d in range(1,6):
        plt.subplot(5,5,k)
        kah = n/d
        a = np.linspace(0,2*d*np.pi,200)
        r = np.sin(kah * a)
        x = r * np.cos(a)
        y = r * np.sin(a)
        plt.plot(x,y)
        plt.axis("off")
        plt.axis("equal")
        k = k + 1
plt.show()
```
## Legends
```
# Adding legends to different lines in the same plot
plt.figure()
x = np.linspace(0,2*np.pi,20)
plt.plot(x,np.sin(x),"or-",label='sin(x)')
plt.plot(x,np.cos(x),"ob-",label='cos(x)')
plt.legend()
plt.axis([0,2*np.pi,-1,1])
```
### Exercise 4.1
Make a plot of the Lissajous curve corresponding to $a=5$, $b=4$.
See https://en.wikipedia.org/wiki/Lissajous_curve
### Exercise 4.2
**The solution to this exercise must use ufuncs only, i.e. `for`, `while`, `if` statements cannot be used.**
Write a function to generate `N` points homogeneously distributed over a circle of radius `1` centered
on the origin of the coordinate system. The function must take as input `N` and return two arrays
`x` and `y` with the cartesian coordinates of the points. Use the function to generate `1000` points,
make a plot, and confirm that the points are indeed homogeneously distributed.
Write a similar function to distribute `N` points over the surface of the 3D sphere of radius `1`.
Read the documentation https://pythonprogramming.net/matplotlib-3d-scatterplot-tutorial/ to see how to
make a 3D plot.
| github_jupyter |
```
from __future__ import division, print_function, absolute_import
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression
import GenSyntheticMNSITFixedWidthModule as GenDataset
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Consts
DATSET_SIZE = 10000
WIDTH_NUMS = 2
def dense_to_one_hot(labels_dense, num_classes=10):
    """Convert class labels from scalars to one-hot vectors."""
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    print(index_offset + labels_dense.ravel())
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot
# Get the dataset
X, Y = GenDataset.getDataSet(WIDTH_NUMS, DATSET_SIZE)
X = X.reshape([-1, 28, 28 * WIDTH_NUMS, 1])
print (X.shape)
# Generate validation set
ratio = 0.8 # Train/Test set
randIdx = np.random.random(DATSET_SIZE) <= ratio
#print (sum(map(lambda x: int(x), randIdx)))
X_train = X[randIdx]
Y_train = Y[randIdx]
X_test = X[randIdx == False]
Y_test = Y[randIdx == False]
Y_train = [dense_to_one_hot(Y_train[:,idx]) for idx in range(Y_train.shape[1])]
Y_test = [dense_to_one_hot(Y_test[:,idx]) for idx in range(Y_test.shape[1])]
del X, Y # release some space
# Test a sample data
%matplotlib inline
idx = np.random.randint(0,X_train.shape[0])
print ([Y_train[i][idx] for i in range(len(Y_train))])
print (X_train[idx].shape)
plt.imshow(np.squeeze(X_train[idx]), cmap = 'gray')
# Building convolutional network
network = input_data(shape=[None, 28, 28 * WIDTH_NUMS, 1], name='input')
network = conv_2d(network, 32, 3, activation='relu', regularizer="L2")
network = max_pool_2d(network, 2)
network = local_response_normalization(network)
network = conv_2d(network, 64, 3, activation='relu', regularizer="L2")
network = max_pool_2d(network, 2)
network = local_response_normalization(network)
fc_1 = fully_connected(network, 128, activation='tanh')
fc_1 = dropout(fc_1, 0.8)
fc_2 = fully_connected(network, 128, activation='tanh')
fc_2 = dropout(fc_2, 0.8)
softmax1 = fully_connected(fc_1, 10, activation='softmax')
softmax2 = fully_connected(fc_2, 10, activation='softmax')
network1 = regression(softmax1, optimizer='adam', learning_rate=0.01,
loss='categorical_crossentropy', name='target1')
network2 = regression(softmax2, optimizer='adam', learning_rate=0.01,
loss='categorical_crossentropy', name='target2')
network = tflearn.merge([network1, network2], mode='elemwise_sum')
model = tflearn.DNN(network, tensorboard_verbose=1)
model.fit({'input': X_train}, {'target1': Y_train[0], 'target2': Y_train[1]},
validation_set= (X_test, [Y_test[0], Y_test[1]]), n_epoch=5, snapshot_step=100, show_metric=True, run_id='convnet_mnist_')
```
| github_jupyter |
# Deep Learning with Python
## 6.1 Working with text data
> Working with text data
To process text data with a deep neural network, we must vectorize it, just as with images: text -> numeric tensors.
This can be done by turning each word into a vector, each character into a vector, or groups of consecutive words or characters (called *N-grams*) into vectors.
However the text is split up, the units it is broken into are called *tokens*, and the process of splitting text into tokens is called *tokenization*.
Text vectorization means tokenizing first, then associating each generated token with a numeric vector, and finally packing the corresponding vectors into a tensor that represents the original text. The interesting part is how the association between tokens and numeric vectors is built. Two methods are introduced below: one-hot encoding and token embedding. Token embedding is usually applied to words, in which case it is called word embedding.
### N-grams and bag-of-words
An N-gram is a set of at most N consecutive words that can be extracted from a sentence. For example, take the sentence "The cat sat on the mat."
Decomposed into 2-grams, it gives:
```
{"The", "The cat", "cat", "cat sat", "sat",
"sat on", "on", "on the", "the", "the mat", "mat"}
```
This set is called the bag-of-2-grams.
Decomposed into 3-grams:
```
{"The", "The cat", "cat", "cat sat", "The cat sat",
"sat", "sat on", "on", "cat sat on", "on the", "the",
"sat on the", "the mat", "mat", "on the mat"}
```
This set is called the bag-of-3-grams.
It is called a "bag" because it is merely a set of tokens, with none of the order or meaning of the original text. Tokenization methods that split text into such bags are called bag-of-words.
Because a bag-of-words preserves no order (the result is a set, not a sequence), it is generally not used in deep learning. But in lightweight, shallow text-processing models, N-grams and bag-of-words are still important techniques.
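As a quick illustration (a toy sketch in plain Python, not a library utility), the bags above can be generated with a couple of loops:

```python
# Toy bag-of-N-grams: collect all n-grams of size 1..n from a sentence into a set
def bag_of_ngrams(sentence, n=2):
    words = sentence.rstrip('.').split()
    bag = set()
    for size in range(1, n + 1):
        for i in range(len(words) - size + 1):
            bag.add(' '.join(words[i:i + size]))
    return bag

print(bag_of_ngrams('The cat sat on the mat.', n=2))
```

Note that the result is a set: any word-order information beyond the n-gram window is discarded, which is exactly why bags-of-words are a poor fit for sequence models.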
### One-hot encoding
One-hot encoding is the most basic and most common approach. It associates each token with a unique integer index, then turns the integer index i into a binary vector of length N (the vocabulary size) that is all zeros except for a 1 in the i-th position.
Two toy examples of one-hot encoding follow:
```
# Word-level one-hot encoding
import numpy as np
samples = ['The cat sat on the mat.', 'The dog ate my homework.']
token_index = {}
for sample in samples:
    for word in sample.split():
        if word not in token_index:
            token_index[word] = len(token_index) + 1
# Tokenize the samples, keeping only the first max_length words of each
max_length = 10
results = np.zeros(shape=(len(samples),
max_length,
max(token_index.values()) + 1))
for i, sample in enumerate(samples):
    for j, word in list(enumerate(sample.split()))[:max_length]:
        index = token_index.get(word)
        results[i, j, index] = 1.
print(results)
# Character-level one-hot encoding
import string
samples = ['The cat sat on the mat.', 'The dog ate my homework.']
characters = string.printable # all printable ASCII characters
token_index = dict(zip(characters, range(1, len(characters) + 1))) # map each character to a unique index
max_length = 50
results = np.zeros((len(samples), max_length, max(token_index.values()) + 1))
for i, sample in enumerate(samples):
    for j, character in enumerate(sample):
        index = token_index.get(character)
        results[i, j, index] = 1.
print(results)
```
Keras has built-in utilities for one-hot encoding that are far more capable than the toy versions above. In practice you should use them rather than the toy code:
```
from tensorflow.keras.preprocessing.text import Tokenizer
samples = ['The cat sat on the mat.', 'The dog ate my homework.']
tokenizer = Tokenizer(num_words=1000) # only consider the 1,000 most common words
tokenizer.fit_on_texts(samples)
sequences = tokenizer.texts_to_sequences(samples) # turn the strings into lists of integer indices
print('sequences:', sequences)
one_hot_results = tokenizer.texts_to_matrix(samples, mode='binary') # get the one-hot binary representation directly
word_index = tokenizer.word_index # the word index (i.e. the vocabulary dict); with it the data can be recovered
print(f'one_hot_results: shape={one_hot_results.shape}:\n', one_hot_results, )
print(f'Found {len(word_index)} unique tokens.', 'word_index:', word_index)
```
There is a simple variant of one-hot encoding called the *one-hot hashing trick*. Instead of associating a unique integer index with each token, it applies a hash function to map each word directly into a vector of fixed size.
This saves the memory needed to maintain a word index and allows online encoding (each incoming token can be encoded immediately, independently of those before and after it). The drawbacks: hash collisions can occur, and the encoded data cannot be decoded back.
```
# Word-level one-hot encoding with the hashing trick (toy version)
samples = ['The cat sat on the mat.', 'The dog ate my homework.']
dimensionality = 1000 # store words as vectors of length 1000; with more words, increase this to keep hash collisions rare
max_length = 10
results = np.zeros((len(samples), max_length, dimensionality))
for i, sample in enumerate(samples):
    for j, word in list(enumerate(sample.split()))[:max_length]:
        index = abs(hash(word)) % dimensionality # hash the word to a pseudo-random integer index in [0, dimensionality)
        results[i, j, index] = 1.
print(results.shape)
print(results)
```
### Word embeddings
As the earlier examples show, the hard-coded vectors produced by one-hot encoding are very sparse and high-dimensional.
Word embedding is another common way to associate words with vectors. It produces encodings that are much denser and lower-dimensional than one-hot vectors, and the embeddings are learned from data.
There are two ways to use word embeddings:
1. Learn the embeddings with an Embedding layer: learn the word embeddings jointly with the main task at hand (such as document classification or sentiment prediction), starting from random word vectors and learning them the same way the network's weights are learned.
2. Use pretrained word embeddings: load into your model embeddings that were precomputed on a different machine-learning task than the one you are trying to solve.
#### Learning word embeddings with an Embedding layer
An ideal word-embedding space would map human language more or less perfectly. It would have structure that matches reality: similar words would lie close together, and directions in the space would be meaningful. Consider a simple example:
In such an embedding space, pets might sit near the bottom and wild animals near the top, so a bottom-to-top vector would represent going from pet to wild animal, pointing from cat to tiger and from dog to wolf. Similarly, a left-to-right vector could be read as going from canine to feline, pointing from dog to cat and from wolf to tiger.
More elaborately, to express gender relations between words, adding a female vector to the king vector should give the queen vector; likewise for plurality: king + plural == kings...
A word-embedding space that perfect is very hard to obtain, and none exists yet. But with deep learning we can still get an embedding space that works well for a specific problem. In Keras, all we need to do is have the model learn the weights of an Embedding layer to obtain an embedding space for the task at hand:
```
from tensorflow.keras.layers import Embedding
embedding_layer = Embedding(1000, 64) # Embedding(number of possible tokens, embedding dimensionality)
```
The Embedding layer is effectively a dictionary that maps an integer index standing for a specific word to a word vector.
The input to the Embedding layer is a 2D integer tensor of shape `(samples, sequence_length)`. Each element of this tensor is an integer sequence representing one text sequence. All input sequences should have the same length: shorter sequences should be padded with zeros and longer ones truncated.
The output of the Embedding layer is a 3D float tensor of shape `(samples, sequence_length, embedding_dimensionality)`, which can then be processed further by an RNN or a Conv1D layer.
Like other layers, the Embedding layer starts out randomly initialized; during training, backpropagation gradually adjusts the word vectors and reshapes the space, step by step approaching the ideal state described above.
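To make those shapes concrete, here is a minimal NumPy sketch of the lookup an Embedding layer performs (an illustration only, not the Keras layer itself):

```python
import numpy as np

vocab_size, embedding_dim = 1000, 64
# a randomly initialized lookup table, like an untrained Embedding layer
embedding_matrix = np.random.rand(vocab_size, embedding_dim)

# a batch of 2 sequences padded to length 5: shape (samples, sequence_length)
batch = np.array([[4, 10, 2, 0, 0],
                  [7, 7, 1, 3, 9]])

# plain integer indexing performs the per-token lookup
output = embedding_matrix[batch]
print(output.shape)  # (2, 5, 64): (samples, sequence_length, embedding_dimensionality)
```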
Example: using an Embedding layer for IMDB movie-review sentiment prediction.
```
# Load the IMDB data for use with an Embedding layer
from tensorflow.keras.datasets import imdb
from tensorflow.keras import preprocessing
max_features = 10000 # number of words to consider as features
maxlen = 20 # cut the texts off after this many words
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# Turn the lists of integers into a 2D integer tensor of shape (samples, maxlen)
x_train = preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = preprocessing.sequence.pad_sequences(x_test, maxlen=maxlen)
# Use an Embedding layer and a classifier on the IMDB data
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(10000, 8, input_length=maxlen)) # (samples, maxlen, 8)
model.add(Flatten()) # (samples, maxlen*8)
model.add(Dense(1, activation='sigmoid')) # top classifier
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
model.summary()
history = model.fit(x_train, y_train,
epochs=10,
batch_size=32,
validation_split=0.2)
```
Here we only flatten the embedded sequences and use a single Dense layer for classification, so the model treats each word in the input sequence separately, without considering inter-word relationships or sentence structure. That would lead it, for instance, to treat both "this movie is a bomb" (terrible) and "this movie is the bomb" (great) as negative reviews. Learning whole sentences requires an RNN or Conv1D, which we will cover later.
#### Using pretrained word embeddings
Just as we use pretrained networks in computer vision, when little data is available we can use pretrained word embeddings, borrowing the reusable, generic features of models trained by others.
Generic word embeddings are usually computed from word-occurrence statistics, and many are available off the shelf, such as word2vec and GloVe. Their inner workings are fairly involved; for now it is enough to know how to use them.
We will try GloVe in the example below.
### From raw text to word embeddings
Let's start from the raw IMDB data (just a big pile of text), process it, and build word embeddings.
#### Downloading the raw IMDB text data
The raw IMDB dataset can be downloaded from [http://mng.bz/0tIo](http://mng.bz/0tIo) (it ultimately redirects to the S3 download at http://s3.amazonaws.com/text-datasets/aclImdb.zip , which may be slow depending on your network).
After unzipping, the dataset looks like this:
```
aclImdb
├── test
│ ├── neg
│ └── pos
└── train
├── neg
└── pos
```
Inside each neg/pos directory is a pile of `.txt` files, each containing one review.
Next, we turn the train reviews into a list of strings, one string per review, and write the corresponding labels (neg/pos) into a labels list.
```
# Process the labels of the raw IMDB data
import os
imdb_dir = '/Volumes/WD/Files/dataset/aclImdb'
train_dir = os.path.join(imdb_dir, 'train')
texts = []
labels = []
for label_type in ['neg', 'pos']:
    dir_name = os.path.join(train_dir, label_type)
    for fname in os.listdir(dir_name):
        if fname.endswith('.txt'):
            with open(os.path.join(dir_name, fname)) as f:
                texts.append(f.read())
            labels.append(0 if label_type == 'neg' else 1)
print(labels[0], texts[0], sep=' --> ')
print(labels[-1], texts[-1], sep=' --> ')
print(len(texts), len(labels))
```
#### Tokenizing the data
Now let's tokenize, and split off a validation set while we're at it. To really feel the benefit of pretrained embeddings, we also shrink the training set, keeping only 200 samples for training.
```
# Tokenize the raw IMDB text
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
maxlen = 100 # only look at the first 100 words of each review
training_samples = 200
validation_samples = 10000
max_words = 10000
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print(f'Found {len(word_index)} unique tokens.')
data = pad_sequences(sequences, maxlen=maxlen)
labels = np.asarray(labels)
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)
# Shuffle the data
indices = np.arange(labels.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
# Split into training and validation sets
x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]
```
#### Downloading the GloVe word embeddings
Download the pretrained GloVe embeddings from [http://nlp.stanford.edu/data/glove.6B.zip](http://nlp.stanford.edu/data/glove.6B.zip).
Next, unzip it. It contains, in plain text, pretrained 100-dimensional embedding vectors for 400,000 tokens.
#### Preprocessing the embeddings
Parse the unzipped file:
```
glove_dir = '/Volumes/WD/Files/glove.6B'
embeddings_index = {}
with open(os.path.join(glove_dir, 'glove.6B.100d.txt')) as f:
    for line in f:
        values = line.split()
        word = values[0]
        coefs = np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = coefs
print(f'Found {len(embeddings_index)} word vectors.')
```
Next, we build an embedding matrix that can be loaded into an Embedding layer. Its shape is `(max_words, embedding_dim)`.
```
embedding_dim = 100
embedding_matrix = np.zeros((max_words, embedding_dim))
for word, i in word_index.items():
    if i < max_words:
        embedding_vector = embeddings_index.get(word) # use the vector from embeddings_index when the word is known
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector # words absent from the embedding index stay all-zeros
print(embedding_matrix)
```
#### Defining the model
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
```
#### Loading the GloVe embeddings into the model
```
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
```
#### Training and evaluating the model
```
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=32,
validation_data=(x_val, y_val))
model.save_weights('pre_trained_glove_model.h5')
# Plot the results
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo-', label='Training acc')
plt.plot(epochs, val_acc, 'rs-', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo-', label='Training loss')
plt.plot(epochs, val_loss, 'rs-', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
Only 200 training samples makes this a hard task, but the pretrained embeddings still achieve a respectable result. For comparison, let's see what happens without pretraining:
```
# Build the model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
# Without the GloVe embeddings
# Train
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=32,
validation_data=(x_val, y_val))
# Plot the results
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo-', label='Training acc')
plt.plot(epochs, val_acc, 'rs-', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo-', label='Training loss')
plt.plot(epochs, val_loss, 'rs-', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
As you can see, in this example the pretrained embeddings outperform embeddings learned jointly with the task. But with plenty of data available, training an Embedding layer jointly with the task is usually more powerful than using pretrained embeddings.
Finally, let's look at the results on the test set:
```
# Tokenize the test-set data
test_dir = os.path.join(imdb_dir, 'test')
texts = []
labels = []
for label_type in ['neg', 'pos']:
    dir_name = os.path.join(test_dir, label_type)
    for fname in sorted(os.listdir(dir_name)):
        if fname.endswith('.txt'):
            with open(os.path.join(dir_name, fname)) as f:
                texts.append(f.read())
            labels.append(0 if label_type == 'neg' else 1)
sequences = tokenizer.texts_to_sequences(texts)
x_test = pad_sequences(sequences, maxlen=maxlen)
y_test = np.asarray(labels)
# Evaluate the model on the test set
model.load_weights('pre_trained_glove_model.h5')
model.evaluate(x_test, y_test)
```
Hmm... the final score is an astonishing 50%+! Training with so little data really is hard after all.
| github_jupyter |
The goal of this exercise list is to get you solving simple problems using basic Python, without needing to import any packages yet.
Some exercises are applied to oceanography and others are general, but all are meant to strengthen your grasp of a few key points that will serve as a foundation for the coming days.
If you have questions, use Slack, but I strongly recommend spending some time researching your problem and how to solve it. This will be vitally important in the future if you keep solving your scientific problems with Python.
#### Exercise (homework)
To step away from all this theory for a bit, let's do some exercises using what we have learned so far.
**Exercise 1**: let's practice some programming logic by computing how many even numbers there are within a sequence of numbers.
We start with the simplest version:
1. create a list of any random integers you like
2. choose the best structure we have learned for iterating over that list
3. run tests to check whether each number is even
4. if it is, add 1 to an auxiliary variable
We can now pick a bigger list. Use the ```range``` function to iterate over a list from 0 to 100, finding out how many even numbers it contains.
**hint**:
A number is even when the remainder of its division by 2 is 0. There is a specific operator for computing the remainder of a division (%); use it.
```
# part 1:
numeros_para_checar = [0, 15, 1405866, 1]
contador_auxiliar = 0
for i in numeros_para_checar:
    if i%2 == 0:
        contador_auxiliar = contador_auxiliar+1
print(f"In the list {numeros_para_checar}, {contador_auxiliar} even numbers were found")
# part 2:
# initialize an auxiliary variable as a counter
contador_auxiliar = 0
# create a list from 0 to 100 with range and iterate over each element with for
for i in range(100):
    # test whether the element is divisible by 2
    if i%2 == 0:
        # if so, it is even and we add one to our counter
        contador_auxiliar = contador_auxiliar + 1
    else:
        # if not, just continue
        pass
# print a message to make it more instructive
print(f"From 0 to 100, there are a total of {contador_auxiliar} even numbers")
```
-------------------
**Exercise 2**
Given a list of organisms, pick the one you like best and count how many occurrences it has in the list.
*Bonus 1*: pick more than one and store them in separate counters
*Bonus 2*: write code that identifies the organisms in the list, creates a dictionary, and counts and stores the occurrences in that dictionary
Source: data adapted from "Comparative community structure of surf-zone fishes in the Chesapeake Bight and southern Brazil", Monteiro-Neto (1990)
```
# random list of organisms
# we open a txt file using python's built-in open() function. If you are unsure how it works, open a new cell, type open? and run it:
# a short manual for the function will appear
lista_especies = open('../lista_de_especies.txt', 'r').read()
lista_especies = lista_especies.split('\n')
lista_especies
# Solution 1:
especie_escolhida = 'Trachinotus marginatus'
contador = 0
for especie in lista_especies:
    if especie == especie_escolhida:
        contador += 1
print(f"There are {contador} occurrences of the species {especie_escolhida}")
# Solution 2: with two species, using a dictionary
especies_escolhidas = {
    'Trachinotus marginatus': 0,
    'Menticirrhus littoralis': 0
}
# iterate over the items of the dictionary
for organismo,cont in especies_escolhidas.items():
    # iterate over the messy list of species
    for especie in lista_especies:
        # check whether this is the species we want to count
        if especie == organismo:
            # if so, add one
            cont += 1
        else:
            # if not, keep going
            continue
    # update the counter value in our dictionary
    especies_escolhidas[organismo] = cont
print(especies_escolhidas)
# Solution 3: a more polished solution using methods available natively in python
# take the unique occurrences, i.e. the name of each species that appears in the list
todas_especies = set(lista_especies)
# create an empty dictionary to be filled in below
dicionario = {}
# iterate over the species
for especie in todas_especies:
    # the number of occurrences of a value in a list is given by the count() method
    ocorrencias = lista_especies.count(especie)
    # add a new entry to the dictionary
    dicionario[especie] = ocorrencias
print(dicionario)
```
-------------------
**~Exercise 3~**
**Logic practice**
Individual follow-up.
-------------------
| github_jupyter |
```
import re
import docx2txt
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
```
## Extract programming language from Knowledge Graph
```
file_name_1 = 'Mathew Elliot.docx'
file_name_2 = 'John Guy.docx'
file_name_3 = 'Max Payne.docx'
def extract_programming_languages(file_name):
    # read in word file
    result = docx2txt.process(file_name)
    programming_languages_pattern = re.search(r'Programming Languages:[A-Za-z,\s0-9]*\.', result)
    programming_languages_line = programming_languages_pattern.group(0)
    languages = re.sub("Programming Languages: ", "", programming_languages_line)
    languages = re.sub(r"\.", "", languages)
    languages_clean = languages.split(', ')
    print(languages_clean)
    return languages_clean
name_1 = file_name_1.split('.')[0]
name_2 = file_name_2.split('.')[0]
name_3 = file_name_3.split('.')[0]
languages_mathew = extract_programming_languages(file_name_1)
languages_john = extract_programming_languages(file_name_2)
languages_max = extract_programming_languages(file_name_3)
```
## Create and Visualize a Knowledge Graph
```
names = [name_1,name_2,name_3]
def draw_graph(e_dict):
    # create a directed graph from a dictionary of edge lists
    G = nx.from_dict_of_lists(e_dict, create_using=nx.MultiDiGraph())
    plt.figure(figsize=(12,12))
    pos = nx.spring_layout(G)
    nx.draw(G, with_labels=True, node_color='skyblue', edge_cmap=plt.cm.Blues, pos = pos, node_size = 4500, font_size = 18)
    plt.show()
```
### Knowledge Graph - Single Candidate
```
edge_dict = {}
edge_dict[names[0]] = languages_mathew
draw_graph(edge_dict)
edge_dict = {}
edge_dict[names[1]] = languages_john
draw_graph(edge_dict)
edge_dict = {}
edge_dict[names[2]] = languages_max
draw_graph(edge_dict)
```
### Knowledge Graph for Multiple Candidates
```
edge_dict = {}
edge_dict[names[0]] = languages_mathew
edge_dict[names[1]] = languages_john
edge_dict[names[2]] = languages_max
G=nx.from_dict_of_lists(edge_dict,
create_using=nx.MultiDiGraph())
plt.figure(figsize=(12,12))
pos = nx.circular_layout(G) # arrange the nodes on a circle
nx.draw(G, with_labels=True, node_color='skyblue', node_size=4500, edge_cmap=plt.cm.Blues, pos = pos, font_size=18)
plt.show()
```
## Traversing a Knowledge Graph
```
def get_max_degree_node(list_of_nodes_to_eliminate, G):
    max_degree = 0
    all_remaining_nodes = [x for x in G.nodes() if x not in list_of_nodes_to_eliminate]
    max_node = all_remaining_nodes[0]
    for node in all_remaining_nodes:
        degree = G.degree(node)
        if degree > max_degree:
            max_degree = degree
            max_node = node
    return max_degree, max_node
max_skill_degree, max_skill_node = get_max_degree_node(names, G)
print(max_skill_node)
print(max_skill_degree)
skill_list = languages_mathew+languages_john+languages_max
max_languages_degree, max_languages_node = get_max_degree_node(skill_list,G)
print(max_languages_node)
print(max_languages_degree)
```
| github_jupyter |
# PIG, Beginner’s Version:
* Players take turns rolling a die as many times as they like.
* If a roll is a 2, 3, 4, 5, or 6, the player adds that many points to their score for the turn.
* A player may choose to end their turn at any time and “bank” their points.
* If a player rolls a 1, they lose all their unbanked points and their turn is over.
* The first player to score 100 or more points wins.
Note: **Do not put more than one function in any given cell**
Also: **Test each function in the cell directly below the one in which it is created.**
If you put your work in the cells that ask for each thing, you will accomplish the above directions.
---
First, let's import the random library for dice rolls and Counter to test our code as we go.
```
import random
from collections import Counter
```
Next, let's think about the game as shells of things. The smallest piece is rolling a die, so let's make a helper function for that.
```
# Write your helper function here.
```
Each time we write a function, we want to test it. We will use the Counter to test functions over 1000 trials. For some functions we will need to test multiple scenarios, but for rolling the die, just one test should be sufficient. See the **code block** below for how I did this unit test. This may help you see how to use the counter function.
For help with lists and appending to lists see the basic notebooks.
Code blocks display formated code in Markdown.
~~~
rolls = []
for i in range(1000):
    rolls.append(roll_die())
Counter(rolls)
~~~
```
# Write your test here. Use a thousand rolls to test whether your distribution of rolls makes sense.
```
If your test shows results that you would expect, then go on to the next part. If your results suggest something isn't behaving the way you expected, go back and check over your code carefully.
Rolling the die is part of taking a turn, in which we may roll once, or many times. We will need to have human played turns and computer simulated turns. In addition, we will need to select a strategy for computer simulated turns.
There are three common strategies:
* Roll up to a specific value and then stop.
* Roll a certain number of times and then stop.
* Roll until your total will beat your opponents’ and then stop.
The strategy that you choose for your computer simulation will dictate how you write that helper function. If you choose a specific value or number of rolls strategy, then you can set that as an argument passed into the function.
Your function should return the points earned this turn, either the sum of rolls or zero if a one was rolled. (You can end the turn by returning zero!)
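For reference, a turn under the "roll up to a specific value" strategy might be sketched like this (the names `roll_die` and `computer_turn` and the threshold `hold_at` are illustrative choices, not requirements of the assignment):

```python
import random

def roll_die():
    return random.randint(1, 6)

def computer_turn(hold_at=20):
    """Hold-at-N strategy: keep rolling until the turn total reaches hold_at."""
    turn_total = 0
    while turn_total < hold_at:
        roll = roll_die()
        print("Computer rolled a", roll)
        if roll == 1:
            return 0  # rolling a 1 loses all unbanked points and ends the turn
        turn_total += roll
    return turn_total

print("Computer banked", computer_turn(), "points")
```

Note how returning zero is enough to end the turn with no points, exactly as the text suggests.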
**In the cell below, tell me what strategy you are going to use on your initial write of this.**
**The strategy for this computer simulation is:**
```
# write a function for your computer turn. This is just a single turn played by the computer.
# Be sure to let the player know what is happening with print statements. Players want to see rolls!
```
Be sure to test your function. You may need to test it with different values to be sure that it always works! Some strategies may require you to pass values to your function. For these you may want to add cells and test multiple scenarios.
```
# write code to test you player turns here. Simulate a thousand turns and see the distribution of results with a Counter.
```
Again, do not go forward until you are sure that this function is behaving as you would expect.
Next, we need to write a function to allow the human player to play a turn. This time it needs to let the human player know the value of each roll and ask if the player wishes to roll again, as long as a one has not been rolled. This function should take the player's name as an argument and return the player's score for the turn.
```
# write a player turn function that takes the player's name as an argument and returns the player's score
# be sure that your player can't roll again if a one is rolled
# be sure to have a default response if the player hits enter without typing at the "roll again?" prompt
```
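As a sketch of one way to structure this (hypothetical names; the `ask` parameter defaults to the built-in `input` but can be swapped for a stub, which makes the function testable without a live prompt):

```python
import random

def roll_die():
    return random.randint(1, 6)

def player_turn(name, ask=input):
    # ask is injectable: pass a stub in tests, use input() in the real game
    total = 0
    while True:
        roll = roll_die()
        print(name, "rolled a", roll)
        if roll == 1:
            print("Rolled a one -- no points this turn!")
            return 0
        total += roll
        answer = ask(name + ", you have " + str(total) + " this turn. Roll again? [Y/n] ")
        # default on a plain Enter is to roll again
        if answer.strip().lower() in ("n", "no"):
            return total
```

Injecting the prompt function is purely a testability choice; calling `player_turn("Alice")` behaves exactly as if `input` were hard-coded.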
Test your player turn below. You can't run this 1000 times since it takes player input, but make sure that it doesn't raise errors no matter what your player enters at the prompt and that it gives your player plenty of information.
```
# test your player turn here
```
Now we need to assemble the game. It's a good idea to make the points needed to win default to 100, but keep it as an argument that you can set. After each turn, it should report the current scores. We may need to go back and make the computer turns stop rolling if the computer wins the game. This may require passing more arguments to those functions. Then, the simulated turn will need to take the current score as an argument.
```
# write your final game function here and test it
```
Once your version of the game works, consider whether it provides enough information to the human player. Let your human players keep track of their within-turn total themselves, but provide end-of-turn summaries that help them keep track of the game's total scores. Is there any other information you want from the game?
Add an option to print the rules of the game at the start of your game function.
Once you have a working version of the game that gives plenty of information to the human player, consider the following extension possibilities.
* make computer players for other strategies
* set up a helper function that plays the computer turn based on a strategy passed to it (think about what other arguments this function will need to take so that it can pass them to the computer player functions) and modify your game to use this function for the computer turn
* make a "chaos" option for your computer player helper function that chooses a random function and sets random values for value based and number of rolls based strategies.
* think about what you could keep and what would need to change to make an advanced, two dice version of the game (The Advanced Game Worksheet with rules and some step by step function layout is available for this.)
| github_jupyter |
<a href="https://colab.research.google.com/github/TomFrederik/lucent/blob/dev/notebooks/Lucent_%2B_torchvision.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Licensed under the Apache License, Version 2.0 (the "License");
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Lucent + torchvision
A demonstration of how Lucent works with different pretrained models, along with some sample visualizations on those models.
## Install, Import
```
!pip install --quiet git+https://github.com/TomFrederik/lucent.git
import torch
from lucent.optvis import render, param, transform, objectives
from lucent.model_utils import get_model_layers
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```
## InceptionV1
In nearly all of the notebooks here, we will be using InceptionV1, which has been noted to have ["unusually semantically meaningful"](https://distill.pub/2018/building-blocks/) neurons.
```
from torchvision.models import googlenet
inceptionv1_model = googlenet(pretrained=True)
_ = inceptionv1_model.to(device).eval()
```
Use `get_model_layers` to retrieve all the model layers.
```
# Print the first 20 layer names
get_model_layers(inceptionv1_model)[0][:20]
_ = render.render_vis(inceptionv1_model, "inception4a:107", show_inline=True)
```
We can also directly optimize for any particular output label. Just make sure that your final layer outputs logits and not probabilities (i.e. your final layer should be a linear layer and not a softmax):
```
# Try optimizing for the strawberry label
_ = render.render_vis(inceptionv1_model, "labels:949", show_inline=True)
# Optimizing for strawberries with a CPPN image parameterization
cppn_param_f = lambda: param.cppn(224)
cppn_opt = lambda params: torch.optim.Adam(params, 5e-3)
_ = render.render_vis(inceptionv1_model, "labels:949", cppn_param_f, cppn_opt, show_inline=True)
```
## ResNet50
```
from torchvision.models import resnet50
resnet50_model = resnet50(pretrained=True).to(device).eval()
# Print the first 20 layer names
get_model_layers(resnet50_model)[0][:20]
# render_vis works in the same way! Just substitute the appropriate layer name!
_ = render.render_vis(resnet50_model, "layer4_1_conv1:121", show_inline=True)
```
Again we can optimize directly for the strawberry label, because we have direct access to the logits! You can check to see that the last layer in the model is a linear layer and not a softmax.
```
# Try to activate the strawberry label (949 for usual ImageNet mapping)
_ = render.render_vis(resnet50_model, "labels:949", show_inline=True)
# Try to activate the strawberry label using CPPN parameterization
cppn_param_f = lambda: param.cppn(224)
cppn_opt = lambda params: torch.optim.Adam(params, 5e-3)
_ = render.render_vis(resnet50_model, "labels:949", cppn_param_f, cppn_opt, show_inline=True)
```
## MobileNetV2
Let's pick a lightweight one for the last example!
```
from torchvision.models import mobilenet_v2
mobilenet_v2_model = mobilenet_v2(pretrained=True).to(device).eval()
# Again print out first 20 layers
get_model_layers(mobilenet_v2_model)[0][:20]
_ = render.render_vis(mobilenet_v2_model, "features_15:45", show_inline=True)
# More strawberries!
_ = render.render_vis(mobilenet_v2_model, "labels:949", show_inline=True)
# One last strawberry
_ = render.render_vis(mobilenet_v2_model, "labels:949", cppn_param_f, cppn_opt, show_inline=True)
```
## Try it with other models from `torchvision`!
| github_jupyter |
# Finch usage
Finch is a WPS server for climate indicators, but also has a few utilities to facilitate data handling. To get started, first instantiate the client. Here, the client will try to connect to a local or remote finch instance, depending on whether the environment variable `WPS_URL` is defined.
```
import os
import xarray as xr
from birdy import WPSClient
pavics_url = 'https://pavics.ouranos.ca/twitcher/ows/proxy/finch/wps'
url = os.environ.get('WPS_URL', pavics_url)
verify_ssl = True if 'DISABLE_VERIFY_SSL' not in os.environ else False
wps = WPSClient(url, verify=verify_ssl)
```
The list of available processes can be displayed using the magic ? command (`wps?`). Similarly, help about any individual process is available using ? or the `help` command.
```
help(wps.frost_days)
```
To actually compute an indicator, we need to specify the path to the netCDF file used as input for the calculation of the indicator. To compute `frost_days`, we need a time series of daily minimum temperature. Here we'll use a small test file. Note that here we're using an OPeNDAP link, but it could also be an url to a netCDF file, or the path to a local file on disk. We then simply call the indicator. The response is an object that can poll the server to inquire about the status of the process. This object can use two modes:
- synchronous: it will wait for the server's response before returning; or
- asynchronous: it will return immediately, but without the actual output from the process.
Here, since we're applying the process on a small test file, we're using the default synchronous mode. For long computations, use the asynchronous mode to avoid time-out errors. The asynchronous mode is activated by setting the `progress` attribute of the WPS client to True.
```
tasmin = "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/testdata/flyingpigeon/cmip3/tasmin.sresa2.miub_echo_g.run1.atm.da.nc"
resp = wps.frost_days(tasmin)
print("Process status: ", resp.status)
urls = resp.get()
print("Link to process output: ", urls.output_netcdf)
```
The `get` method returns a `NamedTuple` object with all the WPS outputs, either as references to files or actual content. To copy the file to the local disk, use the `getOutput` method.
```
import tempfile
fn = tempfile.NamedTemporaryFile()
resp.getOutput(fn.name, identifier="output_netcdf")
ds = xr.open_dataset(fn.name, decode_timedelta=False)
ds.frost_days
```
birdy's `get` function has a more user-friendly option: setting the `asobj` argument to True will directly download all the output files and return the outputs as Python objects. This mechanism, however, does not allow additional keyword arguments, such as the `decode_timedelta` needed here.
```
# NBVAL_IGNORE_OUTPUT
out = resp.get(asobj=True)
out.output_netcdf
```
| github_jupyter |
```
# Change directory to the root so that relative path loads work correctly
import os
try:
os.chdir(os.path.join(os.getcwd(), ".."))
print(os.getcwd())
except:
pass
import glob
import sys
import matplotlib.pyplot as plt
import numpy as np
import torch
from experiments.A_constrained_training.main import build_model_and_optimizer, get_data
from experiments.A_constrained_training.visualize import (
plot_constraints_distribution,
plot_loss,
plot_model_predictions,
plot_parameters_distribution,
plot_time,
)
def get_model_predictions(checkpoints):
# Retrieve the testing data of these parameterizations
parameterizations = {
"amplitudes": [1.0],
"frequencies": [1.0],
"phases": [0.0],
"num_points": 500,
"sampling": "uniform",
}
# parameterizations = {
# "amplitudes": [5.0],
# "frequencies": [5.0],
# "phases": [-0.5],
# "num_points": 500,
# "sampling": "uniform",
# }
# Retrieve the data and equation of the first checkpoint
train_dl, test_dl = get_data(checkpoints[0]["configuration"])
# Get the models
inputs = list()
outputs = list()
predictions = list()
empty = True # for retrieving a copy of the data
for checkpoint in checkpoints:
model, opt = build_model_and_optimizer(checkpoint["configuration"])
# load and ensure on cpu
model.load_state_dict(checkpoint["model_state_dict"])
model.to(torch.device("cpu"))
modified_configuration = checkpoint["configuration"].copy()
modified_configuration["testing_parameterizations"] = parameterizations
__, test_dl = get_data(modified_configuration)
model.eval()
preds = list()
for (xb, params), yb in test_dl:
if empty:
inputs.extend(xb.squeeze().detach().numpy())
outputs.extend(yb.squeeze().detach().numpy())
preds.extend(model(xb, params).squeeze().detach().numpy())
empty = False
predictions.append(preds)
# sort
idxs = np.argsort(inputs)
inputs = np.array(inputs)[idxs]
outputs = np.array(outputs)[idxs]
predictions = np.array(predictions)[:, idxs]
return inputs, outputs, predictions
def convert_name_to_filename(model_name):
filename = model_name.replace(" ", "-").lower()
return filename
def get_model_name(checkpoint):
config = checkpoint["configuration"]
method = config["method"]
model_act = config["model_act"]
epoch = checkpoint["epoch"]
return f"{method} {str(model_act)[:-2]} ({epoch})"
def get_special_model_name(checkpoint, filename):
epoch = checkpoint["epoch"]
suffix = "Before" if epoch <= 200 else "After"
if "2019-08-05-15-07" in filename:
return f"Unconstrained --> Constrained ({suffix})"
elif "2019-08-05-15-05" in filename:
return f"Unconstrained --> Reduction ({suffix})"
elif "2019-08-05-14-38" in filename:
return f"Unconstrained --> No-loss ({suffix})"
# elif "proof-of-constraint_2019-08-06-13-49" in filename:
# return "Reduction"
# elif "proof-of-constraint_2019-08-06-13-15" in filename:
# return "Unconstrained"
elif "proof-of-constraint_2019-08-13-13-24-54" in filename:
return "cpu"
elif "proof-of-constraint_2019-08-13-13-38-16" in filename:
return "gpu"
else:
return get_model_name(checkpoint)
# Files to load
experiment_name = "A_constrained_training"
save_directory = f"results/{experiment_name}/"
load_directory = os.path.expandvars(f"$SCRATCH/results/checkpoints/{experiment_name}")
checkpoint_patterns = ["constrained-training_2019-08-14-12-58-23_00005.pth"]
# Load files
files = list()
for pattern in checkpoint_patterns:
files.extend(glob.glob(f"{load_directory}/{pattern}"))
files.sort()
print(files)
checkpoints = [torch.load(f, map_location=torch.device("cpu")) for f in files]
# model_names = [get_model_name(checkpoint) for checkpoint in checkpoints]
model_names = [
get_special_model_name(checkpoint, filename)
for checkpoint, filename in zip(checkpoints, files)
]
# Make sure directory to save exists
os.makedirs(save_directory, exist_ok=True)
# Do some plotting
max_epoch = max([checkpoint["epoch"] for checkpoint in checkpoints])
final_checkpoints = [
(checkpoint, model_name)
for (checkpoint, model_name) in zip(checkpoints, model_names)
if checkpoint["epoch"] == max_epoch
]
approximation_checkpoints = [
(checkpoint, model_name)
for (checkpoint, model_name) in zip(checkpoints, model_names)
if checkpoint["configuration"]["method"] == "approximate"
]
reduction_checkpoints = [
(checkpoint, model_name)
for (checkpoint, model_name) in zip(checkpoints, model_names)
if checkpoint["configuration"]["method"] == "reduction"
]
unconstrained_checkpoints = [
(checkpoint, model_name)
for (checkpoint, model_name) in zip(checkpoints, model_names)
if checkpoint["configuration"]["method"] == "unconstrained"
]
constrained_checkpoints = [
(checkpoint, model_name)
for (checkpoint, model_name) in zip(checkpoints, model_names)
if checkpoint["configuration"]["method"] == "constrained"
]
soft_constrained_checkpoints = [
(checkpoint, model_name)
for (checkpoint, model_name) in zip(checkpoints, model_names)
if checkpoint["configuration"]["method"] == "soft-constrained"
]
tasks = [
("Final Models", final_checkpoints),
("Unconstrained", unconstrained_checkpoints),
("Soft-Constrained", soft_constrained_checkpoints),
("Approximation", approximation_checkpoints),
("Reduction", reduction_checkpoints),
("Constrained", constrained_checkpoints),
]
for task_name, task in tasks:
print(task_name)
if len(task) == 0:
print(f"Nothing for task {task_name}")
continue
task_checkpoints = [x[0] for x in task]
task_model_names = [x[1] for x in task]
task_monitors = [checkpoint["monitors"] for checkpoint in task_checkpoints]
time_keys = set()
for monitors in task_monitors:
time_keys.update(
[
key
for key in monitors[0].timing_keys
if ("multipliers" in key or "total" in key or "compute" in key)
and "error" not in key
and "recomputed" not in key
]
)
time_keys = list(time_keys)
task_filename = convert_name_to_filename(task_name)
if "final" in task_filename:
fig = plot_time(
[monitors[0] for monitors in task_monitors],
task_model_names,
f"{task_filename}_compute-time",
time_keys=time_keys,
log=True,
directory=save_directory,
)
fig = plot_loss(
[[monitors[1]] for monitors in task_monitors],
task_model_names,
f"{task_filename}_constrained-loss",
title="Total Loss",
log=True,
directory=save_directory,
constrained=True,
)
fig = plot_loss(
[[monitors[1]] for monitors in task_monitors],
task_model_names,
f"{task_filename}_loss",
title="Data Loss",
log=True,
directory=save_directory,
)
# fig = plot_constraints(
# [[monitors[1]] for monitors in task_monitors],
# task_model_names,
# f"{task_filename}_constraint-value",
# title="Constraint Residual",
# log=False,
# directory=save_directory,
# )
# fig = plot_constraints(
# [[monitors[1]] for monitors in task_monitors],
# task_model_names,
# f"{task_filename}_constraint-magnitude",
# title="Absolute Value of Constraint Residual",
# ylabel="Average constraint magnitude",
# log=True,
# directory=save_directory,
# absolute_value=True,
# )
fig = plot_constraints_distribution(
[[monitors[1]] for monitors in task_monitors],
task_model_names,
f"{task_filename}_constraint-distribution",
title="Distribution of Constraint Residual",
ylabel="Constraint value",
log=False,
directory=save_directory,
absolute_value=False,
)
fig = plot_constraints_distribution(
[[monitors[1]] for monitors in task_monitors],
task_model_names,
f"{task_filename}_constraint-distribution-magnitude",
title="Distribution of Magnitude of Constraint Residual",
ylabel="Magnitude of constraint value",
log=True,
directory=save_directory,
absolute_value=True,
)
inputs, outputs, predictions = get_model_predictions(task_checkpoints)
fig = plot_model_predictions(
inputs,
outputs,
predictions,
task_model_names,
f"{task_filename}_predictions",
title="Model Predictions",
directory=save_directory,
)
if (
"approximation" in task_filename
or "reduction" in task_filename
or "constrained" in task_filename
):
idxs = np.argsort([checkpoint["epoch"] for checkpoint in task_checkpoints])
task_checkpoints = np.array(task_checkpoints)[idxs]
task_model_names = np.array(task_model_names)[idxs]
task_monitors = np.array(task_monitors)[idxs]
fig = plot_loss(
[monitors[1:] for monitors in task_monitors[-1:]],
[task_name],
f"{task_filename}_constrained-loss",
title=f"Total Loss for {task_name}",
directory=save_directory,
constrained=True,
)
fig = plot_loss(
[monitors[1:] for monitors in task_monitors[-1:]],
[task_name],
f"{task_filename}_loss",
title=f"Data Loss for {task_name}",
directory=save_directory,
)
fig = plot_constraints_distribution(
[monitors[1:] for monitors in task_monitors[-1:]],
[task_name],
f"{task_filename}_constraint-distribution",
title=f"Distribution of Constraint Residual for {task_name}",
ylabel="Constraint value",
log=False,
directory=save_directory,
absolute_value=False,
)
fig = plot_constraints_distribution(
[monitors[1:] for monitors in task_monitors[-1:]],
[task_name],
f"{task_filename}_constraint-distribution-magnitude",
title=f"Distribution of Magnitude of Constraint Residual for {task_name}",
ylabel="Magnitude of constraint value",
log=True,
directory=save_directory,
absolute_value=True,
)
fig = plot_parameters_distribution(
[monitors[:1] for monitors in task_monitors[-1:]],
[task_name],
f"{task_filename}_parameter-distribution",
title=f"Distribution of Parameter Values for {task_name}",
ylabel="Parameter values",
log=False,
directory=save_directory,
gradients=False,
)
fig = plot_parameters_distribution(
[monitors[:1] for monitors in task_monitors[-1:]],
[task_name],
f"{task_filename}_gradient-distribution",
title=f"Distribution of Parameter Gradients for {task_name}",
ylabel="Parameter gradients",
log=False,
directory=save_directory,
gradients=True,
)
# for idx, diagnostic_name in enumerate(
# ["LHS", "RHS", r"$\nabla_{\mathrm{inputs}}(\mathrm{outputs})$"]
# ):
# fig = plot_constraints_diagnostics(
# [monitors[1:] for monitors in task_monitors[-1:]],
# [task_name],
# f"{task_filename}_constraint-diagnostics{idx}",
# diagnostics_index=idx,
# title=f"{task_name} {diagnostic_name}",
# ylabel=f"Average {diagnostic_name}",
# directory=save_directory,
# )
extra_idxs = np.power(1.5, np.arange(len(task_checkpoints))).astype(int)
extra_idxs = extra_idxs[extra_idxs < len(task_checkpoints)]
limited_idxs = np.unique(np.r_[0, extra_idxs, len(task_checkpoints) - 1])
limited_checkpoints = task_checkpoints[limited_idxs]
inputs, outputs, predictions = get_model_predictions(limited_checkpoints)
fig = plot_model_predictions(
inputs,
outputs,
predictions,
[f'Epoch {checkpoint["epoch"]}' for checkpoint in limited_checkpoints],
f"{task_filename}_predictions",
title=task_name,
directory=save_directory,
)
# # close all those figures
# plt.close("all")
```
| github_jupyter |
## Notebook for preparing final dataset
```
import pandas as pd
import numpy as np
import re
```
## Dataset 1
```
file2 = pd.read_csv("./dataset/traindata2.csv")
file2.tail() #2 for normal 0,1 for toxic
file2.iloc[24767][-1]
tweet = file2["tweet"]
def clean_text(text):
text = text.lower()
text = re.sub(r"what's", "what is ", text)
text = re.sub(r"y'all", 'you all ', text)
text = re.sub(r"\'s", " ", text)
text = re.sub("dis", " this ", text)
text = re.sub(r"\'ve", " have ", text)
text = re.sub(r"can't", "cannot ", text)
text = re.sub(r"n't", " not ", text)
text = re.sub(r"arne't", " are not ", text)
text = re.sub("bout", " about ", text)
text = re.sub(r"i'm", "i am ", text)
text = re.sub("im", "i am ", text)
text = re.sub(r"\'re", " are ", text)
text = re.sub(r"\'d", " would ", text)
text = re.sub(r"\'ll", " will ", text)
text = re.sub(r"\'scuse", " excuse ", text)
text = re.sub("yall", "you all", text)
text = re.sub(" u", " you", text)
text = re.sub("^[u] ", "you ", text)
text = re.sub(" r", " are", text)
text = re.sub("^[r] ", "are ", text)
text = re.sub(" m", " am", text)
text = re.sub("^[m] ", "am ", text)
text = text.strip(' ')
return text
l = "u need a dick u asshole are I'm"
l=clean_text(l)
print(l)
def clean_sen(sen):
sen = clean_text(sen)
sen=re.sub("[\"*�-9;:~|?'!`]", "" , sen)
sen=re.sub("[-.]", " " , sen)
sen=sen.lower()
sen=sen.split()
sen = " ".join([i for i in sen if len(re.findall("[*@0-9/]", i))==0])
return sen
```
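As a self-contained illustration of what this cleaning pipeline does (a hypothetical mini-cleaner with a reduced punctuation set, not the exact `clean_sen` above):

```python
import re

def mini_clean(sen):
    # lowercase, strip a small punctuation set, then drop tokens
    # containing digits or the characters * @ /
    sen = re.sub(r"[\"*;:~|?'!]", "", sen.lower())
    tokens = sen.split()
    return " ".join(t for t in tokens if not re.findall(r"[*@0-9/]", t))

print(mini_clean("You're SO dumb!!! call me @ 555-0199"))
# -> youre so dumb call me
```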
## 0 for normal 1 for toxic
```
def switchlabel(x):
if x==0 or x==1:
y=1
else:
y=0
return y
tweet = file2["tweet"]
comment1 = tweet.apply(clean_sen)
label = file2["class"]
label
label1=label.apply(switchlabel)
label1
```
## Dataset 2
```
data = pd.read_csv("./dataset/train.csv") ## path
data.tail()
cols_target = ['obscene','insult','toxic','severe_toxic','identity_hate','threat']
data['block'] =data[cols_target].sum(axis =1)
print(data['block'].value_counts())
data['block'] = data['block'] > 0
data['block'] = data['block'].astype(int)
print(data['block'].value_counts())
review = data["comment_text"]
comment2 = review.apply(clean_sen)
label2 = data["block"]
final_comment = comment1.tolist() + comment2.tolist()
final_label = label1.tolist() + label2.tolist()
final_comment[534]
final_label[534]
len(final_comment),len(final_label)
final_comment_dataframe = pd.DataFrame(final_comment,columns=["Review"])
final_label_dataframe = pd.DataFrame(final_label,columns=["Label"])
final_label_dataframe
final_comment_dataframe
dataset = final_comment_dataframe.join(final_label_dataframe) #final dataset
#saving dataset
dataset.to_csv("./dataset/final_training.csv")
```
| github_jupyter |
## Installs & Imports
```
# Select Tensorflow 2.x version in Colab
%tensorflow_version 2.x
# Import TensorFlow and tf.keras
import tensorflow as tf
keras = tf.keras
# Import helper libraries
import numpy as np
import matplotlib.pyplot as plt
# Print TensorFlow version
version = tf.__version__
print(version)
```
## Data
### 1. Load the MNIST dataset
```
# Load mnist from keras datasets
mnist = keras.datasets.mnist
# Get the training and test data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
```
### 2. Explore the MNIST dataset
```
# Inspect the training and test dataset shape
print("x_train shape:", x_train.shape, "y_train shape:", y_train.shape)
print("x_test shape:", x_test.shape, "y_test shape", y_test.shape)
# Take a look at one of the training images
index = 0
```
```
# First let's look at the label of the image
print(y_train[index])
plt.imshow(x_train[index], cmap="gray")
print(x_train[index])
# Check datatype of x_train
x_train.dtype
```
### 3. Data preprocessing
```
# Convert data to float32 and normalize the input data
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_train.dtype
x_train[index]
# Reshape input data from (28, 28) to (28, 28, 1)
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
# Take a look at the dataset shape after reshaping
print("x_train shape:", x_train.shape, "y_train shape:", y_train.shape)
print("x_test shape:", x_test.shape, "y_test shape", y_test.shape)
# One-hot encode the labels
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
y_train[index]
```
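`keras.utils.to_categorical` is just a one-hot lookup; an equivalent NumPy sketch, for intuition:

```python
import numpy as np

def one_hot(labels, num_classes):
    # row i of the identity matrix is the one-hot vector for class i
    return np.eye(num_classes, dtype="float32")[labels]

print(one_hot(np.array([5, 0, 4]), 10))
```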
## Model Training
### Define the model architecture
There are 3 ways to define a model with tf.Keras:
1. Sequential API
2. Functional API
3. Model subclassing
We will create a simple Convolutional Neural Network with the Sequential model API.

```
def create_model():
# Define the model architecture
model = keras.models.Sequential([
# Must define the input shape in the first layer of the neural network
keras.layers.Conv2D(filters=32, kernel_size=3, padding='same', activation='relu', input_shape=(28,28,1)),
keras.layers.MaxPooling2D(pool_size=2),
keras.layers.Conv2D(filters=64, kernel_size=3, padding='same', activation='relu'),
keras.layers.MaxPooling2D(pool_size=2),
keras.layers.Flatten(),
keras.layers.Dense(10, activation='softmax')
])
# Compile the model
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
return model
model = create_model()
# Take a look at the model summary
model.summary()
%%time
model.fit(x_train,
y_train,
batch_size=64,
epochs=3)
```
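As a sanity check on the summary, the parameter counts can be reproduced by hand (assuming the 'same' padding and 28x28x1 input used above):

```python
# filters x (kernel_h * kernel_w * in_channels + bias)
conv1_params = 32 * (3 * 3 * 1 + 1)      # 320
conv2_params = 64 * (3 * 3 * 32 + 1)     # 18496
flat_units = 7 * 7 * 64                  # 28 -> 14 -> 7 after two 2x2 max-pools
dense_params = (flat_units + 1) * 10     # 31370
total = conv1_params + conv2_params + dense_params
print(total)                             # 50186
```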
### Model evaluation
```
test_loss, test_accuracy = model.evaluate(x_test, y_test)
print("Test Loss", test_loss)
print('Test Accuracy:', test_accuracy)
predictions = model.predict(x_test)
index = 99
np.argmax(predictions[index])
plt.imshow(np.squeeze(x_test[index]))
```
| github_jupyter |
## _*H2 ground state energy computation using Iterative QPE*_
This notebook demonstrates computing and graphing the ground state energy of the Hydrogen (H2) molecule over a range of inter-atomic distances using `IQPE` (Iterative Quantum Phase Estimation) algorithm. It is compared to the ground-truth energies as computed by the `ExactEigensolver`.
This notebook has been written to use the PYSCF chemistry driver. See the PYSCF chemistry driver readme if you need to install the external PySCF library that this driver requires.
First we define the `compute_energy` method, which contains the H2 molecule definition as well as the computation of its ground energy given the desired `distance` and `algorithm` (`i` is just a helper index for parallel computation to speed things up).
```
import pylab
import time
import numpy as np
import multiprocessing as mp
from qiskit import BasicAer
from qiskit.aqua import QuantumInstance, AquaError
from qiskit.aqua.algorithms.single_sample import IQPE
from qiskit.aqua.algorithms.classical import ExactEigensolver
from qiskit.chemistry import FermionicOperator
from qiskit.chemistry.aqua_extensions.components.initial_states import HartreeFock
from qiskit.chemistry.drivers import PySCFDriver, UnitsType
def compute_energy(i, distance, algorithm):
try:
driver = PySCFDriver(
atom='H .0 .0 .0; H .0 .0 {}'.format(distance),
unit=UnitsType.ANGSTROM,
charge=0,
spin=0,
basis='sto3g'
)
except:
raise AquaError('PYSCF driver does not appear to be installed')
molecule = driver.run()
qubit_mapping = 'parity'
fer_op = FermionicOperator(h1=molecule.one_body_integrals, h2=molecule.two_body_integrals)
qubit_op = fer_op.mapping(map_type=qubit_mapping, threshold=1e-10).two_qubit_reduced_operator(2)
if algorithm.lower() == 'exacteigensolver':
exact_eigensolver = ExactEigensolver(qubit_op, k=1)
result = exact_eigensolver.run()
reference_energy = result['energy']
elif algorithm.lower() == 'iqpe':
num_particles = molecule.num_alpha + molecule.num_beta
two_qubit_reduction = True
num_orbitals = qubit_op.num_qubits + (2 if two_qubit_reduction else 0)
num_time_slices = 3000
num_iterations = 12
state_in = HartreeFock(qubit_op.num_qubits, num_orbitals,
num_particles, qubit_mapping, two_qubit_reduction)
iqpe = IQPE(qubit_op, state_in, num_time_slices, num_iterations,
expansion_mode='trotter', expansion_order=1,
shallow_circuit_concat=True)
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend)
result = iqpe.run(quantum_instance)
else:
raise AquaError('Unrecognized algorithm.')
return i, distance, result['energy'] + molecule.nuclear_repulsion_energy, molecule.hf_energy
```
Next we set up the experiment to compute H2 ground energies for a range of inter-atomic distances, in parallel.
```
import concurrent.futures
import multiprocessing as mp
algorithms = ['iqpe', 'exacteigensolver']
start = 0.5 # Start distance
by = 0.5 # How much to increase distance by
steps = 20 # Number of steps to increase by
energies = np.empty([len(algorithms), steps+1])
hf_energies = np.empty(steps+1)
distances = np.empty(steps+1)
start_time = time.time()
max_workers = max(4, mp.cpu_count())
with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
    # Map each future to its algorithm index: by the time a future completes,
    # the loop variable j has moved on, so it must be captured at submission time.
    future_to_alg = {}
    for j in range(len(algorithms)):
        algorithm = algorithms[j]
        for i in range(steps+1):
            d = start + i*by/steps
            future = executor.submit(
                compute_energy,
                i,
                d,
                algorithm
            )
            future_to_alg[future] = j
    for future in concurrent.futures.as_completed(future_to_alg):
        j = future_to_alg[future]
        i, d, energy, hf_energy = future.result()
        energies[j][i] = energy
        hf_energies[i] = hf_energy
        distances[i] = d
print(' --- complete')
print('Distances: ', distances)
print('Energies:', energies)
print('Hartree-Fock energies:', hf_energies)
print("--- %s seconds ---" % (time.time() - start_time))
```
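When collecting results with `as_completed`, note that the submission loop's index variables (like `j` here) have already advanced by the time a future completes, so each result should be keyed to its output slot at submission time. A minimal standard-library sketch of the pattern (threads rather than processes, purely for brevity):

```python
import concurrent.futures

def square(x):
    return x * x

values = [3, 5, 7]
results = [None] * len(values)
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as ex:
    # key each future by its slot so out-of-order completion still lands correctly
    future_to_idx = {ex.submit(square, v): i for i, v in enumerate(values)}
    for fut in concurrent.futures.as_completed(future_to_idx):
        results[future_to_idx[fut]] = fut.result()
print(results)  # -> [9, 25, 49]
```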
Finally we plot the results:
```
pylab.plot(distances, hf_energies, label='Hartree-Fock')
for j in range(len(algorithms)):
pylab.plot(distances, energies[j], label=algorithms[j])
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('H2 Ground State Energy')
pylab.legend(loc='upper right')
pylab.show()
pylab.plot(distances, np.subtract(hf_energies, energies[1]), label='Hartree-Fock')
pylab.plot(distances, np.subtract(energies[0], energies[1]), label='IQPE')
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('Energy difference from ExactEigensolver')
pylab.legend(loc='upper right')
pylab.show()
```
| github_jupyter |
# Import data and preprocess it
```
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
use_all_features = True
use_full_data = True
test_sml_size = 3000
#file paths
train_sig_path_sml = "data/train_sml_sig.csv"
train_bg_path_sml = "data/train_sml_bg.csv"
train_sig_path = "data/train_sig.csv"
train_bg_path = "data/train_bg.csv"
test_sig_path = "data/test_sig.csv"
test_bg_path = "data/test_bg.csv"
#read csv
train_sig_sml = pd.read_csv(train_sig_path_sml, header=0)
train_bg_sml = pd.read_csv(train_bg_path_sml, header=0)
train_sig = pd.read_csv(train_sig_path, header=0)
train_bg = pd.read_csv(train_bg_path, header=0)
test_sig = pd.read_csv(test_sig_path, header=0)
test_bg = pd.read_csv(test_bg_path, header=0)
#merge sig and bg dataframes
all_data_train_sml = train_bg_sml.append(train_sig_sml)
all_data_train = train_bg.append(train_sig)
all_data_test = test_bg.append(test_sig)
#one hot encode the labels -> i get two new features: Label_b and Label_s
all_data_train_sml_one_hot = pd.get_dummies(all_data_train_sml)
all_data_train_one_hot = pd.get_dummies(all_data_train)
all_data_test_one_hot = pd.get_dummies(all_data_test)
#two distinct lists of features
listOfSelectedFeatures = ["DER_mass_MMC", "DER_mass_transverse_met_lep", "DER_mass_vis", "DER_pt_tot", "DER_sum_pt",
"PRI_tau_pt", "PRI_lep_pt", "PRI_met"]
listOfAllFeatures = ['DER_mass_MMC', 'DER_mass_transverse_met_lep', 'DER_mass_vis', 'DER_pt_h', 'DER_deltaeta_jet_jet',
'DER_mass_jet_jet', 'DER_prodeta_jet_jet', 'DER_deltar_tau_lep', 'DER_pt_tot', 'DER_sum_pt',
'DER_pt_ratio_lep_tau', 'DER_met_phi_centrality', 'DER_lep_eta_centrality', 'PRI_tau_pt',
'PRI_tau_eta', 'PRI_tau_phi', 'PRI_lep_pt', 'PRI_lep_eta', 'PRI_lep_phi', 'PRI_met', 'PRI_met_phi',
'PRI_met_sumet', 'PRI_jet_num', 'PRI_jet_leading_pt', 'PRI_jet_leading_eta', 'PRI_jet_leading_phi',
'PRI_jet_subleading_pt', 'PRI_jet_subleading_eta', 'PRI_jet_subleading_phi', 'PRI_jet_all_pt']
#create numpy arrays containing the data
if use_all_features is True:
X_train_raw_full = all_data_train_one_hot[listOfAllFeatures].values
X_train_raw_sml = all_data_train_sml_one_hot[listOfAllFeatures].values
X_test_raw_full = all_data_test_one_hot[listOfAllFeatures].values
    #select only a certain number of points from the test dataset
perm = np.arange(len(X_test_raw_full))
np.random.shuffle(perm)
X_test_raw_sml = all_data_test_one_hot[listOfAllFeatures].values[perm[:test_sml_size]]
else:
X_train_raw_full = all_data_train_one_hot[listOfSelectedFeatures].values
X_train_raw_sml = all_data_train_sml_one_hot[listOfSelectedFeatures].values
X_test_raw_full = all_data_test_one_hot[listOfSelectedFeatures].values
    #select only a certain number of points from the test dataset
perm = np.arange(len(X_test_raw_full))
np.random.shuffle(perm)
X_test_raw_sml = all_data_test_one_hot[listOfSelectedFeatures].values[perm[:test_sml_size]]
y_train_full = all_data_train_one_hot[["Label_b", "Label_s"]].values
y_train_sml = all_data_train_sml_one_hot[["Label_b", "Label_s"]].values
y_test_full = all_data_test_one_hot[["Label_b", "Label_s"]].values
y_test_sml = y_test_full[perm[:test_sml_size]]
```
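As a sanity check on the preprocessing above, the effect of `pd.get_dummies` on the string `Label` column can be sketched in plain Python: each distinct value becomes its own 0/1 indicator column, which is where `Label_b` and `Label_s` come from. The toy labels below are hypothetical, not the real challenge files.

```
# Minimal sketch of one-hot encoding a string label column, as pd.get_dummies does.
def one_hot(labels):
    """Return a dict mapping 'Label_<value>' -> list of 0/1 indicators."""
    categories = sorted(set(labels))
    return {"Label_" + c: [1 if x == c else 0 for x in labels] for c in categories}

encoded = one_hot(["b", "s", "b"])
print(encoded)  # {'Label_b': [1, 0, 1], 'Label_s': [0, 1, 0]}
```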
### select which dataset to use and scale it
```
#select dataset based on preference stated before
if use_full_data is True:
X_train_to_scale = X_train_raw_full
X_test_to_scale = X_test_raw_full
y_train = y_train_full
y_test = y_test_full
else:
X_train_to_scale = X_train_raw_sml
X_test_to_scale = X_test_raw_sml
y_train = y_train_sml
y_test = y_test_sml
# preprocessing using zero mean and unit variance scaling
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.model_selection import train_test_split
# split data in training and validation set; obsolete since I've implemented it in the fit subroutine,
# but I've decided to keep it as it may prove to be useful for further analysis
split_val = False
if split_val is True:
#split training data in train and val set
X_train_to_scale, X_val_to_scale, y_train, y_val = train_test_split(X_train_to_scale, y_train, test_size=0.1)
#rescale data
#scaler = MinMaxScaler(feature_range=(0, 1))
scaler = StandardScaler() #scale the features to have 0 mean and unit variance
scaler.fit(X_train_to_scale)
X_train = scaler.transform(X_train_to_scale)
X_val = scaler.transform(X_val_to_scale)
X_test = scaler.transform(X_test_to_scale)
n_features = X_train.shape[1]
else:
#X_train_to_scale, y_train = X_train_to_scale , y_train
#rescale data
#scaler = MinMaxScaler(feature_range=(0, 1))
scaler = StandardScaler()
scaler.fit(X_train_to_scale)
X_train = scaler.transform(X_train_to_scale)
X_test = scaler.transform(X_test_to_scale)
n_features = X_train.shape[1]
print(X_train.shape)
#print(X_val.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
```
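What `StandardScaler` does can be sketched without scikit-learn: `fit` stores per-feature mean and standard deviation on the training data, and `transform` applies z = (x - mean) / std using those *training* statistics, which is why the test set above is scaled with the scaler fitted on `X_train_to_scale`. A minimal illustration on toy rows:

```
# Sketch of StandardScaler: fit computes per-feature statistics, transform applies them.
def fit_scaler(rows):
    n = len(rows)
    means = [sum(col) / n for col in zip(*rows)]
    stds = [(sum((v - m) ** 2 for v in col) / n) ** 0.5
            for col, m in zip(zip(*rows), means)]
    return means, stds

def transform(rows, means, stds):
    return [[(v - m) / s for v, m, s in zip(row, means, stds)] for row in rows]

means, stds = fit_scaler([[1.0, 10.0], [3.0, 30.0]])
print(transform([[1.0, 10.0], [3.0, 30.0]], means, stds))  # [[-1.0, -1.0], [1.0, 1.0]]
```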
# Data analysis
```
#check if data has been imported correctly
print("X_train:\n{}\nshape={}\n".format(X_train, X_train.shape))
print("y_train:\n{}\nshape={}\n".format(y_train, y_train.shape))
#print("X_val:\n{}\nshape={}\n".format(X_val,X_val.shape))
#print("y_val:\n{}\nshape={}\n".format(y_val,y_val.shape))
print("X_test:\n{}\nshape={}\n".format(X_test, X_test.shape))
print("y_test:\n{}\nshape={}\n".format(y_test, y_test.shape))
```
# calculate AMS
```
#
def compute_AMS(model, X_test):
"""
    Compute the Approximate Median Significance, given a model and the X_test set. Hardcoded to work with my tensors;
    one may want to modify it in order to make it work in a more general scenario.
Approximate Median Significance defined as:
AMS = sqrt(
2 { (s + b + b_r) log[1 + (s/(b+b_r))] - s}
)
where b_r = 10, b = background, s = signal, log is natural logarithm
"""
    # define a matrix which contains our predictions,
    # following the nomenclature suggested on the Kaggle Higgs challenge webpage
submission_matrix=[]
predictions=model.predict(X_test)
#fill the matrix
for i in np.arange(X_test.shape[0]):
        EventId = str(all_data_test_one_hot["EventId"].values[i])
pred=predictions[i]
if pred == 1:
#signal
label=1
else:
#background
label=0
submission_matrix.append(label)
submission_matrix = np.array(submission_matrix)
#matrix containing the labels and weights of the test dataset
solution_matrix=[]
#fill the matrix
for i in np.arange(X_test.shape[0]):
        weight = float(all_data_test["Weight"].values[i])
        label_s = str(all_data_test["Label"].values[i])
if label_s == "s":
#signal
label=1
else:
#bg
label=0
solution_matrix.append([label,weight])
solution_matrix = np.array(solution_matrix)
b=0.
s=0.
# create sum of the weights for b and s, following the AMS evaluation code provided by the challenge
for i in np.arange(len(X_test)):
if submission_matrix[i]==1:
if solution_matrix[i,0]==1:
s+=solution_matrix[i,1]
elif solution_matrix[i,0]==0:
b+=solution_matrix[i,1]
else:
print("shouldn't get there")
br = 10.0
radicand = 2 * ((s + b + br) * np.log(1.0 + s / (b + br)) - s)
if radicand < 0:
print('radicand is negative. Exiting')
exit()
else:
#return ams value
return np.sqrt(radicand)
```
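The formula inside `compute_AMS` can be isolated into a small helper for sanity checks (my own refactoring, not part of the challenge evaluation code). Here `s` and `b` are the weighted signal and background sums among events predicted as signal, and `b_r = 10` is the regularization term; for s much smaller than b the AMS is roughly s / sqrt(b + b_r).

```
import math

# Isolated AMS formula: AMS = sqrt(2 * ((s + b + br) * ln(1 + s/(b + br)) - s)).
def ams(s, b, br=10.0):
    radicand = 2.0 * ((s + b + br) * math.log(1.0 + s / (b + br)) - s)
    if radicand < 0:
        raise ValueError("radicand is negative")
    return math.sqrt(radicand)

print(ams(0.0, 100.0))  # 0.0: selecting no signal gives zero significance
```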
## import lib v6.3 with weight L2 regularization and decaying learning rate
```
from Aurelio_Amerio_MLPClassifier_lib_v6_3 import MLPClassifier
```
# model 1, 1 layer
```
learning_rate_1 = 0.001
keep_probs_1 = (0.8,)
batch_size_1 = 128
max_epochs_1 = 1000
beta_w_1 = 0
beta_b_1 = 0
display_step_1 = 10
n_features_1 = n_features
# %% create model with 1 hidden layers
model1 = MLPClassifier(n_layers=1, n_nodes=(256,), learning_rate=learning_rate_1, random_seed=None,
n_features=n_features_1, n_classes=2)
model1.fit(X_train=X_train, y_train=y_train, max_epochs=max_epochs_1,
display_step=display_step_1, batch_size=batch_size_1, keep_probs=keep_probs_1, validation_split=0.1,
beta_w=beta_w_1, beta_b=beta_b_1, verbose=True)
print("model1 accuracy on train set: {:.3f}".format(model1.score(X_train, y_train)))
print("model1 accuracy on validation set: {:.3f}".format(model1.score(model1.X_val, model1.y_val)))
# %% plot results
fig1, axes = plt.subplots(2, 1, figsize=(6, 6))
x_ticks = np.arange(0, max_epochs_1, display_step_1)
#axes[0].plot(x_ticks, model1.training_costs, color="blue", label="model1 training")
#axes[0].plot(x_ticks, model1.val_costs, color="blue", label="model1 test",linestyle="dashed")
axes[0].plot(x_ticks, model1.training_costs, color="red", label="model1 training")
axes[0].plot(x_ticks, model1.val_costs, color="orange", label="model1 validation", linestyle="dashed")
axes[0].set_xlabel("epochs")
axes[0].set_xscale("log")
axes[0].set_ylabel("cost")
axes[0].set_yscale("linear")
axes[0].set_ylim(0,2.5)
axes[0].set_title(
"cost(epoch) with dropout,\nbatch_size={}, epochs={}".format(batch_size_1, max_epochs_1))
axes[0].legend()
#axes[1].plot(x_ticks, model1.training_accuracy, color="blue", label="model1 training")
#axes[1].plot(x_ticks, model1.val_accuracy, color="blue", label="model1 test",linestyle="dashed")
axes[1].plot(x_ticks, model1.training_accuracy, color="red", label="model1 training")
axes[1].plot(x_ticks, model1.val_accuracy, color="orange", label="model1 validation", linestyle="dashed")
axes[1].set_xlabel("epochs")
axes[1].set_xscale("log")
axes[1].set_ylabel("accuracy")
axes[1].set_title(
"accuracy(epoch) with dropout,\nbatch_size={}, epochs={}".format(batch_size_1, max_epochs_1))
axes[1].legend()
fig1.tight_layout()
plt.show()
fig1.savefig("higgs_fig1.png", dpi=500)
print("model1 accuracy on test set: {:.3f}".format(model1.score(X_test, y_test)))
#ams can be computed only when the full dataset is used
#print("model1 AMS: {:.4f}".format(compute_AMS(model1, X_test)))
```
## Model2
```
learning_rate_2 = 0.001
keep_probs_2 = (0.5,)
batch_size_2 = 128
max_epochs_2 = 1000
beta_w_2 = 0
beta_b_2 = 0
decay_rate_2 = None #0.96
display_step_2 = 10
n_features_2 = n_features
# %% create model with 1 hidden layers
model2 = MLPClassifier(n_layers=1, n_nodes=(1024,), learning_rate=learning_rate_2, random_seed=None,
n_features=n_features_2, n_classes=2)
model2.fit(X_train=X_train, y_train=y_train, max_epochs=max_epochs_2,
display_step=display_step_2, batch_size=batch_size_2, keep_probs=keep_probs_2, validation_split=0.1,
beta_w=beta_w_2, beta_b=beta_b_2, decay_rate=decay_rate_2, verbose=True)
print("model2 accuracy on train set: {:.3f}".format(model2.score(X_train, y_train)))
print("model2 accuracy on validation set: {:.3f}".format(model2.score(model2.X_val, model2.y_val)))
# %% plot results
fig2, axes = plt.subplots(2, 1, figsize=(6, 6))
x_ticks = np.arange(0, max_epochs_2, display_step_2)
#axes[0].plot(x_ticks, model1.training_costs, color="blue", label="model1 training")
#axes[0].plot(x_ticks, model1.val_costs, color="blue", label="model1 test",linestyle="dashed")
axes[0].plot(x_ticks, model2.training_costs, color="red", label="model2 training")
axes[0].plot(x_ticks, model2.val_costs, color="orange", label="model2 validation", linestyle="dashed")
axes[0].set_xlabel("epochs")
axes[0].set_xscale("log")
axes[0].set_ylabel("cost")
axes[0].set_yscale("log")
#axes[0].set_ylim(0,10000)
axes[0].set_title(
"cost(epoch) with dropout,\nbatch_size={}, epochs={}".format(batch_size_2, max_epochs_2))
axes[0].legend()
#axes[1].plot(x_ticks, model1.training_accuracy, color="blue", label="model1 training")
#axes[1].plot(x_ticks, model1.val_accuracy, color="blue", label="model1 test",linestyle="dashed")
axes[1].plot(x_ticks, model2.training_accuracy, color="red", label="model2 training")
axes[1].plot(x_ticks, model2.val_accuracy, color="orange", label="model2 validation", linestyle="dashed")
axes[1].set_xlabel("epochs")
axes[1].set_xscale("log")
axes[1].set_ylabel("accuracy")
axes[1].set_title(
"accuracy(epoch) with dropout,\nbatch_size={}, epochs={}".format(batch_size_2, max_epochs_2))
axes[1].legend()
fig2.tight_layout()
plt.show()
fig2.savefig("higgs_fig2.png", dpi=500)
print("model2 accuracy on test set: {:.3f}".format(model2.score(X_test, y_test)))
#ams only available for all the data
#print("model2 AMS: {:.4f}".format(compute_AMS(model2, X_test)))
```
# model3 with 1 layer but with full data and all the features
```
learning_rate_3 = 0.001
# we need strong dropout to counteract the tendency of a model with many nodes to overfit
keep_probs_3 = (0.5,)
batch_size_3 = 128
max_epochs_3 = 200
beta_w_3 = 0
beta_b_3 = 0
decay_rate_3 = None #0.96
display_step_3 = 10
n_features_3 = n_features
# %% create model with 1 hidden layers
model3 = MLPClassifier(n_layers=1, n_nodes=(1024,), learning_rate=learning_rate_3, random_seed=None,
n_features=n_features_3, n_classes=2)
model3.fit(X_train=X_train, y_train=y_train, max_epochs=max_epochs_3,
display_step=display_step_3, batch_size=batch_size_3, keep_probs=keep_probs_3, validation_split=0.1,
beta_w=beta_w_3, beta_b=beta_b_3, decay_rate=decay_rate_3, verbose=True)
print("model3 accuracy on train set: {:.3f}".format(model3.score(X_train, y_train)))
print("model3 accuracy on validation set: {:.3f}".format(model3.score(model3.X_val, model3.y_val)))
# %% plot results
fig3, axes = plt.subplots(2, 1, figsize=(6, 6))
x_ticks = np.arange(0, max_epochs_3, display_step_3)
#axes[0].plot(x_ticks, model1.training_costs, color="blue", label="model1 training")
#axes[0].plot(x_ticks, model1.val_costs, color="blue", label="model1 test",linestyle="dashed")
axes[0].plot(x_ticks, model3.training_costs, color="red", label="model3 training")
axes[0].plot(x_ticks, model3.val_costs, color="orange", label="model3 validation", linestyle="dashed")
axes[0].set_xlabel("epochs")
axes[0].set_xscale("log")
axes[0].set_ylabel("cost")
axes[0].set_yscale("linear")
axes[0].set_ylim(0.3,0.5)
axes[0].set_title(
"cost(epoch),\nbatch_size={}, epochs={}".format(batch_size_3, max_epochs_3))
axes[0].legend()
#axes[1].plot(x_ticks, model1.training_accuracy, color="blue", label="model1 training")
#axes[1].plot(x_ticks, model1.val_accuracy, color="blue", label="model1 test",linestyle="dashed")
axes[1].plot(x_ticks, model3.training_accuracy, color="red", label="model3 training")
axes[1].plot(x_ticks, model3.val_accuracy, color="orange", label="model3 validation", linestyle="dashed")
axes[1].set_xlabel("epochs")
axes[1].set_xscale("log")
axes[1].set_ylabel("accuracy")
axes[1].set_title(
"accuracy(epoch),\nbatch_size={}, epochs={}".format(batch_size_3, max_epochs_3))
axes[1].legend()
fig3.tight_layout()
plt.show()
fig3.savefig("higgs_fig3.png", dpi=500)
print("model3 accuracy on test set: {:.3f}".format(model3.score(X_test, y_test)))
#ams only available for all the data
print("model3 AMS: {:.4f}".format(compute_AMS(model3, X_test)))
```
# model 4 with 3 hidden layers
```
learning_rate_4 = 0.001
# we need strong dropout to counteract the tendency of a model with many nodes to overfit
keep_probs_4 = (0.8, 0.5, 0.5, 0.5)
batch_size_4 = 128
max_epochs_4 = 200
beta_w_4 = 0.0001
beta_b_4 = 0.0001
decay_rate_4 = None #0.96
display_step_4 = 10
n_features_4 = n_features
# %% create model with 3 hidden layers
model4 = MLPClassifier(n_layers=4, n_nodes=(600,600,600,600), learning_rate=learning_rate_4, random_seed=None,
n_features=n_features_4, n_classes=2)
model4.fit(X_train=X_train, y_train=y_train, max_epochs=max_epochs_4,
display_step=display_step_4, batch_size=batch_size_4, keep_probs=keep_probs_4, validation_split=0.1,
beta_w=beta_w_4, beta_b=beta_b_4, decay_rate=decay_rate_4, verbose=True)
print("model4 accuracy on train set: {:.3f}".format(model4.score(X_train, y_train)))
print("model4 accuracy on validation set: {:.3f}".format(model4.score(model4.X_val, model4.y_val)))
# %% plot results
fig4, axes = plt.subplots(2, 1, figsize=(6, 6))
x_ticks = np.arange(0, max_epochs_4, display_step_4)
axes[0].plot(x_ticks, model4.training_costs, color="red", label="model4 training")
axes[0].plot(x_ticks, model4.val_costs, color="orange", label="model4 validation", linestyle="dashed")
axes[0].set_xlabel("epochs")
axes[0].set_xscale("linear")
axes[0].set_ylabel("cost")
axes[0].set_yscale("log")
#axes[0].set_ylim(0,10000)
axes[0].set_title(
"cost(epoch),\nbatch_size={}, epochs={}".format(batch_size_4, max_epochs_4))
axes[0].legend()
axes[1].plot(x_ticks, model4.training_accuracy, color="red", label="model4 training")
axes[1].plot(x_ticks, model4.val_accuracy, color="orange", label="model4 validation", linestyle="dashed")
axes[1].set_xlabel("epochs")
axes[1].set_xscale("log")
axes[1].set_ylabel("accuracy")
axes[1].set_title(
"accuracy(epoch),\nbatch_size={}, epochs={}".format(batch_size_4, max_epochs_4))
axes[1].legend()
fig4.tight_layout()
plt.show()
fig4.savefig("higgs_fig4.png", dpi=500)
print("model4 accuracy on test set: {:.3f}".format(model4.score(X_test, y_test)))
#ams only available for all the data
print("model4 AMS: {:.4f}".format(compute_AMS(model4, X_test)))
```
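The `keep_probs` tuples passed to `fit` above parameterize dropout per layer. A minimal sketch of "inverted" dropout, my own illustration rather than the library code: at train time each activation is kept with probability `keep_prob` and scaled by `1/keep_prob` so the expected activation is unchanged, and at test time the layer is the identity.

```
import random

def dropout(activations, keep_prob, training=True, rng=random.random):
    # at test time (or keep_prob == 1) the layer is the identity
    if not training or keep_prob == 1.0:
        return list(activations)
    # keep each activation with probability keep_prob, rescaled by 1/keep_prob
    return [a / keep_prob if rng() < keep_prob else 0.0 for a in activations]

random.seed(0)
out = dropout([1.0, 1.0, 1.0, 1.0], keep_prob=0.5)
print(all(v in (0.0, 2.0) for v in out))  # True
```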
# model 6 with 1 layer, exponential decay (model5 on the report)
```
learning_rate_6 = 0.01
# we need strong dropout to counteract the tendency of a model with many nodes to overfit
keep_probs_6 = (0.8,)
batch_size_6 = 128
max_epochs_6 = 300
beta_w_6 = 0.00005
beta_b_6 = 0.00005
decay_rate_6 = 0.96
decay_steps_6 = 10000 # I want the learning rate to be around 0.001 about 30% of the way through the optimization
display_step_6 = 10
n_features_6 = n_features
# %% create model with 3 hidden layers
model6 = MLPClassifier(n_layers=1, n_nodes=(1024,), learning_rate=learning_rate_6, random_seed=None,
n_features=n_features_6, n_classes=2)
model6.fit(X_train=X_train, y_train=y_train, max_epochs=max_epochs_6,
display_step=display_step_6, batch_size=batch_size_6, keep_probs=keep_probs_6, validation_split=None,
beta_w=beta_w_6, beta_b=beta_b_6, decay_rate=decay_rate_6, decay_steps=decay_steps_6, verbose=True)
print("model6 accuracy on train set: {:.3f}".format(model6.score(X_train, y_train)))
#print("model6 accuracy on validation set: {:.3f}".format(model6.score(model6.X_val, model6.y_val)))
# %% plot results
fig6, axes = plt.subplots(2, 1, figsize=(6, 6))
x_ticks = np.arange(0, max_epochs_6, display_step_6)
axes[0].plot(x_ticks, model6.training_costs, color="red", label="model5 training")
#axes[0].plot(x_ticks, model6.val_costs, color="orange", label="model6 validation", linestyle="dashed")
axes[0].set_xlabel("epochs")
axes[0].set_xscale("linear")
axes[0].set_ylabel("cost")
axes[0].set_yscale("log")
#axes[0].set_ylim(0,10000)
axes[0].set_title(
"cost(epoch),\nbatch_size={}, epochs={}".format(batch_size_6, max_epochs_6))
axes[0].legend()
axes[1].plot(x_ticks, model6.training_accuracy, color="red", label="model5 training")
#axes[1].plot(x_ticks, model6.val_accuracy, color="orange", label="model6 validation", linestyle="dashed")
axes[1].set_xlabel("epochs")
axes[1].set_xscale("log")
axes[1].set_ylabel("accuracy")
axes[1].set_title(
"accuracy(epoch),\nbatch_size={}, epochs={}".format(batch_size_6, max_epochs_6))
axes[1].legend()
fig6.tight_layout()
plt.show()
fig6.savefig("higgs_fig6_model5.png", dpi=500)
print("model6 accuracy on test set: {:.3f}".format(model6.score(X_test, y_test)))
#ams only available for all the data
print("model6 AMS: {:.4f}".format(compute_AMS(model6, X_test)))
```
# preproc pca
```
from sklearn.decomposition import PCA
# keep the leading principal components of the data
scaler_pca= StandardScaler()
scaler_pca.fit(X_train_to_scale)
X_train_scaled_pca=scaler_pca.transform(X_train_to_scale)
X_test_scaled_pca=scaler_pca.transform(X_test_to_scale)
n_components_pca = 22
pca = PCA(n_components=n_components_pca)
# fit the PCA model to the training data
pca.fit(X_train_scaled_pca)
X_train_pca = pca.transform(X_train_scaled_pca)
X_test_pca = pca.transform(X_test_scaled_pca)
pca.explained_variance_ratio_
```
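What `PCA(n_components=...)` computes can be sketched via the SVD (an illustration of the idea, not the scikit-learn implementation): center the data, take the singular value decomposition, project onto the top right singular vectors, and read the explained-variance ratio off the singular values. The toy data below is hypothetical.

```
import numpy as np

def pca(X, n_components):
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = (S ** 2) / (S ** 2).sum()             # explained-variance ratio
    return Xc @ Vt[:n_components].T, ratio[:n_components]

X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
proj, ratio = pca(X, 1)
print(ratio)  # ~[1.0]: this toy data is perfectly 1-dimensional
```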
# model 8 with 3 hidden layers, leaky relu (model6 in the report)
```
learning_rate_7 = 0.001
# we need strong dropout to counteract the tendency of a model with many nodes to overfit
keep_probs_7 = (0.8, 0.5, 0.5, 0.5)
batch_size_7 = 64
max_epochs_7 = 1000
beta_w_7 = 0.0005
beta_b_7 = 0.00005
decay_rate_7 = 0.96
decay_steps_7 = 10000 # I want the learning rate to be around 0.0001 about 30% of the way through the optimization
display_step_7 = 10
n_features_7 = n_components_pca
# %% create model with 3 hidden layers
model7 = MLPClassifier(n_layers=4, n_nodes=(600,400,400,400), learning_rate=learning_rate_7, random_seed=None,
n_features=n_features_7, n_classes=2)
model7.fit(X_train=X_train_pca, y_train=y_train, max_epochs=max_epochs_7,
display_step=display_step_7, batch_size=batch_size_7, keep_probs=keep_probs_7, validation_split=0.1,
beta_w=beta_w_7, beta_b=beta_b_7, decay_rate=decay_rate_7, decay_steps=decay_steps_7, verbose=True)
print("model7 accuracy on train set: {:.3f}".format(model7.score(X_train_pca, y_train)))
#print("model7 accuracy on validation set: {:.3f}".format(model7.score(model7.X_val, model6.y_val)))
# %% plot results
fig7, axes = plt.subplots(2, 1, figsize=(6, 6))
x_ticks = np.arange(0, max_epochs_7, display_step_7)
axes[0].plot(x_ticks, model7.training_costs, color="red", label="model6 training")
axes[0].plot(x_ticks, model7.val_costs, color="orange", label="model6 validation", linestyle="dashed")
axes[0].set_xlabel("epochs")
axes[0].set_xscale("linear")
axes[0].set_ylabel("cost")
axes[0].set_yscale("log")
#axes[0].set_ylim(0,10000)
axes[0].set_title(
"cost(epoch),\nbatch_size={}, epochs={}".format(batch_size_7, max_epochs_7))
axes[0].legend()
axes[1].plot(x_ticks, model7.training_accuracy, color="red", label="model6 training")
axes[1].plot(x_ticks, model7.val_accuracy, color="orange", label="model6 validation", linestyle="dashed")
axes[1].set_xlabel("epochs")
axes[1].set_xscale("log")
axes[1].set_ylabel("accuracy")
axes[1].set_title(
"accuracy(epoch),\nbatch_size={}, epochs={}".format(batch_size_7, max_epochs_7))
axes[1].legend()
fig7.tight_layout()
plt.show()
fig7.savefig("higgs_fig7_model6.png", dpi=500)
print("model7 accuracy on test set: {:.3f}".format(model7.score(X_test_pca, y_test)))
#ams only available for all the data
print("model7 AMS: {:.4f}".format(compute_AMS(model7, X_test_pca)))
```
# MXNet - Gluon Code Snippets
## 1. Import Libraries
```
from mxnet import autograd, nd
# #Gluon data module to read data
from mxnet.gluon import data as gdata
# #Neural Network Layers
from mxnet.gluon import nn
# #Model Parameter Initializer
from mxnet import init
# #Gluon module to define loss functions
from mxnet.gluon import loss as gloss
# #Optimization Algorithm
from mxnet.gluon import Trainer
```
## 2. Reading Data
```
"""
X: features
y: labels
"""
# #Combining the features and labels into a training set
dataset = gdata.ArrayDataset(features, labels)
# #Randomly reading data in batches - Mini Batch
batch_size = 10
data_iter = gdata.DataLoader(dataset, batch_size, shuffle=True)
```
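A plain-Python sketch of what `gdata.DataLoader(dataset, batch_size, shuffle=True)` does per pass (my own illustration, assuming list-like features and labels): shuffle the indices once, then yield `(features, labels)` slices of `batch_size`, with a smaller final batch if the sizes do not divide evenly.

```
import random

def data_iter(features, labels, batch_size, shuffle=True):
    idx = list(range(len(features)))
    if shuffle:
        random.shuffle(idx)  # one shuffle per pass over the data
    for start in range(0, len(idx), batch_size):
        batch = idx[start:start + batch_size]
        yield [features[i] for i in batch], [labels[i] for i in batch]

batches = list(data_iter([[1], [2], [3], [4], [5]], [0, 1, 0, 1, 0], batch_size=2))
print(len(batches))  # 3: two full batches and one leftover of size 1
```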
## 3. Model Definition
```
# ###########################
# # INPUT #
# ###########################
# #Sequential Container
net = nn.Sequential()
# ###########################
# # HIDDEN LAYERS #
# ###########################
net.add(nn.Dense(256, activation='relu')) # #256 hidden units with a ReLU activation function
# #DROPOUT
# #We add dropout after each of the fully connected layers
# #and specify the dropout probability
net.add(nn.Dense(256, activation="relu"),
# Add a dropout layer after the first fully connected layer
nn.Dropout(drop_prob1),
nn.Dense(256, activation="relu"),
# Add a dropout layer after the second fully connected layer
nn.Dropout(drop_prob2),
nn.Dense(10))
# ###########################
# # OUTPUT #
# ###########################
# #Adding a Dense layer with a scalar output
net.add(nn.Dense(1))
# #Adding a Dense layer with 10 outputs
net.add(nn.Dense(10))
```
### Parameter Initialization
```
"""
Default Method - Each weight parameter element is randomly sampled from
a uniform distribution U[-0.07,0.07], with the bias
parameter equal to 0
"""
net.initialize()
"""
Each weight parameter element is randomly sampled at
initialization from a normal distribution with zero
mean and sigma standard deviation.
The bias parameter is initialized to zero by default
"""
net.initialize(init.Normal(sigma=0.01))
"""
Reinitialize all the parameters in the network
'force_reinit' ensures that the variables are initialized again, regardless of
whether they were already initialized previously
"""
net.initialize(init=init.Normal(sigma=0.01), force_reinit=True)
net[0].weight.data()[0]
# #Reinitialize all parameters to a constant value of 1
net.initialize(init=init.Constant(1), force_reinit=True)
net[0].weight.data()[0]
# #Reinitialize the parameters in a specific layer
net[1].initialize(init=init.Constant(42), force_reinit=True)
"""
Custom Initialization
Sometimes the initialization methods we need are not provided
in the `init` module; in such cases we can implement a subclass
of the `Initializer` class
"""
class MyInit(init.Initializer):
"""
    w ~ U[5, 10]   with probability 1/4
    w ~ 0          with probability 1/2
    w ~ U[-10, -5] with probability 1/4
"""
def _init_weight(self, name, data):
print('Init', name, data.shape)
data[:] = nd.random.uniform(low=-10, high=10, shape=data.shape)
data *= data.abs() >= 5
net.initialize(MyInit(), force_reinit=True)
net[0].weight.data()[0]
```
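The zeroing trick in `MyInit._init_weight` can be checked in plain NumPy: sampling U[-10, 10] and then zeroing every entry with |w| < 5 realizes exactly the stated mixture (zero with probability 1/2, otherwise uniform on the two outer intervals).

```
import numpy as np

# Replicate the MyInit logic: uniform sample, then mask out the middle band.
rng = np.random.default_rng(0)
w = rng.uniform(-10, 10, size=1000)
w *= np.abs(w) >= 5  # boolean mask multiplies small entries to zero
print(((w == 0) | (np.abs(w) >= 5)).all())  # True
```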
### Parameter Access
```
# #Sequential Class - Each layer of the network can be selected via indexing - net[i]
net[0].params
# #Each layer of the network can be selected via indexing - net[i]
net[0]
# #The weights and biases in each layer of the network
w = net[0].weight.data()
b = net[0].bias.data()
# #This above is equivalent to
w = net[0].params['dense0_weight'].data()
b = net[0].params['dense0_bias'].data()
# #All the parameters only for the first layer
net[0].collect_params()
# #All the parameters of the entire network
# #each of the lines below produces a differently formatted output
net.collect_params
net.collect_params()
# #We could also use RegEx to filter out parameters
net.collect_params('.*weight')
net.collect_params('dense0.*')
def relu(X):
return nd.maximum(X, 0)
```
### Layers and Blocks
A Block consists of one or more layers.
Requirements for a Block are:
1. Input data
2. `forward` method produces the output
3. `backward` method produces the gradient (computed automatically by `autograd`)
4. Initialize and store block specific parameters
In fact, the Sequential class is derived from the Block class
```
class MLP(nn.Block):
# Declare a layer with model parameters. Here, we declare two fully
# connected layers
def __init__(self, **kwargs):
# Call the constructor of the MLP parent class Block to perform the
# necessary initialization. In this way, other function parameters can
# also be specified when constructing an instance, such as the model
# parameter, params, described in the following sections
super(MLP, self).__init__(**kwargs)
self.hidden = nn.Dense(256, activation='relu') # Hidden layer
self.output = nn.Dense(10) # Output layer
# Define the forward computation of the model, that is, how to return the
# required model output based on the input x
def forward(self, x):
        # #Forward propagation step
return self.output(self.hidden(x))
net = MLP()
net.initialize()
net(x)
"""
Custom Layer without parameters
"""
class CenteredLayer(nn.Block):
def __init__(self, **kwargs):
super(CenteredLayer, self).__init__(**kwargs)
def forward(self, x):
# #subtracts the mean from the input block
return x - x.mean()
layer = CenteredLayer()
layer(nd.array([1, 2, 3, 4, 5]))
net = nn.Sequential()
net.add(nn.Dense(128), CenteredLayer())
net.initialize()
"""
Custom Layer with parameters
"""
from mxnet import gluon  # needed below for gluon.ParameterDict
params = gluon.ParameterDict()
params.get('param2', shape=(2, 3))
params
class MyDense(nn.Block):
# #Custom implementation of the Dense layer
# units: the number of outputs in this layer; in_units: the number of
# inputs in this layer
def __init__(self, units, in_units, **kwargs):
# #Dense layer has two parameters - weight and bias
super(MyDense, self).__init__(**kwargs)
self.weight = self.params.get('weight', shape=(in_units, units))
self.bias = self.params.get('bias', shape=(units,))
def forward(self, x):
linear = nd.dot(x, self.weight.data()) + self.bias.data()
return nd.relu(linear)
dense = MyDense(units=3, in_units=5)
dense.params
net = nn.Sequential()
net.add(MyDense(8, in_units=64),
MyDense(1, in_units=8))
net.initialize()
net(nd.random.uniform(shape=(2, 64)))
```
## 4. Define Loss Functions
```
loss = gloss.L2Loss() # #Squared Loss or L2-norm loss
loss = gloss.SoftmaxCrossEntropyLoss()
def softmax(X):
X_exp = X.exp()
partition = X_exp.sum(axis=1, keepdims=True)
return X_exp/partition
def cross_entropy(y_hat, y):
return -nd.pick(y_hat, y).log()
```
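A NumPy translation of the `softmax`/`cross_entropy` pair above (an illustration, not the Gluon implementation), including the standard max-subtraction trick for numerical stability, which is mathematically a no-op since softmax is invariant to shifting each row by a constant:

```
import numpy as np

def softmax(X):
    e = np.exp(X - X.max(axis=1, keepdims=True))  # stabilize before exponentiating
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y_hat, y):
    # pick the predicted probability of the true class for each row, then -log
    return -np.log(y_hat[np.arange(len(y)), y])

probs = softmax(np.array([[1.0, 1.0], [100.0, 0.0]]))
print(probs.sum(axis=1))  # [1. 1.]
```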
## 5. Define the Optimization Algorithm
```
# #Algo: Mini-batch Stochastic Gradient Descent Algorithm
"""
The optimization algorithm will iterate over all parameters present
in the network
"""
trainer = Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.03})
```
## 6. Model Training
```
"""
* For a pre-defined number of epochs, we make a pass
over the dataset that has been sampled via mini-batching
the features(X) and the labels(y)
* Then, for each mini-batch:
- make prediction via `net(X)` and compare it to the label
y and compute the loss function in the forward pass
- compute gradients via backward pass
- update the model parameters via SGD in the `trainer()` method
"""
num_epochs = 3
for epoch in range(1, num_epochs + 1):
for X, y in data_iter:
with autograd.record():
l = loss(net(X), y)
l.backward()
trainer.step(batch_size)
l = loss(net(features), labels)
print('epoch %d, loss: %f' % (epoch, l.mean().asnumpy()))
"""
Compute the error in estimating the weights and biases
"""
w = net[0].weight.data()
print('Error in estimating w', true_w.reshape(w.shape) - w)
b = net[0].bias.data()
print('Error in estimating b', true_b - b)
```
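The same loop can be written out in plain NumPy for a linear model so that each step the `Trainer` hides is explicit: a forward pass, the gradient of the squared loss, and a minibatch SGD update. The synthetic `true_w`/`true_b` below are hypothetical, mirroring the error check at the end of the section.

```
import numpy as np

rng = np.random.default_rng(0)
true_w, true_b = np.array([2.0, -3.4]), 4.2
X = rng.normal(size=(1000, 2))
y = X @ true_w + true_b + rng.normal(scale=0.01, size=1000)

w, b = np.zeros(2), 0.0
lr, batch_size = 0.03, 10
for epoch in range(3):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        i = idx[start:start + batch_size]
        err = X[i] @ w + b - y[i]               # forward-pass residuals
        w -= lr * X[i].T @ err / batch_size     # gradient of the mean L2 loss
        b -= lr * err.mean()
```

After three epochs the estimates should be close to the true parameters, just as the Gluon version checks with `true_w.reshape(w.shape) - w`.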
## 7. File I/O
```
"""
NDArray
"""
x = nd.arange(4)
nd.save('x-file', x)
x2 = nd.load('x-file')
# #nd.save('x-files', [x, y])
# #mydict = {'x': x, 'y': y}
# #nd.save('mydict', mydict)
"""
Gluon Model Parameters
This saves model parameters, not the entire model itself
"""
net.save_parameters('mlp.params')
```
# Generate volcanic ERF time series
Theme Song: Mt. Pinatubo<br>
Artist: The Low Frequency In Stereo<br>
Album: Futuro<br>
Released: 2009
```
from netCDF4 import Dataset, num2date
import numpy as np
import matplotlib.pyplot as pl
import pandas as pd
from ar6.utils import check_and_download
import scipy.stats
%matplotlib inline
pl.rcParams['figure.figsize'] = (16, 9)
pl.rcParams['font.size'] = 18
pl.rcParams['font.family'] = 'Arial'
def linear(delta_saod, scaling):
erf = scaling*delta_saod
return erf
```
Downloading the volcanic data from Toohey & Sigl (eVolv2k) and GloSSAC requires registration, so there is no quick way to automate this step.
Download data from:
https://cera-www.dkrz.de/WDCC/ui/cerasearch/entry?acronym=eVolv2k_v2
Put this in '../data_input_large/eVolv2k_v3_EVA_AOD_-500_1900_1.nc'
Download data from:
https://asdc.larc.nasa.gov/project/GloSSAC/GloSSAC_2.0
Put this in '../data_input_large/GloSSAC_V2.0.nc'
```
# -500 to 1900 ERF
nc = Dataset('../data_input_large/eVolv2k_v3_EVA_AOD_-500_1900_1.nc')
aod550_mt = nc.variables['aod550'][:]
lat_mt = nc.variables['lat'][:]
time_mt = nc.variables['time'][:]
nc.close()
time_mt[-51*12]
nc = Dataset('../data_input_large/GloSSAC_V2.0.nc')
data_glossac = nc.variables['Glossac_Aerosol_Optical_Depth'][:]
lat_glossac = nc.variables['lat'][:]
trp_hgt_glossac = nc.variables['trp_hgt'][:] # lat, month
alt_glossac = nc.variables['alt'][:]
nc.close()
lat_glossac
alt_glossac
data_glossac[0,0,:]
lat_mt_bnds = np.concatenate([[90], 0.5*(lat_mt[1:]+lat_mt[:-1]), [-90]])
weights = -np.squeeze(np.diff(np.sin(np.radians(lat_mt_bnds))))
lat_glossac_bnds = np.concatenate(([-90], 0.5*(lat_glossac[1:]+lat_glossac[:-1]), [90]))
weights_glossac = np.diff(np.sin(np.radians(lat_glossac_bnds)))
aod_mt = np.zeros((len(time_mt)))
for i in range(len(time_mt)):
aod_mt[i] = np.average(aod550_mt[i,:],weights=weights)
angstrom = (550/525)**(-2.33)
aod_glossac = np.zeros(480)
for i in range(480):
aod_glossac[i] = np.average(data_glossac[i,:,2],weights=weights_glossac)*angstrom
check_and_download(
'../data_input_large/CMIP_1850_2014_extinction_550nm_strat_only_v3.nc',
'ftp://iacftp.ethz.ch/pub_read/luo/CMIP6/CMIP_1850_2014_extinction_550nm_strat_only_v3.nc'
)
nc = Dataset('../data_input_large/CMIP_1850_2014_extinction_550nm_strat_only_v3.nc')
ext = nc.variables['ext550'][:].transpose((2,1,0)) # time, height, lat
lev = nc.variables['altitude'][:]
lat = nc.variables['latitude'][:]
time = nc.variables['month'][:]
print(nc.variables['month'])
nc.close()
lat_bnds = np.concatenate(([-90], 0.5*(lat[1:]+lat[:-1]), [90]))
weights = np.diff(np.sin(np.radians(lat_bnds)))
aod_cmip6 = np.zeros(165*12)
for i in range(165*12):
    aod_cmip6[i] = np.average(np.sum(ext[i,...] * 0.5, axis=0), weights=weights)  # 0.5 is the layer thickness in km
aod = np.concatenate((aod_mt[:-51*12],aod_cmip6[:129*12],aod_glossac))
len(aod)
aod[28200:28812] = (1-np.linspace(0,1,612))*aod_mt[-51*12:]+np.linspace(0,1,612)*aod_cmip6[:612]
aod[29748:29868] = (1-np.linspace(0,1,120))*aod_cmip6[1548:1668] + np.linspace(0,1,120)*aod_glossac[:120]
# repeat last year
aod = np.append(aod, aod[-12:])
pl.plot(np.arange(1845+1/24,1901+1/24,1/12), aod_mt[28140:], label='Toohey & Sigl -500 to 1900 incl. background')
pl.plot(np.arange(1850+1/24,1905+1/24,1/12), aod_cmip6[:660], label='CMIP6')
pl.plot(np.arange(1845+1/24,1905+1/24,1/12), aod[28140:28860], label='blended')
pl.legend()
pl.plot(np.arange(1979+1/24,2019+1/24,1/12), aod_glossac, label='Glossac')
pl.plot(np.arange(1975+1/24,2015+1/24,1/12), aod_cmip6[-480:], label='CMIP6')
pl.plot(np.arange(1975+1/24,2020+1/24,1/12), aod[-540:], label='blended')
pl.legend()
pl.plot(aod[-528:])
pl.plot(np.arange(1845+1/24,2020,1/12), aod[28140:])
volc_erf_minus20 = np.zeros((2520))
aod_2500yr_clim = np.zeros(12)
for i in range(12):
aod_2500yr_clim[i] = np.mean(aod[i:(2250*12):12]) # change of approach: pre-industrial defined as pre-1750
for i in range(2520):
volc_erf_minus20[i] = np.mean(linear(aod[i*12:i*12+12] - aod_2500yr_clim, -20))
print (np.mean(volc_erf_minus20))
aod[i*12:i*12+12]
i
pl.plot(volc_erf_minus20)
pl.plot(aod)
years = np.arange(-500,2020, dtype=int)
df = pd.DataFrame(data=volc_erf_minus20, index=years, columns=['volcanic_erf'])
df.index.name = 'year'
df.to_csv('../data_output/volcanic_erf.csv')
months = np.arange(-500+1/24,2020,1/12)
df = pd.DataFrame(data=aod, index=months, columns=['stratospheric_AOD'])
df.index.name = 'year'
df.to_csv('../data_output/volcanic_sAOD_monthly_-50001-201912.csv')
aod_annual_mean = np.zeros(2520)
for i in range(2520):
aod_annual_mean[i] = np.mean(aod[i*12:i*12+12])
scipy.stats.linregress(aod_annual_mean, volc_erf_minus20)
```
# Optional Coding Exercise
## -- Implementing a "CART" Decision Tree From Scratch
```
%load_ext watermark
%watermark -d -u -a 'Sebastian Raschka' -v -p numpy,scipy,matplotlib
import numpy as np
```
<br>
<br>
<br>
<br>
<br>
<br>
## 1) Implementing a "CART" Decision Tree from Scratch
In this exercise, you are going to learn how to implement the CART decision tree algorithm we discussed in class. This decision tree algorithm will construct a binary decision tree based on maximizing Information Gain using the Gini Impurity measure on continuous features.
Implementing machine learning algorithms from scratch is a very important skill, and this homework will provide exercises that will help you to develop this skill. Even if you are interested in the more theoretical aspects of machine learning, being comfortable with implementing and trying out algorithms is vital for doing research, since even the more theoretical papers in machine learning are usually accompanied by experiments or simulations to a) verify results and b) compare algorithms with the state of the art.
Since many students are not expert Python programmers (yet), I will provide partial solutions to the homework tasks so that you have a framework or guide for implementing the solutions. Areas that you need to fill in will be marked with comments (e.g., `# YOUR CODE`). For these partial solutions, I first implemented the functions myself and then deleted the parts you need to fill in, replacing them with these comments. However, note that you can, of course, use more or fewer lines of code than I did. In other words, all that matters is that the function you write can create the same outputs as the ones I provide. How many lines of code you need to implement that function, and how efficient it is, does not matter here. The expected outputs for the respective functions will be provided for most functions so that you can double-check your solutions.
### 1.1) Splitting a node (4 pts)
First, we are going to implement a function that splits a dataset along a feature axis into sub-datasets. For this, we assume that the feature values are continuous (we are expecting NumPy float arrays). If the input is a NumPy integer array, we could convert it into a float array via
float_array = integer_array.astype(np.float64)
To provide an intuitive example of how the splitting function should work, suppose you are given the following NumPy array with four feature values, feature values 0-3:
np.array([0.0, 1.0, 4.0, 1.0, 0.0, 3.0, 1.0, 0.0, 1.0, 2.0])
The function you are going to implement should return a dictionary, where the dictionary keys store the information about which data point goes to the left child node and which data point goes to the right child node after applying a threshold for splitting.
For example, if we were to use a `split` function on the array shown above with a threshold $t=2.5$, the split function should return the following dictionary:
{
    'left': array([0, 1, 3, 4, 6, 7, 8, 9]),  # smaller than or equal to threshold
    'right': array([2, 5]),                    # larger than threshold
    'threshold': 2.5                           # threshold for splitting, e.g., 2.5 means <= 2.5
}
Note that we also store a "threshold" key here to keep track of what value we used for the split -- we will need this later.
Now it's your turn to implement the split function.
```
# SOLUTION
def split(array, t):
"""
Function that splits a feature based on a threshold.
Parameters
----------
array : NumPy array, type float, shape=(num_examples,)
A NumPy array containing feature values (float values).
t : float
A threshold parameter for dividing the examples into
a left and a right child node.
Returns
--------
d : dictionary of the split
A dictionary that has 3 keys, 'left', 'right', 'threshold'.
The 'threshold' simply references the threshold t we provided
as function argument. The 'left' child node is an integer array
containing the indices of the examples corresponding to feature
values with value <= t. The 'right' child node is an integer array
that stores the indices of the examples for which the feature value > t.
"""
index = np.arange(array.shape[0])
mask = array <= t
left = index[mask]
right = index[~mask]
d = {'left': left, 'right': right, 'threshold': t}
return d
# DO NOT EDIT OR DELETE THIS CELL
ary = np.array([0.0, 1.0, 4.0, 1.0, 0.0, 3.0, 1.0, 0.0, 1.0, 2.0])
print(split(ary, t=2.5))
print(split(ary, t=1.5))
print(split(ary, t=-0.5))
print(split(ary, t=1.0))
```
### 1.2) Implement a function to compute the Gini Impurity (6 pts)
After implementing the splitting function, the next step is to implement a criterion function so that we can compare splits on different features. I.e., we use this criterion function to decide which feature is the best feature to split for growing the decision tree at each node. As discussed in class, our splitting criterion will be Information Gain. However, before we implement an Information Gain function, we need to implement a function that computes the Gini Impurity at each node, which we need to compute Information Gain. For your reference, we defined Gini Impurity as follows:
$$G(p) = 1 - \sum_i (p_i)^2$$
where you can think of $p_i$ as the proportion of examples with class label $i$ at a given node.
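As a quick, hedged numeric check of this definition (not part of the graded solution): for the labels `[0, 0, 1, 1]` the proportions are $p_0 = p_1 = 0.5$, so $G = 1 - (0.25 + 0.25) = 0.5$.

```python
import numpy as np

labels = np.array([0, 0, 1, 1])
_, counts = np.unique(labels, return_counts=True)
p = counts / counts.sum()         # class proportions: [0.5, 0.5]
g = 1.0 - float(np.sum(p ** 2))   # Gini impurity: 1 - (0.25 + 0.25) = 0.5
```

Your own `gini` implementation may compute the proportions differently; only the resulting value matters.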
```
# SOLUTION
def gini(array):
"""
Function that computes the Gini Impurity.
Parameters
-----------
array : NumPy array, type int, shape=(num_examples,)
A NumPy array containing integers representing class
labels.
Returns
----------
Gini impurity (float value).
Example
----------
>>> gini(np.array([0, 0, 1, 1]))
0.5
"""
frequencies = [(array==c).sum()/array.shape[0] for c in np.unique(array)]
res = [p**2 for p in frequencies]
return 1. - sum(res)
```
TIP: To check your solution, try out the `gini` function on some example arrays. Note that the Gini impurity is maximal (0.5 for two classes) when the classes are uniformly distributed; it is minimal (0.0) if the array contains labels from only a single class.
```
# DO NOT EDIT OR DELETE THIS CELL
print(round(gini(np.array([0, 1, 0, 1, 1, 0])), 4))
print(round(gini(np.array([1, 2])), 4))
print(round(gini(np.array([1, 1])), 4))
print(round(gini(np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])), 4))
print(round(gini(np.array([0, 0, 0])), 4))
print(round(gini(np.array([1, 1, 1, 0, 1, 4, 4, 2, 1])), 4))
```
### 1.3) Implement Information Gain (6 pts)
Now that you have a working solution for the `gini` function, the next step is to compute the Information Gain. For your reference, information gain is computed as
$$GAIN(\mathcal{D}, x_j) = H(\mathcal{D}) - \sum_{v \in Values(x_j)} \frac{|\mathcal{D}_v|}{|\mathcal{D}|} H(\mathcal{D}_v).$$
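Here $H$ denotes the impurity measure, which in our case is the Gini impurity, and the $\mathcal{D}_v$ are the child nodes produced by a split. As a hedged, self-contained numeric sketch of the formula (using a tiny local Gini helper, not the graded `gini` function), a perfect split of `y = [0, 0, 1, 1]` yields a gain of 0.5:

```python
import numpy as np

def gini_local(labels):
    # 1 minus the sum of squared class proportions
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

y = np.array([0, 0, 1, 1])
left, right = y[:2], y[2:]  # a perfect split: each child node is pure
gain = gini_local(y) \
    - (len(left) / len(y)) * gini_local(left) \
    - (len(right) / len(y)) * gini_local(right)
# parent impurity is 0.5; both children have impurity 0, so the gain is 0.5
```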
```
# SOLUTION
def information_gain(x_array, y_array, split_dict):
"""
Function to compute information gain.
Parameters
-----------
x_array : NumPy array, shape=(num_examples)
NumPy array containing the continuous feature
values of a given feature variable x.
y_array : NumPy array, shape=(num_examples)
NumPy array containing the integer class labels
corresponding to the training examples.
split_dict : dictionary
A dictionary created by the `split` function, which
contains the indices for the left and right child node.
Returns
---------
Information gain for the given split in `split_dict`.
"""
parent_gini = gini(y_array)
for child in ('left', 'right'):
# freq := |D_v| / |D|
freq = split_dict[child].shape[0] / float(x_array.shape[0])
child_gini = gini(y_array[split_dict[child]])
parent_gini -= freq*child_gini
return parent_gini
```
I added the following code cell for your convenience to double-check your solution. If your results don't match the results shown below, there is a bug in your implementation of the `information_gain` function.
```
# DO NOT EDIT OR DELETE THIS CELL
x_ary = np.array([0.0, 1.0, 4.0, 1.0, 0.0, 3.0, 1.0, 0.0, 1.0, 2.0])
y_ary = np.array([0, 1, 1, 0, 0, 0, 1, 1, 0, 0])
split_dict_1 = split(ary, t=2.5)
print(information_gain(x_array=x_ary,
y_array=y_ary,
split_dict=split_dict_1))
split_dict_2 = split(ary, t=1.5)
print(information_gain(x_array=x_ary,
y_array=y_ary,
split_dict=split_dict_2))
split_dict_3 = split(ary, t=-1.5)
print(information_gain(x_array=x_ary,
y_array=y_ary,
split_dict=split_dict_3))
```
### 1.4) Creating different splitting thresholds (4 pts)
Now, we should have almost all the main components that we need for implementing the CART decision tree algorithm: a `split` function, a `gini` function, and an `information_gain` function based on the `gini` function. However, since we are working with continuous feature variables, we need to find a good threshold $t$ on the number line of each feature, which we can use with our function `split`.
For simplicity, we are going to implement a function that creates different thresholds based on the values found in a given feature variable. More precisely, we are going to implement a function `get_thresholds` that returns the lowest and highest feature value in a feature value array, plus the midpoint between each adjacent pair of values (assuming the feature array is sorted).
For example, if a feature array consists of the values
[0.1, 1.2, 2.4, 2.5, 2.7, 3.3, 3.7]
the returned thresholds should be
[0.1, (0.1+1.2)/2, (1.2+2.4)/2, (2.4+2.5)/2, (2.5+2.7)/2, (2.7+3.3)/2, (3.3+3.7)/2, 3.7]
```
# SOLUTION
def get_thresholds(array):
"""
Get thresholds from a feature array.
Parameters
-----------
array : NumPy array, type float, shape=(num_examples,)
Array with feature values.
Returns
-----------
NumPy float array containing thresholds.
"""
sorted_ary = np.sort(array)
output_array = np.zeros(array.shape[0]+1, dtype=np.float64)
output_array[0] = sorted_ary[0]
output_array[-1] = sorted_ary[-1]
for i in range(array.shape[0]-1):
output_array[i+1] = (sorted_ary[i] + sorted_ary[i+1])/2
return output_array
# DO NOT EDIT OR DELETE THIS CELL
a = np.array([0.1, 1.2, 2.4, 2.5, 2.7, 3.3, 3.7])
print(get_thresholds(a))
b = np.array([3.7, 2.4, 1.2, 2.5, 3.3, 2.7, 0.1])
print(get_thresholds(b))
```
### 1.5) Selecting the best splitting threshold (4 pts)
In the previous section, we implemented a function `get_thresholds` to create different splitting thresholds for a given feature. In this section, we are now implementing a function that selects the best threshold (the threshold that results in the largest information gain) from the array returned by `get_thresholds` by combining the
- `get_thresholds`
- `split`
- `information_gain`
functions.
```
# SOLUTION
def get_best_threshold(x_array, y_array):
"""
Function to obtain the best threshold
based on maximizing information gain.
Parameters
-----------
x_array : NumPy array, type float, shape=(num_examples,)
Feature array containing the feature values of a feature
for each training example.
y_array : NumPy array, type int, shape=(num_examples,)
NumPy array containing the class labels for each
training example.
Returns
-----------
A float representing the best threshold to split the given
feature variable on.
"""
all_thresholds = get_thresholds(x_array)
info_gains = np.zeros(all_thresholds.shape[0])
for idx, t in enumerate(all_thresholds):
split_dict_t = split(x_array, t=t)
ig = information_gain(x_array=x_array,
y_array=y_array,
split_dict=split_dict_t)
info_gains[idx] = ig
best_idx = np.argmax(info_gains)
best_threshold = all_thresholds[best_idx]
return best_threshold
# DO NOT EDIT OR DELETE THIS CELL
x_ary = np.array([0.0, 1.0, 4.0, 1.0, 0.0, 3.0, 1.0, 0.0, 1.0, 2.0])
y_ary = np.array([0, 1, 1, 0, 0, 0, 1, 1, 0, 0])
print(get_best_threshold(x_array=x_ary,
y_array=y_ary))
x_ary = np.array([0.0, 3.0, 1.0, 0.0, 1.0, 2.0, 0.0, 1.0, 4.0, 1.0,])
y_ary = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 0])
print(get_best_threshold(x_array=x_ary,
y_array=y_ary))
```
### 1.6) Decision Tree Splitting (4 pts)
The next task is to combine all the previously developed functions to recursively split a dataset on its different features to construct a decision tree that separates the examples from different classes well. We will call this function `make_tree`.
For simplicity, the decision tree returned by the `make_tree` function will be represented by a Python dictionary. To illustrate this, consider the following dataset consisting of 6 training examples (class labels are 0 or 1) and 2 feature variables $X_0$ and $X_1$:
```
Inputs:
[[0. 0.]
[0. 1.]
[1. 0.]
[1. 1.]
[2. 0.]
[2. 1.]]
Labels:
[0 1 0 1 1 1]
```
Based on this dataset with 6 training examples and two features, the resulting decision tree, represented as a Python dictionary, should have the following form:
```
{'X_1 <= 0.000000': {'X_0 <= 1.500000': array([0, 0]),
'X_0 > 1.500000': array([1])
},
'X_1 > 0.000000': array([1, 1, 1])
}
```
Let me further illustrate what the different parts of the dictionary mean. Here, the `'X_1'` in `'X_1 <= 0'` refers to the second feature (the second column of the NumPy array; remember that Python starts the index at 0, in contrast to R).
- 'X_1 <= 0': For training examples stored in this node where the 2nd feature is less than or equal to 0.
- 'X_1 > 0': For training examples stored in this node where the 2nd feature is larger than 0.
The "array" is a NumPy array that stores the class labels of the training examples at that node. In the case of `'X_1 <= 0'`, we actually store a sub-dictionary, because this node can be split further into 2 child nodes with `'X_0 <= 1.500000'` and `'X_0 > 1.500000'`.
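To make the dictionary structure concrete, here is a hedged sketch (not part of the graded solution) of manually walking the example tree above for the data point `(2.0, 0.0)`:

```python
import numpy as np

tree = {'X_1 <= 0.000000': {'X_0 <= 1.500000': np.array([0, 0]),
                            'X_0 > 1.500000': np.array([1])},
        'X_1 > 0.000000': np.array([1, 1, 1])}

x = np.array([2.0, 0.0])
node = tree['X_1 <= 0.000000']   # x[1] = 0.0 satisfies X_1 <= 0
leaf = node['X_0 > 1.500000']    # x[0] = 2.0 satisfies X_0 > 1.5
# leaf holds the class labels stored at that node: array([1])
```

The `_traverse` method you will see in section 1.7 automates exactly this walk.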
```
# SOLUTION
def make_tree(X, y):
"""
A recursive function for building a binary decision tree.
Parameters
----------
X : NumPy array, type float, shape=(num_examples, num_features)
A design matrix representing the feature values.
y : NumPy array, type int, shape=(num_examples,)
NumPy array containing the class labels corresponding to the training examples.
Returns
----------
Dictionary representation of the decision tree.
"""
# ZHONGJIE:
# This is also ok:
# if y.shape[0] == 1 or y.shape[0] == 0:
# return y
# Return array if node is empty or pure (all labels are the same or 1 example in leaf node)
if y.shape[0] == 1 or y.shape[0] == 0 or (np.unique(y)).shape[0] == 1:
return y
# Select the best threshold for each feature
thresholds = np.array([get_best_threshold(x_array=feature, y_array=y) for feature in X.T])
# Compute information gain for each feature based on the best threshold for each feature
gains = np.zeros(X.shape[1])
split_dicts = []
for idx, (feature, threshold) in enumerate(zip(X.T, thresholds)):
split_dict = split(feature, threshold)
split_dicts.append(split_dict)
ig = information_gain(feature, y, split_dict)
gains[idx] = ig
# Early stopping if there is no information gain
if (gains <= 1e-05).all():
return y
# Else, get best feature
best_feature_idx = np.argmax(gains)
results = {}
subset_dict = split_dicts[best_feature_idx]
for node in ('left', 'right'):
child_y_subset = y[subset_dict[node]]
child_X_subset = X[subset_dict[node]]
if node == 'left':
results["X_%d <= %f" % (best_feature_idx, subset_dict['threshold'])] = \
make_tree(child_X_subset, child_y_subset)
else:
results["X_%d > %f" % (best_feature_idx, subset_dict['threshold'])] = \
make_tree(child_X_subset, child_y_subset)
return results
```
I added the following code cell for your convenience to double-check your solution. If your results don't match the results shown below, there is a bug in your implementation of the `make_tree` function.
```
# DO NOT EDIT OR DELETE THIS CELL
x1 = np.array([0., 0., 1., 1., 2., 2.])
x2 = np.array([0., 1., 0., 1., 0., 1.])
X = np.array( [x1, x2]).T
y = np.array( [0, 1, 0, 1, 1, 1])
print('Inputs:\n', X)
print('\nLabels:\n', y)
print('\nDecision tree:\n', make_tree(X, y))
```
### 1.7) Building a Decision Tree API (4 pts)
The final step of this part of the homework is now to write an API around our decision tree code so that we can use it for making predictions. Here, we will use the common convention, established by scikit-learn, to implement the decision tree as a Python class with
- a `fit` method that learns the decision tree model from a training set via the `make_tree` function we already implemented;
- a `predict` method to predict the class labels of training examples or any unseen data points.
For making predictions, since not all leaf nodes are guaranteed to be single training examples, we will use a majority voting function to predict the class label as discussed in class. I already implemented a `_traverse` method, which will recursively traverse a decision tree dictionary that is produced by the `make_tree` function.
Note that for simplicity, the `predict` method will only be able to accept one data point at a time (instead of a collection of data points). Hence `x` is a vector of size $\mathbb{R}^m$, where $m$ is the number of features. I use capital letters `X` to denote a matrix of size $\mathbb{R}^{n\times m}$, where $n$ is the number of training examples.
```
# SOLUTION
class CARTDecisionTreeClassifer(object):
def __init__(self):
pass
def fit(self, X, y):
self.splits_ = make_tree(X, y)
def _majority_vote(self, label_array):
return np.argmax(np.bincount(label_array))
def _traverse(self, x, d):
if isinstance(d, np.ndarray):
return d
for key in d:
if '<=' in key:
name, value = key.split(' <= ')
feature_idx = int(name.split('_')[-1])
value = float(value)
if x[feature_idx] <= value:
return self._traverse(x, d[key])
else:
name, value = key.split(' > ')
feature_idx = int(name.split('_')[-1])
value = float(value)
if x[feature_idx] > value:
return self._traverse(x, d[key])
def predict(self, x):
label_array = self._traverse(x, self.splits_)
return self._majority_vote(label_array)
```
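Since `predict` accepts only one data point at a time, a batch of predictions requires a loop. As a hedged, generic sketch (`predict_batch` and `predict_fn` are hypothetical names, not part of the assignment API):

```python
import numpy as np

def predict_batch(predict_fn, X):
    # Apply a per-example predict function to every row of the matrix X
    # and collect the results in a NumPy array.
    return np.array([predict_fn(x) for x in X])

# usage with a stand-in classifier; any callable taking one example works,
# e.g. predict_batch(tree.predict, X) for a fitted tree
labels = predict_batch(lambda x: int(x[0] > 1.0),
                       np.array([[0., 0.], [2., 1.]]))
# labels -> array([0, 1])
```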
I added the following code cell for your convenience to double-check your solution. If your results don't match the results shown below, there is a bug in your implementation of the decision tree class.
```
# DO NOT EDIT OR DELETE THIS CELL
tree = CARTDecisionTreeClassifer()
tree.fit(X, y)
print(tree.predict(np.array([0., 0.])))
print(tree.predict(np.array([0., 1.])))
print(tree.predict(np.array([1., 0.])))
print(tree.predict(np.array([1., 0.])))
print(tree.predict(np.array([1., 1.])))
print(tree.predict(np.array([2., 0.])))
print(tree.predict(np.array([2., 1.])))
```
# Apprentice Challenge
This challenge is a diagnostic of your current python pandas, matplotlib/seaborn, and numpy skills. These diagnostics will help inform your selection into the Machine Learning Guild's Apprentice program.
## Challenge Background: A Magic Eight Ball & Randomness

Do you remember these days? Holding a question in your mind and shaking the magic eight ball for an answer?
From Mattel via Amazon: "The original Magic 8 Ball is the novelty toy that lets anyone seek advice about their future! All you have to do is simply 'ask the ball' any yes or no question, then wait for your answer to be revealed. Turn the toy upside-down and look inside the window on the bottom - this is where your secret message appears!"
Answers can be positive (i.e. 'It is certain'), negative (i.e. 'Don’t count on it') or neutral (i.e. 'Ask again later').
In this data analysis programming challenge, you will be programmatically analyzing a Magic Eight Ball's fortune telling. This is the type of exploratory analysis typically performed before building machine learning models.
## Instructions
You need to know your way around `pandas` DataFrames and basic Python programming. You have **1 hour** to complete the challenge. We strongly discourage searching the internet for challenge answers.
General Notes:
* Read the first paragraph above to familiarize yourself with the topic.
* Feel free to poke around with the iPython notebook.
* To run a cell, press `CTRL+ENTER`
* Complete each of the tasks listed below in the notebook.
* You need to provide your code for the challenge in the cells which say "-- YOUR CODE FOR TASK NUMBER --"
* Make sure to run the very last read-only code cell.
**Please reach out to [Guild Mailbox](mailto:guildprogram@deloitte.com) with any questions.**
# Task 1: Generate Your Fortune!
**Instructions**
Ask our Python-based magic eight ball 20 questions. You can ask it anything. Save the questions and the fortunes in respective lists, which we will use downstream. Use the eightball method `get_fortune` to generate the responses. The sample code below will help you get started.
```
import eightball
fortune = eightball.get_fortune("What is the answer to life, the universe, and everything?")
print("The Eight Balls Says: {}".format(fortune))
```
Once you reach the box/cell containing the Python code, click on it, press Ctrl + Enter, and notice what happens!
**Sample Questions**
* Is there ice cream on the moon?
* Will I make a lot of money and become a bizzilionaire?
* Am I going to get a pony?
**Expected Output**
* `questions` which is a `list` of 20 strings
* `fortunes` which is a `list` of 20 strings
```
# -- YOUR CODE FOR TASK 1 --
# Import packages
from datetime import datetime
# Start Time
start_time = datetime.now()
print(start_time)
# Import the eightball module
# Store your questions list as a variable "questions"
questions = ...
# Generate respones to your questions and store them as a variable "fortunes"
fortunes = ...
# Print the result
print(...)
print("Check your answers")
print(type(questions))
print(type(fortunes))
```
# Task 2: Create a DataFrame!
**Instructions**
Let's analyze your newly minted fortunes. Perhaps we can uncover the magic of the eightball. To start our analysis, put `questions` and `fortunes` in a pandas `DataFrame` called `questions_fortunes`. Your DataFrame should have two columns, one for each of the respective lists. What is the shape of your DataFrame?
**Output**
* `questions_fortunes` which is a `pandas.DataFrame` with two columns called `question` & `fortune`
* Shape of questions_fortunes stored in a variable `shape`
```
# -- YOUR CODE FOR TASK 2 --
# Import pandas as pd
# Combine two lists into a dataframe with specified column names
questions_fortunes = ...
# Define the shape of the dataframe
shape = ...
print("Check your answers")
print(list(questions_fortunes))
```
# Task 3: Getting Fortunes
**Instructions**
In the data sub-folder of the challenge folder, there is a dataset ("questions-fortunes.txt") with additional questions and magic eightball fortunes. Read that dataset into a pandas DataFrame called `temp` and combine with your `questions_fortunes` DataFrame. Be sure the index doesn't repeat. Call this new DataFrame `questions_fortunes_updt`.
**Output**
* A temp dataframe of additional questions/fortunes from "questions-fortunes.txt" datafile
* An updated pandas DataFrame `questions_fortunes_updt` with additional rows from the "questions-fortunes.txt" datafile.
```
# -- YOUR CODE FOR TASK 3--
# Create temp DataFrame from "questions-fortunes.txt"
temp = ...
# Combine with existing `questions_fortunes` DataFrame
questions_fortunes_updt = ...
print("Check your answers")
print(questions_fortunes_updt.shape[0])
```
# Task 4: Common Fortunes
**Instructions**
With something close to 1,700 questions and fortunes, we can study the patterns of fortune telling.
***Part 1:***
* Create a variable named `Fortune_Counts` which counts the number of times each of the available fortunes occurs in `questions_fortunes_updt`. The DataFrame should have two columns: `fortune` and `num_fortune`. The DataFrame should be sorted by `num_fortune` in descending order. Print the entire result.
* We know that fortunes from the magic ball fall into one of three categories: Positive, Negative, and Neutral. Use a dictionary of lookup values to assign one of these categories to each question/fortune. Add this as a column `category` to the `Fortune_Counts` DataFrame.
* You can access the positive, negative, neutral lookup with the `eightball.fortune_category` property. To add a new column in your DataFrame using a dictionary try adapting this technique:
```
raw = [[0, "Pony"],
[0, "Saddle"],
[0, "Lasso"],
[1, "Saddle"]
]
prices = {'Pony': 9.99,
'Saddle': 4.95,
          'Lasso': 3.25
}
df = pd.DataFrame(raw, columns=["orderID", "item"])
df['price'] = df['item'].map(prices)
```
***Part 2:***
Create a barchart of your dataset `Fortune_Counts` from Part 1 and color the bars by their category. Create a `seaborn.barplot` with a bar for each fortune colored by category (positive / negative / neutral). Please use the seaborn plotting library. You can install seaborn using `pip`. You can read about the API for the barplot [here](https://seaborn.pydata.org/generated/seaborn.barplot.html). Make the x-axis num_fortune, the y-axis fortune.
**Output**
*Part 1:* A sorted DataFrame detailing the number of questions per fortune outcome, along with the mapped category
*Part 2:* A `seaborn.barplot` with a bar for each fortune colored by category (positive / negative / neutral).
```
# Task 4 Part 1
# -- YOUR CODE FOR TASK 4.1 --
# Create dataFrame 'Fortune_Counts' with fortune, category, and num_fortune
Fortune_Counts = ...
# Import categories from eightball
# Create column 'category' mapping the category to each question
Fortune_Counts['category'] = ...
print(Fortune_Counts)
print("Check your answers")
# Task 4 Part 2
# -- YOUR CODE FOR TASK 4.2 --
import seaborn as sns
# Create barplot using sns.barplot()
plt = sns.barplot(...)
print("Check your answers")
print("Ignore this cell")
print("Check your answers")
print("Keep up the good work!")
print("Pass this Cell")
```
# Task 5: Question your Questions
Magic Eightballs have twenty set responses. We can safely assume that our Python-based magic eight ball is not drawing its responses from the great beyond. Understanding the patterns of your questions along with the fortunes provided may help interpret the algorithms behind the eight ball.
**Instructions**
***Part 1:*** How long are your questions? Do a quick character count on each of your questions. Include the new data in `questions_fortunes_updt` as a new column called `question_length`. For this task, we recommend using a pandas method and a base python function in the same line.
What is the average question length?
What is the average question length by fortune?
***Part 2:*** It seems there may be a correlation between the input (question) length and the output (fortune). Use a pivot table to look at the number of questions associated with each question length (as the index) vs. the fortune told (columns). What do you notice? Make sure all rows display. Fill null values with "-" for easier interpretability.
**Output**
*Part 1:*
* A new column in `questions_fortunes_updt` called `question_length`
* A variable called `Avg_Length` of the average length of all questions
* A variable called `Avg_Length_Fortune` of the average length of all questions by Fortune
*Part 2:*
* Pivot Table displaying all rows
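As a generic, hedged illustration on toy stand-in data (not the challenge dataset, and not the graded answer), a string-length column and a pivot table can be built like this:

```python
import pandas as pd

df = pd.DataFrame({"question": ["Hi?", "Is it so?", "Really?"],
                   "fortune": ["Yes", "No", "Yes"]})
# pandas method + base-python function in the same line
df["question_length"] = df["question"].apply(len)
# question length as the index, fortunes as columns, counts as values
table = pd.pivot_table(df, index="question_length", columns="fortune",
                       values="question", aggfunc="count").fillna("-")
```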
```
# Task 5 Part 1
# -- YOUR CODE FOR TASK 5.1 --
# Create new column `question-length`
# Calculate average length
Avg_Length = ...
# Calculate average length by fortune
Avg_Length_Fortune = ...
print("Check your answers")
# Task 5 Part 2
# -- YOUR CODE FOR TASK 5.2 --
# Display all rows
# Create pivot table
print("Check your answers")
print("Do you excel at Pivot Tables?")
print("We're trying to find out.")
```
# Task 6: Telling Analysis
**Instructions**
Remember our assumption that the output is a function of the input. Based on the pivot table above, it appears that each question length is associated with only one fortune, suggesting that the fortune choice is a function of the input question's string length. Thus, we should expect that two different questions of the same length would produce the same result. The code below tests this hypothesis.
Our job is done then right? We can know what the eightball is going to tell us with any question, so we can tell the future. This is because the algorithm for this magic eight ball uses the length of the question as the seed.
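As a hedged illustration only (the internals of the real `eightball` module aren't shown here, and the option strings below are assumptions), a length-seeded fortune teller could look like this:

```python
import random

# hypothetical option list; the real module's options may differ
OPTIONS = ["It is certain", "Don't count on it", "Ask again later"]

def get_fortune_sketch(question, options=OPTIONS):
    # Seeding the RNG with the question's length makes the "fortune"
    # a pure function of the input length: equal-length questions match.
    rng = random.Random(len(question))
    return rng.choice(options)

same = (get_fortune_sketch("Can whales dance?")
        == get_fortune_sketch("Can zebras paint?"))
# same -> True: both questions have 17 characters
```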
***Part 1:***
Use the list below to test this hypothesis.
```
same_length = ["Can whales dance?",
"Can zebras paint?"
]
```
***Part 2:***
Using what you now know about this magic eight ball, write your own function `get_new_fortune` that takes an input `question` and outputs a fortune. Include at least two arguments in your function: `question` and `options`, with `options` still being the options list from the eightball module. Get creative and add a third parameter to your function which alters the fortune told!
Test your function by applying it to a question 'Will I make a lot of money and become a bizzilionaire?' to get a fortune.
***Output***
* `hypothesis_test`, a boolean variable
* `get_new_fortune` a function with at least two inputs: `question` and `options`
```
# Task 6 Part 1
# -- YOUR CODE FOR TASK 6.1 --
# Same Length List
same_length = ["Can whales dance?",
"Can zebras paint?"
]
# Code to test if fortunes told are identical
hypothesis_test = ...
print(hypothesis_test)
print("Check your answers")
# Task 6 Part 2
# -- YOUR CODE FOR TASK 6.2 --
# Create your function `get_new_fortune`
# Test your function on the question: "Will I make a lot of money and become a bizzilionaire?"
print("Check your answers")
print("Was our hypothesis true?")
print("Can you tell your own fortune now?")
```
# Wrapping up!
Please save this notebook as "Last Name - First Name.ipynb". Make sure to save this file and submit it via the link you received in your Deloitte email.
Happy coding!
## References
1. [Javascript Magic 8 Ball with Basic DOM Manipulation](https://medium.com/@kellylougheed/javascript-magic-8-ball-with-basic-dom-manipulation-1636b83c3c26)
2. [Mattel Games Magic 8 Ball](https://www.amazon.com/Mattel-Games-Magic-8-Ball/dp/B00001ZWV7) - Where to buy on Amazon.
<hr style="height:3px;border:none;color:#333;background-color:#333;" />
<img style=" float:right; display:inline" src="http://opencloud.utsa.edu/wp-content/themes/utsa-oci/images/logo.png"/>
### **University of Texas at San Antonio**
<br/>
<br/>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 2.5em;"> **Open Cloud Institute** </span>
<hr style="height:3px;border:none;color:#333;background-color:#333;" />
### Machine Learning/BigData EE-6973-001-Fall-2016
<br/>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Paul Rad, Ph.D.** </span>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Ali Miraftab, Research Fellow** </span>
<hr style="height:1.5px;border:none;color:#333;background-color:#333;" />
<hr style="height:1.5px;border:none;color:#333;background-color:#333;" />
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 2em;"> **Sentiment Analysis of Movie Reviews** </span>
<br/>
<br/>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.6em;"> Divya Bhaskaran, Akhilesh Reddy Baddigam </span>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.4em;"> *University of Texas at San Antonio, San Antonio, Texas, USA* </span>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.4em;"> {ioa034, qlb680}@my.utsa.edu </span>
<br/>
<br/>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Project Definition:** </span> <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> Convolutional neural networks are widely used in machine learning applications; here, one is used to classify a sentence from a movie review as positive or negative, based on the reviews viewers give to a movie. In our model we use word embeddings, which map the large vocabulary into low-dimensional vectors used as input. </span>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> Our approach to the text classification problem is to label the dataset into positive and negative sets, process the data into the required format, apply convolution filters, obtain a predicted output, and train the model using the functions and techniques available in TensorFlow. The main idea is to experiment with the libraries available in TensorFlow and improve the efficiency over the current approach.</span>
[1]: Yoon Kim (Sep 2014). Convolutional Neural Networks for Sentence Classification. New York University
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Outcome:** </span> <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> Applying a convolutional neural network to sentence classification and improving the efficiency of the prediction. </span>
<span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.5em;"> **Dataset:** </span> <span style="color:#000; font-family: 'Bebas Neue'; font-size: 1.3em;"> This dataset is taken from the <a href="http://www.cs.cornell.edu/people/pabo/movie-review-data/">Cornell website</a>. The site contains a number of datasets; specifically, we use the sentence polarity dataset v1.0 (movie review dataset), which contains 5331 positive reviews and 5331 negative reviews as individual sentences. </span>
[1]: www.cs.cornell.edu/people/pabo/movie-review-data/
| github_jupyter |
# Improving Computer Vision accuracy with convolutional networks
In the previous workshop, you saw how to recognize items of clothing with a neural network made up of 3 layers. You experimented with the impact of the model's various parameters, such as the number of neurons in the hidden layer, the number of epochs, and so on, on the final accuracy.
The previous code is reproduced below as a quick reminder. Run the following cell and note the accuracy displayed at the end of training.
```
import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images / 255.0
test_images=test_images / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
```
Your accuracy is probably around 89% on the training set and 87% on the validation set... not too bad... But how can we improve this score? We can use convolutions. We won't go into much detail here, but the concept of convolutional neural networks is to learn to detect specific patterns in the content of an image.
If you have ever done image processing using a filter (as here: https://en.wikipedia.org/wiki/Kernel_(image_processing)), then convolutions will look very familiar.
In short, you take a filter (usually 3x3 or 5x5) and pass it over the image. By changing the underlying pixels according to the formula of this filter, represented by a matrix, you can perform operations such as edge detection. For example, if you look at the link above, you will see that for a 3x3 filter defined for edge detection, the middle cell is set to 8 and all its neighbors to -1. In this case, for each pixel, you multiply its value by 8 and then subtract the value of each neighbor. Doing this for every pixel produces a new image with enhanced edges.
This is perfect for computer vision, because the features that define an object often make up only part of the whole image, and the information we need is much smaller than the full set of pixels. This concept lets us focus only on the features that are highlighted.
By adding convolution layers before your dense layers, the information fed to the dense layers is much more targeted, and potentially more accurate.
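As a quick illustration of the edge-detection filter described above, here is a toy example: a 5x5 image with a bright vertical stripe, convolved with the 3x3 kernel (center 8, neighbors -1) in plain numpy. This is only a sketch of the operation, not how Keras implements it.

```python
import numpy as np

# Toy 5x5 grayscale "image": a bright vertical stripe in the middle
image = np.array([
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
], dtype=float)

# The 3x3 edge-detection kernel described above: middle cell 8, neighbors -1
kernel = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
], dtype=float)

def convolve2d(img, k):
    """Slide the 3x3 kernel over every interior pixel (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

edges = convolve2d(image, kernel)
print(edges)  # every row is [-27, 54, -27]: the stripe stands out from its surroundings
```

The strong positive response on the stripe and the negative responses on either side are exactly the "enhanced edges" effect the filter is designed to produce.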
**Exercise:**
Add convolution layers to the previous code, and look at the impact this has on accuracy.<br>
You should reach at least 92% accuracy on the training data and 91% on the validation data.
**Hints**:
- You have 60,000 images of size 28\*28\*1
- Doc [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D)
- Doc [MaxPooling2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D)
```
import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
### Start of code ### (≈ 4 lines of code)
### End of code ###
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
```
You should have reached an accuracy close to 93% on the training data and 91% on the validation data. That is a significant improvement; you are heading in the right direction!
Try running the code for more epochs, say 20, and look at the results! Although the training results keep getting better and better, the validation results tend to drop, due to overfitting.
Look back at the code above, then walk step by step through how the convolutions are built:
The first step is to gather the data. You will notice a few changes, because the training data needs to be reshaped. Indeed, the first convolution expects a single tensor containing all the data. So instead of passing 60000 28x28x1 images as a list, we must pass a single 4D array of 60000x28x28x1, and likewise for the test images. If you don't do this, you will get an error during training because the convolution layers will not recognize the shapes.
```
import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
```
The next step is to define your model. Now, instead of starting with the input layer, you must first add a convolution layer. The parameters are as follows:
1. The number of convolutions you want to generate. This number is completely arbitrary, but it is common to start with a multiple of 32.
2. The size of the convolution, in our case a 3x3 grid.
3. The activation function, in our case ReLU which, as a reminder, equals x when x > 0 and 0 otherwise.
4. In the first layer, the shape of the input data.
You then follow with a MaxPooling layer, which compresses the image while keeping the most important aspects determined by the convolution. By specifying (2,2) for the MaxPooling, the effect is to divide the size of the image by 4. Without going too deep into the details, the idea is that it creates a 2x2 array of pixels, takes the largest value in that array, and turns those 4 pixels into 1 pixel. Iterating across the whole image, the number of pixels both horizontally and vertically is halved, reducing the image to 25% of its original size.
You can call the model.summary() method to see the sizes and shapes of your network, and you will notice that after each MaxPooling layer, the image size is divided.
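The pooling step can be checked on a tiny array; this is a plain-numpy sketch of what `MaxPooling2D(2, 2)` does, not the Keras implementation:

```python
import numpy as np

# A hypothetical 4x4 feature map
fmap = np.array([
    [1, 3, 2, 1],
    [4, 6, 5, 0],
    [7, 2, 8, 3],
    [0, 1, 4, 9],
], dtype=float)

def max_pool_2x2(x):
    """2x2 max pooling with stride 2: keep the largest value of each block."""
    h, w = x.shape
    out = np.zeros((h // 2, w // 2))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            out[i // 2, j // 2] = x[i:i + 2, j:j + 2].max()
    return out

print(max_pool_2x2(fmap))
# [[6. 5.]
#  [7. 9.]]
```

Each 2x2 block collapses to its maximum, so a 4x4 map becomes 2x2: a quarter of the original size, as described above.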
```
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
```
Then add another convolution
```
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2)
```
And now flatten the output. After this you will have exactly the same neural network structure as the non-convolutional version
```
tf.keras.layers.Flatten(),
```
The same dense layers of 128, and 10 for the output, as in the pre-convolution example.
```
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
```
Now you can compile your model, call the fit method for training, and evaluate the loss and accuracy on the validation set.
```
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
```
# Visualizing the convolutions and pooling
This code lets you visualize the convolutions graphically. The `print(test_labels[:30])` shows you the first 30 true labels of the test set, and you can see that the ones at indices 0, 23, and 28 have the same value (9). They are all shoes. Take a look at the result of the convolution at each of these indices, and you will start to see some features they have in common emerge.
Feel free to change the index values to look at different images.
```
print(test_labels[:30])
import matplotlib.pyplot as plt
from tensorflow.keras import models
%matplotlib inline
f, axarr = plt.subplots(3,4)
FIRST_IMAGE=0
SECOND_IMAGE=23
THIRD_IMAGE=28
CONVOLUTION_NUMBER = 1
layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)
for x in range(0,4):
f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[0,x].grid(False)
f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[1,x].grid(False)
f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[2,x].grid(False)
```
**Exercises:**
1. Try modifying the convolutions. Change the 32s to 16 or 64. What impact does this have on accuracy and/or training time?
2. Remove the final convolution. What impact does this have on accuracy or training time?
3. Why not add more convolutions? What impact does this have on accuracy or training time? Experiment.
4. Remove all convolutions except the first. What is the impact? Experiment.
5. In the previous workshop you implemented a callback to check your loss function and stop training as soon as a certain value was reached. See if you can implement it here!
```
import tensorflow as tf
### Start of code ### (≈ 4 lines of code)
### End of code ###
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
```
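For exercise 5, the stopping logic can be sketched as below. In the notebook you would subclass `tf.keras.callbacks.Callback` and pass `callbacks=[StopAtAccuracy(0.95)]` to `model.fit()`; here a stand-in model object replaces the real Keras model so the logic can run on its own, and the 0.95 threshold is purely illustrative.

```python
class StopAtAccuracy:
    """Stop training once accuracy crosses a threshold.

    In Keras this would be: class StopAtAccuracy(tf.keras.callbacks.Callback).
    """
    def __init__(self, target=0.95):
        self.target = target
        self.model = None  # Keras sets this attribute before training starts

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if logs.get('accuracy', 0.0) >= self.target:
            print(f"Reached {self.target:.0%} accuracy, stopping training.")
            self.model.stop_training = True

# Demonstration with a stand-in for a Keras model
class FakeModel:
    stop_training = False

cb = StopAtAccuracy(target=0.95)
cb.model = FakeModel()
cb.on_epoch_end(epoch=3, logs={'accuracy': 0.97})
print(cb.model.stop_training)  # True
```

Setting `self.model.stop_training = True` is how Keras callbacks request an early stop; training ends after the current epoch.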
| github_jupyter |
# Spark Train Logistic Regression
Train Logistic Regression classifier with Apache SparkML
```
%%bash
export version=`python --version |awk '{print $2}' |awk -F"." '{print $1$2}'`
echo $version
if [ $version == '36' ] || [ $version == '37' ]; then
echo 'Starting installation...'
    pip3 install pyspark==2.4.8 wget==3.2 pyspark2pmml==0.5.1 > install.log 2>&1
if [ $? == 0 ]; then
echo 'Please <<RESTART YOUR KERNEL>> (Kernel->Restart Kernel and Clear All Outputs)'
else
echo 'Installation failed, please check log:'
cat install.log
fi
elif [ $version == '38' ] || [ $version == '39' ]; then
    pip3 install pyspark==3.1.2 wget==3.2 pyspark2pmml==0.5.1 > install.log 2>&1
if [ $? == 0 ]; then
echo 'Please <<RESTART YOUR KERNEL>> (Kernel->Restart Kernel and Clear All Outputs)'
else
echo 'Installation failed, please check log:'
cat install.log
fi
else
    echo 'Currently only python 3.6, 3.7, 3.8 and 3.9 are supported, in case you need a different version please open an issue at https://github.com/IBM/claimed/issues'
exit -1
fi
from pyspark import SparkContext, SparkConf, SQLContext
import os
from pyspark.ml.classification import LogisticRegression
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark2pmml import PMMLBuilder
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import MinMaxScaler
import logging
import shutil
import site
import sys
import wget
import re
if sys.version[0:3] == '3.8':
url = ('https://github.com/jpmml/jpmml-sparkml/releases/download/1.7.2/'
'jpmml-sparkml-executable-1.7.2.jar')
wget.download(url)
shutil.copy('jpmml-sparkml-executable-1.7.2.jar',
site.getsitepackages()[0] + '/pyspark/jars/')
elif sys.version[0:3] == '3.6':
url = ('https://github.com/jpmml/jpmml-sparkml/releases/download/1.5.12/'
'jpmml-sparkml-executable-1.5.12.jar')
wget.download(url)
else:
    raise Exception('Currently only python 3.6 and 3.8 are supported, in case '
'you need a different version please open an issue at '
'https://github.com/elyra-ai/component-library/issues')
data_parquet = os.environ.get('data_parquet',
'data.parquet') # input file name (parquet)
master = os.environ.get('master',
"local[*]") # URL to Spark master
model_target = os.environ.get('model_target',
"model.xml") # model output file name
data_dir = os.environ.get('data_dir',
'../../data/') # temporary directory for data
input_columns = os.environ.get('input_columns',
'["x", "y", "z"]') # input columns to consider
parameters = list(
map(lambda s: re.sub('$', '"', s),
map(
lambda s: s.replace('=', '="'),
filter(
lambda s: s.find('=') > -1 and bool(re.match(r'[A-Za-z0-9_]*=[.\/A-Za-z0-9]*', s)),
sys.argv
)
)))
for parameter in parameters:
logging.warning('Parameter: ' + parameter)
exec(parameter)
conf = SparkConf().setMaster(master)
if sys.version[0:3] == '3.6':
conf.set("spark.jars", 'jpmml-sparkml-executable-1.5.12.jar')
sc = SparkContext.getOrCreate(conf)
sqlContext = SQLContext(sc)
spark = sqlContext.sparkSession
df = spark.read.parquet(data_dir + data_parquet)
# register a corresponding query table
df.createOrReplaceTempView('df')
from pyspark.sql.types import DoubleType
df = df.withColumn("x", df.x.cast(DoubleType()))
df = df.withColumn("y", df.y.cast(DoubleType()))
df = df.withColumn("z", df.z.cast(DoubleType()))
splits = df.randomSplit([0.8, 0.2])
df_train = splits[0]
df_test = splits[1]
indexer = StringIndexer(inputCol="class", outputCol="label")
vectorAssembler = VectorAssembler(inputCols=eval(input_columns),
outputCol="features")
normalizer = MinMaxScaler(inputCol="features", outputCol="features_norm")
# import org.apache.spark.ml.feature.RFormula
# val irisData = spark.read.format("csv").option("header", "true").
# option("inferSchema", "true").load("Iris.csv")
# val irisSchema = irisData.schema
# val rFormula = new RFormula().setFormula("Species ~ .")
# val dtClassifier = new DecisionTreeClassifier().setLabelCol(
# rFormula.getLabelCol).setFeaturesCol(rFormula.getFeaturesCol)
# val pipeline = new Pipeline().setStages(Array(rFormula, dtClassifier))
lr = LogisticRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8)
pipeline = Pipeline(stages=[indexer, vectorAssembler, normalizer, lr])
model = pipeline.fit(df_train)
prediction = model.transform(df_train)
binEval = MulticlassClassificationEvaluator(). \
setMetricName("accuracy"). \
setPredictionCol("prediction"). \
setLabelCol("label")
binEval.evaluate(prediction)
pmmlBuilder = PMMLBuilder(sc, df_train, model)
pmmlBuilder.buildFile(data_dir + model_target)
```
| github_jupyter |
# Testing
In the Digital Humanities Lab, we're going to be ensuring that our code is thoroughly documented and tested. This is important because we are collaborating with others and we will also be sharing our code publicly. Once you get used to writing documentation, then tests, then code, you may find that writing the code comes more easily because you have already thought through what a function (for example) does and what the possible edge cases are.
Not being careful and thorough with testing can cause significant problems. Some historical examples of failure due to not testing correctly or not testing thoroughly include:
* [Mars Probe Lost Due to Simple Math Error](http://articles.latimes.com/1999/oct/01/news/mn-17288)
* [Why carmakers always insisted on male crash dummies](https://www.boston.com/cars/news-and-reviews/2012/08/22/why-carmakers-always-insisted-on-male-crash-test-dummies)
* [Boeing 787 Dreamliners contain a potentially catastrophic software bug](https://arstechnica.com/information-technology/2015/05/boeing-787-dreamliners-contain-a-potentially-catastrophic-software-bug/)
While the lab will not be developing probes, cars, or airplanes, it is still important to test code to ensure that it is useful to other developers and end users. We recommend writing the test prior to writing the code.
## doctest
Python comes prepackaged with a test framework module called [doctest](https://docs.python.org/3.7/library/doctest.html). This module searches docstrings for pieces of text that look like interactive Python sessions, then executes those sessions to confirm that the code runs exactly as expected.
The doctest also generates documentation for our code. We'll go through an example of using doctest with a function we create called `count_vowels()`.
We start by naming the function and writing a doctest in triple quotes.
```
def count_vowels(word):
"""
Given a single word, return the number of vowels in that single word.
>>> count_vowels('paris')
2
"""
```
So far, we have written a sentence on what the function does, and a test that if the word `paris` is provided, the function will return `2` as there are two vowels in that word. This provides a line of documentation and an example of the function with expected output for humans.
We can also add documentation for computers to read, telling it that the computer should expect the parameter of `word` to be of type string, and that the function should return an integer.
```
def count_vowels(word):
"""
Given a single word, return the number of vowels in that single word.
>>> count_vowels('paris')
2
:param word: str
:return: int
"""
```
With this completed, we need to write the function.
We can run doctest by importing the module with `import doctest` and end our Python program with:
```python
doctest.testmod()
```
```
import doctest
def count_vowels(word):
"""
Given a single word, return the number of vowels in that single word.
>>> count_vowels('paris')
2
:param word: str
:return: int
"""
total = 0
for letter in word:
if letter in 'aeiou':
total += 1
return total
doctest.testmod()
count_vowels('paris')
```
So far our test works, and our function runs as expected. But, what happens if we use a word with an upper-case vowel?
```
def count_vowels(word):
"""
Given a single word, return the number of vowels in that single word.
>>> count_vowels('paris')
2
>>> count_vowels('Oslo')
2
:param word: str
:return: int
"""
total = 0
for letter in word:
if letter in 'aeiou':
total += 1
return total
doctest.testmod()
```
When we run the code above, the test fails because the upper-case `O` is not counted, let's amend that.
```
def count_vowels(word):
"""
Given a single word, return the number of vowels in that single word.
>>> count_vowels('paris')
2
>>> count_vowels('Oslo')
2
:param word: str
:return: int
"""
total = 0
for letter in word.lower():
if letter in 'aeiou':
total += 1
return total
doctest.testmod()
count_vowels('Oslo')
```
With doctest, you should always have an estimate ready so you can verify what your program returns. For a novel with 316,059 words like *Middlemarch*, how many vowels would you expect the function to count?
From here, you can work to improve the tests, and through this testing improve the code so that it can accommodate edge cases and the full range of possibilities. Start with the following:
* Write a test for a type that is not a string (e.g. an integer)
* Write a test for words that have the letter `y`, which is sometimes considered a vowel in English.
* Write a test to handle `word` being a sentence — do you want a sentence to be passed to `word`?
* Write a test to deal with accented vowels, like the `ï` in `naïve` or the two `é`s in `résumé`.
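As a starting point for the first bullet, here is one sketch; raising `TypeError` is one possible design choice, not the only one:

```python
import doctest

def count_vowels(word):
    """
    Given a single word, return the number of vowels in that single word.

    >>> count_vowels('paris')
    2
    >>> count_vowels(42)
    Traceback (most recent call last):
        ...
    TypeError: word must be a string
    """
    if not isinstance(word, str):
        raise TypeError("word must be a string")
    total = 0
    for letter in word.lower():
        if letter in 'aeiou':
            total += 1
    return total

doctest.testmod()
```

doctest matches exceptions by comparing the `Traceback` header and the final exception line, so the expected error message must match exactly what the function raises.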
## Resources
* [Python 3.7 documentation for the doctest module](https://docs.python.org/3.7/library/doctest.html)
* [doctest — Testing through Documentation](https://pymotw.com/3/doctest/)
* [doctest Introduction](http://pythontesting.net/framework/doctest/doctest-introduction/)
| github_jupyter |
# Build a stock market brief - S01E06-automate-the-brief
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Yahoo%20Finance/Build%20a%20stock%20market%20brief/S01E06-automate-the-brief.ipynb" target="_parent">
<img src="https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg=="/>
</a>
Bring everything together in an email to get your daily brief.
## Import necessary packages
```
import naas
import naas_drivers
```
## Set variables
```
stock = 'TSLA'
subject = "⏰ Reminder - Follow your favorite company : Tesla"
emails = open("users.txt", "r").read()
```
# Get stock data
```
data = naas_drivers.yahoofinance.get(stock, date_from=-200, moving_averages=[20,50])
data
```
## Calculate daily variations
```
data_display = data.copy()
data_display = data_display.sort_values(by="Date", ascending=False).reset_index(drop=True)
data_display
value = data_display.loc[0, "Close"]
value_last = data_display.loc[1, "Close"]
varv = value - value_last
varp = varv / abs(value_last) * 100
varv_diplay = '{:+.2f}'.format(varv)
varp_diplay = '{:+.2f}%'.format(varp)
value_display = '{:.2f}'.format(value)
print(value_display)
print(varv_diplay)
print(varp_diplay)
```
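As a sanity check, the same variation can be computed with pandas' built-in `pct_change`. The two closing prices below are made-up stand-ins for the last rows of `data_display`:

```python
import pandas as pd

# Stand-in for the last two closing prices (oldest first)
close = pd.Series([720.0, 735.5])

varv = close.iloc[-1] - close.iloc[-2]    # absolute day-over-day variation
varp = close.pct_change().iloc[-1] * 100  # percent variation

print('{:+.2f}'.format(varv))   # +15.50
print('{:+.2f}%'.format(varp))  # +2.15%
```

This matches the manual `(value - value_last) / abs(value_last) * 100` formula above whenever the previous close is positive, which is always the case for prices.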
## Create chart
```
graph = naas_drivers.plotly.stock(data)
```
## Export chart in png and html
```
image_file = f"{stock}.png"
html_file = f"{stock}.html"
naas_drivers.plotly.export(graph, [html_file, image_file])
```
## Expose image and html
```
url_img = naas.assets.add(image_file)
url_html = naas.assets.add(html_file, params={"inline": True})
```
## Get score of the news
```
%run S01E04-score-the-news.ipynb
sentiment_news
```
## Get prediction chart
```
%run S01E05-predict-the-future.ipynb
email_content = naas_drivers.html.generate(
display='iframe',
title=f'Evolution of {stock} stock',
image_0 = "https://i.pinimg.com/originals/e1/2a/67/e12a6721d52e079c5e33632f66ddb8a9.jpg",
heading_1 = f'📈 Last close price: {value_display}' ,
heading_2 = f'🎰 Variations: {varv_diplay} ({varp_diplay})' ,
heading_3 = f'😎 News sentiment: {sentiment_news}' ,
button_Actual_200px=url_html,
text_3 = "👉 Click on the button above to open dynamic chart." ,
image= url_img,
button_Prediction_200px= url_html_file_prediction,
text_4 = "👉 Buy or Sell on <a href='https://www.etoro.com/markets/tsla'>Etoro</a>" ,
image_future = url_image_file_prediction,
heading_5 = "🔥 News hotlist",
table = table_news_email,
)
```
# Send email to emails list
```
naas.notifications.send(emails, subject, email_content)
```
# Schedule this notebook to run every day at 9 AM CET.
```
naas.scheduler.add(recurrence="0 9 * * *")
```
| github_jupyter |
# **1D Map Conjugacy for the Kuramoto-Sivashinsky PDE**
```
import numpy as np
from utils import Kuramoto
from scipy.integrate import odeint
import matplotlib.pyplot as plt
# Set plotting parameters
parameters = {'axes.labelsize': 16,
'axes.titlesize': 18,
'legend.fontsize': 13,
'xtick.labelsize': 16,
'ytick.labelsize': 16,
'figure.figsize': (12, 8),
'figure.titlesize': 18,
'font.serif': 'Computer Modern Roman',
}
plt.rcParams.update(parameters)
plt.rc('text', usetex=True)
```
## **Generate Measurement Data**
```
# Continuous-time simulation data
# Initializations
dt = .005
nu = 0.0298
modes = 14
t_span = np.arange(0, 5000, dt)
x0 = 0.1*np.random.rand(modes)
# Solution data
xsol = []
xsol = odeint(Kuramoto, x0, t_span,args = (nu,modes,))
# Plot Kuramoto-Sivashinsky Solution (x_10 vs. x_1)
plt.plot(xsol[1000:10000,0],xsol[1000:10000,9],'b')
plt.title("The Kuramoto-Sivashinsky Attractor")
plt.xlabel("$x_1(t)$")
plt.ylabel("$x_{10}(t)$")
# Create section data
Psec = []
temp = [0]*len(xsol[:,1])
count = 0
for m in range(len(temp)-1):
if xsol[m,0] <= 0 and xsol[m+1,0] >= 0: # section condition: x_1 = 0
temp[count] = xsol[m,1:]
count = count + 1
Psec.append(np.array(temp[1:count]))
xn, xnp1 = Psec[0][:-1], Psec[0][1:]
#Scale data
max_xn = xn.max()
min_xn = xn.min()
slope = 1/(max_xn - min_xn)
yint = -slope*min_xn
xn = slope*xn + yint
xnp1 = slope*xnp1 + yint
# Build input data matrix of forward iterates
forward_iters = 50
xnforward = []
xnp1 = xnp1[:-forward_iters]
for j in range(forward_iters):
xnforward.append(xn[j:-forward_iters+j])
# Plot Kuramoto-Sivashinsky section data
plt.plot(Psec[0][:-1,1],Psec[0][1:,1],'k.')
plt.title("Intersection of the Kuramoto-Sivashinsky Attractor with the Poincare Section")
plt.xlabel("$x_{2,n}$")
plt.ylabel("$x_{2,n+1}$")
```
## **Network Training**
```
import tensorflow as tf
from architecture_1D import Conjugacy
width = 200
size_x = 13 #number of x variables
degree = 2 #degree of latent mapping
activation = 'selu'
steps = 1
numblks_in = 4
numblks_out = 4
c1 = 3.5 # initialized mapping coefficients
c2 = -3.5
c3 = 0.0
c4 = 0.0
c5 = 0.0
stretchloss = 1
learning_rate = 0.00001
conjugacy = Conjugacy(width, size_x, activation, degree, steps, numblks_in, numblks_out, c1, c2, c3, c4, c5, stretchloss)
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=10) # patience is set intentionally low to speed up examples
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
conjugacy.compile(optimizer=optimizer, loss = 'mse')
conjugacy.fit(xnforward, xnp1, callbacks = [callback], epochs = 1000)
```
## **Network Output**
```
# Print Discovered Mapping
print('Discovered Conjugate Mapping:')
print('')
print('g(y) =',conjugacy.c1.numpy(),'*y +',conjugacy.c2.numpy(),'*y^2')
# Network Summary
print('')
conjugacy.summary()
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import re
from itertools import combinations
import pcalg
import networkx as nx
DATA_FILE = "./data/20200807_user-db_cpu-load_03.json"
TARGET_DATA = {"containers": ["container_cpu_usage_seconds_total", "container_fs_io_current", "container_memory_working_set_bytes", "container_network_receive_bytes_total", "container_network_transmit_bytes_total"],
"services": ["throughput", "latency"]}
#"nodes": ["node_cpu_seconds_total", "node_disk_io_now", "node_filesystem_avail_bytes", "node_memory_MemAvailable_bytes", "node_network_receive_bytes_total", "node_network_transmit_bytes_total"]}
raw_data = pd.read_json(DATA_FILE)
raw_data
#for s in raw_data["services"]["carts"]:
# print(s["metric_name"])
#for c in raw_data["containers"]["carts"]:
# print(c["metric_name"])
# Prepare data matrix
data_df = pd.DataFrame()
for target in TARGET_DATA:
for t in raw_data[target].dropna():
for metric in t:
if metric["metric_name"] in TARGET_DATA[target]:
metric_name = metric["metric_name"].replace("container_", "").replace("node_", "")
target_name = metric["{}_name".format(target[:-1])].replace("gke-microservices-experi-default-pool-", "")
column_name = "{}-{}_{}".format(target[0], target_name, metric_name)
data_df[column_name] = np.array(metric["values"], dtype=np.float64)[:, 1]
data_df
labels = {}
for i in range(len(data_df.columns)):
labels[i] = data_df.columns[i]
labels
containers_list = []
for v in labels.values():
if re.match("^c-", v):
container_name = v.split("_")[0].replace("c-", "")
if container_name not in containers_list:
containers_list.append(container_name)
print(containers_list)
containers_metrics = {}
for c in containers_list:
nodes = []
for k, v in labels.items():
if re.match("^c-{}_".format(c), v):
nodes.append(k)
containers_metrics[c] = nodes
print(containers_metrics)
```
## Prior knowledge
```
# Communicating dependency
com_deps = {
"front-end": ["orders", "carts", "user", "catalogue"],
"catalogue": ["front-end", "catalogue-db"],
"catalogue-db": ["catalogue"],
    "orders": ["front-end", "orders-db", "carts", "user", "payment", "shipping"],
"orders-db": ["orders"],
"user": ["front-end", "user-db", "orders"],
"user-db": ["user"],
"payment": ["orders"],
"shipping": ["orders", "rabbitmq"],
"queue-master": ["rabbitmq"],
"rabbitmq": ["shipping", "queue-master"],
"carts": ["front-end", "carts-db", "orders"],
"carts-db": ["carts"]
}
# Share hosts
container_hosts = {
"front-end": "66a015a7-i5rl",
"catalogue": "66a015a7-w0i8",
"catalogue-db": "66a015a7-w0i8",
"orders": "66a015a7-w0i8",
"orders-db": "66a015a7-eq47",
"user": "66a015a7-i5rl",
"user-db": "66a015a7-w0i8",
"payment": "66a015a7-g7qj",
"shipping": "66a015a7-g7qj",
"queue-master": "66a015a7-eq47",
"rabbitmq": "66a015a7-w0i8",
"carts": "66a015a7-g7qj",
"carts-db": "66a015a7-eq47"
}
no_deps_container_pair = []
for i, j in combinations(containers_list, 2):
if j not in com_deps[i] and container_hosts[i] != container_hosts[j]:
no_deps_container_pair.append([i, j])
print(len(no_deps_container_pair))
print(no_deps_container_pair)
no_paths = []
for pair in no_deps_container_pair:
for i in containers_metrics[pair[0]]:
for j in containers_metrics[pair[1]]:
no_paths.append([i, j])
print(len(no_paths))
# Prepare init graph
init_g = nx.Graph()
node_ids = range(len(data_df.columns))
init_g.add_nodes_from(node_ids)
for (i, j) in combinations(node_ids, 2):
init_g.add_edge(i, j)
print(init_g.number_of_edges())
for no_path in no_paths:
init_g.remove_edge(no_path[0], no_path[1])
print(init_g.number_of_edges())
#agraph = nx.nx_agraph.to_agraph(init_g).draw(prog='sfdp', format='png')
#Image(agraph)
```
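To make the pruning step above concrete, here is a stand-alone sketch of the same idea with toy names (nothing here comes from the notebook's data): start from a complete graph and delete every edge between a pair already known to be independent, so the PC algorithm has fewer edges to test.

```
from itertools import combinations

nodes = ["a", "b", "c", "d"]
edges = set(combinations(nodes, 2))           # complete graph: 4*3/2 = 6 edges
known_independent = [("a", "c"), ("b", "d")]  # e.g. no dependency and different hosts

for pair in known_independent:
    edges.discard(pair)                       # prune, as init_g.remove_edge does above

# 6 - 2 = 4 candidate edges remain to be tested
```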
## Fisher-Z Test
```
dm = data_df.values
dm.shape
import sys
sys.path.append("../")
from citest.ci_tests import ci_test_gauss
cm = np.corrcoef(dm.T)
(G, sep_set) = pcalg.estimate_skeleton(indep_test_func=ci_test_gauss,
data_matrix=dm,
alpha=0.01,
corr_matrix=cm,
init_graph=init_g)
G = pcalg.estimate_cpdag(skel_graph=G, sep_set=sep_set)
# Exclude nodes that have no neighbors for visualization
remove_nodes = list(G.nodes)
for edge in G.edges():
for n in edge:
try:
remove_nodes.remove(n)
except ValueError:
pass
print(remove_nodes)
G.remove_nodes_from(remove_nodes)
G = nx.relabel_nodes(G, labels)
from IPython.display import Image, SVG, display
# prog=['neato'|'dot'|'twopi'|'circo'|'fdp'|'nop']
agraph = nx.nx_agraph.to_agraph(G).draw(prog='sfdp', format='png')
Image(agraph)
```
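The Fisher-Z conditional-independence test behind `ci_test_gauss` can be sketched as follows (a minimal version of our own, standard library only; not the `citest` implementation): transform the (partial) correlation $r$ with $z = \mathrm{arctanh}(r)$ and compare $\sqrt{n - |S| - 3}\,|z|$, where $|S|$ is the size of the conditioning set, against a standard normal quantile.

```
import math

def fisher_z_statistic(r, n, cond_set_size=0):
    """Fisher-Z statistic for testing a (partial) correlation r from n samples."""
    z = 0.5 * math.log((1 + r) / (1 - r))  # arctanh(r)
    return math.sqrt(n - cond_set_size - 3) * abs(z)

# With 100 samples and r = 0.5, the statistic (about 5.41) far exceeds the
# two-sided normal cutoff for alpha = 0.01 (about 2.58), so independence is rejected.
stat = fisher_z_statistic(0.5, 100)
```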
## Chi-square Test
```
# Determine the number of categories by Sturges' rule
data_size = len(data_df)
n_bins = int(np.log2(data_size) + 1)
print("Number of bins: {}".format(n_bins))
disc_data_df = pd.DataFrame()
for col in data_df.columns:
disc_data_df[col] = pd.cut(data_df[col], n_bins, labels=np.arange(0, n_bins))
disc_data_df
dm = disc_data_df.values
dm.shape
from citest.chi_square import chi_square
(g, sep_set) = pcalg.estimate_skeleton(indep_test_func=chi_square,
data_matrix=dm,
alpha=0.01,
init_graph=init_g)
g = pcalg.estimate_cpdag(skel_graph=g, sep_set=sep_set)
# Exclude nodes that have no neighbors for visualization
remove_nodes = list(g.nodes)
for edge in g.edges():
for n in edge:
try:
remove_nodes.remove(n)
except ValueError:
pass
print(remove_nodes)
g.remove_nodes_from(remove_nodes)
g = nx.relabel_nodes(g, labels)
from IPython.display import Image, SVG, display
# prog=['neato'|'dot'|'twopi'|'circo'|'fdp'|'nop']
agraph = nx.nx_agraph.to_agraph(g).draw(prog='sfdp', format='png')
Image(agraph)
```
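Sturges' rule and the chi-square statistic used above can be illustrated stand-alone (standard library only; `chi2_statistic` is our toy version, not the `citest.chi_square` imported above):

```
import math

data_size = 1000
n_bins = int(math.log2(data_size) + 1)  # Sturges' rule -> 10 bins

def chi2_statistic(table):
    """Pearson chi-square statistic for a 2D contingency table."""
    total = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# A perfectly proportional table gives statistic 0: no evidence of dependence.
stat = chi2_statistic([[10, 20], [30, 60]])
```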
# Summary of Quantum Operations
## Fundamentals
(revised by Amba Datt Pant, originally created by Diwakar Sigdel)
## Qubit
- A regular or classical computer works on rules of logic: its operations are based on bits, 0 or 1.
- A qubit (quantum bit) follows the rules of quantum mechanics: it can be in **0, 1, or an intermediate (superposed) state**.
- Qubits can be linked together through **superposition** and **quantum entanglement**.
```
from qiskit import *
from math import pi
import numpy as np
from qiskit.visualization import plot_bloch_multivector,plot_state_qsphere
import matplotlib.pyplot as plt
```
## Single Qubit Quantum states
A single qubit quantum state can be written as
$$\left|\psi\right\rangle = \alpha\left|0\right\rangle + \beta \left|1\right\rangle$$
where $\alpha$ and $\beta$ are complex numbers. In a measurement, the probability of finding the qubit in $\left|0\right\rangle$ is $|\alpha|^2$ and in $\left|1\right\rangle$ is $|\beta|^2$. As a vector this is
$$
\left|\psi\right\rangle =
\begin{pmatrix}
\alpha \\
\beta
\end{pmatrix}.
$$
Note that, due to conservation of probability, $|\alpha|^2+ |\beta|^2 = 1$, and since a global phase is undetectable, $\left|\psi\right\rangle := e^{i\delta} \left|\psi\right\rangle$, we only require two real numbers to describe a single qubit quantum state.
A convenient representation is
$$\left|\psi\right\rangle = \cos(\theta/2)\left|0\right\rangle + \sin(\theta/2)e^{i\phi}\left|1\right\rangle$$
where $0\leq \phi < 2\pi$, and $0\leq \theta \leq \pi$. From this it is clear that there is a one-to-one correspondence between qubit states ($\mathbb{C}^2$) and the points on the surface of a unit sphere ($\mathbb{R}^3$). **This is called the Bloch sphere representation of a qubit state.**
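As a minimal illustration (standard library only; the helper names `statevector` and `bloch_coords` are ours, not Qiskit's), the $(\theta,\phi)$ parametrization maps directly to Cartesian coordinates on the Bloch sphere:

```
import cmath
import math

def statevector(theta, phi):
    """The state cos(theta/2)|0> + e^{i*phi} sin(theta/2)|1> as a 2-entry list."""
    return [math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2)]

def bloch_coords(theta, phi):
    """The point on the unit sphere corresponding to (theta, phi)."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

psi = statevector(math.pi / 2, 0)        # the |+> state
norm = abs(psi[0])**2 + abs(psi[1])**2   # probabilities sum to 1
x, y, z = bloch_coords(math.pi / 2, 0)   # |+> sits on the +x axis
```

For example, $\theta=\pi/2,\ \phi=0$ gives the equal superposition state, which sits on the +x axis of the sphere.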
Quantum gates/operations are usually represented as matrices. A gate which acts on a qubit is represented by a $2\times 2$ unitary matrix $U$. The action of the quantum gate is found by multiplying the matrix representing the gate with the vector which represents the quantum state.
$$\left|\psi'\right\rangle = U\left|\psi\right\rangle$$
A general unitary must be able to take $\left|0\right\rangle$ to the above state. That is
$$
U = \begin{pmatrix}
\cos(\theta/2) & a \\
e^{i\phi}\sin(\theta/2) & b
\end{pmatrix}
$$
where $a$ and $b$ are complex numbers constrained such that $U^\dagger U = I$ for all $0\leq\theta\leq\pi$ and $0\leq \phi<2\pi$. This gives 3 constraints and as such $a\rightarrow -e^{i\lambda}\sin(\theta/2)$ and $b\rightarrow e^{i\lambda+i\phi}\cos(\theta/2)$ where $0\leq \lambda<2\pi$ giving
$$
U = \begin{pmatrix}
\cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\
e^{i\phi}\sin(\theta/2) & e^{i\lambda+i\phi}\cos(\theta/2)
\end{pmatrix}.
$$
This is the most general form of a single qubit unitary.
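A quick numerical sanity check of this form (standard library only; `u3` and `dagger_times` are our own helper names): build $U(\theta,\phi,\lambda)$ and verify $U^\dagger U = I$:

```
import cmath
import math

def u3(theta, phi, lam):
    """The general single-qubit unitary as a 2x2 nested list."""
    return [[math.cos(theta / 2), -cmath.exp(1j * lam) * math.sin(theta / 2)],
            [cmath.exp(1j * phi) * math.sin(theta / 2),
             cmath.exp(1j * (lam + phi)) * math.cos(theta / 2)]]

def dagger_times(u):
    """Compute U^dagger @ U for a 2x2 matrix."""
    ud = [[complex(u[j][i]).conjugate() for j in range(2)] for i in range(2)]
    return [[sum(ud[i][k] * u[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

prod = dagger_times(u3(0.7, 1.1, 2.3))  # should be (numerically) the identity
```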

-------
### Qubit flipping
#### 1. $\psi = |0 \rangle; \psi = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$
- qiskit.visualization.plot_bloch_multivector \
(plots the Bloch sphere)
- **plot_bloch_multivector(state, title=' ', figsize=None, *, rho=None)**
- state (Statevector or DensityMatrix or ndarray) – an N-qubit quantum state.
- title (str) – a string that represents the plot title
- figsize (tuple) – Has no effect, here for compatibility only.
https://qiskit.org/documentation/stubs/qiskit.visualization.plot_bloch_multivector.html
```
q = np.array([1.+0.j, 0.+0.j])
plot_bloch_multivector(q)
plot_state_qsphere(q)
```
- plot_state_qsphere(state, figsize=None, ax=None, show_state_labels=True, show_state_phases=False, use_degrees=False, *, rho=None)
- Plot the qsphere representation of a quantum state.
- Here, the **size of the points is proportional to the probability of the corresponding term in the state and the color represents the phase**.
- https://qiskit.org/documentation/stubs/qiskit.visualization.plot_state_qsphere.html
#### 2. $\psi = |1 \rangle; \psi = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$
```
q = np.array([0.+0.j, 1.+0.j])
plot_bloch_multivector(q)
plot_state_qsphere(q)
```
#### Experiment 1:
- qc = **QuantumCircuit(n_q, n_b)** \
(This quantum circuit, called qc here, is created with Qiskit's QuantumCircuit.)
- The **number n_q defines the number of qubits in the circuit**.
- With **n_b we define the number of output bits** we will extract from the circuit at the end.
- https://qiskit.org/textbook/ch-states/atoms-computation.html
- The **dashed lines in the image** are just to **distinguish the different parts of the circuit** (although they can have more interesting uses too). They are made by using the **barrier** command.
A half adder built with Qiskit shows the simple outline of a quantum circuit.

```
qc = QuantumCircuit(1)
qc.barrier() # barrier --> distinguish the different parts of the circuit
qc1 = qc.copy()
qc.x(0) # x gate in 0 indexed qubit
#qc.h(1) # Hadamard gate on qubit 1
qc.barrier()
qc2 =qc.copy()
qc.draw('mpl') # mpl --> matplotlib
backend = Aer.get_backend('statevector_simulator')
q1 = execute(qc1,backend).result().get_statevector()
q2 = execute(qc2,backend).result().get_statevector()
print(q1,q2)
```
------------------------------
### Aer.get_backend('name_of_simulator')
- Aer can import using **from qiskit import Aer, execute**
- 
- https://qiskit.org/documentation/tutorials/simulators/1_aer_provider.html
- In Qiskit, we use **backend** to refer to the things on which quantum programs actually run (simulators or real quantum devices). To set up a job for a backend, we need to set up the corresponding backend object.
- The simulator we want is defined in the part of qiskit known as **Aer**. By giving the name of the simulator we want to the get_backend() method of Aer, we get the backend object we need. In this case, the name is '**statevector_simulator**'.
```
# A list of all possible simulators in Aer
Aer.backends()
```
### Statevector
- We use statevectors to describe the state of the system.
```
# position of a car along a track
```
Classical system, x = 4
In terms of a statevector: https://qiskit.org/textbook/ch-states/representing-qubit-states.html

--------------------------------------------
### Superposition
#### 3. $\psi = \frac{1}{\sqrt{2}} |0\rangle + \frac{1}{\sqrt{2}} |1 \rangle ; \psi = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$
```
q = np.array([1/np.sqrt(2)+0.j, 1/np.sqrt(2)+0.j]) #input as complex number
plot_bloch_multivector(q)
plot_state_qsphere(q)
```
#### 4. $\psi = \frac{1}{\sqrt{2}} |0\rangle - \frac{1}{\sqrt{2}} |1 \rangle ; \psi = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$
```
q = np.array([1/np.sqrt(2)+0.j, -(1/np.sqrt(2))+0.j])
plot_bloch_multivector(q)
plot_state_qsphere(q)
```
#### Experiment 2 :
```
qc = QuantumCircuit(1)
qc.barrier()
qc1 = qc.copy()
qc.h(0)
qc.barrier()
qc2 =qc.copy()
qc.draw('mpl') # mpl -> matplotlib
```
- $\psi_1 = |0 \rangle$ and $\psi_2 = \frac{1}{\sqrt{2}} |0\rangle + \frac{1}{\sqrt{2}} |1 \rangle$
```
backend = Aer.get_backend('statevector_simulator')
q1 = execute(qc1,backend).result().get_statevector()
q2 = execute(qc2,backend).result().get_statevector()
print(q1,q2)
qc = QuantumCircuit(1)
qc.barrier()
qc1 = qc.copy()
qc.x(0) # x-gate like NOT gate
qc.h(0) # input Hadamard gate (h-gate)
qc.barrier()
qc2 =qc.copy()
qc.draw('mpl')
```
- $\psi_1 = |1 \rangle$ and $\psi_2 = \frac{1}{\sqrt{2}} |0\rangle - \frac{1}{\sqrt{2}} |1 \rangle$
```
backend = Aer.get_backend('statevector_simulator')
q1 = execute(qc1,backend).result().get_statevector()
q2 = execute(qc2,backend).result().get_statevector()
print(q1,q2)
```
----------------------
- https://qiskit.org/textbook/preface.html
--------------------------------------
### Memo: how to install QuTiP and Qiskit
When you install Anaconda, tick "Only me" (Just Me). If you install for all users, you will need additional environment settings before you can install QuTiP.
- Open Anaconda Navigator
- Launch Powershell Prompt
- type: conda config --add channels conda-forge
- conda install qutip
- follow the instructions (simply type y to proceed)
- you can check it simply by running `from qutip import *` in a Jupyter notebook
- install Qiskit
- type: pip install qiskit \
https://qiskit.org/documentation/install.html
<a href="https://colab.research.google.com/github/SerafDosSantos/MesBlocNotes/blob/main/exemple_de_PoW_(Proof_of_Work).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# An Example of Proof-of-Work (PoW)
This document is an educational tutorial on the proof of work used in certain blockchain technologies.
Proof of work is a way for an authority in a distributed system (i.e. a node in a blockchain ecosystem) to add a block of data to the ecosystem by solving an algorithmic puzzle whose solution is hard to find but whose difficulty level is predefined.
Commonly used in energy-hungry blockchains, this method of authorizing the addition of data to the ecosystem allows, among other things, mining machines to earn cryptocurrency in return.
That reward is granted only to the first mining machine that:
1. Was the fastest mining machine to find the solution
2. Had that solution validated by the other mining machines
3. Had the block's solution confirmed by the longest chain
Bitcoin, the pioneer of this approach (followed by Ethereum, among others), uses proof of work to neutralize the destructive power that could be turned against the distributed blockchain system, or at least its ecosystem.
Combined with the cryptographic tools of our era, taking control of the distributed blockchain database in order to destroy it or commit fraud in the ledger turns out to be nearly impossible.
Points to keep in mind about what fundamentally composes a blockchain ecosystem:
1. The ledger: the chain of blocks and its cryptographic armor
2. The node: memory and computing power, the mining machine
3. The consensus: which defines the ecosystem's rules for validating or rejecting a block of transactions
4. Distribution: a multiplicity of nodes to prevent a 50%+1 takeover of the network
5. The digital monetary unit: essentially data structures carrying value, exchanged for real money
Since this educational document only demonstrates proof of work, I will dedicate separate documents to the other authority methods for a blockchain node, as well as to the other systemic aspects of blockchains by ecosystem.
In this document we stick to what defines proof of work: finding the solution to a puzzle with a predefined difficulty level.
## What is hashing?
Hashing, plainly explained, is the function of transforming a source of data and presenting it in another format that is as exclusively unique as possible.
Summarizing a much larger amount of information in a _few_ alphanumeric characters belongs to the field of cryptography.
Summarizing the life of a tumultuous actor born in the 1930s and dead in the 2000s (i.e. roughly 80 years of information) into several biographies by different writers, each readable in a few hours, is a form of hashing the life, or rather the information, of a human being.
The book is a very small digest of that life compared with the number of words and thoughts (data, in understood and conscious words) that must have passed through that person's mind and made up the spirit of his mid-20th-century success.
A hash is essentially the same thing for digital data: very large quantities of data (one or several books, or many IoT devices) are reduced to a much smaller digest, though processing them at scale still requires considerable CPU and GPU power.
Reducing a data structure to a finite sequence of digits and letters that forms a unique identifier is one of the fields of study in cryptography and one of its applications in computing.
---
**_"A hash is a function that converts one value to another. Hashing data is a common practice in computer science and is used for several different purposes. Examples include cryptography, compression, checksum generation, and data indexing. Hashing is a natural fit for cryptography because it masks the original data with another value."_**
(via [https://techterms.com/definition/hash](https://techterms.com/definition/hash) )
```
import hashlib
```
### A hashing example in Python
With a character string such as __"Bonjour, mon nom est Michel!"__ we can produce a unique identifier made of values computed with an encryption function such as the sha256() used in programming.
```
phrase = "Bonjour, mon nom est Michel!"
phrase_encodee = phrase.encode('utf-8')
hash = hashlib.sha256(phrase_encodee).hexdigest()
print(hash)
```
This yields the hash above.
If any character changes, even a single one (or several), the hash changes.
For example, hashing __"Bonsoir, mon nom est Mich3l!"__ now gives a different value:
```
phrase = "Bonsoir, mon nom est Mich3l!"
phrase_encodee = phrase.encode('utf-8')
hash = hashlib.sha256(phrase_encodee).hexdigest()
print(hash)
```
The resulting output is different.
## The difficulty of recovering the encrypted information, i.e. inverting a hash to recover the original data
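As an illustration of why inverting a hash is hard (a sketch of our own, standard library only): the only generic attack is brute force over candidate inputs.

```
import hashlib

def brute_force_preimage(target_hex, candidates):
    """Try each candidate until one hashes to the target; there is no shortcut."""
    for c in candidates:
        if hashlib.sha256(c.encode("utf-8")).hexdigest() == target_hex:
            return c
    return None

target = hashlib.sha256("42".encode("utf-8")).hexdigest()
found = brute_force_preimage(target, (str(i) for i in range(100)))
# With only 100 candidates this is instant; over a 256-bit input space it would
# take about 2**255 attempts on average, which is why cryptographic hashes are
# considered one-way in practice.
```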
## Degrees of difficulty in finding one hash among many, one nonce among many: the essence of PoW
```
#!/usr/bin/env python
# example of proof-of-work algorithm
# taken from the book Mastering Bitcoin, 2nd Edition
# by Andreas M. Antonopoulos
# from O'Reilly Media Inc.
import hashlib
import time
try:
long # Python 2
xrange
except NameError:
long = int # Python 3
xrange = range
max_nonce = 2 ** 32 # 4 billion
# ###
# function that defines the operations for the proof-of-work
# ###
def proof_of_work(header, difficulty_bits):
# calculate the difficulty target
target = 2 ** (256 - difficulty_bits)
for nonce in xrange(max_nonce):
hash_result = hashlib.sha256(str(header).encode('utf-8') + str(nonce).encode('utf-8')).hexdigest()
# check if this is a valid result, below the target
if long(hash_result, 16) < target:
print("Success with nonce %d" % nonce)
print("Hash is %s" % hash_result)
return (hash_result, nonce)
print("Failed after %d (max_nonce) tries" % nonce)
return ('', nonce)  # return a tuple so the caller's unpacking does not fail
# ###
# entry point of this script
# ###
if __name__ == '__main__':
nonce = 0
hash_result = ''
# difficulty from 0 to 31 bits
for difficulty_bits in xrange(32):
difficulty = 2 ** difficulty_bits
print("--------------------------------------------------------------")
print("Difficulty: %ld (%d bits)" % (difficulty, difficulty_bits))
print("Starting search...")
# checkpoint the current time
start_time = time.time()
# make a new block which includes the hash from the previous block
# we fake a block of transactions - just a string
new_block = 'test block with transactions' + hash_result
# find a valid nonce for the new block
(hash_result, nonce) = proof_of_work(new_block, difficulty_bits)
# checkpoint how long it took to find a result
end_time = time.time()
elapsed_time = end_time - start_time
print("Elapsed Time: %.4f seconds" % elapsed_time)
if elapsed_time > 0:
# estimate the hashes per second
hash_power = float(long(nonce) / elapsed_time)
print("Hashing Power: %ld hashes per second" % hash_power)
```
# Annexe
```
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.Chem import rdMolDescriptors as rdmd
from rdkit.Chem.Scaffolds import MurckoScaffold
import pandas as pd
from tqdm import tqdm
import time
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import MiniBatchKMeans
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import matthews_corrcoef,confusion_matrix, roc_auc_score, roc_curve
import seaborn as sns
import numpy as np #
import pandas as pd
import string
import json
from patsy import dmatrices
from operator import itemgetter
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split, StratifiedShuffleSplit, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn import preprocessing
from sklearn.metrics import classification_report
not_to_be_selected_list=[
'Activity Summary', 'Viability Activity', 'PUBCHEM_ACTIVITY_SCORE',
'Viability Potency (uM)', 'Viability Efficacy (%)', "index",
"Nuclei_Correlation_Manders_AGP_DNA",
"Nuclei_Correlation_Manders_AGP_ER",
"Nuclei_Correlation_Manders_AGP_Mito",
"Nuclei_Correlation_Manders_AGP_RNA",
"Nuclei_Correlation_Manders_DNA_AGP",
"Nuclei_Correlation_Manders_DNA_ER",
"Nuclei_Correlation_Manders_DNA_Mito",
"Nuclei_Correlation_Manders_DNA_RNA",
"Nuclei_Correlation_Manders_ER_AGP",
"Nuclei_Correlation_Manders_ER_DNA",
"Nuclei_Correlation_Manders_ER_Mito",
"Nuclei_Correlation_Manders_ER_RNA",
"Nuclei_Correlation_Manders_Mito_AGP",
"Nuclei_Correlation_Manders_Mito_DNA",
"Nuclei_Correlation_Manders_Mito_ER",
"Nuclei_Correlation_Manders_Mito_RNA",
"Nuclei_Correlation_Manders_RNA_AGP",
"Nuclei_Correlation_Manders_RNA_DNA",
"Nuclei_Correlation_Manders_RNA_ER",
"Nuclei_Correlation_Manders_RNA_Mito",
"Nuclei_Correlation_RWC_AGP_DNA",
"Nuclei_Correlation_RWC_AGP_ER",
"Nuclei_Correlation_RWC_AGP_Mito",
"Nuclei_Correlation_RWC_AGP_RNA",
"Nuclei_Correlation_RWC_DNA_AGP",
"Nuclei_Correlation_RWC_DNA_ER",
"Nuclei_Correlation_RWC_DNA_Mito",
"Nuclei_Correlation_RWC_DNA_RNA",
"Nuclei_Correlation_RWC_ER_AGP",
"Nuclei_Correlation_RWC_ER_DNA",
"Nuclei_Correlation_RWC_ER_Mito",
"Nuclei_Correlation_RWC_ER_RNA",
"Nuclei_Correlation_RWC_Mito_AGP",
"Nuclei_Correlation_RWC_Mito_DNA",
"Nuclei_Correlation_RWC_Mito_ER",
"Nuclei_Correlation_RWC_Mito_RNA",
"Nuclei_Correlation_RWC_RNA_AGP",
"Nuclei_Correlation_RWC_RNA_DNA",
"Nuclei_Correlation_RWC_RNA_ER",
"Nuclei_Correlation_RWC_RNA_Mito",
"Nuclei_Granularity_14_AGP",
"Nuclei_Granularity_14_DNA",
"Nuclei_Granularity_14_ER",
"Nuclei_Granularity_14_Mito",
"Nuclei_Granularity_14_RNA",
"Nuclei_Granularity_15_AGP",
"Nuclei_Granularity_15_DNA",
"Nuclei_Granularity_15_ER",
"Nuclei_Granularity_15_Mito",
"Nuclei_Granularity_15_RNA",
"Nuclei_Granularity_16_AGP",
"Nuclei_Granularity_16_DNA",
"Nuclei_Granularity_16_ER",
"Nuclei_Granularity_16_Mito",
"Nuclei_Granularity_16_RNA"]
info= [
"StdInChI",
"PUBCHEM_ACTIVITY_OUTCOME"]
df =pd.read_csv("GO_CP_MitoOverlap_nocelldeath.csv" , usecols=lambda x: x not in not_to_be_selected_list)
df =df[df.PUBCHEM_ACTIVITY_OUTCOME != "Inconclusive"]
df = df.replace({'PUBCHEM_ACTIVITY_OUTCOME': {"Active": 1, "Inactive": 0}})  # map labels to 1/0; the statistical filters below compare the target against 0 and 1
df
from rdkit.Chem import inchi
from rdkit import Chem
def get_standardize_smiles(value):
try: return Chem.MolToSmiles(Chem.inchi.MolFromInchi(value))
except: return "Cannot_do"
from pandarallel import pandarallel
pandarallel.initialize()
df['smiles_r']=df['StdInChI'].parallel_apply(get_standardize_smiles)
def fp_as_DataStructs(mol):
fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
#arr = np.zeros((1,), np.int)
#DataStructs.ConvertToNumpyArray(fp, arr)
return fp
mol_list = [Chem.MolFromSmiles(x) for x in df.smiles_r]
df['Mol'] = mol_list
df['fp'] = [fp_as_DataStructs(x) for x in df.Mol]
df
CP_features= df.iloc[:, 2:1731]
#CP_features = df[CP_features_list].to_numpy()
CP_features
GO_features= df.iloc[:, 1731:-3]
GO_features
#GO_features = df[GO_features_list].to_numpy()
X_CP = CP_features
X_GO = GO_features
Y = df[["PUBCHEM_ACTIVITY_OUTCOME"]]
X_CP.shape
X_GO.shape
X_Morgan = np.array([x for x in df['fp']])
X_Morgan.shape
Y.shape
import collections
from imblearn.over_sampling import SMOTE
from imblearn.combine import SMOTEENN, SMOTETomek
from sklearn.model_selection import RandomizedSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import GridSearchCV
from scipy.stats import randint
from sklearn.metrics import classification_report
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict
from numpy import argmax
from sklearn.experimental import enable_halving_search_cv
from sklearn.model_selection import HalvingRandomSearchCV
from tqdm import tqdm
from sklearn.metrics import average_precision_score
X_GO=X_GO.loc[:, (X_GO != X_GO.iloc[0]).any()]
X_GO.replace([np.inf, -np.inf], np.nan, inplace=True)
X_GO = X_GO.dropna(axis=1)
X_GO
from scipy import stats
from scipy.stats import ks_2samp
from scipy.stats import pointbiserialr
from tqdm import tqdm
def filter_columns(X, Y):
selected_ks= ks(X, Y)
pbs_features = pbs(X,Y)
mwu_features= mwu(X,Y)
imp_features= list(set(selected_ks +pbs_features +mwu_features ))
print(len(imp_features))
return imp_features
from scipy.stats import mannwhitneyu
def mwu(X, Y):
target="PUBCHEM_ACTIVITY_OUTCOME"
p_list=[]
r_list=[]
list_of_lists=[]
col_list=[]
for column in (X.columns):
df_std= pd.concat([X, Y], axis=1)
inactives=df_std[column][df_std[target]==0].to_list()
actives=df_std[column][df_std[target]==1].to_list()
if (actives != inactives):
try:
r, p = mannwhitneyu(inactives, actives, alternative='two-sided')
p_list.append(p)
col_list.append(column)
r_list.append(r)
except:
continue
list_of_lists=[col_list, r_list, p_list]
data_mwu = pd.DataFrame(list_of_lists).transpose()
data_mwu.columns = ['Feature', 'effect', 'p_value']
data_mwu['p_value'] = data_mwu['p_value'].astype(float)
data_mwu['effect'] = data_mwu['effect'].astype(float)
data_mwu['-log10_pvalue'] = -np.log10(data_mwu['p_value'])
data_mwu['log2_effect'] = np.log2(data_mwu['effect'])
data_mwu['product'] = data_mwu['log2_effect'] * (data_mwu['-log10_pvalue'])
data_mwu= data_mwu.sort_values('log2_effect',ascending=False)
data_mwu = data_mwu.dropna(subset=["product"])
data_mwu.reset_index(inplace=True, drop=True)
data_mwu[data_mwu.log2_effect>14].sort_values("-log10_pvalue", ascending=False)[:10]
plt.scatter( data_mwu["log2_effect"],data_mwu["-log10_pvalue"], s=1)
plt.xlabel('log2_effect')
plt.ylabel('-log10_pvalue P')
plt.show()
negative= data_mwu.tail(40).dropna().Feature.to_list()
positive= data_mwu.head(40).Feature.to_list()
mwu_features = positive + negative
return(mwu_features)
def pbs(X, Y):
target="PUBCHEM_ACTIVITY_OUTCOME"
p_list=[]
r_list=[]
list_of_lists=[]
col_list=[]
for column in (X.columns):
df= pd.concat([X, Y], axis=1)
CP_features= df[column].values
target_labels= df[target].values
r, p = pointbiserialr(target_labels, CP_features)
p_list.append(p)
col_list.append(column)
r_list.append(r)
list_of_lists=[col_list, r_list, p_list]
data_pbs = pd.DataFrame(list_of_lists).transpose()
data_pbs.columns = ['Feature', 'effect', 'p_value']
data_pbs['p_value'] = data_pbs['p_value'].astype(float)
data_pbs['effect'] = data_pbs['effect'].astype(float)
data_pbs['-log10_pvalue'] = -np.log10(data_pbs['p_value'])
#data_pbs['log2_effect'] = np.log2(data_pbs['effect'])
data_pbs['product'] = data_pbs['effect'] * (data_pbs['-log10_pvalue'])
data_pbs= data_pbs.sort_values('effect',ascending=False)
data_pbs[data_pbs.effect<0].sort_values("-log10_pvalue", ascending=False)[:10]
plt.scatter( data_pbs["effect"],data_pbs["-log10_pvalue"], s=1)
plt.xlabel('effect')
plt.ylabel('-log10_pvalue P')
plt.show()
negative_CP= data_pbs.tail(40).dropna().Feature.to_list()
positive_CP= data_pbs.head(40).Feature.to_list()
pbs_features = positive_CP + negative_CP
return(pbs_features)
from tqdm import notebook
def ks(X, Y):
target="PUBCHEM_ACTIVITY_OUTCOME"
p_list=[]
r_list=[]
list_of_lists=[]
col_list=[]
for column in notebook.tqdm(X.columns):
df_std= pd.concat([X, Y], axis=1)
inactives=df_std[column][df_std[target]==0]
actives=df_std[column][df_std[target]==1]
r, p = ks_2samp(inactives, actives)
p_list.append(p)
col_list.append(column)
r_list.append(r)
list_of_lists=[col_list, r_list, p_list]
data_ks = pd.DataFrame(list_of_lists).transpose()
data_ks.columns = ['Feature', 'KS_test_statistic_value', 'KS_test_p_value']
data_ks['KS_test_p_value'] = data_ks['KS_test_p_value'].astype(float)
data_ks['KS_test_statistic_value'] = data_ks['KS_test_statistic_value'].astype(float)
data_ks.sort_values(by="KS_test_statistic_value", ascending=False)
data_ks['-log10_pvalue'] = -np.log10(data_ks['KS_test_p_value'])
data_ks['log2_effect'] = np.log2(data_ks['KS_test_statistic_value'])
data_ks['product'] = data_ks['log2_effect'] * (data_ks['-log10_pvalue'])
data_ks= data_ks.sort_values('KS_test_statistic_value',ascending=False)
data_ks = data_ks.dropna(subset=["product"])
data_ks.reset_index(inplace=True, drop=True)
plt.scatter( data_ks["KS_test_statistic_value"],data_ks["-log10_pvalue"], s=8)
plt.xlabel('statistic_value')
plt.ylabel('-log10_pvalue P')
plt.show()
ks_features= data_ks.head(40).Feature.to_list()
return(ks_features)
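# Aside: an illustrative, self-contained sketch (not part of the pipeline) of the
# statistic behind the KS test used in ks() above: it is the maximum vertical gap
# between the empirical CDFs of a feature in the two classes.
def ks_statistic_demo(sample_a, sample_b):
    """Toy two-sample KS statistic: max absolute ECDF difference."""
    points = sorted(set(sample_a) | set(sample_b))
    ecdf = lambda s, x: sum(v <= x for v in s) / len(s)
    return max(abs(ecdf(sample_a, x) - ecdf(sample_b, x)) for x in points)

# identical samples -> 0.0 (indistinguishable); disjoint samples -> 1.0
assert ks_statistic_demo([1, 2, 3], [1, 2, 3]) == 0.0
assert ks_statistic_demo([1, 2], [10, 20]) == 1.0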
sns.set(rc={'figure.figsize':(5,4)}, font_scale=1)
list_of_lists=[]
for state in tqdm(range(0,50)):
print(state)
outercv = StratifiedKFold(n_splits=4,shuffle=True, random_state=state)
for (train_index, test_index) in (outercv.split(X_CP, Y)):
#print(train_index)
#print(test_index)
X_train_CP= X_CP.iloc[train_index]
X_train_GO = X_GO.iloc[train_index]
X_train_Morgan= X_Morgan[train_index]
Y_train= Y.iloc[train_index]
X_test_CP= X_CP.iloc[test_index]
X_test_GO= X_GO.iloc[test_index]
X_test_Morgan= X_Morgan[test_index]
Y_test= Y.iloc[test_index]
print(X_train_CP.shape)
print(X_test_CP.shape)
selected_columns_CP=[[filter_columns(X_train_CP, Y_train)]]
selected_columns_CP = list(np.array(selected_columns_CP).flat)
X_train_CP = X_train_CP[selected_columns_CP].to_numpy()
X_test_CP= X_test_CP[selected_columns_CP].to_numpy()
print(selected_columns_CP)
print(X_train_CP.shape)
print(X_test_CP.shape)
selected_columns_GO=[[filter_columns(X_train_GO, Y_train)]]
selected_columns_GO = list(np.array(selected_columns_GO).flat)
X_train_GO = X_train_GO[selected_columns_GO].to_numpy()
X_test_GO= X_test_GO[selected_columns_GO].to_numpy()
print(selected_columns_GO)
print(X_train_CP.shape)
print(X_test_CP.shape)
Y_train= Y_train["PUBCHEM_ACTIVITY_OUTCOME"].to_numpy()
Y_test= Y_test["PUBCHEM_ACTIVITY_OUTCOME"].to_numpy()
print(collections.Counter(Y_train))
print(collections.Counter(Y_test))
#print('Original dataset shape %s' % collections.Counter(Y_train))
#resampler = SMOTE(random_state=42, sampling_strategy="all" )
#X_res, Y_res = resampler.fit_resample(X_train, Y_train)
#print('Resampled dataset shape %s' % collections.Counter(Y_res))
inner_cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=state)
# Create a based model
rf = RandomForestClassifier(n_jobs=-1)
# Instantiate the grid search model
#rsh = GridSearchCV(estimator = rf, param_grid = param_grid, cv = inner_cv, n_jobs=40, verbose = 2)
# Instantiate the RandomHalving search model
param_dist_grid = {
#'max_depth': randint(10, 20),
#'max_features': randint(40, 50),
#'min_samples_leaf': randint(5, 15),
#'min_samples_split': randint(5, 15),
#'n_estimators':[200, 300, 400, 500, 600],
#'bootstrap': [True, False],
#'oob_score': [False],
'random_state': [42],
#'criterion': ['gini', 'entropy'],
'n_jobs': [-1],
#'class_weight' : [None, 'balanced']
}
rsh = HalvingRandomSearchCV(estimator=rf, param_distributions=param_dist_grid,
factor=2, random_state=state, n_jobs=-1, verbose=2, cv = inner_cv)
##CP MODELS
print("Running ____________________CP MODELS")
rsh.fit(X_train_CP, Y_train)
#rsh.fit(X_res, Y_res) #If using SMOTE
y_prob_CP_cross_val = cross_val_predict(rsh.best_estimator_, X_train_CP, Y_train, cv=inner_cv, method='predict_proba')[:, 1]
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob_CP_cross_val)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#best_thresh=0.5 #If using SMOTE
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_CP_cross_val ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Sensitivity, hit rate, recall, or true positive rate
Specificity = conf_matrix[0,0]/(conf_matrix[0,0]+conf_matrix[0,1])
# Specificity or true negative rate
Sensitivity = conf_matrix[1,1]/(conf_matrix[1,0]+conf_matrix[1,1])
print( 'Sensitivity', Sensitivity)
print( 'Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob_CP_cross_val)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob_CP_cross_val, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob_CP_cross_val)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "CP", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
classifier = rsh.best_estimator_
classifier.fit(X_train_CP, Y_train)
y_prob_CP_held_out = classifier.predict_proba(X_test_CP)[:,1]
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_CP_held_out ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob_CP_held_out)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob_CP_held_out, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob_CP_held_out)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "CP", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
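The same post-processing block (Youden-J threshold pick, confusion matrix, sensitivity/specificity, F1, balanced accuracy, MCC, AUC-ROC, AUCPR) is repeated verbatim for every feature set below. A minimal sketch of how it could be factored into one helper; the name `evaluate_probs` is an assumption (not from this script), and it uses `>=` to match `roc_curve`'s threshold convention, whereas the script above thresholds with a strict `>`:

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, f1_score, balanced_accuracy_score,
                             matthews_corrcoef, roc_auc_score,
                             average_precision_score, roc_curve)

def evaluate_probs(y_true, y_prob, thresh=None):
    """Compute the metrics printed in each block above.

    If thresh is None, pick it by maximising Youden's J = tpr - fpr.
    """
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    if thresh is None:
        thresh = thresholds[np.argmax(tpr - fpr)]  # Youden's J statistic
    # roc_curve treats score >= threshold as positive, so use >= here
    y_pred = (np.asarray(y_prob) >= thresh).astype(int)
    cm = confusion_matrix(y_true, y_pred)
    specificity = cm[0, 0] / (cm[0, 0] + cm[0, 1])  # true negative rate
    sensitivity = cm[1, 1] / (cm[1, 0] + cm[1, 1])  # true positive rate / recall
    return {
        "thresh": thresh,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "f1": f1_score(y_true, y_pred, average='binary'),
        "ba": balanced_accuracy_score(y_true, y_pred),
        "mcc": matthews_corrcoef(y_true, y_pred),
        "auc_roc": roc_auc_score(y_true, y_prob),
        "aucpr": average_precision_score(y_true, y_prob, average='weighted'),
    }
```

Each CrossVal/Held-Out section could then reduce to one call per probability vector.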
##GO MODELS
print("Running ____________________GO MODELS")
rsh.fit(X_train_GO, Y_train)
#rsh.fit(X_res, Y_res) #If using SMOTE
y_prob_GO_cross_val = cross_val_predict(rsh.best_estimator_, X_train_GO, Y_train, cv=inner_cv, method='predict_proba')[:, 1]
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob_GO_cross_val)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#best_thresh=0.5 #If using SMOTE
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_GO_cross_val ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob_GO_cross_val)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob_GO_cross_val, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob_GO_cross_val)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "GO", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
classifier = rsh.best_estimator_
classifier.fit(X_train_GO, Y_train)
y_prob_GO_held_out = classifier.predict_proba(X_test_GO)[:,1]
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_GO_held_out ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob_GO_held_out)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob_GO_held_out, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob_GO_held_out)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "GO", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
##Morgan MODELS
print("Running ____________________Morgan MODELS")
rsh = HalvingRandomSearchCV(estimator=rf, param_distributions=param_dist_grid,
factor=2, random_state=state, n_jobs=-1, verbose=2, cv = inner_cv)
rsh.fit(X_train_Morgan, Y_train)
#rsh.fit(X_res, Y_res) #If using SMOTE
y_prob_Morgan_cross_val = cross_val_predict(rsh.best_estimator_, X_train_Morgan, Y_train, cv=inner_cv, method='predict_proba')[:, 1]
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob_Morgan_cross_val)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#best_thresh=0.5 #If using SMOTE
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_Morgan_cross_val ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob_Morgan_cross_val)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob_Morgan_cross_val, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob_Morgan_cross_val)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "Morgan",f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
classifier = rsh.best_estimator_
classifier.fit(X_train_Morgan, Y_train)
y_prob_Morgan_held_out = classifier.predict_proba(X_test_Morgan)[:,1]
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_Morgan_held_out ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob_Morgan_held_out)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob_Morgan_held_out, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob_Morgan_held_out)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
##Early Stage Fusion MODELS CP+GO
X_combined_train = np.concatenate((X_train_GO, X_train_CP), axis=1)
X_combined_test = np.concatenate((X_test_GO, X_test_CP), axis=1)
print("Running ____________________Early Stage Fusion MODELS CP+GO")
rsh = HalvingRandomSearchCV(estimator=rf, param_distributions=param_dist_grid,
factor=2, random_state=state, n_jobs=-1, verbose=2, cv = inner_cv)
rsh.fit(X_combined_train, Y_train)
#rsh.fit(X_res, Y_res) #If using SMOTE
y_prob = cross_val_predict(rsh.best_estimator_, X_combined_train, Y_train, cv=inner_cv, method='predict_proba')[:, 1]
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#best_thresh=0.5 #If using SMOTE
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "Early Stage Fusion CP+GO",f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
classifier = rsh.best_estimator_
classifier.fit(X_combined_train, Y_train)
y_prob = classifier.predict_proba(X_combined_test)[:,1]
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "Early Stage Fusion CP+GO", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
##Early Stage Fusion MODELS CP+Morgan
X_combined_train = np.concatenate((X_train_Morgan, X_train_CP), axis=1)
X_combined_test = np.concatenate((X_test_Morgan, X_test_CP), axis=1)
print("Running ____________________Early Stage Fusion MODELS CP+Morgan")
rsh = HalvingRandomSearchCV(estimator=rf, param_distributions=param_dist_grid,
factor=2, random_state=state, n_jobs=-1, verbose=2, cv = inner_cv)
rsh.fit(X_combined_train, Y_train)
#rsh.fit(X_res, Y_res) #If using SMOTE
y_prob = cross_val_predict(rsh.best_estimator_, X_combined_train, Y_train, cv=inner_cv, method='predict_proba')[:, 1]
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#best_thresh=0.5 #If using SMOTE
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "Early Stage Fusion CP+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
classifier = rsh.best_estimator_
classifier.fit(X_combined_train, Y_train)
y_prob = classifier.predict_proba(X_combined_test)[:,1]
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "Early Stage Fusion CP+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
##Early Stage Fusion MODELS GO+Morgan
X_combined_train = np.concatenate((X_train_Morgan, X_train_GO), axis=1)
X_combined_test = np.concatenate((X_test_Morgan, X_test_GO), axis=1)
print("Running ____________________Early Stage Fusion MODELS GO+Morgan")
rsh = HalvingRandomSearchCV(estimator=rf, param_distributions=param_dist_grid,
factor=2, random_state=state, n_jobs=-1, verbose=2, cv = inner_cv)
rsh.fit(X_combined_train, Y_train)
#rsh.fit(X_res, Y_res) #If using SMOTE
y_prob = cross_val_predict(rsh.best_estimator_, X_combined_train, Y_train, cv=inner_cv, method='predict_proba')[:, 1]
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#best_thresh=0.5 #If using SMOTE
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "Early Stage Fusion GO+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
classifier = rsh.best_estimator_
classifier.fit(X_combined_train, Y_train)
y_prob = classifier.predict_proba(X_combined_test)[:,1]
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "Early Stage Fusion GO+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
##Early Stage Fusion MODELS CP+GO+Morgan
X_combined_train = np.concatenate((X_train_Morgan, X_train_CP, X_train_GO ), axis=1)
X_combined_test = np.concatenate((X_test_Morgan, X_test_CP, X_test_GO), axis=1)
print("Running ____________________Early Stage Fusion MODELS CP+GO+Morgan")
rsh = HalvingRandomSearchCV(estimator=rf, param_distributions=param_dist_grid,
factor=2, random_state=state, n_jobs=-1, verbose=2, cv = inner_cv)
rsh.fit(X_combined_train, Y_train)
#rsh.fit(X_res, Y_res) #If using SMOTE
y_prob = cross_val_predict(rsh.best_estimator_, X_combined_train, Y_train, cv=inner_cv, method='predict_proba')[:, 1]
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#best_thresh=0.5 #If using SMOTE
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "Early Stage Fusion CP+GO+Morgan",f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
classifier = rsh.best_estimator_
classifier.fit(X_combined_train, Y_train)
y_prob = classifier.predict_proba(X_combined_test)[:,1]
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "Early Stage Fusion CP+GO+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
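The late-stage fusion sections that follow simply average the per-model predicted probabilities sample-wise with `np.mean(..., axis=0)`. A tiny sketch with hypothetical probability vectors (values are illustrative only):

```python
import numpy as np

# Hypothetical per-model probabilities for the same three samples
p_cp = np.array([0.2, 0.9, 0.5])
p_go = np.array([0.4, 0.7, 0.5])
p_morgan = np.array([0.3, 0.8, 0.5])

# Late-stage fusion: element-wise mean over the stacked model outputs,
# giving one fused probability per sample
p_fused = np.mean(np.array([p_cp, p_go, p_morgan]), axis=0)
```

The fused vector keeps the per-sample shape, so it can be thresholded and scored exactly like a single model's output.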
##Late Stage MODELS CP+GO+Morgan
print("Running ____________________Late Stage Averaged MODELS CP,GO, Morgan")
y_prob_cross_val = np.mean( np.array([ y_prob_CP_cross_val, y_prob_Morgan_cross_val, y_prob_GO_cross_val ]), axis=0 )
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob_cross_val)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_cross_val ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob_cross_val)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob_cross_val, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob_cross_val)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "Late Stage Model Averages CP+GO+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
y_prob_held_out= np.mean( np.array([ y_prob_CP_held_out, y_prob_Morgan_held_out, y_prob_GO_held_out ]), axis=0 )
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_held_out ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob_held_out)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob_held_out, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob_held_out)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "Late Stage Model Averages CP+GO+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
##Late Stage MODELS CP+Morgan
print("Running ____________________Late Stage Averaged MODELS CP, Morgan")
y_prob_cross_val = np.mean( np.array([ y_prob_CP_cross_val, y_prob_Morgan_cross_val]), axis=0 )
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob_cross_val)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_cross_val ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob_cross_val)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob_cross_val, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob_cross_val)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "Late Stage Model Averages CP+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
y_prob_held_out= np.mean( np.array([ y_prob_CP_held_out, y_prob_Morgan_held_out]), axis=0 )
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_held_out ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob_held_out)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob_held_out, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob_held_out)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "Late Stage Model Averages CP+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
##Late Stage MODELS GO+Morgan
print("Running ____________________Late Stage Averaged MODELS GO, Morgan")
y_prob_cross_val = np.mean( np.array([ y_prob_Morgan_cross_val, y_prob_GO_cross_val ]), axis=0 )
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob_cross_val)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_cross_val ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob_cross_val)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob_cross_val, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob_cross_val)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "Late Stage Model Averages GO+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
y_prob_held_out= np.mean( np.array([ y_prob_Morgan_held_out, y_prob_GO_held_out ]), axis=0 )
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_held_out ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob_held_out)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob_held_out, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob_held_out)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "Late Stage Model Averages GO+Morgan", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
##Late Stage MODELS CP+GO
print("Running ____________________Late Stage Averaged MODELS CP,GO")
y_prob_cross_val = np.mean( np.array([ y_prob_CP_cross_val, y_prob_GO_cross_val ]), axis=0 )
# calculate roc curves
fpr, tpr, thresholds = roc_curve(Y_train, y_prob_cross_val)
# get the best threshold
J = tpr - fpr
ix = argmax(J)
best_thresh = thresholds[ix]
print('Best Threshold=%f' % (best_thresh))
#CrossVal Results
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_cross_val ]
conf_matrix = confusion_matrix(Y_train, y_pred)
print(conf_matrix)
print(classification_report(Y_train, y_pred))
ba= balanced_accuracy_score(Y_train, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_train, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0] / (conf_matrix[0,0] + conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1] / (conf_matrix[1,0] + conf_matrix[1,1])
print('Sensitivity', Sensitivity)
print('Specificity', Specificity)
f1= f1_score(Y_train, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_train, y_prob_cross_val)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_train, y_prob_cross_val, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_train, y_prob_cross_val)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["CrossVal", "Late Stage Model Averages CP+GO", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
#HeldOutTest
y_prob_held_out= np.mean( np.array([ y_prob_CP_held_out, y_prob_GO_held_out ]), axis=0 )
y_pred = [ 1 if x>best_thresh else 0 for x in y_prob_held_out ]
conf_matrix = confusion_matrix(Y_test, y_pred)
print(conf_matrix)
print(classification_report(Y_test, y_pred))
ba= balanced_accuracy_score(Y_test, y_pred)
print('balanced_accuracy_score ',ba)
mcc=matthews_corrcoef(Y_test, y_pred)
print('matthews_corrcoef ',mcc)
# Specificity or true negative rate
Specificity = conf_matrix[0,0]/(conf_matrix[0,0]+conf_matrix[0,1])
# Sensitivity, hit rate, recall, or true positive rate
Sensitivity = conf_matrix[1,1]/(conf_matrix[1,0]+conf_matrix[1,1])
print( 'Sensitivity', Sensitivity)
print( 'Specificity', Specificity)
f1= f1_score(Y_test, y_pred, average='binary')
print('F1 Toxic', f1)
AUC = roc_auc_score(Y_test, y_prob_held_out)
print('AUC-ROC ',AUC)
AUCPR = average_precision_score(Y_test, y_prob_held_out, average='weighted')
print('AUCPR ',AUCPR)
# calculate roc curves
best_model_fpr, best_model_tpr, _ = roc_curve(Y_test, y_prob_held_out)
plt.plot(best_model_fpr, best_model_tpr, marker='.', label='Our Model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
row=["Held-Out", "Late Stage Model Averages CP+GO", f1, Sensitivity, Specificity, ba, mcc, AUC, AUCPR]
list_of_lists.append(row)
df_results = pd.DataFrame(list_of_lists, columns=["Type","Fingerprint","F1_Toxic","Sensitivity","Specificity","BA","MCC","AUC-ROC","AUCPR"])
df_results
df_results.to_csv("00_RF_CP_GO_nocelldeath_Models_filterinternal_F1_scores_withouthpo_40.csv", index=False)
import pandas as pd
df_results=pd.read_csv("00_RF_CP_GO_nocelldeath_Models_filterinternal_F1_scores_withouthpo_40.csv")
df_results
import ptitprince as pt
variable = "AUC-ROC"
#variable = "BA"
#variable = "Sensitivity"
#variable = "Specificity"
#variable = "MCC"
#variable = "AUCPR"
fingerprints =['CP', 'GO', 'Morgan',
'Early Stage Fusion CP+GO+Morgan',
'Late Stage Model Averages CP+Morgan+GO',]
df_results= df_results[df_results.Fingerprint.isin(fingerprints)]
df_results.Fingerprint.unique()
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
for variable in ["F1_Toxic", "Sensitivity", "Specificity", "BA", "MCC", "AUC-ROC", "AUCPR"]:
pal = "Set2"
sns.set(rc={'figure.figsize':(11.7,8.27)}, font_scale=2)
sns.set_style("whitegrid")
ax=pt.half_violinplot( x = variable, y = 'Fingerprint', data = df_results[df_results.Type=="Held-Out"], palette = pal,
bw = .2, cut = 0.,scale = "area", width = .6,
inner = None, orient = 'h')
ax=sns.stripplot( x = variable, y = 'Fingerprint', data = df_results[df_results.Type=="Held-Out"], palette = pal,
edgecolor = "white",size = 3, jitter = 1, zorder = 0,
orient = 'h')
ax=sns.boxplot( x = variable, y = 'Fingerprint', data = df_results[df_results.Type=="Held-Out"], color = "black",
width = .15, zorder = 10, showcaps = True,
boxprops = {'facecolor':'none', "zorder":10}, showfliers=True,
whiskerprops = {'linewidth':2, "zorder":10},
saturation = 1, orient = 'h')
plt.show()
```
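The cells above lean on `sklearn.metrics.roc_curve` to pick the operating threshold that maximizes Youden's J statistic (`tpr - fpr`). As a sanity check, the same selection can be done by brute force in plain NumPy — a sketch, not a replacement for the sklearn call (note it applies `>=` at the threshold, while the prediction cells above use a strict `>`):

```python
import numpy as np

def youden_threshold(y_true, y_prob):
    """Pick the decision threshold maximizing Youden's J = TPR - FPR."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    best_t, best_j = 0.5, -np.inf
    for t in np.unique(y_prob):          # candidate thresholds: observed scores
        y_pred = (y_prob >= t).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        tn = np.sum((y_pred == 0) & (y_true == 0))
        j = tp / (tp + fn) - fp / (fp + tn)
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

y_true = [0, 0, 0, 1, 1, 1]
y_prob = [0.10, 0.40, 0.35, 0.65, 0.80, 0.90]
t, j = youden_threshold(y_true, y_prob)  # t == 0.65, j == 1.0
```

On this toy data the classes separate perfectly at 0.65, so J reaches its maximum of 1.0 there.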
## Training an excitatory-inhibitory recurrent network
Here we will train a recurrent neural network with excitatory and inhibitory neurons on a simple perceptual decision-making task.
[](https://colab.research.google.com/github/gyyang/nn-brain/blob/master/EI_RNN.ipynb)
## Install dependencies
```
# # If on Google Colab, uncomment to install neurogym to use cognitive tasks
# ! git clone https://github.com/gyyang/neurogym.git
# %cd neurogym/
# ! pip install -e .
```
## Defining a perceptual decision making task
```
# We will import the task from the neurogym library
import neurogym as ngym
# Environment
task = 'PerceptualDecisionMaking-v0'
timing = {
'fixation': ('choice', (50, 100, 200, 400)),
'stimulus': ('choice', (100, 200, 400, 800)),
}
kwargs = {'dt': 20, 'timing': timing}
seq_len = 100
# Make supervised dataset
dataset = ngym.Dataset(task, env_kwargs=kwargs, batch_size=16,
seq_len=seq_len)
# A sample environment from dataset
env = dataset.env
# Visualize the environment with 2 sample trials
_ = ngym.utils.plot_env(env, num_trials=2)
# Network input and output size
input_size = env.observation_space.shape[0]
output_size = env.action_space.n
```
## Define E-I recurrent network
Here we define an E-I recurrent network; in particular, no self-connections are allowed.
```
# Define networks
import math
import numpy as np  # used below to build the E/I sign mask in EIRecLinear
import torch
import torch.nn as nn
from torch.nn import init
from torch.nn import functional as F
class PosWLinear(nn.Module):
r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b`
Same as nn.Linear, except that the weight matrix is constrained to be non-negative
"""
__constants__ = ['bias', 'in_features', 'out_features']
def __init__(self, in_features, out_features, bias=True):
super(PosWLinear, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = torch.nn.Parameter(torch.Tensor(out_features, in_features))
if bias:
self.bias = torch.nn.Parameter(torch.Tensor(out_features))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
def forward(self, input):
# weight is non-negative
return F.linear(input, torch.abs(self.weight), self.bias)
class EIRecLinear(nn.Module):
r"""Recurrent E-I Linear transformation.
Args:
hidden_size: int, layer size
e_prop: float between 0 and 1, proportion of excitatory units
"""
__constants__ = ['bias', 'hidden_size', 'e_prop']
def __init__(self, hidden_size, e_prop, bias=True):
super().__init__()
self.hidden_size = hidden_size
self.e_prop = e_prop
self.e_size = int(e_prop * hidden_size)
self.i_size = hidden_size - self.e_size
self.weight = nn.Parameter(torch.Tensor(hidden_size, hidden_size))
mask = np.tile([1]*self.e_size+[-1]*self.i_size, (hidden_size, 1))
np.fill_diagonal(mask, 0)
self.mask = torch.tensor(mask, dtype=torch.float32)
if bias:
self.bias = nn.Parameter(torch.Tensor(hidden_size))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
# Scale E weight by E-I ratio
self.weight.data[:, :self.e_size] /= (self.e_size/self.i_size)
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
def effective_weight(self):
return torch.abs(self.weight) * self.mask
def forward(self, input):
# weight is non-negative
return F.linear(input, self.effective_weight(), self.bias)
class EIRNN(nn.Module):
"""E-I RNN.
Reference:
Song, H.F., Yang, G.R. and Wang, X.J., 2016.
Training excitatory-inhibitory recurrent neural networks
for cognitive tasks: a simple and flexible framework.
PLoS computational biology, 12(2).
Args:
input_size: Number of input neurons
hidden_size: Number of hidden neurons
e_prop: float between 0 and 1, proportion of excitatory neurons
Inputs:
input: (seq_len, batch, input_size)
hidden: (batch, hidden_size)
"""
def __init__(self, input_size, hidden_size, dt=None,
e_prop=0.8, sigma_rec=0, **kwargs):
super().__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.e_size = int(hidden_size * e_prop)
self.i_size = hidden_size - self.e_size
self.num_layers = 1
self.tau = 100
if dt is None:
alpha = 1
else:
alpha = dt / self.tau
self.alpha = alpha
self.oneminusalpha = 1 - alpha
# Recurrent noise
self._sigma_rec = np.sqrt(2*alpha) * sigma_rec
# self.input2h = PosWLinear(input_size, hidden_size)
self.input2h = nn.Linear(input_size, hidden_size)
self.h2h = EIRecLinear(hidden_size, e_prop=0.8)
def init_hidden(self, input):
batch_size = input.shape[1]
return (torch.zeros(batch_size, self.hidden_size).to(input.device),
torch.zeros(batch_size, self.hidden_size).to(input.device))
def recurrence(self, input, hidden):
"""Recurrence helper."""
state, output = hidden
total_input = self.input2h(input) + self.h2h(output)
state = state * self.oneminusalpha + total_input * self.alpha
state += self._sigma_rec * torch.randn_like(state)
output = torch.relu(state)
return state, output
def forward(self, input, hidden=None):
"""Propogate input through the network."""
if hidden is None:
hidden = self.init_hidden(input)
output = []
steps = range(input.size(0))
for i in steps:
hidden = self.recurrence(input[i], hidden)
output.append(hidden[1])
output = torch.stack(output, dim=0)
return output, hidden
class Net(nn.Module):
"""Recurrent network model.
Args:
input_size: int, input size
hidden_size: int, hidden size
output_size: int, output size
Remaining keyword arguments are passed through to EIRNN.
"""
def __init__(self, input_size, hidden_size, output_size, **kwargs):
super().__init__()
# Excitatory-inhibitory RNN
self.rnn = EIRNN(input_size, hidden_size, **kwargs)
# self.fc = PosWLinear(self.rnn.e_size, output_size)
self.fc = nn.Linear(self.rnn.e_size, output_size)
def forward(self, x):
rnn_activity, _ = self.rnn(x)
rnn_e = rnn_activity[:, :, :self.rnn.e_size]
out = self.fc(rnn_e)
return out, rnn_activity
```
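The sign constraint in `EIRecLinear` comes entirely from the fixed mask multiplied into `abs(weight)`. A minimal NumPy sketch of that mask construction (toy sizes, not the 50-unit network trained below) makes the guaranteed properties explicit:

```python
import numpy as np

hidden_size, e_prop = 5, 0.8
e_size = int(e_prop * hidden_size)   # 4 excitatory units
i_size = hidden_size - e_size        # 1 inhibitory unit

# Columns index presynaptic units: +1 for E columns, -1 for I columns,
# and a zeroed diagonal forbids self-connections.
mask = np.tile([1] * e_size + [-1] * i_size, (hidden_size, 1))
np.fill_diagonal(mask, 0)

# Any non-negative magnitude matrix then yields sign-constrained weights,
# mirroring what effective_weight() does with torch.abs(self.weight) * self.mask.
magnitude = np.abs(np.random.default_rng(0).normal(size=(hidden_size, hidden_size)))
W_eff = magnitude * mask

assert np.all(np.diag(W_eff) == 0)      # no self-connections
assert np.all(W_eff[:, :e_size] >= 0)   # excitatory columns
assert np.all(W_eff[:, e_size:] <= 0)   # inhibitory columns
```

Because the mask is fixed and the magnitudes are free, gradient descent can change connection strengths but never flip a neuron's sign — which is the point of Dale's law here.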
## Train the network on the decision making task
```
import torch.optim as optim
import numpy as np
# Instantiate the network and print information
hidden_size = 50
net = Net(input_size=input_size, hidden_size=hidden_size,
output_size=output_size, dt=env.dt, sigma_rec=0.15)
print(net)
# Use Adam optimizer
optimizer = optim.Adam(net.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()
running_loss = 0
running_acc = 0
print_step = 200
for i in range(5000):
inputs, labels = dataset()
inputs = torch.from_numpy(inputs).type(torch.float)
labels = torch.from_numpy(labels.flatten()).type(torch.long)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output, activity = net(inputs)
output = output.view(-1, output_size)
loss = criterion(output, labels)
loss.backward()
optimizer.step() # Does the update
running_loss += loss.item()
if i % print_step == (print_step - 1):
running_loss /= print_step
print('Step {}, Loss {:0.4f}'.format(i+1, running_loss))
running_loss = 0
```
## Run the network post-training and record neural activity
```
env.reset(no_step=True)
env.timing.update({'fixation': ('constant', 500),
'stimulus': ('constant', 500)})
perf = 0
num_trial = 500
activity_dict = {}
trial_infos = {}
stim_activity = [[], []] # response for ground-truth 0 and 1
for i in range(num_trial):
env.new_trial()
ob, gt = env.ob, env.gt
inputs = torch.from_numpy(ob[:, np.newaxis, :]).type(torch.float)
action_pred, rnn_activity = net(inputs)
# Compute performance
action_pred = action_pred.detach().numpy()
choice = np.argmax(action_pred[-1, 0, :])
correct = choice == gt[-1]
# Log trial info
trial_info = env.trial
trial_info.update({'correct': correct, 'choice': choice})
trial_infos[i] = trial_info
# Log stimulus period activity
rnn_activity = rnn_activity[:, 0, :].detach().numpy()
activity_dict[i] = rnn_activity
# Compute stimulus selectivity for all units
# Compute each neuron's response in trials where ground_truth=0 and 1 respectively
rnn_activity = rnn_activity[env.start_ind['stimulus']: env.end_ind['stimulus']]
stim_activity[env.trial['ground_truth']].append(rnn_activity)
print('Average performance', np.mean([val['correct'] for val in trial_infos.values()]))
```
### Plot neural activity from sample trials
```
import matplotlib.pyplot as plt
e_size = net.rnn.e_size
trial = 2
plt.figure()
_ = plt.plot(activity_dict[trial][:, :e_size], color='blue', label='Excitatory')
_ = plt.plot(activity_dict[trial][:, e_size:], color='red', label='Inhibitory')
plt.xlabel('Time step')
plt.ylabel('Activity')
```
### Compute stimulus selectivity for sorting neurons
Here, for each neuron, we compute its stimulus-period selectivity $d' = (\mu_0 - \mu_1)/\sqrt{(\sigma_0^2 + \sigma_1^2)/2}$, where $\mu_k$ and $\sigma_k$ are the mean and standard deviation of its activity during the stimulus period on trials with ground truth $k$.
```
mean_activity = []
std_activity = []
for ground_truth in [0, 1]:
activity = np.concatenate(stim_activity[ground_truth], axis=0)
mean_activity.append(np.mean(activity, axis=0))
std_activity.append(np.std(activity, axis=0))
# Compute d'
selectivity = (mean_activity[0] - mean_activity[1])
selectivity /= np.sqrt((std_activity[0]**2+std_activity[1]**2+1e-7)/2)
# Sort index for selectivity, separately for E and I
ind_sort = np.concatenate((np.argsort(selectivity[:e_size]),
np.argsort(selectivity[e_size:])+e_size))
```
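The `selectivity` computed above is the classic d-prime between the two stimulus conditions. A toy single-unit computation with made-up activity values (not taken from the trained network):

```python
import numpy as np

# Per-trial mean activity of one toy unit under ground truth 0 and 1
act0 = np.array([1.0, 1.2, 0.8, 1.0])
act1 = np.array([0.2, 0.4, 0.3, 0.3])

# Same formula as the notebook's selectivity, for a single unit
dprime = (act0.mean() - act1.mean()) / np.sqrt(
    (act0.std() ** 2 + act1.std() ** 2 + 1e-7) / 2)
# dprime ≈ 6.26: this unit strongly prefers condition 0
```

A large positive value means the unit fires more for ground truth 0, a large negative value more for ground truth 1, and values near zero mean no stimulus preference.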
### Plot network connectivity sorted by stimulus selectivity
```
# Plot distribution of stimulus selectivity
plt.figure()
plt.hist(selectivity)
plt.xlabel('Selectivity')
plt.ylabel('Number of neurons')
W = net.rnn.h2h.effective_weight().detach().numpy()
# Sort by selectivity
W = W[:, ind_sort][ind_sort, :]
wlim = np.max(np.abs(W))
plt.figure()
plt.imshow(W, cmap='bwr_r', vmin=-wlim, vmax=wlim)
plt.colorbar()
plt.xlabel('From neurons')
plt.ylabel('To neurons')
plt.title('Network connectivity')
```
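The sorting line `W[:, ind_sort][ind_sort, :]` applies the same permutation to both rows and columns, so the sorted matrix stays a valid connectivity matrix over the reordered units. A tiny sketch of that double indexing:

```python
import numpy as np

W = np.arange(9).reshape(3, 3)     # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
ind_sort = np.array([2, 0, 1])     # some permutation of the units

W_sorted = W[:, ind_sort][ind_sort, :]

# Entry (i, j) of W_sorted is the original weight from unit ind_sort[j]
# to unit ind_sort[i]:
assert W_sorted[0, 0] == W[2, 2]
assert np.array_equal(W_sorted, W[np.ix_(ind_sort, ind_sort)])
```

The `np.ix_` form is the one-step equivalent; the chained form used in the notebook reads more naturally as "reorder columns, then reorder rows".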
# Supplementary Materials
Code for making publication quality figures as it appears in the paper.
```
from mpl_toolkits.axes_grid1 import make_axes_locatable
plot_e = 8
plot_i = int(plot_e / 4)
plot_total = (plot_e + plot_i) * 2
# Sort index for selectivity, separately for E and I
ind_sort = np.concatenate((
np.argsort(selectivity[:e_size])[:plot_e],
np.argsort(selectivity[:e_size])[-plot_e:],
np.argsort(selectivity[e_size:])[:plot_i]+e_size,
np.argsort(selectivity[e_size:])[-plot_i:]+e_size))
# Plot distribution of stimulus selectivity
plt.figure()
plt.hist(selectivity)
plt.xlabel('Selectivity')
plt.ylabel('Number of neurons')
W = net.rnn.h2h.effective_weight().detach().numpy()
# Sort by selectivity
W = W[:, ind_sort][ind_sort, :]
wlim = np.percentile(np.abs(W), 99)
# wlim = np.max(np.abs(W))
wlim = int(wlim*100)/100
n_neuron = len(ind_sort)
fig = plt.figure(figsize=(3, 3))
ax = fig.add_axes([0.1, 0.1, 0.7, 0.7])
im = ax.imshow(W, cmap='RdBu', vmin=-wlim, vmax=wlim,
extent=(-0.5, n_neuron-0.5, -0.5, n_neuron-0.5),
interpolation='nearest'
)
# ax.plot([plot_e-0.5] * 2, [plot_total-0.5, plot_total+0.5], 'black', lw=0.5)
xticks = np.array([0, plot_e*2, plot_total]) - 0.5
yticks = plot_total - 1 - xticks
plt.xticks(xticks, ['', '', ''])
plt.yticks(yticks, ['', '', ''])
plt.xlabel('From neurons')
plt.ylabel('To neurons')
# plt.title('Network connectivity')
for loc in ['left', 'right', 'top', 'bottom']:
# ax.spines[loc].set_color('gray')
ax.spines[loc].set_visible(False)
divider = make_axes_locatable(ax)
cax = fig.add_axes([0.82, 0.1, 0.02, 0.7])
cb = plt.colorbar(im, cax=cax, ticks=[-wlim, 0, wlim])
cb.set_label('Connection weight', labelpad=-1)
cb.outline.set_linewidth(0)
# cb.set_ticklabels(['-0.8', '', '0.8'])
from pathlib import Path
fname = Path('figures/connectivity')
fig.savefig(fname.with_suffix('.pdf'), transparent=True)
fig.savefig(fname.with_suffix('.png'), dpi=300)
```
```
import sys
from importlib import reload  # imp.reload is deprecated; importlib suffices
import warnings
warnings.filterwarnings('ignore')
if sys.version[0] == '2':
reload(sys)
sys.setdefaultencoding("utf-8")
import pandas as pd
df1 = pd.read_csv('labeledTrainData.tsv', delimiter="\t")
df1 = df1.drop(['id'], axis=1)
df1.head()
df1
df2 = pd.read_csv('imdb_master.csv',encoding="latin-1")
df2.head()
df2 = df2.drop(['Unnamed: 0','type','file'],axis=1)
df2.columns = ["review","sentiment"]
df2.head()
df2 = df2[df2.sentiment != 'unsup']
df2['sentiment'] = df2['sentiment'].map({'pos': 1, 'neg': 0})
df2.head()
df = pd.concat([df1, df2]).reset_index(drop=True)
df.head()
import re
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()
def clean_text(text):
text = re.sub(r'[^\w\s]', '', text, flags=re.UNICODE)
text = text.lower()
text = [lemmatizer.lemmatize(token) for token in text.split(" ")]
text = [lemmatizer.lemmatize(token, "v") for token in text]
text = [word for word in text if not word in stop_words]
text = " ".join(text)
return text
df['Processed_Reviews'] = df.review.apply(lambda x: clean_text(x))
df.head()
df.Processed_Reviews.apply(lambda x: len(x.split(" "))).mean()
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense , Input , LSTM , Embedding, Dropout , Activation, GRU, Flatten
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model, Sequential
from keras.layers import Convolution1D
from keras import initializers, regularizers, constraints, optimizers, layers
max_features = 6000
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(df['Processed_Reviews'])
list_tokenized_train = tokenizer.texts_to_sequences(df['Processed_Reviews'])
maxlen = 130
X_t = pad_sequences(list_tokenized_train, maxlen=maxlen)
y = df['sentiment']
embed_size = 128
model = Sequential()
model.add(Embedding(max_features, embed_size))
model.add(Bidirectional(LSTM(32, return_sequences = True)))
model.add(GlobalMaxPool1D())
model.add(Dense(20, activation="relu"))
model.add(Dropout(0.05))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
batch_size = 100
epochs = 3
x=model.fit(X_t,y, batch_size=batch_size, epochs=epochs, validation_split=0.2)
df_test=pd.read_csv("testData.tsv",header=0, delimiter="\t", quoting=3)
df_test.head()
df_test["review"]=df_test.review.apply(lambda x: clean_text(x))
df_test["sentiment"] = df_test["id"].map(lambda x: 1 if int(x.strip('"').split("_")[1]) >= 5 else 0)
y_test = df_test["sentiment"]
list_sentences_test = df_test["review"]
list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test)
X_te = pad_sequences(list_tokenized_test, maxlen=maxlen)
prediction = model.predict(X_te)
y_pred = (prediction > 0.5)
from sklearn.metrics import f1_score, confusion_matrix
print('F1-score: {0}'.format(f1_score(y_pred, y_test)))
print('Confusion matrix:')
confusion_matrix(y_pred, y_test)
model.save('model_current.h5')
example1 = "A very bad and horrible film"
# Don't pass a bare string: texts_to_sequences would tokenize it character by
# character. Pass a list of one or more strings instead.
example_list = tokenizer.texts_to_sequences([example1])
ex = pad_sequences(example_list, maxlen=maxlen)
model.predict(ex)
ex2=["Really bad horrible film wish did not see","A great perfect film which is extremely good acting with plot"]
ex3=["Really bad horrible film wish did not see terrible but ok so far"]
ex_list=tokenizer.texts_to_sequences(ex2)
ex3_list=tokenizer.texts_to_sequences(ex3)
ex2padded=pad_sequences(ex_list,maxlen=maxlen)
ex3pad=pad_sequences(ex3_list,maxlen=maxlen)
model.predict(ex2padded)
#model.predict(ex3pad)
import matplotlib.pyplot as plt
loss=[0.3648,0.2237,0.1750]
acc=[0.8328,0.9131,0.9358]
val_loss=[0.3122,0.2225,0.1501]
val_acc=[0.8751,0.9185,0.9475]
epoch=[1,2,3]
plt.plot(epoch,loss,color="red")
plt.plot(epoch,acc,color="green")
plt.xlabel("Epoch")
plt.ylabel("Error(red) and Accuracy(green)")
```
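The plotting cell above re-types the logged numbers by hand, even though `model.fit` already returned them on the History object bound to `x`. A sketch of reading the curves from that dict instead — `history` here is a stand-in for `x.history` (key names can differ between Keras versions), with the values copied from the logged run:

```python
# `history` stands in for x.history, the dict Keras stores on the object
# returned by model.fit above (numbers copied from the logged run).
history = {'loss': [0.3648, 0.2237, 0.1750],
           'acc': [0.8328, 0.9131, 0.9358],
           'val_loss': [0.3122, 0.2225, 0.1501],
           'val_acc': [0.8751, 0.9185, 0.9475]}

epochs = list(range(1, len(history['loss']) + 1))
# Epoch with the lowest validation loss
best_epoch = min(epochs, key=lambda e: history['val_loss'][e - 1])  # -> 3

# These lists can be fed straight to plt.plot(epochs, history['loss'], ...)
# instead of re-typing them by hand.
```

This keeps the plot in sync with the actual training run whenever the number of epochs or hyperparameters change.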
## Dependencies
```
!pip install --quiet /kaggle/input/kerasapplications
!pip install --quiet /kaggle/input/efficientnet-git
import warnings, glob
from tensorflow.keras import Sequential, Model
import efficientnet.tfkeras as efn
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
```
### Hardware configuration
```
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
```
# Model parameters
```
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 512
WIDTH = 512
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
```
# Augmentation
```
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# Pixel-level transforms
if p_pixel_1 >= .4:
image = tf.image.random_saturation(image, lower=.7, upper=1.3)
if p_pixel_2 >= .4:
image = tf.image.random_contrast(image, lower=.8, upper=1.2)
if p_pixel_3 >= .4:
image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
```
## Auxiliary functions
```
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
```
# Load data
```
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/14-cassava-leaf-efficientnetb5-smoothing-01-512/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
# Model
```
def model_fn(input_shape, N_CLASSES):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB5(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
x = L.Dropout(.5)(base_model.output)
output = L.Dense(N_CLASSES, activation='softmax', name='output')(x)
model = Model(inputs=inputs, outputs=output)
return model
with strategy.scope():
model = model_fn((None, None, CHANNELS), N_CLASSES)
model.summary()
```
# Test set predictions
```
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = TTA_STEPS * (test_size // BATCH_SIZE + 1)  # steps must be an int
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[:(test_size * TTA_STEPS)]
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test) / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
```
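The TTA branch relies on `reshape(..., order='F')` to regroup pass-major predictions by image before averaging. A small NumPy sketch (toy shapes, not the real 5-class predictions) showing why Fortran order is the right choice when the repeated dataset emits all images of pass 0, then all of pass 1:

```python
import numpy as np

test_size, tta_steps, n_classes = 3, 2, 2
# Rows arrive pass-major: pass 0 for images 0..2, then pass 1 for images 0..2.
preds = np.array([[0.9, 0.1],   # pass 0, image 0
                  [0.2, 0.8],   # pass 0, image 1
                  [0.5, 0.5],   # pass 0, image 2
                  [0.7, 0.3],   # pass 1, image 0
                  [0.4, 0.6],   # pass 1, image 1
                  [0.3, 0.7]])  # pass 1, image 2

# order='F' makes axis 0 (the image index) vary fastest, which regroups the
# pass-major rows into per-image stacks before the mean over the TTA axis.
per_image = preds.reshape(test_size, tta_steps, n_classes, order='F')
avg = per_image.mean(axis=1)

assert np.allclose(avg[0], (preds[0] + preds[3]) / 2)  # image 0, both passes
assert np.allclose(avg[2], [0.4, 0.6])                 # image 2
```

A default C-order reshape would instead average consecutive rows, mixing predictions from different images.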
```
!wget http://www.gutenberg.org/files/11/11-0.txt
from keras.models import Sequential
from keras.layers import Dense,Activation
from keras.layers.recurrent import SimpleRNN
import numpy as np
fin=open('11-0.txt',encoding='utf-8-sig')
lines=[]
for line in fin:
line = line.strip().lower()
#line = line.decode("ascii","ignore")
if(len(line)==0):
continue
lines.append(line)
fin.close()
text = " ".join(lines)
text[52:100]
import re
text = text.lower()
text = re.sub('[^0-9a-zA-Z]+',' ',text)
from collections import Counter
counts = Counter()
counts.update(text.split())
words = sorted(counts, key=counts.get, reverse=True)
nb_words = len(text.split())
word2index = {word: i for i, word in enumerate(words)}
index2word = {i: word for i, word in enumerate(words)}
SEQLEN = 10
STEP = 1
input_words = []
label_words = []
text2=text.split()
for i in range(0,nb_words-SEQLEN,STEP):
x=text2[i:(i+SEQLEN)]
y=text2[i+SEQLEN]
input_words.append(x)
label_words.append(y)
print('input words list: ','\n',input_words[0])
print('label(output) words list: ','\n',label_words[0])
total_words = len(set(words))
X = np.zeros((len(input_words), SEQLEN, total_words), dtype=bool)
y = np.zeros((len(input_words), total_words), dtype=bool)
for i, input_word in enumerate(input_words):
for j, word in enumerate(input_word):
X[i, j, word2index[word]] = 1
y[i,word2index[label_words[i]]]=1
print('Shape of X: ',X.shape)
print('Shape of y: ',y.shape)
HIDDEN_SIZE = 128
BATCH_SIZE = 32
NUM_ITERATIONS = 100
NUM_EPOCHS_PER_ITERATION = 1
NUM_PREDS_PER_EPOCH = 100
from keras.layers import LSTM
model = Sequential()
model.add(LSTM(HIDDEN_SIZE,return_sequences=False,input_shape=(SEQLEN,total_words)))
model.add(Dense(total_words, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
int(len(input_words)*0.1)
len(input_words)
for iteration in range(50):
print("=" * 50)
print("Iteration #: %d" % (iteration))
model.fit(X, y, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS_PER_ITERATION, validation_split = 0.1)
test_idx = np.random.randint(int(len(input_words)*0.1)) * (-1)
test_words = input_words[test_idx]
print("Generating from seed: %s" % (test_words))
for i in range(NUM_PREDS_PER_EPOCH):
Xtest = np.zeros((1, SEQLEN, total_words))
for j, word in enumerate(test_words):
Xtest[0, j, word2index[word]] = 1
pred = model.predict(Xtest, verbose=0)[0]
ypred = index2word[np.argmax(pred)]
print(ypred,end=' ')
test_words = test_words[1:] + [ypred]
from keras.layers import LSTM, Input, Bidirectional
model = Sequential()
model.add(Bidirectional(LSTM(HIDDEN_SIZE,return_sequences=False),input_shape=(SEQLEN,total_words)))
model.add(Dense(total_words, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
for iteration in range(50):
print("=" * 50)
print("Iteration #: %d" % (iteration))
model.fit(X, y, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS_PER_ITERATION, validation_split = 0.1)
test_idx = np.random.randint(int(len(input_words)*0.1)) * (-1)
test_words = input_words[test_idx]
print("Generating from seed: %s" % (test_words))
for i in range(NUM_PREDS_PER_EPOCH):
Xtest = np.zeros((1, SEQLEN, total_words))
for j, word in enumerate(test_words):
Xtest[0, j, word2index[word]] = 1
pred = model.predict(Xtest, verbose=0)[0]
ypred = index2word[np.argmax(pred)]
print(ypred,end=' ')
test_words = test_words[1:] + [ypred]
```
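Both generation loops above always take `np.argmax`, which is deterministic and tends to get stuck repeating the same phrase. A common alternative is temperature sampling; `sample_index` below is a sketch (not part of the original notebook) that degrades to the greedy argmax at temperature 0:

```python
import numpy as np

def sample_index(pred, temperature=1.0, rng=None):
    """Sample a word index from a softmax output vector.

    temperature <= 0 degrades to the greedy argmax used above; higher
    temperatures flatten the distribution and increase diversity.
    """
    pred = np.asarray(pred, dtype=np.float64)
    if temperature <= 0:
        return int(np.argmax(pred))
    if rng is None:
        rng = np.random.default_rng()
    logits = np.log(pred + 1e-12) / temperature   # re-sharpen or flatten
    probs = np.exp(logits - logits.max())          # stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

pred = [0.1, 0.2, 0.7]
assert sample_index(pred, temperature=0) == 2   # greedy pick
idx = sample_index(pred, 1.0, np.random.default_rng(0))
assert 0 <= idx <= 2
```

Swapping `ypred = index2word[np.argmax(pred)]` for `ypred = index2word[sample_index(pred, 0.8)]` would make each generation run different; the temperature value here is a free choice, not something the notebook prescribes.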
# Python for scientific computing
> Marcos Duarte, Renato Naville Watanabe
> [Laboratory of Biomechanics and Motor Control](http://pesquisa.ufabc.edu.br/bmclab)
> Federal University of ABC, Brazil
<p style="text-align: right;">A <a href="https://jupyter.org/">Jupyter Notebook</a></p>
The [Python programming language](https://www.python.org/) with [its ecosystem for scientific programming](https://scipy.org/) has features, maturity, and a community of developers and users that make it the ideal environment for the scientific community.
This talk will show some of these features and usage examples.
<h1>Contents<span class="tocSkip"></span></h1>
## Computing as a third kind of Science
Traditionally, science has been divided into experimental and theoretical disciplines, but nowadays computing plays an important role in science. Scientific computation is sometimes related to theory, and at other times to experimental work. Hence, it is often seen as a new third branch of science.
<figure><img src="https://raw.githubusercontent.com/jrjohansson/scientific-python-lectures/master/images/theory-experiment-computation.png" width=300 alt="theory-experiment-computation"/></figure>
Figure from [J.R. Johansson](http://nbviewer.jupyter.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-0-Scientific-Computing-with-Python.ipynb).
## The lifecycle of a scientific idea
```
from IPython.display import Image
Image(filename='../images/lifecycle_FPerez.png', width=600) # image from Fernando Perez
```
## About Python [[Python documentation](http://www.python.org/doc/essays/blurb/)]
*Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs* [[python.org](http://python.org/)].
- *Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level built in data structures, combined with dynamic typing and dynamic binding, make it very attractive for Rapid Application Development, as well as for use as a scripting or glue language to connect existing components together*.
- *Python's simple, easy to learn syntax emphasizes readability and therefore reduces the cost of program maintenance. Python supports modules and packages, which encourages program modularity and code reuse*.
- Python is free and open source.
## About Python [[Python documentation](http://www.python.org/doc/essays/blurb/)]
- *Often, programmers fall in love with Python because of the increased productivity it provides. Since there is no compilation step, the edit-test-debug cycle is incredibly fast. Debugging Python programs is easy: a bug or bad input will never cause a segmentation fault. Instead, when the interpreter discovers an error, it raises an exception. When the program doesn't catch the exception, the interpreter prints a stack trace.*
- *A source level debugger allows inspection of local and global variables, evaluation of arbitrary expressions, setting breakpoints, stepping through the code a line at a time, and so on. The debugger is written in Python itself, testifying to Python's introspective power. On the other hand, often the quickest way to debug a program is to add a few print statements to the source: the fast edit-test-debug cycle makes this simple approach very effective.*
## Glossary for the Python technical characteristics I
- Programming language: a formal language designed to communicate instructions to a computer. A sequence of instructions that specifies how to perform a computation is called a program.
- Interpreted language: a program in an interpreted language is executed or interpreted by an interpreter program. This interpreter executes the program source code, statement by statement.
- Compiled language: a program in a compiled language is first explicitly translated by the user into a lower-level machine language executable (with a compiler) and then this program can be executed.
- Python interpreter: an interpreter is the computer program that executes the program. The most widely used implementation of the Python programming language, referred to as CPython or simply Python, is written in C (another programming language, which is lower-level and compiled).
- High-level: a high-level programming language has a strong abstraction from the details of the computer and the language is independent of a particular type of computer. A high-level programming language is closer to human languages than to the programming language running inside the computer that communicates instructions to its hardware, the machine language. The machine language is a low-level programming language, in fact, the lowest one.
- Object-oriented programming: a programming paradigm that represents concepts as "objects" that have data fields (attributes that describe the object) and associated procedures known as methods.
- Semantics and syntax: the term semantics refers to the meaning of a language, as opposed to its form, the syntax.
- Static and dynamic semantics: static and dynamic refer to the point in time at which some programming element is resolved. Static indicates that resolution takes place at the time a program is written. Dynamic indicates that resolution takes place at the time a program is executed.
- Static and dynamic typing and binding: in dynamic typing, the type of the variable (e.g., if it is an integer or a string or a different type of element) is not explicitly declared, it can change, and in general is not known until execution time. In static typing, the type of the variable must be declared and it is known before the execution time.
- Rapid Application Development: a software development methodology that uses minimal planning in favor of rapid prototyping.
- Scripting: the writing of scripts, small pieces of simple instructions (programs) that can be rapidly executed.
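The dynamic-typing entry above can be illustrated with a short, hypothetical snippet: the same name is rebound to values of different types, and the type is only known at run time.

```
# Dynamic typing: no type declaration; the type is resolved at execution time
x = 42
print(type(x))        # <class 'int'>
x = "now a string"    # the same name is rebound to a different type
print(type(x))        # <class 'str'>
```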
## Glossary for the Python technical characteristics II
- Glue language: a programming language for writing programs to connect software components (including programs written in other programming languages).
- Modules and packages: a module is a file containing Python definitions (e.g., functions) and statements. Packages are a way of structuring Python’s module namespace by using “dotted module names”. For example, the module name A.B designates a submodule named B in a package named A. To be used, modules and packages have to be imported in Python with the `import` statement. Namespace is a container for a set of identifiers (names), and allows the disambiguation of homonym identifiers residing in different namespaces. For example, with the command `import math`, we will have all the functions and statements defined in this module in the namespace '`math.`', for example, `math.pi` is the $\pi$ constant and `math.cos()`, the cosine function.
- Program modularity and code reuse: the degree that programs can be compartmentalized (divided in smaller programs) to facilitate program reuse.
- Source or binary form: source refers to the original code of the program (typically in a text format) which would need to be compiled to a binary form (not anymore human readable) to be able to be executed.
- Major platforms: typically refers to the main operating systems (OS) in the market: Windows (by Microsoft), Mac OSX (by Apple), and Linux distributions (such as Debian, Ubuntu, Mint, etc.). Mac OSX and Linux distros are derived from, or heavily inspired by, another operating system called Unix.
- Edit-test-debug cycle: the typical cycle in the life of a programmer; write (edit) the code, run (test) it, and correct errors or improve it (debug). The read–eval–print loop (REPL) is another related term.
- Segmentation fault: an error in a program that is generated by the hardware which notifies the operating system about a memory access violation.
- Exception: an error in a program detected during execution is called an exception and the Python interpreter raises a message about this error (an exception is not necessarily fatal, i.e., does not necessarily terminate or break the program).
- Stack trace: information related to what caused the exception describing the line of the program where it occurred with a possible history of related events.
- Source level debugger: Python has a module (named pdb) for interactive source code debugging.
- Local and global variables: refers to the scope of the variables. A local variable is defined inside a function and typically can be accessed (it exists) only inside that function unless declared as global.
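As a small sketch of the namespace idea described above, using the standard `math` module cited in the glossary entry:

```
import math

# After `import math`, the names defined in the module live in the
# `math.` namespace, avoiding clashes with identically named variables.
print(math.pi)        # the pi constant, 3.141592653589793
print(math.cos(0.0))  # the cosine function; cos(0) = 1.0
```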
## About Python
Python is also the name of the software with the most-widely used implementation of the language (maintained by the [Python Software Foundation](http://www.python.org/psf/)).
This implementation is written mostly in the *C* programming language and it is nicknamed CPython.
So, the following phrase is correct: download Python *(the software)* to program in Python *(the language)* because Python *(both)* is great!
## Python
The name of the Python language does not, in fact, come from the big snake: the author of the language, Guido van Rossum, named it after Monty Python, a famous British comedy group from the 70's.
By coincidence, the Monty Python group was also interested in human movement science:
```
from IPython.display import YouTubeVideo
YouTubeVideo('iV2ViNJFZC8', width=480, height=360, rel=0)
```
## Why Python and not 'X' (put any other language here)
Python is not the best programming language for all needs and for all people. There is no such language.
Now, if you are doing scientific computing, chances are that Python is perfect for you (and it might also be perfect for lots of other needs) because:
- Python is free, open source, and cross-platform.
- Python is easy to learn, with readable code, well documented, and with a huge and friendly user community.
- Python is a real programming language, able to handle a variety of problems, easy to scale from small to huge problems, and easy to integrate with other systems (including other programming languages).
- Python code is not the fastest, but Python is one of the fastest languages to program in. It is not uncommon in science to care more about the time we spend programming than the time the program takes to run. But if code speed is important, one can easily integrate, in different ways, code written in other languages (such as C and Fortran) with Python.
- The Jupyter Notebook is a versatile tool for programming, data visualization, plotting, simulation, numeric and symbolic mathematics, and writing for daily use.
## Popularity of Python for teaching
```
from IPython.display import IFrame
IFrame('http://cacm.acm.org/blogs/blog-cacm/176450-python-is-now-the-most-popular-' +
'introductory-teaching-language-at-top-us-universities/fulltext',
width='100%', height=450)
```
## Python ecosystem for scientific computing (main libraries)
- [Python](https://www.python.org/) of course (the CPython distribution): a free, open source and cross-platform programming language that lets you work more quickly and integrate your systems more effectively.
- [Numpy](http://numpy.scipy.org): fundamental package for scientific computing with a N-dimensional array package.
- [Scipy](http://scipy.org/scipylib/index.html): numerical routines for scientific computing.
- [Matplotlib](http://matplotlib.org): comprehensive 2D Plotting.
- [Sympy](http://sympy.org): symbolic mathematics.
- [Pandas](http://pandas.pydata.org/): data structures and data analysis tools.
- [IPython](http://ipython.org): provides a rich architecture for interactive computing with powerful interactive shell, kernel for Jupyter, support for interactive data visualization and use of GUI toolkits, flexible embeddable interpreters, and high performance tools for parallel computing.
- [Jupyter Notebook](https://jupyter.org/): web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text.
- [Statsmodels](http://statsmodels.sourceforge.net/): to explore data, estimate statistical models, and perform statistical tests.
- [Scikit-learn](http://scikit-learn.org/stable/): tools for data mining and data analysis (including machine learning).
- [Pillow](http://python-pillow.github.io/): Python Imaging Library.
- [Spyder](https://code.google.com/p/spyderlib/): interactive development environment with advanced editing, interactive testing, debugging and introspection features.
## The Jupyter Notebook
The Jupyter Notebook App is a server-client application that allows editing and running notebook documents via a web browser. The Jupyter Notebook App can be executed on a local desktop requiring no Internet access (as described in this document) or installed on a remote server and accessed through the Internet.
Notebook documents (or “notebooks”, all lower case) are documents produced by the Jupyter Notebook App which contain both computer code (e.g. python) and rich text elements (paragraph, equations, figures, links, etc...). Notebook documents are both human-readable documents containing the analysis description and the results (figures, tables, etc..) as well as executable documents which can be run to perform data analysis.
[Try Jupyter Notebook in your browser](https://try.jupyter.org/).
## Jupyter Notebook and IPython kernel architectures
<figure><img src="./../images/jupyternotebook.png" width=800 alt="Jupyter Notebook and IPython kernel architectures"/></figure>
## Installing the Python ecosystem
**The easy way**
The easiest way to get Python and the most popular packages for scientific programming is to install them with a Python distribution such as [Anaconda](https://www.continuum.io/anaconda-overview).
In fact, you don't even need to install Python in your computer, you can run Python for scientific programming in the cloud using [python.org](https://www.python.org/shell/), [pythonanywhere](https://www.pythonanywhere.com/), or [repl.it](https://repl.it/languages/python3).
**The hard way**
You can download Python and all individual packages you need and install them one by one. In general, it's not that difficult, but it can become challenging and painful for certain big packages heavily dependent on math, image visualization, and your operating system (i.e., Microsoft Windows).
## Anaconda
Go to the [*Anaconda* website](https://www.anaconda.com/download/) and download the appropriate version for your computer (download Anaconda3, for Python 3.x). The file is big (about 500 MB). [From their website](https://docs.anaconda.com/anaconda/install/):
**Linux Install**
In your terminal window type and follow the instructions:
```
bash Anaconda3-4.4.0-Linux-x86_64.sh
```
**OS X Install**
For the graphical installer, double-click the downloaded .pkg file and follow the instructions
For the command-line installer, in your terminal window type and follow the instructions:
```
bash Anaconda3-4.4.0-MacOSX-x86_64.sh
```
**Windows**
Double-click the .exe file to install Anaconda and follow the instructions on the screen
## Miniconda
A variation of *Anaconda* is [*Miniconda*](http://conda.pydata.org/miniconda.html) (Miniconda3 for Python 3.x), which contains only the *Conda* package manager and Python.
Once *Miniconda* is installed, you can use the `conda` command to install any other packages and create environments, etc.
## My current installation
```
from pyversions import versions
versions();
```
## IDE for Python
You might want an Integrated Development Environment (IDE) for programming in Python.
See [Top 5 Python IDEs For Data Science](https://www.datacamp.com/community/tutorials/data-science-python-ide#gs.mN_Wu0M) for possible IDEs.
Soon there will be a new IDE for scientific computing with Python: [JupyterLab](https://github.com/jupyterlab/jupyterlab), developed by the Jupyter team. See [this video about JupyterLab](https://channel9.msdn.com/Events/PyData/Seattle2017/BRK11).
## To learn about Python
There is a lot of good material in the Internet about Python for scientific computing, some of them are:
- [How To Think Like A Computer Scientist](http://openbookproject.net/thinkcs/python/english3e/) or [the interactive edition](https://runestone.academy/runestone/static/thinkcspy/index.html) (book)
- [Python Scientific Lecture Notes](http://scipy-lectures.github.io/) (lecture notes)
- [A Whirlwind Tour of Python](https://github.com/jakevdp/WhirlwindTourOfPython) (tutorial/book)
- [Python Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/) (tutorial/book)
- [Lectures on scientific computing with Python](https://github.com/jrjohansson/scientific-python-lectures#lectures-on-scientific-computing-with-python) (lecture notes)
## More examples of Jupyter Notebooks
Let's run stuff from:
- [https://github.com/BMClab/BMC](https://github.com/BMClab/BMC)
- [A gallery of interesting Jupyter Notebooks](https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks)
## Questions?
- https://www.reddit.com/r/learnpython/
- https://stackoverflow.com/questions/tagged/python
- https://www.reddit.com/r/Python/
- https://python-forum.io/
```
Image(data='http://imgs.xkcd.com/comics/python.png')
import this
```
# Mask R-CNN - Train on Shapes Dataset
This notebook shows how to train Mask R-CNN on your own dataset. To keep things simple we use a synthetic dataset of shapes (squares, triangles, and circles) which enables fast training. You'd still need a GPU, though, because the network backbone is a Resnet101, which would be too slow to train on a CPU. On a GPU, you can start to get okay-ish results in a few minutes, and good results in less than an hour.
The code of the *Shapes* dataset is included below. It generates images on the fly, so it doesn't require downloading any data. And it can generate images of any size, so we pick a small image size to train faster.
```
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.path.abspath("../../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn.config import Config
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
```
## Configurations
```
class ShapesConfig(Config):
"""Configuration for training on the toy shapes dataset.
Derives from the base Config class and overrides values specific
to the toy shapes dataset.
"""
# Give the configuration a recognizable name
NAME = "shapes"
# Train on 1 GPU and 8 images per GPU. We can put multiple images on each
# GPU because the images are small. Batch size is 8 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 8
# Number of classes (including background)
NUM_CLASSES = 1 + 3 # background + 3 shapes
# Use small images for faster training. Set the limits of the small side,
# the large side, and that determines the image shape.
IMAGE_MIN_DIM = 128
IMAGE_MAX_DIM = 128
# Use smaller anchors because our image and objects are small
RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels
# Reduce training ROIs per image because the images are small and have
# few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
TRAIN_ROIS_PER_IMAGE = 32
# Use a small epoch since the data is simple
STEPS_PER_EPOCH = 100
# use small validation steps since the epoch is small
VALIDATION_STEPS = 5
config = ShapesConfig()
config.display()
```
## Notebook Preferences
```
def get_ax(rows=1, cols=1, size=8):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Change the default size attribute to control the size
of rendered images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
```
## Dataset
Create a synthetic dataset
Extend the Dataset class and add a method to load the shapes dataset, `load_shapes()`, and override the following methods:
* load_image()
* load_mask()
* image_reference()
```
class ShapesDataset(utils.Dataset):
"""Generates the shapes synthetic dataset. The dataset consists of simple
shapes (triangles, squares, circles) placed randomly on a blank surface.
The images are generated on the fly. No file access required.
"""
def load_shapes(self, count, height, width):
"""Generate the requested number of synthetic images.
count: number of images to generate.
height, width: the size of the generated images.
"""
# Add classes
self.add_class("shapes", 1, "square")
self.add_class("shapes", 2, "circle")
self.add_class("shapes", 3, "triangle")
# Add images
# Generate random specifications of images (i.e. color and
# list of shapes sizes and locations). This is more compact than
# actual images. Images are generated on the fly in load_image().
for i in range(count):
bg_color, shapes = self.random_image(height, width)
self.add_image("shapes", image_id=i, path=None,
width=width, height=height,
bg_color=bg_color, shapes=shapes)
def load_image(self, image_id):
"""Generate an image from the specs of the given image ID.
Typically this function loads the image from a file, but
in this case it generates the image on the fly from the
specs in image_info.
"""
info = self.image_info[image_id]
bg_color = np.array(info['bg_color']).reshape([1, 1, 3])
image = np.ones([info['height'], info['width'], 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for shape, color, dims in info['shapes']:
image = self.draw_shape(image, shape, dims, color)
return image
def image_reference(self, image_id):
"""Return the shapes data of the image."""
info = self.image_info[image_id]
if info["source"] == "shapes":
return info["shapes"]
else:
super(self.__class__, self).image_reference(image_id)
def load_mask(self, image_id):
"""Generate instance masks for shapes of the given image ID.
"""
info = self.image_info[image_id]
shapes = info['shapes']
count = len(shapes)
mask = np.zeros([info['height'], info['width'], count], dtype=np.uint8)
for i, (shape, _, dims) in enumerate(info['shapes']):
mask[:, :, i:i+1] = self.draw_shape(mask[:, :, i:i+1].copy(),
shape, dims, 1)
# Handle occlusions
occlusion = np.logical_not(mask[:, :, -1]).astype(np.uint8)
for i in range(count-2, -1, -1):
mask[:, :, i] = mask[:, :, i] * occlusion
occlusion = np.logical_and(occlusion, np.logical_not(mask[:, :, i]))
# Map class names to class IDs.
class_ids = np.array([self.class_names.index(s[0]) for s in shapes])
return mask.astype(bool), class_ids.astype(np.int32)
def draw_shape(self, image, shape, dims, color):
"""Draws a shape from the given specs."""
# Get the center x, y and the size s
x, y, s = dims
if shape == 'square':
cv2.rectangle(image, (x-s, y-s), (x+s, y+s), color, -1)
elif shape == "circle":
cv2.circle(image, (x, y), s, color, -1)
elif shape == "triangle":
points = np.array([[(x, y-s),
(x-s/math.sin(math.radians(60)), y+s),
(x+s/math.sin(math.radians(60)), y+s),
]], dtype=np.int32)
cv2.fillPoly(image, points, color)
return image
def random_shape(self, height, width):
"""Generates specifications of a random shape that lies within
the given height and width boundaries.
Returns a tuple of three values:
* The shape name (square, circle, ...)
* Shape color: a tuple of 3 values, RGB.
* Shape dimensions: A tuple of values that define the shape size
and location. Differs per shape type.
"""
# Shape
shape = random.choice(["square", "circle", "triangle"])
# Color
color = tuple([random.randint(0, 255) for _ in range(3)])
# Center x, y
buffer = 20
y = random.randint(buffer, height - buffer - 1)
x = random.randint(buffer, width - buffer - 1)
# Size
s = random.randint(buffer, height//4)
return shape, color, (x, y, s)
def random_image(self, height, width):
"""Creates random specifications of an image with multiple shapes.
Returns the background color of the image and a list of shape
specifications that can be used to draw the image.
"""
# Pick random background color
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
# Generate a few random shapes and record their
# bounding boxes
shapes = []
boxes = []
N = random.randint(1, 4)
for _ in range(N):
shape, color, dims = self.random_shape(height, width)
shapes.append((shape, color, dims))
x, y, s = dims
boxes.append([y-s, x-s, y+s, x+s])
# Apply non-max suppression with a 0.3 threshold to avoid
# shapes covering each other
keep_ixs = utils.non_max_suppression(np.array(boxes), np.arange(N), 0.3)
shapes = [s for i, s in enumerate(shapes) if i in keep_ixs]
return bg_color, shapes
# Training dataset
dataset_train = ShapesDataset()
dataset_train.load_shapes(500, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1])
dataset_train.prepare()
# Validation dataset
dataset_val = ShapesDataset()
dataset_val.load_shapes(50, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1])
dataset_val.prepare()
# Load and display random samples
image_ids = np.random.choice(dataset_train.image_ids, 4)
for image_id in image_ids:
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
```
## Create Model
```
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=MODEL_DIR)
# Which weights to start with?
init_with = "coco" # imagenet, coco, or last
if init_with == "imagenet":
model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
# Load weights trained on MS COCO, but skip layers that
# are different due to the different number of classes
# See README for instructions to download the COCO weights
model.load_weights(COCO_MODEL_PATH, by_name=True,
exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
# Load the last model you trained and continue training
model.load_weights(model.find_last(), by_name=True)
```
## Training
Train in two stages:
1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.
2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all"` to train all layers.
```
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=1,
layers='heads')
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE / 10,
epochs=2,
layers="all")
# Save weights
# Typically not needed because callbacks save after every epoch
# Uncomment to save manually
# model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes.h5")
# model.keras_model.save_weights(model_path)
```
## Detection
```
class InferenceConfig(ShapesConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = 1
inference_config = InferenceConfig()
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
config=inference_config,
model_dir=MODEL_DIR)
# Get path to saved weights
# Either set a specific path or find last trained weights
# model_path = os.path.join(ROOT_DIR, ".h5 file name here")
model_path = model.find_last()
# Load trained weights
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
# Test on a random image
image_id = random.choice(dataset_val.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
dataset_train.class_names, figsize=(8, 8))
results = model.detect([original_image], verbose=1)
r = results[0]
visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'],
dataset_val.class_names, r['scores'], ax=get_ax())
```
## Evaluation
```
# Compute VOC-Style mAP @ IoU=0.5
# Running on 10 images. Increase for better accuracy.
image_ids = np.random.choice(dataset_val.image_ids, 10)
APs = []
for image_id in image_ids:
# Load image and ground truth data
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)
# Run object detection
results = model.detect([image], verbose=0)
r = results[0]
# Compute AP
AP, precisions, recalls, overlaps =\
utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
r["rois"], r["class_ids"], r["scores"], r['masks'])
APs.append(AP)
print("mAP: ", np.mean(APs))
```
# SciPy - Library of scientific algorithms for Python
Original by J.R. Johansson (robert@riken.jp) http://dml.riken.jp/~rob/
Modified by Clayton Miller (miller.clayton@arch.ethz.ch)
The other notebooks in this lecture series are indexed at [http://jrjohansson.github.com](http://jrjohansson.github.com).
# Introduction
The SciPy framework builds on top of the low-level NumPy framework for multidimensional arrays, and provides a large number of higher-level scientific algorithms. Some of the topics that SciPy covers are:
* Special functions ([scipy.special](http://docs.scipy.org/doc/scipy/reference/special.html))
* Integration ([scipy.integrate](http://docs.scipy.org/doc/scipy/reference/integrate.html))
* Optimization ([scipy.optimize](http://docs.scipy.org/doc/scipy/reference/optimize.html))
* Interpolation ([scipy.interpolate](http://docs.scipy.org/doc/scipy/reference/interpolate.html))
* Fourier Transforms ([scipy.fftpack](http://docs.scipy.org/doc/scipy/reference/fftpack.html))
* Signal Processing ([scipy.signal](http://docs.scipy.org/doc/scipy/reference/signal.html))
* Linear Algebra ([scipy.linalg](http://docs.scipy.org/doc/scipy/reference/linalg.html))
* Sparse Eigenvalue Problems ([scipy.sparse](http://docs.scipy.org/doc/scipy/reference/sparse.html))
* Statistics ([scipy.stats](http://docs.scipy.org/doc/scipy/reference/stats.html))
* Multi-dimensional image processing ([scipy.ndimage](http://docs.scipy.org/doc/scipy/reference/ndimage.html))
* File IO ([scipy.io](http://docs.scipy.org/doc/scipy/reference/io.html))
Each of these submodules provides a number of functions and classes that can be used to solve problems in their respective topics.
In this lecture we will look at how to use some of these subpackages.
To access the SciPy package in a Python program, we start by importing everything from the `scipy` module.
```
%pylab inline
from IPython.display import Image
from scipy import *
```
If we only need to use part of the SciPy framework, we can selectively include only those modules we are interested in. For example, to include the linear algebra package under the name `la`, we can do:
```
import scipy.linalg as la
```
## Ordinary differential equations (ODEs)
SciPy provides two different ways to solve ODEs: an API based on the function `odeint`, and an object-oriented API based on the class `ode`. Usually `odeint` is easier to get started with, but the `ode` class offers a finer level of control.
Here we will use the `odeint` functions. For more information about the class `ode`, try `help(ode)`. It does pretty much the same thing as `odeint`, but in an object-oriented fashion.
To use `odeint`, first import it from the `scipy.integrate` module:
```
from scipy.integrate import odeint, ode
```
A system of ODEs is usually formulated in standard form before it is attacked numerically. The standard form is:
$y' = f(y, t)$
where
$y = [y_1(t), y_2(t), ..., y_n(t)]$
and $f$ is some function that gives the derivatives of the functions $y_i(t)$. To solve an ODE we need to know the function $f$ and an initial condition, $y(0)$.
Note that higher-order ODEs can always be written in this form by introducing new variables for the intermediate derivatives.
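As a quick illustration of this reduction (a minimal sketch, not part of the original lecture): the second-order equation $y'' = -y$ with $y(0)=1$, $y'(0)=0$ has exact solution $y(t) = \cos t$, and introducing $y_2 = y'$ puts it in the standard first-order form expected by `odeint`:

```python
import numpy as np
from scipy.integrate import odeint

# y'' = -y rewritten as a first-order system with y1 = y and y2 = y',
# so that dy1/dt = y2 and dy2/dt = -y1
def f(y, t):
    return [y[1], -y[0]]

t = np.linspace(0, 2 * np.pi, 200)
y_t = odeint(f, [1.0, 0.0], t)   # y(0) = 1, y'(0) = 0  =>  y(t) = cos(t)
print(abs(y_t[-1, 0] - 1.0))     # small numerical integration error
```

At $t = 2\pi$ the numerical solution returns to 1 up to integration error, confirming the reduction.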
Once we have defined the Python function `f` and array `y_0` (that is $f$ and $y(0)$ in the mathematical formulation), we can use the `odeint` function as:
y_t = odeint(f, y_0, t)
where `t` is an array of time coordinates at which to solve the ODE problem. `y_t` is an array with one row for each point in time in `t`, where each column corresponds to a solution `y_i(t)` at that point in time.
We will see how we can implement `f` and `y_0` in Python code in the examples below.
## Example: double pendulum
Let's consider a physical example: The double compound pendulum, described in some detail here: http://en.wikipedia.org/wiki/Double_pendulum
```
Image(url='http://upload.wikimedia.org/wikipedia/commons/c/c9/Double-compound-pendulum-dimensioned.svg')
```
The equations of motion of the pendulum are given on the wiki page:
${\dot \theta_1} = \frac{6}{m\ell^2} \frac{ 2 p_{\theta_1} - 3 \cos(\theta_1-\theta_2) p_{\theta_2}}{16 - 9 \cos^2(\theta_1-\theta_2)}$
${\dot \theta_2} = \frac{6}{m\ell^2} \frac{ 8 p_{\theta_2} - 3 \cos(\theta_1-\theta_2) p_{\theta_1}}{16 - 9 \cos^2(\theta_1-\theta_2)}.$
${\dot p_{\theta_1}} = -\frac{1}{2} m \ell^2 \left [ {\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + 3 \frac{g}{\ell} \sin \theta_1 \right ]$
${\dot p_{\theta_2}} = -\frac{1}{2} m \ell^2 \left [ -{\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + \frac{g}{\ell} \sin \theta_2 \right]$
To make the Python code simpler to follow, let's introduce new variable names and the vector notation: $x = [\theta_1, \theta_2, p_{\theta_1}, p_{\theta_2}]$
${\dot x_1} = \frac{6}{m\ell^2} \frac{ 2 x_3 - 3 \cos(x_1-x_2) x_4}{16 - 9 \cos^2(x_1-x_2)}$
${\dot x_2} = \frac{6}{m\ell^2} \frac{ 8 x_4 - 3 \cos(x_1-x_2) x_3}{16 - 9 \cos^2(x_1-x_2)}$
${\dot x_3} = -\frac{1}{2} m \ell^2 \left [ {\dot x_1} {\dot x_2} \sin (x_1-x_2) + 3 \frac{g}{\ell} \sin x_1 \right ]$
${\dot x_4} = -\frac{1}{2} m \ell^2 \left [ -{\dot x_1} {\dot x_2} \sin (x_1-x_2) + \frac{g}{\ell} \sin x_2 \right]$
```
g = 9.82
L = 0.5
m = 0.1
def dx(x, t):
"""
The right-hand side of the pendulum ODE
"""
x1, x2, x3, x4 = x[0], x[1], x[2], x[3]
dx1 = 6.0/(m*L**2) * (2 * x3 - 3 * cos(x1-x2) * x4)/(16 - 9 * cos(x1-x2)**2)
dx2 = 6.0/(m*L**2) * (8 * x4 - 3 * cos(x1-x2) * x3)/(16 - 9 * cos(x1-x2)**2)
dx3 = -0.5 * m * L**2 * ( dx1 * dx2 * sin(x1-x2) + 3 * (g/L) * sin(x1))
dx4 = -0.5 * m * L**2 * (-dx1 * dx2 * sin(x1-x2) + (g/L) * sin(x2))
return [dx1, dx2, dx3, dx4]
# choose an initial state
x0 = [pi/4, pi/2, 0, 0]
# time coordinates to solve the ODE for: from 0 to 10 seconds
t = linspace(0, 10, 250)
# solve the ODE problem
x = odeint(dx, x0, t)
x
```
## Simple animation of the pendulum motion.
```
# plot the angles as a function of time
fig, axes = subplots(1,2, figsize=(12,4))
axes[0].plot(t, x[:, 0], 'r', label="theta1")
axes[0].plot(t, x[:, 1], 'b', label="theta2")
x1 = + L * sin(x[:, 0])
y1 = - L * cos(x[:, 0])
x2 = x1 + L * sin(x[:, 1])
y2 = y1 - L * cos(x[:, 1])
axes[1].plot(x1, y1, 'r', label="pendulum1")
axes[1].plot(x2, y2, 'b', label="pendulum2")
axes[1].set_ylim([-1, 0])
axes[1].set_xlim([1, -1]);
```
## Example: Damped harmonic oscillator
ODE problems are important in computational physics, so we will look at one more example: the damped harmonic oscillator. This problem is well described on the wiki page: http://en.wikipedia.org/wiki/Damping
The equation of motion for the damped oscillator is:
$\displaystyle \frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 2\zeta\omega_0\frac{\mathrm{d}x}{\mathrm{d}t} + \omega^2_0 x = 0$
where $x$ is the position of the oscillator, $\omega_0$ is the undamped frequency, and $\zeta$ is the damping ratio. To write this second-order ODE in standard form we introduce $p = \frac{\mathrm{d}x}{\mathrm{d}t}$:
$\displaystyle \frac{\mathrm{d}p}{\mathrm{d}t} = - 2\zeta\omega_0 p - \omega^2_0 x$
$\displaystyle \frac{\mathrm{d}x}{\mathrm{d}t} = p$
In the implementation of this example we will add extra arguments to the RHS function for the ODE, rather than using global variables as we did in the previous example. As a consequence of the extra arguments to the RHS, we need to pass a keyword argument `args` to the `odeint` function:
```
def dy(y, t, zeta, w0):
"""
The right-hand side of the damped oscillator ODE
"""
x, p = y[0], y[1]
dx = p
dp = -2 * zeta * w0 * p - w0**2 * x
return [dx, dp]
# initial state:
y0 = [1.0, 0.0]
# time coordinates to solve the ODE for
t = linspace(0, 10, 1000)
w0 = 2*pi*1.0
# solve the ODE problem for three different values of the damping ratio
y1 = odeint(dy, y0, t, args=(0.0, w0)) # undamped
y2 = odeint(dy, y0, t, args=(0.2, w0)) # under damped
y3 = odeint(dy, y0, t, args=(1.0, w0)) # critical damping
y4 = odeint(dy, y0, t, args=(5.0, w0)) # over damped
fig, ax = subplots()
ax.plot(t, y1[:,0], 'k', label="undamped", linewidth=0.25)
ax.plot(t, y2[:,0], 'r', label="under damped")
ax.plot(t, y3[:,0], 'b', label=r"critical damping")
ax.plot(t, y4[:,0], 'g', label="over damped")
ax.legend();
```
## Fourier transform
Fourier transforms are one of the universal tools in computational physics, which appear over and over again in different contexts. SciPy provides functions for accessing the classic [FFTPACK](http://www.netlib.org/fftpack/) library from NetLib, which is an efficient and well tested FFT library written in FORTRAN. The SciPy API has a few additional convenience functions, but overall the API is closely related to the original FORTRAN library.
To use the `fftpack` module in a python program, include it using:
```
from scipy.fftpack import *
```
To demonstrate how to do a fast Fourier transform with SciPy, let's look at the FFT of the solution to the damped oscillator from the previous section:
```
N = len(t)
dt = t[1]-t[0]
# calculate the fast fourier transform
# y2 is the solution to the under-damped oscillator from the previous section
F = fft(y2[:,0])
# calculate the frequencies for the components in F
w = fftfreq(N, dt)
fig, ax = subplots(figsize=(9,3))
ax.plot(w, abs(F));
```
Since the signal is real, the spectrum is symmetric. We therefore only need to plot the part that corresponds to the positive frequencies. To extract that part of `w` and `F` we can use some of the indexing tricks for NumPy arrays that we saw in Lecture 2:
```
indices = where(w > 0) # select only indices for elements that corresponds to positive frequencies
w_pos = w[indices]
F_pos = F[indices]
fig, ax = subplots(figsize=(9,3))
ax.plot(w_pos, abs(F_pos))
ax.set_xlim(0, 5);
```
As expected, we now see a peak in the spectrum that is centered around 1, which is the frequency we used in the damped oscillator example.
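To double-check the bookkeeping of `fft` and `fftfreq`, here is a small synthetic sanity check (an illustration, not part of the original example): a pure 1 Hz sine sampled for 10 s should produce a spectral peak at exactly 1 Hz, since the 10 s window gives 0.1 Hz frequency bins.

```python
import numpy as np
from scipy.fftpack import fft, fftfreq

# 1 Hz sine sampled at 100 Hz for 10 s; endpoint=False keeps the grid uniform
t = np.linspace(0, 10, 1000, endpoint=False)
y = np.sin(2 * np.pi * 1.0 * t)
F = fft(y)
w = fftfreq(len(t), t[1] - t[0])
# the largest spectral magnitude sits at the signal frequency
peak_freq = abs(w[np.argmax(abs(F))])
print(peak_freq)  # -> 1.0
```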
## Optimization
Optimization (finding minima or maxima of a function) is a large field in mathematics, and optimization of complicated functions or in many variables can be rather involved. Here we will only look at a few very simple cases. For a more detailed introduction to optimization with SciPy see: http://scipy-lectures.github.com/advanced/mathematical_optimization/index.html
To use the optimization module in scipy first include the `optimize` module:
```
from scipy import optimize
```
### Finding a minimum
Let's first look at how to find the minimum of a simple function of a single variable:
```
def f(x):
return 4*x**3 + (x-2)**2 + x**4
fig, ax = subplots()
x = linspace(-5, 3, 100)
ax.plot(x, f(x));
```
We can use the `fmin_bfgs` function to find the minimum of a function:
```
x_min = optimize.fmin_bfgs(f, -2)
x_min
optimize.fmin_bfgs(f, 0.5)
```
We can also use the `brent` or `fminbound` functions. They have slightly different syntax and use different algorithms.
```
optimize.brent(f)
optimize.fminbound(f, -4, 2)
```
## Interpolation
Interpolation is simple and convenient in scipy: the `interp1d` function, when given arrays describing X and Y data, returns an object that behaves like a function: it can be called for an arbitrary value of x (in the range covered by X) and returns the corresponding interpolated y value:
```
from scipy.interpolate import *
def f(x):
return sin(x)
n = arange(0, 10)
x = linspace(0, 9, 100)
y_meas = f(n) + 0.1 * randn(len(n)) # simulate measurement with noise
y_real = f(x)
linear_interpolation = interp1d(n, y_meas)
y_interp1 = linear_interpolation(x)
cubic_interpolation = interp1d(n, y_meas, kind='cubic')
y_interp2 = cubic_interpolation(x)
fig, ax = subplots(figsize=(10,4))
ax.plot(n, y_meas, 'bs', label='noisy data')
ax.plot(x, y_real, 'k', lw=2, label='true function')
ax.plot(x, y_interp1, 'r', label='linear interp')
ax.plot(x, y_interp2, 'g', label='cubic interp')
ax.legend(loc=3);
```
## Statistics
The `scipy.stats` module contains a large number of statistical distributions, statistical functions and tests. For a complete documentation of its features, see http://docs.scipy.org/doc/scipy/reference/stats.html.
There is also a very powerful python package for statistical modelling called statsmodels. See http://statsmodels.sourceforge.net for more details.
```
from scipy import stats
# create a (discrete) random variable with a Poisson distribution
X = stats.poisson(3.5) # photon distribution for a coherent state with n=3.5 photons
n = arange(0,15)
fig, axes = subplots(3,1, sharex=True)
# plot the probability mass function (PMF)
axes[0].step(n, X.pmf(n))
# plot the cumulative distribution function (CDF)
axes[1].step(n, X.cdf(n))
# plot histogram of 1000 random realizations of the stochastic variable X
axes[2].hist(X.rvs(size=1000));
# create a (continuous) random variable with normal distribution
Y = stats.norm()
x = linspace(-5,5,100)
fig, axes = subplots(3,1, sharex=True)
# plot the probability distribution function (PDF)
axes[0].plot(x, Y.pdf(x))
# plot the cumulative distribution function (CDF)
axes[1].plot(x, Y.cdf(x));
# plot histogram of 1000 random realizations of the stochastic variable Y
axes[2].hist(Y.rvs(size=1000), bins=50);
```
Statistics:
```
X.mean(), X.std(), X.var() # Poisson distribution
Y.mean(), Y.std(), Y.var() # normal distribution
```
### Statistical tests
Test if two sets of (independent) random data comes from the same distribution:
```
t_statistic, p_value = stats.ttest_ind(X.rvs(size=1000), X.rvs(size=1000))
print("t-statistic =", t_statistic)
print("p-value =", p_value)
```
Since the p-value is very large, we cannot reject the null hypothesis that the two sets of random data have the *same* mean.
To test if the mean of a single sample of data has mean 0.1 (the true mean is 0.0):
```
stats.ttest_1samp(Y.rvs(size=1000), 0.1)
```
A low p-value means that we can reject the hypothesis that the mean of Y is 0.1.
```
Y.mean()
stats.ttest_1samp(Y.rvs(size=1000), Y.mean())
```
## Further reading
* http://www.scipy.org - The official web page for the SciPy project.
* http://docs.scipy.org/doc/scipy/reference/tutorial/index.html - A tutorial on how to get started using SciPy.
* https://github.com/scipy/scipy/ - The SciPy source code.
[Original Notebook Downloaded From Kaggle](https://www.kaggle.com/bariskavus/diabetes-prediction-randomforestclassifier)
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
```
<a id='begin'></a>
# <h1 style="background-color:skyblue; font-family:newtimeroman; font-size:350%; text-align:center; border-radius: 15px 50px;">Pima Indian</h1>
## This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.
<center><img
src="https://www.legendsofamerica.com/wp-content/uploads/2018/12/PimaIndiansCarloGentile1870.jpg" style="width:50%;height:50%;">
</center>
<br>
* **Pregnancies: Number of times pregnant**
* **Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test**
* **BloodPressure: Diastolic blood pressure (mm Hg)**
* **SkinThickness: Triceps skin fold thickness (mm)**
* **Insulin: 2-Hour serum insulin (mu U/ml)**
* **BMI: Body mass index (weight in kg/(height in m)^2)**
* **DiabetesPedigreeFunction: Diabetes pedigree function**
* **Age: Age (years)**
* **Outcome: Class variable (0 or 1) 268 of 768 are 1, the others are 0**
<a id='begin'></a>
# <h1 style="background-color:skyblue; font-family:newtimeroman; font-size:350%; text-align:center; border-radius: 15px 50px;">Load Data 📚</h1>
```
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
import missingno as msno
from sklearn import preprocessing
from sklearn.neighbors import LocalOutlierFactor
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import *
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings("ignore")
pd.set_option("display.float_format",lambda x: "%.5f" % x)
pd.set_option("display.max_rows",None)
pd.set_option("display.max_columns",None)
df = pd.read_csv("/kaggle/input/pima-indians-diabetes-database/diabetes.csv")
df.head()
```
<a id='begin'></a>
# <h1 style="background-color:skyblue; font-family:newtimeroman; font-size:350%; text-align:center; border-radius: 15px 50px;">Check Data 🔎</h1>
```
def check_df(dataframe):
print("##################### Shape #####################")
print(dataframe.shape)
print("##################### Types #####################")
print(dataframe.dtypes)
print("##################### Head #####################")
print(dataframe.head(3))
print("##################### Tail #####################")
print(dataframe.tail(3))
print("##################### NA #####################")
print(dataframe.isnull().sum())
print("##################### Quantiles #####################")
print(dataframe.quantile([0, 0.05, 0.50, 0.95, 0.99, 1]).T)
check_df(df)
```
### Variables such as Glucose, BloodPressure, SkinThickness, Insulin, and BMI cannot physically be zero, so zero values in the data set are actually missing values and should be replaced with NaN.
```
# We convert values with zero in variables to NaN values.
cols = ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]
for col in cols:
df[col].replace(0, np.NaN, inplace=True)
msno.bar(df);
msno.heatmap(df);
```
<a id='begin'></a>
# <h1 style="background-color:skyblue; font-family:newtimeroman; font-size:350%; text-align:center; border-radius: 15px 50px;">Data Preprocessing 🛠️</h1>
```
# We can fill in the NaN values with a median relative to the target.
for col in df.columns:
df.loc[(df["Outcome"] == 0) & (df[col].isnull()), col] = df[df["Outcome"] == 0][col].median()
df.loc[(df["Outcome"] == 1) & (df[col].isnull()), col] = df[df["Outcome"] == 1][col].median()
# Outliers visualization
for col in df.columns:
if col != "Outcome":
sns.catplot(x="Outcome", y=col, data=df)
df.hist(figsize = (15,7));
# Outliers
def outlier_thresholds(dataframe, col_name, th1=0.05, th3=0.95):
quartile1 = dataframe[col_name].quantile(th1)
quartile3 = dataframe[col_name].quantile(th3)
interquantile_range = quartile3 - quartile1
up_limit = quartile3 + 1.5 * interquantile_range
low_limit = quartile1 - 1.5 * interquantile_range
return low_limit, up_limit
def check_outlier(dataframe, col_name):
low_limit, up_limit = outlier_thresholds(dataframe, col_name)
if dataframe[(dataframe[col_name] > up_limit) | (dataframe[col_name] < low_limit)].any(axis=None):
return True
else:
return False
def replace_with_thresholds(dataframe, col_name, th1=0.05, th3=0.95):
low_limit, up_limit = outlier_thresholds(dataframe, col_name, th1, th3)
if low_limit > 0:
dataframe.loc[(dataframe[col_name] < low_limit), col_name] = low_limit
dataframe.loc[(dataframe[col_name] > up_limit), col_name] = up_limit
else:
dataframe.loc[(dataframe[col_name] > up_limit), col_name] = up_limit
# Numerical columns
num_cols = [col for col in df.columns if df[col].dtypes in [int, float]
and df[col].nunique() > 10]
# Check Outliers
for col in df.columns:
print(check_outlier(df, col))
# Replace Outliers
for col in df.columns:
replace_with_thresholds(df, col)
# Check Outliers
for col in df.columns:
print(check_outlier(df, col))
```
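The target-conditional median imputation used earlier can also be written with `groupby`/`transform`. A minimal sketch on a tiny hypothetical frame (not the Pima data):

```python
import numpy as np
import pandas as pd

# Fill NaNs in a column with the median of the rows sharing the same Outcome,
# equivalent to the two .loc assignments in the imputation loop above
toy = pd.DataFrame({"Glucose": [100.0, np.nan, 120.0, np.nan],
                    "Outcome": [0, 0, 1, 1]})
toy["Glucose"] = toy["Glucose"].fillna(
    toy.groupby("Outcome")["Glucose"].transform("median"))
print(toy["Glucose"].tolist())  # -> [100.0, 100.0, 120.0, 120.0]
```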
<a id='begin'></a>
# <h1 style="background-color:skyblue; font-family:newtimeroman; font-size:350%; text-align:center; border-radius: 15px 50px;"> Feature Engineering ⚙️</h1>
```
def label_encoder(dataframe, binary_col):
labelencoder = preprocessing.LabelEncoder()
dataframe[binary_col] = labelencoder.fit_transform(dataframe[binary_col])
return dataframe
def one_hot_encoder(dataframe, categorical_cols, drop_first=False):
dataframe = pd.get_dummies(dataframe, columns=categorical_cols, drop_first=drop_first)
return dataframe
def rare_analyser(dataframe, target, rare_perc):
rare_columns = [col for col in dataframe.columns if dataframe[col].dtypes == 'O'
and (dataframe[col].value_counts() / len(dataframe) < rare_perc).any(axis=None)]
for col in rare_columns:
print(col, ":", len(dataframe[col].value_counts()))
print(pd.DataFrame({"COUNT": dataframe[col].value_counts(),
"RATIO": dataframe[col].value_counts() / len (dataframe),
"TARGET_MEAN": dataframe.groupby(col)[target].mean()}), end="\n\n\n")
def rare_encoder(dataframe, rare_perc):
temp_df = dataframe.copy()
rare_columns = [col for col in temp_df.columns if temp_df[col].dtypes == 'O'
and (temp_df[col].value_counts() / len(temp_df) < rare_perc).any(axis=None)]
for var in rare_columns:
tmp = temp_df[var].value_counts() / len(temp_df)
rare_labels = tmp[tmp < rare_perc].index
temp_df[var] = np.where(temp_df[var].isin(rare_labels), 'Rare', temp_df[var])
return temp_df
# New categorical BMI
df['NEW_BMI_CAT'] = pd.cut(x=df['BMI'], bins=[0, 18.4, 25.0, 30.0, 70.0],
labels=['weakness', 'normal', 'slightly_fat', 'obese']).astype('O')
# New categorical Glucose
df['NEW_GLUCOSE_CAT'] = pd.cut(x=df['Glucose'], bins=[0, 139, 200],
labels=['Normal', 'Prediabetes']).astype('O')
# New categorical BloodPressure
df['NEW_BLOOD_CAT'] = pd.cut(x=df['BloodPressure'], bins=[0, 79, 90, 123],
labels=['Normal', 'Hypertension_S1', 'Hypertension_S2']).astype('O')
# New categorical SkinThickness
df['NEW_SKINTHICKNESS_CAT'] = df['SkinThickness'].apply(lambda x: 1 if x <= 18.0 else 0)
# New categorical Insulin
df['NEW_INSULIN_CAT'] = df['Insulin'].apply(lambda x: 'Normal' if 16.0 <= x <=166 else 'Abnormal')
df.head()
# Label Encoding
label_cols = [col for col in df.columns if df[col].dtypes == 'O' and df[col].nunique() <= 2]
for col in label_cols:
label_encoder(df, col)
# One_hot Encoding
ohe_cols = [col for col in df.columns if 10 >= len(df[col].unique()) > 2]
df = one_hot_encoder(df, ohe_cols, drop_first=True)
df.columns = [col.upper() for col in df.columns]
df.head()
```
<a id='begin'></a>
# <h1 style="background-color:skyblue; font-family:newtimeroman; font-size:350%; text-align:center; border-radius: 15px 50px;"> Modeling 🧩</h1>
```
y = df[['OUTCOME']]
X = df.drop('OUTCOME', axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)
rf = RandomForestClassifier().fit(X_train, y_train)
y_pred = rf.predict(X_test)
acc_random_forest = round(rf.score(X_test, y_test) * 100, 2)
acc_random_forest
```
<center><img
src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTO00xMsiob_1AgrAfctXJ50--hHtXxBLg3uWJ1Guc4NGm9Y-61QnmuOagYXA2h0XaFkC0&usqp=CAU" style="width:50%;height:50%;">
</center>
# **Cross-validation to prevent overfitting.**
```
rf_params_ = {'max_depth': [3, 6, 10, None],
'max_features': [3, 5, 15],
'n_estimators': [100, 500, 700],
'min_samples_split': [2, 5, 8],
'min_samples_leaf': [1, 3, 5]}
rf_model = RandomForestClassifier(random_state=42)
rf_cv_model = RandomizedSearchCV(rf_model, rf_params_, cv=5, n_jobs=-1, verbose=1).fit(X_train, y_train)
rf_cv_model = RandomForestClassifier(**rf_cv_model.best_params_).fit(X_train, y_train)
y_pred = rf_cv_model.predict(X_test)
acc_random_forest = round(rf_cv_model.score(X_test, y_test) * 100, 2)
acc_random_forest
```
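When computing accuracy, the model should always be scored against the held-out true labels (`y_test`), never against its own predictions, since comparing predictions with themselves trivially yields 100%. A sketch of that pattern on synthetic data (hypothetical, not the diabetes dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic two-class task: label depends on the sum of the first two features
rng = np.random.RandomState(0)
X = rng.rand(300, 4)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
clf = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)
# score predictions against the TRUE held-out labels
acc = accuracy_score(y_te, clf.predict(X_te))
print(round(100 * acc, 2))
```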
# Portfolio Optimization
This notebook can be run online without installing any packages. Just click the logo:
[](https://mybinder.org/v2/gh/mcdeoliveira/pyoptimum-examples/master?filepath=examples%2Fportfolio.ipynb)
to run it on [binder](https://mybinder.org).
See this [case study](http://vicbee.net/portfolio.html) for more details.
## Import needed libraries
```
import math
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
## Initialize optimize.vicbee.net client
```
username = 'demo@optimize.vicbee.net'
password = 'optimize'
import pyoptimum # for easy access to the optimize.vicbee.net api
client = pyoptimum.Client(username=username, password=password)
```
# Classic Markowitz Portfolio Optimization
In the classic Markowitz portfolio optimization problem (Markowitz, 1952), one seeks to determine
* $x$: a vector of portfolio positions,
that minimize the total portfolio variance given:
* $r$: the vector of expected returns,
* $Q$: the covariance matrix of the returns,
* $\mu$: the desired portfolio return.
The problem can be formulated mathematically as the Quadratic Program (QP):
$$
\begin{aligned}
\min_{x} \quad & x^T Q x \\
\text{s.t.} \quad & r^T x = \mu, \quad \sum_i x_i = 1
\end{aligned}
$$
In this notebook we will show how one can setup and solve such portfolio optimization problems using the [Optimize API](https://optmize.vicbee.net/api/ui).
Since we are using Python, we will take advantage of [numpy](https://numpy.org/) and [pandas](https://pandas.pydata.org/) to manipulate the data in this notebook. We will also use [matplotlib](https://matplotlib.org/) to plot the data. The use of such libraries or even Python is not a requirement for using the API, which can be accessed using <a href="https://en.wikipedia.org/wiki/Ajax_(programming)">AJAX calls</a>.
Data for the problem considered in this demo is taken from Elton and Gruber, 1995.
### References:
1. Harry M. Markowitz, “Portfolio Selection”. Journal of Finance, 7(1): 77 - 91, 1952.
2. Edwin J. Elton and Martin J. Gruber, Modern Portfolio Theory and Investment Analysis. Fifth Edition, John Wiley and Sons, 1995.
## Setup data using pandas
Consider the following set of assets with associated expected return and return standard deviation, given as percentages and assembled as a pandas DataFrame:
```
data = [
['S & P',14.0,18.5],
['Bonds',6.5,5.0],
['Canadian',11.0,16.0],
['Japan',14.0,23.0],
['Emerging Markets',16.0,30.0],
['Pacific',18.0,26.0],
['Europe',12.0,20.0],
['Small Stocks',17.0,24.0]
]
assets = pd.DataFrame(data, columns=['Label', 'Return (%)', 'STD (%)'])
assets
```
The assets are associated with the following correlation coefficients:
```
n = len(assets)
data = np.array([
[1,.45,.7,.2,.64,.3,.61,.79],
[.45,1,.27,-.01,.41,.01,.13,.28],
[.7,.27,1,.14,.51,.29,.48,.59],
[.2,-.01,.14,1,.25,.73,.56,.13],
[.64,.41,.51,.25,1,.28,.61,.75],
[.3,.01,.29,.73,.28,1,.54,.16],
[.61,.13,.48,.56,.61,.54,1,.44],
[.79,.28,.59,.13,.75,.16,.44,1]
])
cov = pd.DataFrame(data=data, columns=assets.Label.values, index=assets.Label.values)
cov
```
## Solve problem using the Optimize API
Before solving the problem we need to assemble the required data, that is the vector of expected returns $r$, which can be taken directly from the data:
```
# vector of expected returns
r = assets['Return (%)'].values
```
the covariance matrix $Q$, which can be calculated from the given data as follows:
```
# covariance matrix
sigmas = np.diag(assets['STD (%)'].values)
Q = sigmas @ cov.values @ sigmas
```
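As a sanity check on this construction (sketched here on a hypothetical 3×3 slice, not the full data): the diagonal of $Q$ must recover the squared standard deviations, and $Q$ must be positive semidefinite:

```python
import numpy as np

stds = np.array([18.5, 5.0, 16.0])     # hypothetical subset of the STD column
corr = np.array([[1.00, 0.45, 0.70],
                 [0.45, 1.00, 0.27],
                 [0.70, 0.27, 1.00]])
D = np.diag(stds)
Q3 = D @ corr @ D                      # covariance from stds and correlations
# diagonal recovers the variances; eigenvalues are nonnegative
print(np.allclose(np.diag(Q3), stds**2),
      bool(np.all(np.linalg.eigvalsh(Q3) >= -1e-9)))
```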
In the context of the classic Markowitz portfolio optimization we should also specify a desired expected return
```
# expected return
mu = 14
```
and then select a portfolio corresponding to this return with the smallest possible variance.
The actual calculation of the optimal portfolio will be done on the cloud.
## Submit problem to the api
We assemble all the data in an object and make an api call to get the optimal portfolio:
```
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu
}
sol = client.call('portfolio', data)
```
Once an optimal solution is calculated we append it to the dataframe.
```
if sol['status']=='infeasible':
print('> Problem is infeasible')
assets['Optimal (%)'] = np.nan
else:
print('> Problem is feasible')
print(' * mu = {:.3f}%, std = {:.3f}% ({})'.format(mu, math.sqrt(sol['obj']), sol['status']))
pd.options.display.float_format = '{:,.1f}'.format
assets['Optimal (%)'] = 100*np.array(sol['x'])
assets
```
By default, the solution vector adds to one, each number being interpreted as the fraction of the available budget that should be invested in order to realize the optimal portfolio. We use numpy to convert that to a percentage before displaying the solution.
## Solution without short sales
The minimum variance portfolio that meets a required level of expected return is often one in which the positions can be negative. Negative positions are realized as *short sales*.
For example, in the problem solved above
```
x = assets['Optimal (%)'];
print('{:.1f}%'.format(-x[x < 0].sum()))
```
of the portfolio is comprised of short sales.
One can add constraints to the optimization problem in order to prevent short sales.
This can be done by adding the following option:
```
# options
options = {
'short': False
}
```
and calling the API once again
```
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu,
'options': options
}
sol = client.call('portfolio', data)
```
The new solution
```
if sol['status']=='infeasible':
print('> Problem is infeasible')
assets['No Shorts (%)'] = np.nan
else:
print('> Problem is feasible')
print(' * mu = {:.3f}%, std = {:.3f}% ({})'.format(mu, math.sqrt(sol['obj']), sol['status']))
assets['No Shorts (%)'] = 100*np.array(sol['x'])
assets
```
is one without short sales:
```
x = assets['No Shorts (%)']
print('{:.1f}%'.format(-x[x < 0].sum()))
```
in which every position is an asset purchase:
```
print('{:.1f}%'.format(x.sum()))
```
Note that the new portfolio is *riskier* than the optimal one with short sales.
## Accounting for cashflow
Now suppose that one has an initial portfolio that they would like to rebalance, or buy or sell more assets.
In this case, one starts with initial positions, for example:
```
assets['Initial (%)'] = [20, 30, 0, 20, 0, 10, 0, 20]
assets
```
An optimal portfolio will not change depending on whether one holds assets or not, so the answers obtained so far should not be affected by the existing holdings.
However, imagine that one has cash to invest and would like to know what is the best solution for investing the additional money without selling any of their assets.
One can take that scenario into account by adding an initial portfolio:
```
# initial portfolio
x0 = assets['Initial (%)'].values/100
```
and the desired *cashflow*, say equivalent to 40% of the existing portfolio,
```
# cashflow
cashflow = 40/100
```
along with the other data:
```
# expected return
mu = 14
# options
options = {
'sell': False
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu,
'x0': x0.tolist(),
'cashflow': cashflow,
'options': options
}
sol = client.call('portfolio', data)
```
The optimal solution now is in the form of additional *buys* that need to be realized within the specified cashflow to achieve the desired return:
```
if sol['status']=='infeasible':
print('> Problem is infeasible')
assets['Buys (%)'] = np.nan
assets['After Cashflow (%)'] = np.nan
else:
print('> Problem is feasible')
print(' * mu = {:.3f}%, std = {:.3f}% ({})'.format(mu, math.sqrt(sol['obj']), sol['status']))
x = np.array(sol['x'])
assets['Buys (%)'] = 100*(x - x0)/cashflow
assets['After Cashflow (%)'] = 100*x/x.sum()
assets
```
## Infeasible portfolio problems
As one adds more constraints, it is often the case that certain problems become infeasible.
In the above cashflow problem, for instance, a solution does not exist if one lowers the amount available for investment. Lowering the available cashflow to 10% or even 20%, as in:
```
# initial portfolio
x0 = assets['Initial (%)'].values/100
# cashflow
cashflow = 20/100
# expected return
mu = 14
# options
options = {
'sell': False
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu,
'x0': x0.tolist(),
'cashflow': cashflow,
'options': options
}
sol = client.call('portfolio', data)
```
results in an *infeasible* problem:
```
if sol['status']=='infeasible':
print('> Problem is infeasible')
assets['Buys (%)'] = np.nan
assets['After Cashflow (%)'] = np.nan
else:
print('> Problem is feasible')
print(' * mu = {:.3f}%, std = {:.3f}% ({})'.format(mu, math.sqrt(sol['obj']), sol['status']))
x = np.array(sol['x'])
assets['Buys (%)'] = 100*(x - x0)/cashflow
assets['After Cashflow (%)'] = 100*x/x.sum()
assets
```
Of course the reason why the problem has no solution is because the return of the initial portfolio:
```
print('{:.1f}%'.format(r @ x0))
```
is much lower than the return asked of the optimization, which was of 14%.
In this particular case, one should expect to obtain solutions with smaller cashflows if one asks for more modest returns.
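A rough local feasibility check supports this. Under the assumption that the post-cashflow return is the normalized portfolio return (as suggested by the `After Cashflow` column above), the best a buys-only strategy can do is put the entire cashflow into the highest-return asset:

```python
import numpy as np

r = np.array([14.0, 6.5, 11.0, 14.0, 16.0, 18.0, 12.0, 17.0])  # Return (%)
x0 = np.array([20, 30, 0, 20, 0, 10, 0, 20]) / 100             # Initial (%)
for cashflow in (0.10, 0.20, 0.40):
    # upper bound on the normalized return achievable without selling
    mu_max = (r @ x0 + cashflow * r.max()) / (1 + cashflow)
    print('cashflow {:.0%}: best return {:.2f}%'.format(cashflow, mu_max))
```

With the data above, this bound stays below 14% at 10% and 20% cashflow but exceeds 14% at 40%, consistent with which API calls came back feasible.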
For example, a modest improvement in return from 12.75% to 13% is available with a 10% or 20% cashflow:
```
# initial portfolio
x0 = assets['Initial (%)'].values/100
# cashflow
cashflow = 20/100
# lower return
mu = 13
# options
options = {
'sell': False
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu,
'x0': x0.tolist(),
'cashflow': cashflow,
'options': options
}
sol = client.call('portfolio', data)
```
The obtained optimal portfolio:
```
if sol['status']=='infeasible':
print('> Problem is infeasible')
assets['Buys (%)'] = np.nan
assets['After Cashflow (%)'] = np.nan
else:
print('> Problem is feasible')
print(' * mu = {:.3f}%, std = {:.3f}% ({})'.format(mu, math.sqrt(sol['obj']), sol['status']))
x = np.array(sol['x'])
assets['Buys (%)'] = 100*(x - x0)/cashflow
assets['After Cashflow (%)'] = 100*x/x.sum()
assets
```
is a step toward improving the returns with the available cashflow.
We will revisit this problem later in this notebook after showing how to calculate efficient frontiers.
## General allocation constraints
It is also possible to specify arbitrary constraints on sets of assets.
For example, consider that one desires an optimal portfolio with no cashflow, no shorts, but at least 50% of their assets in the combined S & P and bond markets. Such a problem can be formulated after defining a set and a corresponding constraint:
```
constraint1 = {'label': 's&p_bonds', 'set': [0, 1], 'bounds': [.5, np.inf]}
```
Note how sets are defined using numeric indices (starting from 0), and how the constraint bounds establish the lower and upper bound on the sum of the assets in the set.
The resulting constrained problem can be solved as before:
```
# Expected return
mu = 14
# options
options = {
'short': False
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu,
'constraints': [constraint1],
'options': options
}
sol = client.call('portfolio', data)
```
The obtained optimal portfolio:
```
if sol['status']=='infeasible':
print('> Problem is infeasible')
assets['S&P + Bonds >= 50 (%)'] = np.nan
else:
print('> Problem is feasible')
print(' * mu = {:.3f}%, std = {:.3f}% ({})'.format(mu, math.sqrt(sol['obj']), sol['status']))
x = np.array(sol['x'])
assets['S&P + Bonds >= 50 (%)'] = 100*x
assets
```
satisfies the constraint that
```
print('{:.1f}%'.format(assets['S&P + Bonds >= 50 (%)'][0:2].sum()))
```
is not less than 50%.
One might find it easier to formulate constraints based on the assets' labels. This is possible if a `labels` property is added to the problem formulation:
```
# Expected return
mu = 14
# options
options = {
'short': False
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu,
'labels': assets['Label'].tolist(),
'constraints': [{'label': 's&p_bonds', 'set': ['S & P', 'Bonds'], 'bounds': [.5, np.inf]}],
'options': options
}
sol = client.call('portfolio', data)
```
Note that no data is stored in the servers, and that labels can be easily anonymized if desired.
The above formulation produces the same optimal portfolio:
```
if sol['status']=='infeasible':
print('> Problem is infeasible')
assets['S&P + Bonds >= 50 (%)'] = np.nan
else:
print('> Problem is feasible')
print(' * mu = {:.3f}%, std = {:.3f}% ({})'.format(mu, math.sqrt(sol['obj']), sol['status']))
x = np.array(sol['x'])
assets['S&P + Bonds >= 50 (%)'] = 100*x
assets
```
satisfying the constraint
```
print('{:.1f}%'.format(assets['S&P + Bonds >= 50 (%)'][0:2].sum()))
```
## Leverage constraints
Another type of constraint that can be easily imposed is leverage constraints.
We have already determined the optimal portfolio with and without short sales. The optimal portfolio has leverage of around 27%:
```
print('{:.1f}%'.format(assets['Optimal (%)'][assets['Optimal (%)'] <= 0].sum()))
```
while the optimal portfolio without short sales has, of course, 0% leverage.
How about an optimal intermediate portfolio with, say, 10% leverage? It can be determined by adding a *leverage constraint*:
```
constraint = {'label': '10% leverage', 'function': 'leverage', 'set': ':', 'bounds': .1}
```
Note the use of the function `leverage` and the set `:`, which is a shortcut representing all assets. The associated optimal portfolio is calculated below.
```
# Expected return
mu = 14
# options
options = {
'short': True
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu,
'labels': assets['Label'].tolist(),
'constraints': [constraint],
'options': options
}
sol = client.call('portfolio', data)
if sol['status']=='infeasible':
print('> Problem is infeasible')
assets['Leverage <= 10 (%)'] = np.nan
else:
print('> Problem is feasible')
print(' * mu = {:.3f}%, std = {:.3f}% ({})'.format(mu, math.sqrt(sol['obj']), sol['status']))
x = np.array(sol['x'])
assets['Leverage <= 10 (%)'] = 100*x
assets
```
which satisfies the desired leverage requirement:
```
print('{:.1f}%'.format(assets['Leverage <= 10 (%)'][assets['Leverage <= 10 (%)'] <= 0].sum()))
```
## Group leverage constraints
Leverage constraints can also be imposed in groups of assets.
The next example calculates an optimal portfolio with at most 10% leverage in the Pacific and Europe regions.
```
# constraint
constraint = {'label': '10% group leverage', 'function': 'leverage', 'set': ['Pacific', 'Europe'], 'bounds': .1}
# Expected return
mu = 14
# options
options = {
'short': True
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu,
'labels': assets['Label'].tolist(),
'constraints': [constraint],
'options': options
}
sol = client.call('portfolio', data)
if sol['status']=='infeasible':
print('> Problem is infeasible')
    assets['Group Leverage (%)'] = np.nan
else:
print('> Problem is feasible')
print(' * mu = {:.3f}%, std = {:.3f}% ({})'.format(mu, math.sqrt(sol['obj']), sol['status']))
x = np.array(sol['x'])
assets['Group Leverage (%)'] = 100*x
assets
```
Note that leverage is calculated within the group's positions. In this case the group leverage is:
```
group = assets['Group Leverage (%)'][5:7]
print('{:.1f}%'.format(100*group[group <= 0].sum()/group.sum()))
```
as desired.
## Sales constraints
Similarly to leverage constraints, it is possible to enforce constraints on the total amount of sales in a portfolio.
Such problems are relevant in the context of the cashflow problem in the presence of nonzero initial positions. With zero initial positions, such sales constraints are equivalent to the leverage constraints introduced above.
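As a quick local illustration of this equivalence (plain NumPy, not the API; the definitions below, leverage as total short exposure and sales as the total of negative trades, are assumptions consistent with the checks used elsewhere in this notebook):

```python
import numpy as np

def leverage(x):
    # leverage: total short exposure of the final positions
    return -x[x < 0].sum()

def sales(x, x0):
    # sales: total amount sold, i.e. the negative part of the trades x - x0
    d = x - x0
    return -d[d < 0].sum()

x = np.array([0.5, 0.7, -0.2])  # hypothetical final positions
x0 = np.zeros(3)                # zero initial positions
print(leverage(x), sales(x, x0))  # both equal 0.2
```

With a nonzero `x0` the two quantities differ, which is what makes a separate sales constraint useful.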
For example, let us revisit the above cashflow problem for which we already know that a return of 14% is not possible with an injection of 20% of capital without selling assets. Next, we attempt to solve this same problem but this time imposing no short sales and a maximum of 25% in sales:
```
# constraint
constraint = {'label': '25% sales constraint', 'function': 'sales', 'set': ':', 'bounds': .25}
# initial portfolio
x0 = assets['Initial (%)'].values/100
# cashflow
cashflow = 20/100
# expected return
mu = 14
# options
options = {
'sell': True,
'short': False
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu,
'x0': x0.tolist(),
'cashflow': cashflow,
'constraints': [constraint],
'options': options
}
sol = client.call('portfolio', data)
```
The optimal solution is:
```
if sol['status']=='infeasible':
print('> Problem is infeasible')
assets['Trades (%)'] = np.nan
assets['After Trades (%)'] = np.nan
else:
print('> Problem is feasible')
print(' * mu = {:.3f}%, std = {:.3f}% ({})'.format(mu, math.sqrt(sol['obj']), sol['status']))
x = np.array(sol['x'])
assets['Trades (%)'] = 100*(x - x0)/cashflow
assets['After Trades (%)'] = 100*x/x.sum()
assets
```
which satisfies both the no short sale constraints as well as the 25% cap on sales:
```
trades = assets['Trades (%)']
print('{:.1f}%'.format(100*trades[trades <= 0].sum()/trades.sum()))
```
# The efficient frontier
The efficient frontier is the set of minimal-variance portfolios obtained by varying the expected return. A graphical representation of the frontier can be plotted in terms of the expected return versus risk (variance) for the given asset data. The Optimize API has a method for obtaining efficient frontiers by successively calculating optimal portfolios for varying levels of return. By default, the smallest return is the one attained by the minimum-variance portfolio, and the largest return is that of the best-performing individual asset.
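For intuition, each point of the unconstrained frontier solves an equality-constrained quadratic program with a closed-form solution. The sketch below (plain NumPy with made-up two-asset data, not the Optimize API) traces it by solving the KKT system for a grid of target returns:

```python
import numpy as np

# Toy inputs (assumed for illustration only): expected returns (%) and covariance
r = np.array([10.0, 14.0])
Q = np.array([[9.0, 3.0],
              [3.0, 25.0]])

def min_var_portfolio(Q, r, mu):
    """Minimize x'Qx subject to sum(x) = 1 and r'x = mu via the KKT linear system."""
    n = len(r)
    A_eq = np.vstack((np.ones(n), r))          # 2 x n equality constraints
    kkt = np.block([[2 * Q, A_eq.T],
                    [A_eq, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), [1.0, mu]])
    return np.linalg.solve(kkt, rhs)[:n]       # drop the Lagrange multipliers

# Trace the frontier over a grid of target returns
mus = np.linspace(r.min(), r.max(), 20)
variances = [min_var_portfolio(Q, r, mu) @ Q @ min_var_portfolio(Q, r, mu)
             for mu in mus]
```

The API generalizes this to inequality and set constraints, where no closed form exists and a numerical solver is required.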
Calling the frontier API method is very similar to performing a single portfolio optimization. The following call:
```
# vector of expected returns
r = assets['Return (%)'].values
# covariance matrix
sigmas = np.diag(assets['STD (%)'].values)
Q = sigmas @ cov.values @ sigmas
# options
options = {
'number_of_points': 20
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'options': options
}
sol = client.call('frontier', data)
```
will produce 20 points on the efficient frontier of the classic Markowitz portfolio optimization problem, that is, one without any additional constraints on the portfolio or the positions:
```
frontier = sol['frontier']
m = len(frontier)
mu0 = np.zeros((m,))
var0 = np.zeros((m,))
pos0 = np.zeros((m, n))
for i, e in enumerate(frontier):
point = e['sol']
mu0[i] = e['mu']
var0[i] = point['obj']
pos0[i, :] = point['x']
if point['obj'] is not None:
print(' {}. mu = {:.3f}%, std = {:.3f}% ({})'.format(i+1, mu0[i], math.sqrt(var0[i]), point['status']))
else:
print(' {}. mu = {:.3f}%, std = --- ({})'.format(i+1, mu0[i], point['status']))
```
These points can be combined to produce a plot of the frontier:
```
rmax = np.max(r)
rmin = np.min((np.min(r), np.min(mu0)))
dr = rmax - rmin
colors = np.array(plt.rcParams['axes.prop_cycle'].by_key()['color'])
# plot frontier
xlim = ((rmin-0.1*dr), (rmax+0.1*dr))
for i, label in enumerate(assets['Label'].values):
plt.plot(np.sqrt(Q[i,i]), r[i], 'x', color=colors[np.mod(i, len(colors))])
plt.plot(np.sqrt(var0), mu0, 'k')
plt.xlabel('std (%)')
plt.ylabel('return (%)')
plt.title('Efficient frontier')
plt.ylim(xlim)
plt.legend(assets['Label'].values)
plt.grid()
plt.show()
```
## The frontier without short sales
The Optimize API lets one add any constraint to the calculation of the points on the frontier.
For example:
```
# vector of expected returns
r = assets['Return (%)'].values
# covariance matrix
sigmas = np.diag(assets['STD (%)'].values)
Q = sigmas @ cov.values @ sigmas
# options
options = {
'number_of_points': 20,
'short': False
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'options': options
}
sol = client.call('frontier', data)
```
will produce 20 points on the efficient frontier obtained for portfolios for which short sales are not permitted:
```
frontier = sol['frontier']
m = len(frontier)
mu1 = np.zeros((m,))
var1 = np.zeros((m,))
pos1 = np.zeros((m, n))
for i, e in enumerate(frontier):
point = e['sol']
mu1[i] = e['mu']
var1[i] = point['obj']
pos1[i, :] = point['x']
if point['obj'] is not None:
print(' {}. mu = {:.3f}%, std = {:.3f}% ({})'.format(i+1, mu1[i], math.sqrt(var1[i]), point['status']))
else:
print(' {}. mu = {:.3f}%, std = --- ({})'.format(i+1, mu1[i], point['status']))
```
Again these points can be combined to produce a plot of the frontier:
```
rmax = np.max(r)
rmin = np.min((np.min(r), np.min(mu0)))
dr = rmax - rmin
colors = np.array(plt.rcParams['axes.prop_cycle'].by_key()['color'])
# plot frontier
ylim = ((rmin-0.1*dr), (rmax+0.1*dr))
for i, label in enumerate(assets['Label'].values):
plt.plot(np.sqrt(Q[i,i]), r[i], 'x', color=colors[np.mod(i, len(colors))])
plt.plot(np.sqrt(var0), mu0, 'k--')
plt.plot(np.sqrt(var1), mu1, 'k')
plt.xlabel('std (%)')
plt.ylabel('return (%)')
plt.title('Efficient frontier')
plt.ylim(ylim)
plt.legend(assets['Label'].values)
plt.grid()
plt.show()
```
For comparison, this plot also includes the standard frontier with short sales (dashed).
## The frontier with general constraints
All options available for calculation of optimal portfolios also apply to the frontier.
For example, the requirement of no short sales plus at least 50% on S & P and Bonds can be applied to the frontier as follows:
```
# options
options = {
'number_of_points': 20,
'short': False
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'mu': mu,
'labels': assets['Label'].tolist(),
'constraints': [{'label': 's&p_bonds', 'set': ['S & P', 'Bonds'], 'bounds': [.5, np.inf]}],
'options': options
}
sol = client.call('frontier', data)
frontier = sol['frontier']
m = len(frontier)
mu2 = np.zeros((m,))
var2 = np.zeros((m,))
pos2 = np.zeros((m, n))
for i, e in enumerate(frontier):
point = e['sol']
mu2[i] = e['mu']
var2[i] = point['obj']
pos2[i, :] = point['x']
if point['obj'] is not None:
print(' {}. mu = {:.3f}%, std = {:.3f}% ({})'.format(i+1, mu2[i], math.sqrt(var2[i]), point['status']))
else:
print(' {}. mu = {:.3f}%, std = --- ({})'.format(i+1, mu2[i], point['status']))
```
Note how the additional constraints lead to infeasible portfolios for higher levels of return.
This frontier's graphical representation can be produced as follows:
```
rmax = np.max(r)
rmin = np.min((np.min(r), np.min(mu0)))
dr = rmax - rmin
colors = np.array(plt.rcParams['axes.prop_cycle'].by_key()['color'])
# plot frontier
ylim = ((rmin-0.1*dr), (rmax+0.1*dr))
for i, label in enumerate(assets['Label'].values):
plt.plot(np.sqrt(Q[i,i]), r[i], 'x', color=colors[np.mod(i, len(colors))])
plt.plot(np.sqrt(var0), mu0, 'k--')
plt.plot(np.sqrt(var1), mu1, 'k:')
plt.plot(np.sqrt(var2), mu2, 'k')
plt.xlabel('std (%)')
plt.ylabel('return (%)')
plt.title('Efficient frontier')
plt.ylim(ylim)
plt.legend(assets['Label'].values)
plt.grid()
plt.show()
```
## A strongly constrained frontier
Consider now the cashflow problem we discussed above in which one would like to invest additional funds to improve their returns without selling any assets. The corresponding frontier can be obtained as follows:
```
# initial portfolio
x0 = assets['Initial (%)'].values/100
# cashflow
cashflow = 20/100
# options
options = {
'number_of_points': 20,
'sell': False
}
# prepare data for submitting request to api
data = {
'Q': Q.tolist(),
'r': r.tolist(),
'x0': x0.tolist(),
'cashflow': cashflow,
'options': options
}
sol = client.call('frontier', data)
```
Note that in this case many returns lead to infeasible portfolios, and the efficient frontier covers a much smaller range of returns:
```
frontier = sol['frontier']
m = len(frontier)
mu3 = np.zeros((m,))
var3 = np.zeros((m,))
pos3 = np.zeros((m, n))
for i, e in enumerate(frontier):
point = e['sol']
mu3[i] = e['mu']
var3[i] = point['obj']
pos3[i, :] = point['x']
if point['obj'] is not None:
print(' {}. mu = {:.3f}%, std = {:.3f}% ({})'.format(i+1, mu3[i], math.sqrt(var3[i]), point['status']))
else:
print(' {}. mu = {:.3f}%, std = --- ({})'.format(i+1, mu3[i], point['status']))
```
as visualized in the following plot:
```
rmax = np.max(r)
rmin = np.min((np.min(r), np.min(mu0)))
dr = rmax - rmin
colors = np.array(plt.rcParams['axes.prop_cycle'].by_key()['color'])
# plot frontier
ylim = ((rmin-0.1*dr), (rmax+0.1*dr))
for i, label in enumerate(assets['Label'].values):
plt.plot(np.sqrt(Q[i,i]), r[i], 'x', color=colors[np.mod(i, len(colors))])
plt.plot(np.sqrt(var0), mu0, 'k--')
plt.plot(np.sqrt(var1), mu1, 'k:')
plt.plot(np.sqrt(var2), mu2, 'k-.')
plt.plot(np.sqrt(var3), mu3, 'k')
plt.xlabel('std (%)')
plt.ylabel('return (%)')
plt.title('Efficient frontier')
plt.ylim(ylim)
plt.legend(assets['Label'].values)
plt.grid()
plt.show()
```
Zooming in:
```
# plot frontier
ylim = (11.5, 13.5)
xlim = (9, 13)
for i, label in enumerate(assets['Label'].values):
plt.plot(np.sqrt(Q[i,i]), r[i], 'x', color=colors[np.mod(i, len(colors))])
plt.plot(np.sqrt(var0), mu0, 'k--')
plt.plot(np.sqrt(var1), mu1, 'k:')
plt.plot(np.sqrt(var2), mu2, 'k-.')
plt.plot(np.sqrt(var3), mu3, 'k')
plt.xlabel('std (%)')
plt.ylabel('return (%)')
plt.title('Efficient frontier')
plt.ylim(ylim)
plt.xlim(xlim)
plt.grid()
plt.show()
```
helps to visualize the four frontiers.
Another interesting visualization is that of the resulting trades. The following plot shows the frontier, the corresponding positions as a percentage of the total holdings, and the corresponding trades as a percentage of the cashflow.
```
mu = mu3
var = var3
pos = pos3
trades = (pos - x0) / cashflow
# plot frontier
plt.figure(figsize=(10, 10))
ax = plt.subplot(3, 1, 1)
for i, label in enumerate(assets['Label'].values):
plt.plot(r[i], np.sqrt(Q[i,i]), 'x', color=colors[np.mod(i, len(colors))])
ax.plot(mu0, np.sqrt(var0), 'k--')
ax.plot(mu1, np.sqrt(var1), 'k:')
ax.plot(mu2, np.sqrt(var2), 'k-.')
ax.plot(mu, np.sqrt(var), 'k')
ax.set_ylabel('std (%)')
ax.set_title('Efficient frontier')
ax.set_xlim(ylim)
ax.set_ylim(xlim)
ax.grid()
ax = plt.subplot(3, 1, 2)
ax.plot(mu, 100*pos)
ax.set_ylabel('positions')
ax.set_title('Positions')
ax.set_xlim(ylim)
ax.legend(assets['Label'].values)
ax.grid()
ax = plt.subplot(3, 1, 3)
ax.plot(mu, 100*trades)
ax.set_xlabel('return (%)')
ax.set_ylabel('trades')
ax.set_title('Buy/Sell')
ax.set_xlim(ylim)
ax.grid()
plt.subplots_adjust(hspace=.4)
plt.show()
```
Note that the frontier in this case is represented with the coordinates reversed, that is, risk versus return, as opposed to the standard return versus risk.
3.11 Model Selection, Underfitting and Overfitting
In the experiments of the previous sections based on the Fashion-MNIST dataset, we evaluated machine learning models on the training and test datasets. If you changed the model structure or hyperparameters in those experiments, you may have noticed that a model that is more accurate on the training dataset is not necessarily more accurate on the test dataset. Why is that?
3.11.1 Training Error and Generalization Error
Before explaining this phenomenon, we need to distinguish between training error and generalization error. Informally, the former is the error a model exhibits on the training dataset, while the latter is the expected error of the model on any test data sample, commonly approximated by the error on a test dataset. Both can be computed with the loss functions introduced earlier, such as the squared loss used in linear regression and the cross-entropy loss used in softmax regression.
Let us use the college entrance exam as an intuitive analogy for these two concepts. The training error is like the error rate on practice exams from previous years, while the generalization error can be approximated by the error rate on the actual exam. Suppose both the practice questions and the actual questions are randomly sampled from a huge, unknown question bank that follows the same syllabus. A primary school student who has not studied the material would likely have similar error rates on both. But for a senior student who has drilled the practice questions repeatedly, even a 0% error rate on the practice set does not guarantee the same score on the real exam.
In machine learning, we usually assume that every sample in the training dataset (practice questions) and the test dataset (actual questions) is generated independently from the same probability distribution. Under this i.i.d. assumption, for any given model (with fixed parameters), the expected training error and the generalization error are equal. For example, if we set the model parameters to random values (the primary school student), the training error and generalization error will be very close. However, as we saw in the previous sections, model parameters are learned on the training dataset by minimizing the training error (the senior student). Therefore, the expected training error is less than or equal to the generalization error; that is, parameters learned from the training dataset generally make the model perform at least as well on the training dataset as on the test dataset. Since the generalization error cannot be estimated from the training error, blindly reducing the training error does not necessarily reduce the generalization error.
A machine learning model should aim to reduce the generalization error.
3.11.2 Model Selection
In machine learning, we usually need to evaluate several candidate models and select among them. This process is called model selection. The candidates may be models of the same kind with different hyperparameters; for a multilayer perceptron, for instance, we can choose the number of hidden layers, the number of hidden units in each layer, and the activation functions. Considerable effort is often needed to find an effective model. Below we describe the validation dataset, which is frequently used in model selection.
3.11.2.1 Validation Dataset
Strictly speaking, the test set should be used only once, after all hyperparameters and model parameters have been selected; it must not be used for model selection, such as hyperparameter tuning. Since the generalization error cannot be estimated from the training error, we should not rely solely on the training data for model selection either. Instead, we can reserve a portion of data outside the training and test datasets for model selection. This data is called the validation dataset, or validation set for short. For example, we can randomly select a small part of a given training set as the validation set and use the rest as the actual training set.
In practice, however, because data is hard to obtain, test data is rarely discarded after a single use, so the boundary between validation and test sets can be blurry. Strictly speaking, unless otherwise stated, the test sets used in this book's experiments should be regarded as validation sets, and the reported test results (such as test accuracy) as validation results (such as validation accuracy).
3.11.2.2 K-Fold Cross-Validation
Since the validation set takes no part in training, reserving a large amount of validation data is wasteful when training data is scarce. One remedy is K-fold cross-validation: the original training dataset is split into K non-overlapping subsets, and K rounds of training and validation are performed. In each round, one subset is used to validate the model and the remaining K−1 subsets are used to train it, so a different subset serves as the validation set each time. Finally, the K training errors and the K validation errors are averaged separately.
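The splitting procedure just described can be sketched in a few lines of NumPy (a minimal illustration; `evaluate` is a hypothetical stand-in for any model-specific train-and-score routine):

```python
import numpy as np

def k_fold_cv(n, k, evaluate):
    """K-fold cross-validation driver.

    Splits indices 0..n-1 into k disjoint folds; in round i, fold i is the
    validation set and the other k-1 folds form the training set.
    `evaluate(train_idx, valid_idx)` should return (train_error, valid_error).
    """
    folds = np.array_split(np.arange(n), k)
    train_errs, valid_errs = [], []
    for i in range(k):
        valid_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        tr, va = evaluate(train_idx, valid_idx)
        train_errs.append(tr)
        valid_errs.append(va)
    # average the K training errors and K validation errors separately
    return np.mean(train_errs), np.mean(valid_errs)
```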
3.11.3 Underfitting and Overfitting
Next, we explore two typical problems that often arise in model training: in the first, the model cannot attain a low training error, a phenomenon called underfitting; in the second, the model's training error is far lower than its error on the test dataset, a phenomenon called overfitting. In practice, we must guard against both at once. Although many factors can cause these problems, here we focus on two: model complexity and training dataset size.
3.11.3.1 Model Complexity
To explain model complexity, take polynomial function fitting as an example. Given a training dataset of scalar features $x$ and corresponding scalar labels $y$, the goal of polynomial fitting is to find a $K$-th order polynomial function
$$\hat{y} = b + \sum_{k=1}^K x^k w_k$$
to approximate $y$. Here the $w_k$ are the model's weight parameters and $b$ is the bias parameter. As with linear regression, polynomial fitting uses the squared loss. In particular, first-order polynomial fitting is also called linear function fitting.
Because a higher-order polynomial has more parameters and a larger function space to choose from, it is more complex than a lower-order one, and so it more easily attains a lower training error on the same training dataset. Given a training dataset, the typical relationship between model complexity and error is shown in Figure 3.4: if the model's complexity is too low, underfitting occurs easily; if it is too high, overfitting occurs easily. One way to counter both is to choose a model whose complexity suits the dataset.
3.11.3.2 Training Dataset Size
Another important factor affecting underfitting and overfitting is the size of the training dataset. In general, overfitting is more likely when the training dataset contains too few samples, especially fewer than the number of model parameters (counted element-wise). Moreover, the generalization error does not increase as the number of training samples grows, so within the available computational budget we usually prefer a larger training dataset, particularly when the model is complex, such as a deep network with many layers.
3.11.4 Polynomial Fitting Experiment
To understand how model complexity and training dataset size affect underfitting and overfitting, let us experiment with polynomial function fitting. First, import the required packages and modules.
```
import tensorflow as tf
import numpy as np
import sys
import matplotlib.pyplot as plt
import d2lzh as d2l
%matplotlib inline
```
3.11.4.1 Generating the Dataset
We will generate an artificial dataset. For both the training and test datasets, given a sample feature $x$, we use the following third-order polynomial to generate its label: $y = 1.2x - 3.4x^2 + 5.6x^3 + 5 + \epsilon$, where the noise term $\epsilon$ follows a normal distribution with mean 0 and standard deviation 0.01. Both the training and test datasets contain 100 samples.
```
n_train, n_test, true_w, true_b = 100, 100, [1.2, -3.4, 5.6], 5
features = tf.random.normal(shape=(n_train + n_test, 1))
poly_features = tf.concat([features, tf.pow(features, 2), tf.pow(features, 3)],1)
print(poly_features.shape)
labels = (true_w[0] * poly_features[:, 0] + true_w[1] * poly_features[:, 1]+ true_w[2] * poly_features[:, 2] + true_b)
print(tf.shape(labels))
labels += tf.random.normal(labels.shape, 0, 0.01)  # add Gaussian noise with mean 0, std 0.01
print(tf.shape(labels))
```
Take a look at the first two samples of the generated dataset.
```
features[:2], poly_features[:2], labels[:2]
```
3.11.4.2 Defining, Training and Testing the Model
We first define the plotting function semilogy, in which the y-axis uses a logarithmic scale.
```
def semilogy(x_vals, y_vals, x_label, y_label, x2_vals=None, y2_vals=None,
legend=None, figsize=(3.5, 2.5)):
d2l.set_figsize(figsize)
d2l.plt.xlabel(x_label)
d2l.plt.ylabel(y_label)
d2l.plt.semilogy(x_vals, y_vals)
if x2_vals and y2_vals:
d2l.plt.semilogy(x2_vals, y2_vals, linestyle=':')
d2l.plt.legend(legend)
```
As with linear regression, polynomial fitting uses the squared loss. Since we will try models of different complexity to fit the generated dataset, the model definition is placed inside the fit_and_plot function. The training and testing steps for polynomial fitting are similar to those of softmax regression described in Section 3.6 (softmax regression implemented from scratch).
```
num_epochs=100
def fit_and_plot(train_features, test_features, train_labels, test_labels):
net = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    batch_size = min(10, train_labels.shape[0])
    train_iter = tf.data.Dataset.from_tensor_slices((train_features, train_labels)).batch(batch_size)
optimizer = tf.keras.optimizers.Adam()
    train_ls, test_ls = [], []
for _ in range(num_epochs):
for X, y in train_iter:
with tf.GradientTape() as tape:
logits = net(X, training=True)
                l = tf.keras.losses.mse(logits, y)
            # gradients of the batch loss w.r.t. the trainable variables
            grads = tape.gradient(l, net.trainable_variables)
            # back-propagate: apply the gradient update
optimizer.apply_gradients(zip(grads, net.trainable_variables))
train_ls.append(tf.keras.losses.mse(net(train_features), train_labels).numpy().mean())
test_ls.append(tf.keras.losses.mse(net(test_features),test_labels).numpy().mean())
print('final epoch: train loss', train_ls[-1], 'test loss', test_ls[-1])
semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
range(1, num_epochs + 1), test_ls, ['train', 'test'])
print('weight:', net.get_weights()[0],
'\nbias:', net.get_weights()[1])
```
3.11.4.3 Third-Order Polynomial Fitting (Normal)
We first fit with a third-order polynomial, the same order as the data-generating function. The experiment shows that this model's training error and test error are both low, and the learned parameters are close to the true values: $w_1=1.2$, $w_2=-3.4$, $w_3=5.6$, $b=5$.
```
fit_and_plot(poly_features[:n_train, :], poly_features[n_train:, :],
labels[:n_train], labels[n_train:])
```
3.11.4.4 Linear Function Fitting (Underfitting)
Now let us try linear function fitting. Clearly, the training error declines early in training but then struggles to decrease further; after the final epoch it remains high. Linear models are prone to underfitting on datasets generated by nonlinear models such as third-order polynomials.
```
fit_and_plot(features[:n_train, :], features[n_train:, :], labels[:n_train],
             labels[n_train:])
```
# 1. Formulate your questions
Are there party-level differences in House expenditures?
```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import numpy as np
import pandas as pd
```
# 2. Read in your data
From ProPublica's [House Office Expenditure Data](https://projects.propublica.org/represent/expenditures).
```
df = pd.read_csv('2018Q3-house-disbursements.csv')
```
# 3. Check the packaging
```
df.shape
```
# 4. Look at the top and bottom of data
```
df.head()
df.tail()
```
# 5. Check the "n"s
There should only be Q3 data in here.
```
df['QUARTER'].value_counts()
```
There should only be 2018 data in here.
```
df['YEAR'].value_counts()
```
There should be around 435 members of Congress.
```
len(set(df['BIOGUIDE_ID'].values))
```
# 6. Validate against an external data source
What are some of the most common purposes?
```
df['PURPOSE'].value_counts().head(20)
```
Student loans, that looks interesting.
```
student_loans_df = df[df['PURPOSE'] == "STUDENT LOANS"]
```
Congress spent a total of $3.6 million on student loan payments in 2018Q3.
```
'${0:,}'.format(student_loans_df['AMOUNT'].sum())
```
Is the distribution of payments typical? Mostly.
```
student_loans_df['AMOUNT'].describe()
```
# 7. Make a plot
```
student_loans_df['AMOUNT'].hist(bins=25)
```
# 8. Try an easy solution
Load data on the 115th Congress from [CivilServiceUSA](https://github.com/CivilServiceUSA/us-house), which includes party data (as well as a bunch of other valuable information).
```
house_df = pd.read_csv('https://raw.githubusercontent.com/CivilServiceUSA/us-house/2e95fa2bf8e9f0f0f1e0aa252d19f962db283c98/us-house/data/us-house.csv')
house_df.head()
```
Join the `house_df` party data into the `df` expenditure data.
```
_df1 = df.dropna(subset=['BIOGUIDE_ID'])
_df2 = house_df[['bioguide','party','gender']]
joined_df = pd.merge(_df1,_df2,left_on='BIOGUIDE_ID',right_on='bioguide',how='inner')
joined_df.head(1)
```
Count up how much each member spent, preserving their party and gender information.
```
agg_d = {'AMOUNT':np.sum,
'party':'first',
'gender':'first'
}
joined_agg_df = joined_df.groupby('BIOGUIDE_ID').agg(agg_d)
joined_agg_df.head()
pd.pivot_table(joined_agg_df,
values='AMOUNT',
columns='party',
aggfunc='mean').style.format('${:,.0f}')
pd.pivot_table(joined_agg_df,
values='AMOUNT',
columns='gender',
aggfunc='mean').style.format('${:,.0f}')
pd.pivot_table(joined_agg_df,
values='AMOUNT',
columns='gender',
index='party',
aggfunc='mean').style.format('${:,.0f}')
```
Are these differences statistically significant? No.
```
from scipy import stats
r_amount = joined_agg_df.loc[joined_agg_df['party'] == 'republican','AMOUNT']
d_amount = joined_agg_df.loc[joined_agg_df['party'] == 'democrat','AMOUNT']
stats.ttest_ind(r_amount,d_amount)
m_amount = joined_agg_df.loc[joined_agg_df['gender'] == 'male','AMOUNT']
f_amount = joined_agg_df.loc[joined_agg_df['gender'] == 'female','AMOUNT']
stats.ttest_ind(m_amount,f_amount)
```
# NumPy for Python in Jupyter Notebook
```
# NumPy (Numerical Python)
import numpy as np
```
### Creating an array
```
a=[1,2,3,4,5]
print("This is a list:",a)
b=np.array(a)
print("\nArray created from list:",b)
print("Class of array:",type(b)) # not a list
print("Datatype of array:",b.dtype) # dtype attribute returns the datatype of elements in array
print("\nArray after Appending elements:",np.append(b,[6,7,8])) # appending elements in the array
# create a one-dimensional array
vector_row=np.array([1,2,3])
print(f"Row vector:{vector_row}")
print(f'\nThird element of row vector: {vector_row[2]}')
vector_column=np.array([[1],[2],[3]])
print(f"\nColumn vector:\n{vector_column}")
print(f'\nAll elements of column vector:\n{vector_column[:]}')
```
### Creating a matrix
```
# create a two-dimensional array
c=[[1,2,3],[4,5,6],[7,8,9]]
print("This is a list of lists:",c)
d=np.array(c)
print("\n","Two-dimensional array (matrix) created from list:\n",d)
print("\nExtracting Second Element: ",d[1])
print("\nExtracting a sub-matrix using indexing:\n",d[:2,1:]) # rows 0 & 1, columns 1 & 2
print(f'\nSecond row, second column: {d[1,1]}')
```
### Generating an array of numbers using arange()
```
print("This is a list",list(range(0,5))," using range()")
a=np.arange(0,5) # works exactly similar to range()
print("This is an array",a," using arange()")
```
### Creating arrays & matrices of zeros, ones & any other number using zeros() and ones()
```
print("One dimensional zeros: ",np.zeros(5))
print("\nTwo-dimensional ones:(2x3)\n",np.ones((2,3))) # (rows x columns)
print("\nTwo-dimensional fives: (3x5)\n",(5*np.ones((3,5))))
```
### Creating a Compressed Sparse Row (CSR) Matrix
```
from scipy import sparse
matrix=np.array([[0,0],[0,1],[3,0]])
matrix_sparse=sparse.csr_matrix(matrix) # Create compressed sparse row (CSR) matrix
print(f"Matrix:\n{matrix}")
print(f"\nSparse Matrix:\n{matrix_sparse}") # View sparse matrix
matrix_large=np.array([[0,0,0,0,0,0,0,0,0,0],[0,1,0,0,0,0,0,0,0,0],[3,0,0,0,0,0,0,0,0,0]]) # Create larger matrix
matrix_large_sparse=sparse.csr_matrix(matrix_large) # Create compressed sparse row (CSR) matrix
print(f"\nMatrix:\n{matrix_large}")
print(f"\nSparse Matrix:\n{matrix_large_sparse}")
```
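To see what the CSR format actually stores, one can inspect its three internal arrays (this mirrors the 3x2 `matrix` defined above):

```python
import numpy as np
from scipy import sparse

m = sparse.csr_matrix(np.array([[0, 0], [0, 1], [3, 0]]))
print(m.data)     # nonzero values in row order: [1 3]
print(m.indices)  # column index of each stored value: [1 0]
print(m.indptr)   # row i's values live in data[indptr[i]:indptr[i+1]]: [0 0 1 2]
```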
### Generating linear equally spaced numbers using linspace()
```
print("Linearly spaced numbers:",np.linspace(0,10,6)) # linspace(first number, last number, total elements)
print("\nA number line having points from [-5,5]:",np.linspace(-5,5,11))
```
### Identity matrix using eye()
```
print("An identity matrix is defined using eye():\n",np.eye(5))
```
### Generating random numbers using rand() for uniform distribution and randn() for normal distribution
```
# generating a random number using uniform distribution (probability of each number between 0 and 1 is the same)
print("One-dimensional random numbers using rand(): ",np.random.rand(4))
print("\nTwo-dimensional (3x5) random numbers using rand(): \n",np.random.rand(3,5))
# generating a random number using normal distribution (Gaussian distribution with mean=0 and variance=1)
print("\n\nOne-dimensional random numbers using randn(): ",np.random.randn(4))
print("\nTwo-dimensional random numbers using randn(): \n",np.random.randn(3,5))
```
### Generating Integer random numbers between low and high values using randint()
```
a=np.random.randint(1,100,20)
print("20 numbers generated randomly between [1,100) are: ",a)
print("Shape:",a.shape)# shape attribute returns the shape
print("Size:",a.size)# size attribute returns the number of elements
print("Dimension:",a.ndim)# ndim attribute returns the dimension or axes
```
### Finding maximum & minimum in the array with their respective index positions using argmax() and argmin()
```
a=np.random.randint(1,100,20)
print("\nReturning maximum number using max(): ",a.max())
print("Returning minimum number using min(): ",a.min())
print("Returning index position of maximum number using argmax(): ",a.argmax())
print("Returning index position of minimum number using argmin(): ",a.argmin())
```
### Sorting Array using sort() and storing their indexes using argsort()
```
a=np.random.randint(1,100,20)
print('Array:',a)
print('\nSorted Array:',np.sort(a))
print('Array containing sorted indexes:',np.argsort(a))
```
### Reshaping an array to create a matrix using reshape()
```
a=np.random.randint(1,100,20)
print("Array 'a': ",a)
# reshape() is used to change the dimension
b=a.reshape(4,5)
print("\nThe array 'a' is re-arranged into 4x5 using reshape():\n",b)
print("\nShape:",b.shape)
print("Size:",b.size)
print("Dimension:",b.ndim)
# the number of elements in the original array 'a' and the new reshaped array should be same, i.e. 2*10=20 elements
c=a.reshape(2,-1) # -1 indicates that there could be any no. of columns, 2 represents two rows
print("\n\nThe array 'a' is re-arranged using reshape():\n",c)
print("\nShape:",c.shape)
print("Size:",c.size)
print("Dimension:",c.ndim)
```
### Transposing a matrix and creating a new matrix containing ones of the same shape as an existing matrix
```
a=np.random.rand(2,5)
print("Matrix 'a' (2x5):\n",a)
# Transpose
print("\nMatrix after transposing: (5x2)\n",a.T) # T attribute is used to transpose the matrix
b=np.ones_like(a)
print("\nCreating a matrix containing ones having the same shape and size as array 'a' (2x5):\n",b)
```
### NumPy Operations
```
a=np.random.randint(1,100,10)
print("Original Array (a): ",a)
print("\nAddition (a+a): ",a+a)
print("Subtraction (a-10): ",a-10)
print("Multiplication (a*2): ",a*2)
print("Division (a/5): ",a/5)
print("Exponent (a**2): ",a**2)
```
### Rounding off numbers using ceil(), floor() and around()
```
pi=np.pi
print("Value of pi: ",pi)
print("Ceil Value of pi: ",np.ceil(pi)) # ceil() is used to approximate the value on the higher side
print("Floor Value of pi: ",np.floor(pi)) # floor() is used to approximate the value on the lower side
print("Around Value of pi: ",np.around(pi,decimals=4)) # around() is used to round the result into specified decimals places.
```
### Numpy functions on array
```
a=np.random.randint(1,100,5)
print("Original Array: ",a)
print("\nMaximum: ",np.max(a))
print("Minimum: ",np.min(a))
print("Mean: ",np.mean(a))
print("Standard Deviation: %.2f"%(np.std(a)))
print("Variance: ",np.var(a))
print("Sum: ",np.sum(a))
print("Median: ",np.median(a))
print("\nSinusoid: ",np.around(np.sin(a),decimals=3))
print("Logarithm: ",np.around(np.log(a),decimals=3))
print("Power: ",np.power(a,2)) # power is used to find the power of an array
print("Square Root:",np.around(np.sqrt(a),decimals=3))# sqrt() is used to determine the square root of array
print("\nExponential:",np.exp(a))# Exponential
```
### Conditional Selection
```
a=np.random.randint(1,100,10)
print("Original Array:",a)
# Method_1 (MASKING)
print("\nFiltering Array directly:",a[a>50])
# Method_2
b=a>50
print("\nResult of Array when condition is applied:",b)
print("Filtered Array:",a[b]) # returns true values in the array
print("Another way to extract array:",np.extract(b,a)) # extract(condition,array)
# Method_3 using where(), it returns the indexes of those elements where condition is matched
print("\nArray of indexes where condition is matched:",np.where(a>50))
print("Final result:",a[np.where(a>50)])
```
### Functions on Matrices for filtering
```
a=np.random.randint(1,50,20).reshape(5,4)
print("Matrix:\n",a)
print("\nTotal elements less than 10:",np.count_nonzero(a<10)) # count_nonzero() counts the total no. of occurences
print("Returning total elements which are less than 10:",np.sum(a<10))
print("Returning an array of total elements in each row which are less than 10:",np.sum(a<10,axis=1))
print("\nReturns the boolean value which verifies any element less than 10:",np.any(a<10))
print("Returns the boolean value which verifies all element less than 10:",np.all(a<10))
print("Returns an array of boolean values which verifies all element greater than 10:",np.all(a>10,axis=1))
```
### Splitting arrays using split()
```
a=np.random.randint(1,50,20)
print("Original Array:",a)
b,c=np.split(a,2)
print("Arrays after splitting:",b,c)
print("\nArrays after splitting into specified indexes:\n",np.split(a,[4,9,15])) # splitting array on the specified index
```
### Functions along particular axis
```
a=np.arange(1,7).reshape(2,3)
print("Matrix: (2x3)\n",a)
print("\nSum of elements (axis=0): \n",a.sum(axis=0)) # axis=0 collapses the rows, giving one sum per column
print("Sum of elements (axis=1): \n",a.sum(axis=1)) # axis=1 collapses the columns, giving one sum per row
print("Sum of all elements: \n",a.sum())
print("\nMaximum along each row: ",np.amax(a,1)) # amax(array,axis) returns the max along the specified axis (axis=1 -> one value per row)
print("Minimum along each column: ",np.amin(a,0)) # amin(array,axis) returns the min along the specified axis (axis=0 -> one value per column)
print("Range along each row: ",np.ptp(a,1)) # ptp(array,axis) returns the range (max - min) along the specified axis
print("50th Percentile along each column: ",np.percentile(a,50,0)) # percentile(array,percentile,axis) computes the percentile along the specified axis
print("Overall 40th Percentile: ",np.percentile(a,40))
print("Median along each row: ",np.median(a,1)) # median(array,axis) computes the median along the specified axis
print("Mean along each column: ",np.mean(a,0)) # mean(array,axis) computes the mean along the specified axis
print("\nAppending elements along axis 0:\n",np.append(a,[[7,8,9],[10,11,12]],axis=0))
print("Appending elements along axis 1 :\n",np.append(a,[[7,8],[9,10]],axis=1))
print("Appending elements along axis 1 :\n",np.append(a,[[7],[8]],axis=1))
```
### Vectorize()
The `vectorize` class converts a function into one that can be applied to every element of an array (or a slice of an array).
Internally it is just a for loop over the elements, so it does not improve performance.
```
matrix=np.array([[1,2,3],[4,5,6],[7,8,9]])
print(f"Matrix:\n{matrix}")
add_sub_100=lambda i:i+100 if i%2==0 else i-100
vectorized_add_100=np.vectorize(add_sub_100) # Create vectorized function
new_matrix=vectorized_add_100(matrix) # Apply function to all elements in matrix
print(f"\nResult after applying function:\n{new_matrix}")
```
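Because `np.vectorize` is just a loop under the hood, the same element-wise conditional is usually expressed (and runs much faster) with `np.where`, which evaluates the condition on the whole array at once. A minimal sketch of the same add/subtract-100 rule:

```python
import numpy as np

matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Vectorized conditional: +100 where the element is even, -100 where it is odd
new_matrix = np.where(matrix % 2 == 0, matrix + 100, matrix - 100)
print(new_matrix)
```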
### Normalization
```
# commonly used for vectors
x=np.array([12,5])
print("x:",x)
print("Pythagoras Value using np.linalg.norm(x):",np.linalg.norm(x)) # norm() computes the Euclidean (L2) norm, i.e. the Pythagorean length of the vector
y=np.array([3,4])
print("\ny:",y)
print("Pythagoras Value using np.linalg.norm(y):",np.linalg.norm(y))
```
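Beyond computing the norm, normalization usually means rescaling a vector to unit length by dividing it by its norm; a small sketch:

```python
import numpy as np

y = np.array([3.0, 4.0])
unit_y = y / np.linalg.norm(y)  # divide by the L2 norm (here 5.0)
print("Unit vector:", unit_y)
print("Its norm:", np.linalg.norm(unit_y))  # always 1 for a nonzero input
```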
### Numpy functions on matrix
```
x=np.array([[6,2,0],[2,5,6],[3,7,9]])
print("x is:\n",x)
print("\nDeterminant of x: ",np.linalg.det(x))
print("Diagonal of x:",np.diag(x)) # np.diag(matrix) provides us with the array of diagonal elements
print("Other method to find Diagonal of x:",x.diagonal())
print("Sum of Diagonal elements of x:",np.trace(x)) # np.trace(matrix) provides us with the sum of diagonal elements
# np.diag(array/list) will provide us the diagonal matrix with other elements as 0
print("\nCreating Diagonal matrix:\n",np.diag(np.diag(x)))
# get a diagonal off from the main diagonal by using the offset parameter
print(f'\nReturn diagonal one above the main diagonal: {x.diagonal(offset=1)}')
print(f'Return diagonal one below the main diagonal: {x.diagonal(offset=-1)}')
print("\nInverse of x:\n",np.linalg.inv(x))
print("\nFlattening of x by row:\n",x.flatten()) # by default, order ='C' (meaning row-wise)
print("\nFlattening of x by column:\n",x.flatten(order='F'))# order ='F' (meaning column-wise)
```
### Rank of a matrix
It is the dimension of the vector space spanned by its columns or rows. The maximum number of linearly independent rows in a matrix A is called the row rank of A, and the maximum number of linearly independent columns in A is called the column rank of A.
```
matrix1=np.array([[1,2,3],[2,4,6],[3,8,9]])
print(f'Matrix1:\n{matrix1}')
print(f'\nRank of matrix1: {np.linalg.matrix_rank(matrix1)}') # Return matrix_rank
matrix2=np.array([[1,-1,1,-1],[-1,1,-1,1],[1,-1,1,-1],[-1,1,-1,1]])
print(f'\nMatrix2:\n{matrix2}')
print(f'\nRank of matrix2: {np.linalg.matrix_rank(matrix2)}') # Return matrix_rank
```
### Splitting Matrices using vsplit() and hsplit()
```
x=np.random.randint(1,50,24).reshape(2,12)
print("Original Matrix:\n",x)
y1,y2=np.hsplit(x,2)
print("\nMatrix after splitting horizontally into two parts:\n",y1,'\n\n',y2)
print("\nTransposed Matrix:\n",x.T)
z1,z2,z3=np.vsplit(x.T,3)
print("\nMatrix after splitting vertically into three parts:\n",z1,'\n\n',z2,'\n\n',z3)
```
### Other Operations on Matrices
```
x=np.array([[1,3],[4,2]])
y=np.array([[2,6],[2,7]])
print("x is:\n",x)
print("y is:\n",y)
# Element-wise operations
print("\nx+y is:\n",x+y)
print("x-y is:\n",x-y)
print("x*y is:\n",x*y) # element-wise product
print("x.y is:\n",np.dot(x,y)) # matrix multiplication using dot()
print("Alternative way of matrix multiplication of two matrix:\n",x@y)
print("\nDot Product of two vectors: ",np.vdot(x,y)) # vdot() is used for dot product of vectors, adding individual elements of x*y
print("Inner product (row-to-row dot products):\n",np.inner(x,y)) # inner() computes the dot product of each row of x with each row of y
print("Outer product:\n",np.outer(x,y)) # outer product is the mapping of y on x, size of the result is (No. of elements in x * No. of elements in y)
print("\nConcatenating x & y along axis 0:\n",np.concatenate((x,y))) # concatenating along axis=0 (default)
print("Concatenating x & y along axis 1:\n",np.concatenate((x,y),axis=1)) # concatenating along axis=1
print("Horizontal Stacking of x & y:\n",np.hstack((x,y)))
print("Vertical Stacking of x & y:\n",np.vstack((x,y)))
print("Sort along each row:\n",np.sort(np.vstack((x,y)),0)) # sort(array,axis) is used to sort elements along the specified axis
```
### Solving Linear System (Ax=b) -> (x=(inverse of A).b)
```
A=np.floor(np.random.random((2,2))*20)
print("a is:\n",A)
b=np.ceil(np.random.random(2)*10)
print("b is:\n",b)
# Method 1
a_inv=np.linalg.inv(A)
print("\na_inverse is:\n",a_inv)
x=np.dot(a_inv,b)
print("\nResult is: ",x)
# Method 2
x=np.linalg.solve(A,b) # solve() is used to find the solution of the linear system
print("\nResult using solve() is: ",x)
```
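Whichever method is used, the solution can be sanity-checked by substituting it back into Ax=b. A sketch with a fixed, guaranteed-invertible matrix instead of the random one above:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # fixed, invertible coefficient matrix
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print("Solution x:", x)
print("Check A@x == b:", np.allclose(A @ x, b))  # residual within float tolerance
```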
### Example: a.b=|a||b|cos(theta)
```
# Generating two matrices 'a' and 'b'
a=np.around((np.random.randint(-10,10,9)-5))
print("Matrix a:\n",a.reshape(3,3))
b=np.ceil(np.random.randn(3,3)*10)
print("\nMatrix b:\n",b)
dp=np.vdot(a,b) # dot product
print("\nDot Product of a and b: ",dp)
x=np.linalg.norm(a)
y=np.linalg.norm(b)
print("Absolute Value of a is {} and b is {}: ".format(x,y))
costheta=dp/(x*y)
angle=np.rad2deg(np.arccos(costheta))
print("Angle Theta is: ",np.ceil(angle)," degrees")
```
### Arrays with structured data
```
# initializing array
data=np.zeros(4,dtype={'names':('Name','Id','Price','Quantity'),'formats':('U10','i4','f8','i8')})
print('Array:',data)
# inserting values in array
data['Name']=['Store_A','Store_B','Store_C','Store_D']
data['Id']=[31,32,33,34]
data['Price']=[50.4,38.3,22,78.53]
data['Quantity']=[36,25,72,21]
print("\nArray after inserting data:\n",data)
```
### I/O with NumPy
```
a=np.arange(1,11)
np.save("Out_1",a) # creating a file 'Out_1' for storing data using save()
b=np.load("Out_1.npy") # loading the file 'Out_1' containing stored data using load()
print("Data: ",b)
np.savetxt("Out_2.txt",b) # storing data in simple text file
c=np.loadtxt("Out_2.txt") # retrieving data from a simple text file
print("Data as text: ",c)
```
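To store several arrays in a single file, `np.savez` writes an `.npz` archive with named keys that `np.load` can read back (a small sketch):

```python
import numpy as np

a = np.arange(1, 6)
b = np.linspace(0.0, 1.0, 5)
np.savez("Out_3.npz", first=a, second=b)  # both arrays stored under named keys
data = np.load("Out_3.npz")
print("Stored keys:", sorted(data.files))
print("first:", data["first"])
```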
# Network Analysis
---
## Introduction
Networks are mathematical or graphical representations of patterns of relationships between entities. These relationships are defined by some measure of "closeness" between individuals, and can exist in an abstract or actual space (for example, whether you are related to someone versus how far away you live from each other). Networks have been used to model everything from airplane traffic to supply chains, and even amorphous materials like window glass, cells, and proteins. They can also be used to model relationships among people. Social networks are patterns of relationships among people or organizations that affect and are affected by actions of individuals within the network. Network analysis captures the effect of the complete pattern of connections among individuals in a group to help us perform structural analysis of outcomes of interest for individuals and the group as a whole.
Networks can be represented as **graphs**, where a graph is made up of **nodes** connected by **ties**. The flexibility of network analysis means that the first step toward analysis is to clearly define what constitutes a node and what constitutes a tie in your network. There are several types of graphs: connected, unconnected, directed, and many more (see [glossary](#glossary-of-terms) for a list of terms).
This tutorial is based on Chapter 8 of [Big Data and Social Science](https://github.com/BigDataSocialScience).
## Glossary of Terms
- A **node** is an individual entity within a graph.
- A **tie** is a link between nodes. Ties can be **undirected**, meaning they represent a symmetrical
relationship, or **directed**, meaning they represent an asymmetrical relationship (one that doesn't necessarily
go both ways).
- A directed tie is known as an **arc**. An undirected tie is known as an **edge**. Facebook friendship is an
undirected tie: if I am Facebook friends with Barack Obama, then he is also Facebook friends with me.
- A **cutpoint** is a *node* that cannot be removed without disconnecting the network.
- A **bridge** is a *tie* that cannot be removed without disconnecting the network.
- Two nodes are said to be **reachable** when they are connected by an unbroken chain of relationships through
other nodes.
- **Network density** is the number of *actual* connections in a network divided by the number of *potential*
connections in that network.
- **Average distance** is the average path length between nodes in a graph. It is a measure of how many nodes
it takes to transmit information across the network. This metric is only valid for fully connected graphs.
- **Centrality** is the degree to which a given node influences the entire network.
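The cutpoint and bridge definitions can be made concrete with NetworkX's `articulation_points` and `bridges` helpers (a sketch on a made-up toy graph, not the data used below):

```python
import networkx as nx

# Two triangles joined by the single edge (3, 4)
toy = nx.Graph([(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)])
print("Cutpoints:", sorted(nx.articulation_points(toy)))  # removing node 3 or 4 disconnects the graph
print("Bridges:", list(nx.bridges(toy)))                  # removing edge (3, 4) disconnects the graph
```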
## Table of Contents
1. [Loading the Data](#Loading-the-data)
2. [Representations of Networks](#Representations-of-Networks)
1. [Adjacency Matrix](#Adjacency-matrix)
2. [List of Edges](#List-of-edges)
3. [Graphs](#Graphs)
3. [Network Measures](#network-measures)
1. [Summary Statistics](#summary-statistics)
2. [Degree Distribution](#Degree-Distribution)
3. [Components and Reachability](#Components-and-reachability)
4. [Path Length](#Path-Length)
4. [Centrality Metrics](#Centrality-metrics)
1. [Degree Centrality](#Degree-Centrality)
2. [Closeness Centrality](#Closeness-Centrality)
3. [Betweenness Centrality](#Betweenness-Centrality)
5. [Cliques](#Cliques)
6. [Community Detection](#Community-Detection)
7. [Exercises](#Exercises)
8. [Resources](#resources)
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import sys
import community
import networkx as nx
import seaborn as sns
import pandas as pd
from sqlalchemy import create_engine
```
# Creating a Network
The first step in creating a network is defining the question or questions we want to explore using the network. This then allows us to define what a *node* and *tie* will be.
# Loading the Data
In this tutorial we will explore graphical representations of this network, degree metrics, centrality metrics, how to calculate the shortest path between nodes, and community detection. We will be using the [NetworkX Python Library](https://networkx.github.io) developed at Los Alamos National Laboratory (LANL).
First we have to load the data from the database. *Note we did the hard work of creating the network in SQL and now doing our more complex analysis in Python.*
```
engine = create_engine("")
df_network = pd.read_sql('',
engine)
df_network.head()
network = list(zip(df_network.ssn_l, df_network.ssn_r))
G = nx.Graph()
G.add_edges_from(network)
```
# Representations of Networks
## Adjacency Matrix
One way to represent networks is an **adjacency matrix**, a binary (all entries either 0 or 1) square matrix. Each row represents the connections between one node and the other nodes in the network. For instance, the first row represents the first node. Each entry in a row corresponding to a node represents possible connections to the other nodes as indicated by 1 (connected) or 0 (not connected).
```
plt.figure(figsize=(30,30))
plt.spy(nx.adjacency_matrix(G))
```
## List of Edges
Graphs can also be represented as **edge lists**, where you list the connections between nodes exhaustively. If we know the graph is undirected, we only need to list each relationship one time. For example, we say that 1 is connected to 32, but it would be redundant to also say that 32 is connected to 1. For a sparse network, where most entries of the adjacency matrix are 0, an edge list is typically preferable to an adjacency matrix because it takes much less space to store. An edge list is also typically how a network is stored in a database.
```
network[:10]
```
## Graphs
Networks can also be displayed as graphs, which is probably the most intuitive way to visualize them. The top visualization below emphasizes the nodes, or individuals, how close they are to one another, and the groups that emerge.
The visualization below emphasizes the edges, or the connections themselves. *Note: this network is too large to visualize*
Due to the large number of nodes this visualization is not helpful. Given that we can't derive much information from this particular visualization we need to turn to other network measures.
# Network Measures
It is useful to know the size (in terms of nodes and ties) of the network, both to have an idea of the size and connectivity of the network, and because most of the measures you will use to describe the network will need
to be standardized by the number of nodes or the number of potential connections.
One of the most important things to understand about larger networks is the pattern of indirect connections among nodes, because it is these chains of indirect connections that make the network function as a whole, and make networks a
useful level of analysis. Much of the power of networks is due to indirect ties that create **reachability.** Two nodes can reach each other if they are connected by an unbroken chain of relationships, often called **indirect ties**.
Structural differences between node positions, the presence and characteristics of smaller "communities" within larger networks, and properties of the structure of the whole group can be quantified using different **network measures.**
## Summary Statistics
```
# Print out some summary statistics on the network
print( nx.info(G) )
```
We see that there are 568892 ties (relationships) and 13716 nodes (individuals).
The **average degree** of the network is the average number of edges connected to each node.
We see that the average degree of this network is about 83, meaning that the average individual in the network is connected to 83 other individuals.
```
# Print out the average density of the network
print(nx.density(G))
```
The average density is calculated as $$\text{average density} = \frac{\text{actual ties}}{\text{possible number of ties}} $$
where the possible number of ties for an undirected graph (if every node had a tie to every other node) is $\frac{n(n-1)}{2}$.
If every node were connected to every other node, the average density would be 1. If there were no ties between any of the nodes, the average density would be 0. The average density of this network is roughly 0.006, which indicates it is not a very dense network. In this example, we can interpret this to mean that individuals are mostly in small groups, and the groups don't overlap very much.
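The density formula can be verified by hand on a toy graph; `nx.density` should match the manual ratio (a sketch, independent of the data above):

```python
import networkx as nx

toy = nx.Graph([(1, 2), (2, 3), (3, 4)])    # path graph: 4 nodes, 3 actual ties
n, ties = toy.number_of_nodes(), toy.number_of_edges()
manual = ties / (n * (n - 1) / 2)           # possible ties = n(n-1)/2 = 6
print(manual, nx.density(toy))              # both 0.5
```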
Now that we have looked at some summary statistics as a whole we are going to drill down to the individual actors in our network.
## Degree Distribution (Who has the most relationships?)
We can cast this question as a network analysis problem by asking *which node has the most ties*.
```
dict_degree = dict(G.degree())  # wrap the DegreeView in a dict so pandas can consume it
df_degree = pd.DataFrame.from_dict(dict_degree, orient='index')
df_degree.columns=['degree']
df_degree.index.name = 'node_id'
sns.set_style("whitegrid")
plt.figure(figsize=(22, 12))
sns.set_context("poster", font_scale=1.00, rc={"lines.linewidth": 1.00,"lines.markersize":8})
df_degree.sort_values(by='degree', ascending=False)[:10].plot(kind='barh')
df_degree.sort_values(by='degree', ascending=False)[:10]
list(G.neighbors(df_degree['degree'].idxmax())) # G.neighbors(node) returns the neighbors of a given node; here, of the highest-degree node
```
## Components and Reachability
Two nodes are said to be **reachable** when they are connected by an unbroken chain of relationships through other nodes. Networks in which more of the possible connections (direct and indirect) among nodes are realized are denser and more cohesive than networks in which fewer of these connections are realized.
The reachability of individuals in a network is determined by membership in **components**, which are subsets of the
larger network in which every member of the group is indirectly connected to every other. Imagining the standard node and line drawing of a graph, a component is a portion of the network where you can trace a path between every pair of nodes without ever lifting your pen.
Many larger networks consist of a single dominant component including anywhere from 50% to 90% of the individuals, and a few smaller components that are not connected. In this case, it is common to perform analysis on only the main connected component of the graph, because there is not a convenient way to mathematically represent how "far away" unconnected nodes are. In the classic karate-club example the graph is connected, so any individual can be reached from any other by moving along the edges; as we will see below, our network is not fully connected, so we do need to account for this.
## Path Length
A **shortest path** between two nodes is a path connecting them that traverses the fewest possible ties. One way to think of a shortest path between two individuals is how many people it would take to broker an introduction between them (think [six degrees of Kevin Bacon](https://en.wikipedia.org/wiki/Six_Degrees_of_Kevin_Bacon)).
A pair of nodes may have several shortest paths between them; a shortest path is also called a **geodesic**, and the longest geodesic in a network is its **diameter**.
```
# Calculate the length of the shortest path between 12 and 15
ls_path = nx.shortest_path(G,)
print('The path length from {} to {} is {}.'.format(
'',
'',
len(ls_path)))
print('path length: ', ls_path)
```
In this case there is no path between the two nodes.
```
# Calculate the length of the shortest path between 12 and 15
ls_path = nx.shortest_path(G, '',
'')
print('The path length from {} to {} is {}.'.format(
'',
'',
len(ls_path)))
print('path length: ', ls_path)
```
The **average shortest path length** describes how quickly information or goods can disperse through the network.
The average shortest length $l$ is defined as $$ l = \frac{1}{n(n-1)} \sum_{i \ne j}d(v_{i},v_{j}) $$ where $n$ is the number of nodes in the graph and $d(v_{i},v_{j})$ is the shortest path length between nodes $i$ and $j$.
```
print(nx.average_shortest_path_length(G))
```
In this case, we cannot calculate the average shortest path, since our network is not fully connected (the network has islands within it that are cut off from the rest of the network). Since there is no way to calculate the distance between two nodes that can't be reached from one another, there is no way to calculate the average shortest distance across all pairs.
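A common workaround is to restrict the calculation to the largest connected component, where every pair is reachable. A sketch on a small stand-in graph (the same `subgraph` call applies to `G`):

```python
import networkx as nx

# Stand-in for a disconnected network: a path 1-2-3 plus a separate pair 4-5
H = nx.Graph([(1, 2), (2, 3), (4, 5)])
main_component = max(nx.connected_components(H), key=len)
aspl = nx.average_shortest_path_length(H.subgraph(main_component))
print("Average shortest path length on the main component:", aspl)
```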
# Centrality Metrics
Centrality metrics measure how important, or "central," a node is to the network. These can indicate which individual has the most social contacts, who is closest to other people, or through whom the most information flows. There are many **centrality metrics** -- degree centrality, betweenness centrality, closeness centrality, eigenvalue centrality, percolation centrality, PageRank -- all capturing different aspects of a node's contribution to a network.
Centrality measures are the most commonly used means to explore network effects at the level of certain individual participants. Typically, these metrics identify and describe a few important nodes, but don't tell us much about the rest of the nodes in the network. This is akin to Google's search results: the first few matches are the most relevant, but if you go a few pages in to the search results, you might as well have been searching for something else entirely.
## Degree Centrality (Who has the most relationships?)
The most basic and intuitive measure of centrality, **degree centrality**, simply counts the number of ties that each node has. Degree centrality represents a clear measure of the prominence or visibility of a node. The degree centrality $C_{D}(x)$ of a node $x$ is
$$C_{D}(x) = \frac{deg(x)}{n-1}$$
where $deg(x)$ is the number of connections that node $x$ has, and $n-1$ is a normalization factor for the total amount of possible connections.
If a node has no connections to any other nodes, its degree centrality will be 0. If it is directly connected to every other node, its degree centrality will be 1.
```
dict_degree_centrality = nx.degree_centrality(G)
df_degree_centrality = pd.DataFrame.from_dict(dict_degree_centrality, orient='index')
df_degree_centrality.columns=['degree_centrality']
df_degree_centrality.index.name = 'node_id'
df_degree_centrality.sort_values(by='degree_centrality',
ascending=False)[:10].plot(kind='barh')
```
As we can see, this is simply a recasting of the [degree distribution](#degree-distribution).
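The degree-centrality formula is easy to verify on a star graph, where the hub touches all $n-1$ other nodes and each leaf touches exactly one (a sketch):

```python
import networkx as nx

star = nx.star_graph(4)              # hub node 0 connected to leaves 1..4, so n = 5
cent = nx.degree_centrality(star)
print(cent)                          # hub: 4/(5-1) = 1.0, each leaf: 1/(5-1) = 0.25
```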
## Closeness Centrality (Who has the shortest of shortest paths going between them?)
**Closeness centrality** is based on the idea that networks position some individuals closer to or farther away
from other individuals, and that shorter paths between actors increase the likelihood of communication, and
consequently the ability to coordinate complicated activities. The closeness centrality $C_C(x)$ of a node $x$ is calculated as:
$$C_C(x) = \frac{n-1}{\sum_{y}d(x,y)} $$
where $d(x,y)$ is the length of the geodesic between nodes $x$ and $y$.
```
dict_closeness_centrality = {}
for ssn_hash in list(zip(*network[:25]))[0]: # zip() returns an iterator in Python 3, so materialize it before indexing
dict_closeness_centrality[ssn_hash] = nx.closeness_centrality(G,u=ssn_hash)
df_closeness_centrality = pd.DataFrame.from_dict(dict_closeness_centrality,
orient='index')
df_closeness_centrality.columns=['closeness_centrality']
df_closeness_centrality.index.name = 'node_id'
df_closeness_centrality.sort_values(by='closeness_centrality',
ascending=False)[:10].plot(kind='barh')
```
## Betweenness Centrality (Who has the most shortest paths between them?)
Where closeness assumes that communication and information flow increase with proximity, **betweenness centrality**
captures "brokerage," or the idea that a node that is positioned "in between" many other pairs of nodes gains some individual advantage. To calculate betweenness, we must assume that when people search for new
information through networks, they are capable of identifying the shortest path (so that we know that the path between two nodes actually includes the "in between" node); additionally, we must assume
that when multiple shortest paths exist, each path is equally likely to be chosen.
The betweenness centrality $C_B(x)$ of a node $x$ is given by
$$ C_B(x) = \sum_{s,t} \frac{\sigma_{st}(x)}{\sigma_{st}}$$
where $\sigma_{st}$ is the number of shortest paths from node $s$ to node $t$ and $\sigma_{st}(x)$ is the number of shortest paths $\sigma_{st}$ passing through node $x$. Intuitively, for each node, we look at how many of the shortest paths between every other pair of nodes includes that node.
```
dict_betweenness_centrality = nx.betweenness_centrality(G, k=50)
df_betweenness_centrality = pd.DataFrame.from_dict(dict_betweenness_centrality,
orient='index')
df_betweenness_centrality.columns=['betweenness_centrality']
df_betweenness_centrality.index.name = 'node_id'
df_betweenness_centrality.sort_values(by='betweenness_centrality',
                                    ascending=False)[:10].plot(kind='barh')
```
Given the small values for betweenness centrality, it appears that there is no large single broker in this network.
# Cliques
A **clique** is a group of nodes that are all directly connected to one another, i.e. a complete sub-network; `nx.find_cliques` returns the *maximal* cliques, those not contained in any larger clique.
In our case, this would be a group of individuals that are all connected to each other: all released in the same month, to the same zipcode, with the same gang affiliation. We might expect to see a lot of cliques in this network, because we defined the relationships within our network based on these groupings.
```
cliques = list(nx.find_cliques(G))
import functools
#summary stats of cliques
num_cliques = len(cliques)
ls_len_cliqs = [len(cliq) for cliq in cliques ]
max_clique_size = max(ls_len_cliqs)
avg_clique_size = np.mean(ls_len_cliqs)
max_cliques = [c for c in cliques if len(c) == max_clique_size]
max_clique_sets = [set(c) for c in max_cliques]
people_in_max_cliques = list(functools.reduce(lambda x,y: x.intersection(y), max_clique_sets))
print(num_cliques)
print(max_clique_size)
print(avg_clique_size)
```
There are *2231* cliques in the network. The maximum clique size is *689* people and the average clique size is *7.60*, ~8 people.
Let's see what the maximum cliques look like
```
max_cliques
Graph_max_clique1 = G.subgraph(max_cliques[0])
nx.draw(Graph_max_clique1, with_labels=False)
df_network[ df_network['ssn_l'].isin(max_cliques[0]) & df_network['ssn_r'].isin(max_cliques[0])]
```
# Community Detection (This may take some time)
In **community detection**, we try to find sub-networks, or communities, of densely connected nodes. Community detection is similar to clustering, in that strong communities display an abundance of intra-community connections and few inter-community connections.
The technical implementation of the algorithm can be found [here](https://arxiv.org/pdf/0803.0476v2.pdf).
```
dict_clusters = community.best_partition(G)
clusters = [dict_clusters.get(node) for node in G.nodes()]
plt.axis("off")
#nx.draw_networkx(G,
# cmap = plt.get_cmap("terrain"),
# node_color = clusters,
# node_size = 600,
# with_labels = True,
# fontsize=200)
dict_clusters
```
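Partition quality can be summarized with its modularity score. NetworkX ships its own `modularity` helper, sketched here on a stand-in graph with an explicit two-community split for reproducibility (the `dict_clusters` partition above can be scored the same way after grouping its nodes into sets):

```python
import networkx as nx
from networkx.algorithms.community import modularity

# Stand-in: two triangles joined by a single edge
H = nx.Graph([(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)])
Q = modularity(H, [{1, 2, 3}, {4, 5, 6}])
print("Modularity of the two-triangle split:", Q)  # well above 0 -> clear community structure
```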
[Back to Table of Contents](#Table-of-Contents)
# Resources
- [International Network for Social Network Analysis](http://www.insna.org/) is a large, interdisciplinary association
dedicated to network analysis.
- [Pajek](http://mrvar.fdv.uni-lj.si/pajek/) is a freeware package for network analysis and visualization.
- [Gephi](https://gephi.org/) is another freeware package that supports large-scale network visualization.
- [Network Workbench](http://nwb.cns.iu.edu/) is a freeware package that supports extensive analysis and
visualization of networks.
- [NetworkX](https://networkx.github.io/) is the Python package used in this tutorial to analyze and visualize networks.
- [iGraph](http://igraph.org/) is a network analysis package with implementations in R, Python, and C libraries.
- [A Fast and Dirty Intro to NetworkX (and D3)](http://www.slideshare.net/arnicas/a-quick-and-dirty-intro-to-networkx-and-d3)
# Ensemble NMS - Detectron2 [Inference]
### Hi kagglers, this is the `Ensemble NMS - Detectron2 [Inference]` notebook.
* [Sartorius Segmentation - Detectron2 [training]](https://www.kaggle.com/ammarnassanalhajali/sartorius-segmentation-detectron2-training)
* [Sartorius Segmentation - Detectron2 [Inference]](https://www.kaggle.com/ammarnassanalhajali/sartorius-segmentation-detectron2-inference)
* [K-fold CrossValidation COCO Dataset Generator](https://www.kaggle.com/ammarnassanalhajali/k-fold-crossvalidation-coco-dataset-generator)
### If this kernel is useful, <font color='red'>please upvote!</font>
## Other notebooks in this competition
- [Sartorius Segmentation - Keras U-Net[Training]](https://www.kaggle.com/ammarnassanalhajali/sartorius-segmentation-keras-u-net-training)
- [Sartorius Segmentation - Keras U-Net[Inference]](https://www.kaggle.com/ammarnassanalhajali/sartorius-segmentation-keras-u-net-inference/edit)
# Intro
Ensembling multiple weaker-performing models often produces better predictions than any single model alone; here, instance predictions from several Mask R-CNN variants are merged with non-maximum suppression.
## Install and import libraries
```
!pip install ../input/detectron-05/whls/pycocotools-2.0.2/dist/pycocotools-2.0.2.tar --no-index --find-links ../input/detectron-05/whls
!pip install ../input/detectron-05/whls/fvcore-0.1.5.post20211019/fvcore-0.1.5.post20211019 --no-index --find-links ../input/detectron-05/whls
!pip install ../input/detectron-05/whls/antlr4-python3-runtime-4.8/antlr4-python3-runtime-4.8 --no-index --find-links ../input/detectron-05/whls
!pip install ../input/detectron-05/whls/detectron2-0.5/detectron2 --no-index --find-links ../input/detectron-05/whls
!pip install ../input/ensemble-boxes-104/ensemble_boxes-1.0.4/ -f ./ --no-index
import os
import cv2
import json
import time
import numpy as np
import pandas as pd
import torch
import detectron2
from tqdm.auto import tqdm
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.evaluation import inference_on_dataset
from detectron2.evaluation.evaluator import DatasetEvaluator
from detectron2.data import DatasetCatalog, build_detection_test_loader
import pycocotools.mask as mask_util
from PIL import Image
import matplotlib.pyplot as plt
from fastcore.all import *
from ensemble_boxes import *
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
if torch.cuda.is_available():
DEVICE = torch.device('cuda')
print('GPU is available')
else:
DEVICE = torch.device('cpu')
print('CPU is used')
print('detectron ver:', detectron2.__version__)
```
## My Models
```
best_model=(
{'file': 'R50-306.pth','config_name':'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', 'LB score': 0.306,'ths':[.18, .38, .58]},
{'file': '50_FPN_3x_F3_R82_300.pth','config_name':'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', 'LB score': 0.300,'ths':[.18, .38, .58]},
{'file': '32x8d_FPN_3x_F3_R57_295.pth','config_name':'COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml', 'LB score': 0.295,'ths':[.18, .38, .58]},
{'file': '50_FPN_3x_F5_ATTT32_300.pth','config_name':'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', 'LB score': 0.300,'ths':[.19, .39, .57]}
)
#config_name = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
mdl_path = "../input/dtectron2-models-5fold"
DATA_PATH = "../input/sartorius-cell-instance-segmentation"
MODELS = []
BEST_MODELS =[]
THSS = []
ID_TEST = 0
SUBM_PATH = f'{DATA_PATH}/test'
SINGLE_MODE = False
NMS = True
MIN_PIXELS = [75, 150, 75]
IOU_TH = 0.3
for b_m in best_model:
model_name=b_m["file"]
model_ths=b_m["ths"]
config_name=b_m["config_name"]
BEST_MODELS.append(model_name)
THSS.append(model_ths)
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(config_name))
cfg.INPUT.MASK_FORMAT = 'bitmask'
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3
cfg.MODEL.WEIGHTS = f'{mdl_path}/{model_name}'
cfg.TEST.DETECTIONS_PER_IMAGE = 1000
MODELS.append(DefaultPredictor(cfg))
print(f'all loaded:\nthresholds: {THSS}\nmodels: {BEST_MODELS}')
MODELS
```
## Utils
```
def rle_decode(mask_rle, shape=(520, 704)):
'''
mask_rle: run-length as string formated (start length)
shape: (height,width) of array to return
Returns numpy array, 1 - mask, 0 - background
'''
s = mask_rle.split()
starts, lengths = [np.asarray(x, dtype=int)
for x in (s[0:][::2], s[1:][::2])]
starts -= 1
ends = starts + lengths
img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
for lo, hi in zip(starts, ends):
img[lo : hi] = 1
return img.reshape(shape) # Needed to align to RLE direction
def rle_encode(img):
'''
img: numpy array, 1 - mask, 0 - background
Returns run length as string formatted
'''
pixels = img.flatten()
pixels = np.concatenate([[0], pixels, [0]])
runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
runs[1::2] -= runs[::2]
return ' '.join(str(x) for x in runs)
def pred_masks(file_name, path, model, ths, min_pixels):
img = cv2.imread(f'{path}/{file_name}')
output = model(img)
pred_classes = output['instances'].pred_classes.cpu().numpy().tolist()
pred_class = max(set(pred_classes), key=pred_classes.count)
take = output['instances'].scores >= ths[pred_class]
pred_masks = output['instances'].pred_masks[take]
pred_masks = pred_masks.cpu().numpy()
result = []
used = np.zeros(img.shape[:2], dtype=int)
for i, mask in enumerate(pred_masks):
mask = mask * (1 - used)
if mask.sum() >= min_pixels[pred_class]:
used += mask
result.append(rle_encode(mask))
return result
def ensemble_preds(file_name, path, models, ths):
img = cv2.imread(f'{path}/{file_name}')
classes = []
scores = []
bboxes = []
masks = []
for i, model in enumerate(models):
output = model(img)
pred_classes = output['instances'].pred_classes.cpu().numpy().tolist()
pred_class = max(set(pred_classes), key=pred_classes.count)
take = output['instances'].scores >= ths[i][pred_class]
classes.extend(output['instances'].pred_classes[take].cpu().numpy().tolist())
scores.extend(output['instances'].scores[take].cpu().numpy().tolist())
bboxes.extend(output['instances'].pred_boxes[take].tensor.cpu().numpy().tolist())
masks.extend(output['instances'].pred_masks[take].cpu().numpy())
assert len(classes) == len(masks), 'ensemble length mismatch'
#scores, classes, bboxes, masks = zip(*sorted(zip(scores, classes, bboxes, masks),reverse=True))
return classes, scores, bboxes, masks
def nms_predictions(classes, scores, bboxes, masks,
iou_th=.5, shape=(520, 704)):
he, wd = shape[0], shape[1]
boxes_list = [[[x[0] / wd, x[1] / he, x[2] / wd, x[3] / he] for x in bboxes]]
scores_list = [[x for x in scores]]
classes_list = [[x for x in classes]]
nms_bboxes, nms_scores, nms_classes = non_maximum_weighted(
boxes_list,
scores_list,
classes_list,
weights=None,
iou_thr=iou_th, skip_box_thr=0.0001
)
nms_masks = []
for s in nms_scores:
nms_masks.append(masks[scores.index(s)])
nms_scores, nms_classes, nms_masks = zip(*sorted(zip(nms_scores, nms_classes, nms_masks), reverse=True))
return nms_classes, nms_scores, nms_masks
def ensemble_pred_masks(masks, classes, min_pixels, shape=(520, 704)):
result = []
# int(...) handles both plain ints (single-model path) and numpy scalars (NMS path)
pred_class = int(max(set(classes), key=classes.count))
used = np.zeros(shape, dtype=int)
for i, mask in enumerate(masks):
mask = mask * (1 - used)
if mask.sum() >= min_pixels[pred_class]:
used += mask
result.append(rle_encode(mask))
return result
```
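As a quick sanity check, the encode/decode pair should round-trip a binary mask exactly. The snippet below repeats the two helpers in compact form so it stands alone (same row-major semantics as the functions above):

```python
import numpy as np

def rle_encode(img):
    # run-length encode a binary mask (row-major flatten, 1-based starts)
    pixels = np.concatenate([[0], img.flatten(), [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]
    return ' '.join(str(x) for x in runs)

def rle_decode(mask_rle, shape):
    s = mask_rle.split()
    starts = np.asarray(s[0::2], dtype=int) - 1
    lengths = np.asarray(s[1::2], dtype=int)
    img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    for lo, length in zip(starts, lengths):
        img[lo:lo + length] = 1
    return img.reshape(shape)

mask = np.zeros((4, 5), dtype=np.uint8)
mask[1:3, 1:4] = 1                      # a small 2x3 blob
rle = rle_encode(mask)
print(rle)                              # 7 3 12 3
assert np.array_equal(rle_decode(rle, mask.shape), mask)
```

If the assertion holds for masks of this shape, the submission encoding and the local decoding agree.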
## Demo inference
```
test_names = os.listdir(SUBM_PATH)
print('test images:', len(test_names))
encoded_masks_single = pred_masks(
test_names[ID_TEST],
path=SUBM_PATH,
model=MODELS[0],
ths=THSS[0],
min_pixels=MIN_PIXELS
)
classes, scores, bboxes, masks = ensemble_preds(
file_name=test_names[ID_TEST] ,
path=SUBM_PATH,
models=MODELS,
ths=THSS
)
if NMS:
classes, scores, masks = nms_predictions(
classes,
scores,
bboxes,
masks, iou_th=IOU_TH
)
encoded_masks = ensemble_pred_masks(masks, classes, min_pixels=MIN_PIXELS)
_, axs = plt.subplots(2, 2, figsize=(14, 8))
axs[0][0].imshow(cv2.imread(f'{SUBM_PATH}/{test_names[ID_TEST]}'))
axs[0][0].axis('off')
axs[0][0].set_title(test_names[ID_TEST])
for en_mask in encoded_masks_single:
dec_mask = rle_decode(en_mask)
axs[0][1].imshow(np.ma.masked_where(dec_mask == 0, dec_mask))
axs[0][1].axis('off')
axs[0][1].set_title('single model')
axs[1][0].imshow(cv2.imread(f'{SUBM_PATH}/{test_names[ID_TEST]}'))
axs[1][0].axis('off')
axs[1][0].set_title(test_names[ID_TEST])
for en_mask in encoded_masks:
dec_mask = rle_decode(en_mask)
axs[1][1].imshow(np.ma.masked_where(dec_mask == 0, dec_mask))
axs[1][1].axis('off')
axs[1][1].set_title('ensemble models')
plt.show()
```
## Inference
```
subm_ids, subm_masks = [], []
for test_name in tqdm(test_names):
if SINGLE_MODE:
encoded_masks = pred_masks(
test_name,
path=SUBM_PATH,
model=MODELS[0],
ths=THSS[0],
min_pixels=MIN_PIXELS
)
else:
classes, scores, bboxes, masks = ensemble_preds(
file_name=test_name,
path=SUBM_PATH,
models=MODELS,
ths=THSS
)
if NMS:
classes, scores, masks = nms_predictions(
classes,
scores,
bboxes,
masks,
iou_th=IOU_TH
)
encoded_masks = ensemble_pred_masks(
masks,
classes,
min_pixels=MIN_PIXELS
)
for enc_mask in encoded_masks:
subm_ids.append(test_name[:test_name.find('.')])
subm_masks.append(enc_mask)
pd.DataFrame({
'id': subm_ids,
'predicted': subm_masks
}).to_csv('submission.csv', index=False)
pd.read_csv('submission.csv').head()
```
# References
1. https://www.kaggle.com/vgarshin/detectron2-inference-with-ensemble-and-nms
<div style='background-image: url("share/baku.jpg") ; padding: 0px ; background-size: cover ; border-radius: 15px ; height: 250px; background-position: 0% 80%'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.9) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.9) ; line-height: 100%">ObsPy Tutorial</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.7)">Introduction to file formats and reading/writing data with ObsPy</div>
</div>
</div>
</div>
image: User:Abbaszade656 / Wikimedia Commons / <a href="http://creativecommons.org/licenses/by-sa/4.0/">CC-BY-SA-4.0</a>
## Workshop for the "Training in Network Management Systems and Analytical Tools for Seismic"
### Baku, October 2018
Seismo-Live: http://seismo-live.org
##### Authors:
* Lion Krischer ([@krischer](https://github.com/krischer))
* Tobias Megies ([@megies](https://github.com/megies))
---

While oftentimes not taught, it is important to understand the types of data available in seismology, at least at a basic level. The notebook at hand teaches you how to use different types of seismological data in ObsPy.
**This notebook aims to give a quick introductions to ObsPy's core functions and classes. Everything here will be repeated in more detail in later notebooks.**
Using ObsPy revolves around three central functions and the objects they return. Three types of data are classically distinguished in observational seismology, and each of these maps to one function in ObsPy:
* `obspy.read()`: Reads waveform data to `obspy.Stream` and `obspy.Trace` objects.
* `obspy.read_inventory()`: Reads station information to `obspy.Inventory` objects.
* `obspy.read_events()`: Reads event data to `obspy.Catalog` objects.
The specific format of each of these types of data is automatically determined; each function supports reading from URLs, various compression formats, in-memory files, and other sources. Many different file formats can also be written out again. The resulting objects enable the manipulation of the data in various ways.
One of the main goals of ObsPy is to free researchers from having to worry about which format their data comes in, so they can focus on the task at hand.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = 12, 8
```
## FDSN/SEED Identifiers
According to the [SEED standard](https://www.fdsn.org/seed_manual/SEEDManual_V2.4.pdf), which is fairly well adopted, the following nomenclature is used to identify seismic receivers:
* **Network identifier**: Identifies the network/owner of the data. Assigned by the FDSN and thus unique.
* **Station identifier**: The station within a network. *NOT UNIQUE IN PRACTICE!* Always use together with a network code!
* **Location identifier**: Identifies different data streams within one station. Commonly used to logically separate multiple instruments at a single station.
* **Channel identifier**: Three character code: 1) Band and approximate sampling rate, 2) The type of instrument, 3) The orientation
This results in full ids of the form **NET.STA.LOC.CHA**, e.g. **BK.HELL.00.BHZ**. *(ObsPy uses this ordering, putting the station identifier first is also fairly common in other packages.)*
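Since a full id is just four dot-separated fields, it can be taken apart with plain string handling. A minimal pure-Python sketch using the example id from above (the band/instrument/orientation glosses in the comment follow the SEED channel-naming convention):

```python
seed_id = "BK.HELL.00.BHZ"
network, station, location, channel = seed_id.split(".")

# The three-character channel code breaks down position by position:
# 'B' = broadband sampling rate, 'H' = high-gain seismometer, 'Z' = vertical component
band, instrument, orientation = channel

print(network, station, location, channel)  # BK HELL 00 BHZ
```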
---
In seismology we generally distinguish between three separate types of data:
1. **Waveform Data** - The actual waveforms as time series.
2. **Station Data** - Information about the stations' operators, geographical locations, and the instruments' responses.
3. **Event Data** - Information about earthquakes and other seismic sources.
Some formats have elements of two or more of these.
## Waveform Data

There are a myriad of waveform data formats, but in Europe and the USA, two formats dominate: **MiniSEED** and **SAC**
### MiniSEED
* This is what you get from datacenters and usually also what they store, thus the original data
* Most useful as a streaming and archival format
* Can store integers and single/double precision floats
* Integer data (e.g. counts from a digitizer) are heavily compressed: a factor of 3-5 depending on the data
* Can deal with gaps and overlaps
* Multiple components per file
* Contains only the really necessary parameters and some information for the network operators and data providers
```
# To use ObsPy, you always have to import it first.
import obspy
# ObsPy automatically detects the file format. ALWAYS
# (no matter the data format) results in Stream object.
st = obspy.read("data/example.mseed")
# Printing an object usually results in some kind of
# informative string.
print(st)
# Format specific information is an attribute named
# after the format.
print(st[0].stats.mseed)
# Use the .plot() method for a quick preview plot.
st.plot()
# This is a quick interlude to teach you the basics of how to work
# with Stream/Trace objects.
# Most operations work in-place, e.g. they modify the existing
# objects. This is done for performance reasons and to match
# typical seismological workflows.
# We'll create a copy here - otherwise multiple executions of this
# notebook cell will keep modifying the data - one of the caveats
# of the notebooks.
st2 = st.copy()
# To use only parts of a Stream, use the select() function.
print(st2.select(component="Z"))
# Stream objects behave like a list of Trace objects.
tr = st2[0]
# Plotting also works for single traces.
tr.plot()
# Some basic processing. Please note that these modify the
# existing object.
tr.detrend("linear")
tr.taper(type="hann", max_percentage=0.05)
tr.filter("lowpass", freq=0.05, corners=4)
# Plot again.
tr.plot()
# You can write it again by simply specifying the format.
st.write("temp.mseed", format="mseed")
```
### SAC
* Custom format of the `SAC` code.
* Simple header and single precision floating point data.
* Only a single component per file and no concept of gaps/overlaps.
* Used a lot due to `SAC` being very popular and the additional basic information that can be stored in the header.
* ObsPy internally uses a `SACTrace` object to do this which can also be directly used if full control over SAC files is desired/necessary.
```
st = obspy.read("data/example.sac")
print(st)
# SAC specific information is stored in
# the .sac attribute.
st[0].stats.sac.__dict__
st.plot()
# You can once again write it with the write() method.
st.write("temp.sac", format="sac")
```
## Station Data

Station data contains information about the organization that collects the data, geographical information, as well as the instrument response. It mainly comes in three formats:
* `(dataless)SEED`: Very complete but a complex binary format. Still used a lot, e.g. for the Arclink protocol
* `RESP`: A strict subset of SEED. ASCII based. Contains **ONLY** the response.
* `StationXML`: Essentially like SEED but cleaner and based on XML. Most modern format and what the datacenters nowadays serve. **Use this if you can.**
ObsPy can work with all of them in the same fashion, but today we will focus on StationXML.
They are XML files:
```
!head data/all_stations.xml
import obspy
# Use the read_inventory function to open them. This function
# will return Inventory objects.
inv = obspy.read_inventory("data/all_stations.xml")
print(inv)
```
You can see that they can contain an arbitrary number of networks, stations, and channels.
```
# ObsPy is also able to plot a map of them.
inv.plot(projection="local");
# As well as a plot the instrument response.
inv.select(network="BK", station="HELL").plot_response(0.001);
# Coordinates of single channels can also be extraced. This function
# also takes a datetime arguments to extract information at different
# points in time.
inv.get_coordinates("BK.HELL.00.BHZ")
# And it can naturally be written again, also in a modified state.
inv.select(channel="BHZ").write("temp.xml", format="stationxml")
```
## Event Data

Event data is often served in very simple formats like NDK or the CMTSOLUTION format used by many waveform solvers:
```
!cat data/GCMT_2014_08_24__Mw_6_1
```
Datacenters, on the other hand, offer **QuakeML** files, which are surprisingly complex in structure but can store detailed relations between pieces of data.
```
# Read QuakeML files with the read_events() function.
cat = obspy.read_events("data/GCMT_2014_08_24__Mw_6_1.xml")
print(cat)
print(cat[0])
cat.plot(projection="ortho");
# Once again they can be written with the write() function.
cat.write("temp_quake.xml", format="quakeml")
```
To show off some more things, I added a file containing all events the USGS has for a three degree radius around Livermore for the last three years.
```
import obspy
cat = obspy.read_events("data/last_three_years_around_livermore.zmap")
print(cat)
cat.plot(projection="local", resolution="i");
cat.filter("magnitude > 3")
```
# GPU-accelerated interactive visualization of single cells with RAPIDS, Scanpy and Plotly Dash
Copyright (c) 2020, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
In this notebook, we cluster cells based on a single-cell RNA-seq count matrix, and produce an interactive visualization of the clustered cells that allows for further analysis of the data in a browser window.
For demonstration purposes, we use a dataset of ~70,000 human lung cells from Travaglini et al. 2020 (https://www.biorxiv.org/content/10.1101/742320v2) and label cells using the ACE2, TMPRSS2, and EPCAM marker genes.
## Import requirements
```
import scanpy as sc
import anndata
import sys
import time
import os, wget
import cudf
import cupy as cp
from cuml.decomposition import PCA
from cuml.manifold import TSNE
from cuml.cluster import KMeans
from cuml.preprocessing import StandardScaler
import rapids_scanpy_funcs
import warnings
warnings.filterwarnings('ignore', 'Expected ')
```
We use the RAPIDS memory manager on the GPU to control how memory is allocated.
```
import rmm
rmm.reinitialize(
managed_memory=True, # Allows oversubscription
pool_allocator=False, # default is False
devices=0, # GPU device IDs to register. By default registers only GPU 0.
)
cp.cuda.set_allocator(rmm.rmm_cupy_allocator)
```
## Input data
In the cell below, we provide the path to the `.h5ad` file containing the count matrix to analyze. Please see the README for instructions on how to download the dataset we use here.
We recommend saving count matrices in the sparse .h5ad format as it is much faster to load than a dense CSV file. To run this notebook using your own dataset, please see the README for instructions to convert your own count matrix into this format. Then, replace the path in the cell below with the path to your generated `.h5ad` file.
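The speed difference comes from sparsity: single-cell count matrices are mostly zeros, so a compressed sparse row (CSR) representation stores only the non-zero entries. A small illustration with synthetic data (not the dataset used here):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
# Low-rate Poisson counts mimic a sparse expression matrix: most entries are zero
dense = rng.poisson(0.05, size=(1000, 500)).astype(np.float32)
csr = sparse.csr_matrix(dense)

fill = csr.nnz / (dense.shape[0] * dense.shape[1])
print(f"non-zero fraction: {fill:.3f}")  # only a few percent of entries are stored
```

An `anndata.AnnData` object built around such a CSR matrix can then be saved with its `.write()` method to produce the `.h5ad` file.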
```
input_file = "../data/krasnow_hlca_10x.sparse.h5ad"
if not os.path.exists(input_file):
print('Downloading import file...')
os.makedirs('../data', exist_ok=True)
wget.download('https://rapids-single-cell-examples.s3.us-east-2.amazonaws.com/krasnow_hlca_10x.sparse.h5ad',
input_file)
```
## Set parameters
```
# marker genes
RIBO_GENE_PREFIX = "RPS" # Prefix for ribosomal genes to regress out
markers = ["ACE2", "TMPRSS2", "EPCAM"] # Marker genes for visualization
# filtering cells
min_genes_per_cell = 200 # Filter out cells with fewer genes than this expressed
max_genes_per_cell = 6000 # Filter out cells with more genes than this expressed
# filtering genes
min_cells_per_gene = 1 # Filter out genes expressed in fewer cells than this
n_top_genes = 5000 # Number of highly variable genes to retain
# PCA
n_components = 50 # Number of principal components to compute
# KNN
n_neighbors = 15 # Number of nearest neighbors for KNN graph
knn_n_pcs = 50 # Number of principal components to use for finding nearest neighbors
# UMAP
umap_min_dist = 0.3
umap_spread = 1.0
```
## Load and Prepare Data
We load the sparse count matrix from an `h5ad` file using Scanpy. The sparse count matrix will then be placed on the GPU.
```
%%time
adata = sc.read(input_file)
adata.shape
%%time
genes = cudf.Series(adata.var_names)
barcodes = cudf.Series(adata.obs_names)
sparse_gpu_array = cp.sparse.csr_matrix(adata.X)
```
## Preprocessing
### Filter
We filter the count matrix to remove cells with an extreme number of genes expressed.
```
%%time
sparse_gpu_array, barcodes = rapids_scanpy_funcs.filter_cells(sparse_gpu_array, min_genes=min_genes_per_cell, max_genes=max_genes_per_cell, barcodes=barcodes)
```
Some genes will now have zero expression in all cells. We filter out such genes.
```
%%time
sparse_gpu_array, genes = rapids_scanpy_funcs.filter_genes(sparse_gpu_array, genes, min_cells=1)
```
The size of our count matrix is now reduced.
```
sparse_gpu_array.shape
```
### Normalize
We normalize the count matrix so that the total counts in each cell sum to 1e4.
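The operation itself is just a per-row rescaling; in plain NumPy terms (a sketch of the idea, not the GPU implementation used below):

```python
import numpy as np

counts = np.array([[1., 3., 6.],
                   [2., 0., 2.]])
row_sums = counts.sum(axis=1, keepdims=True)   # total counts per cell
normalized = counts / row_sums * 1e4           # each cell now sums to 1e4
print(normalized.sum(axis=1))  # [10000. 10000.]
```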
```
%%time
sparse_gpu_array = rapids_scanpy_funcs.normalize_total(sparse_gpu_array, target_sum=1e4)
```
Next, we log transform the count matrix.
```
%%time
sparse_gpu_array = sparse_gpu_array.log1p()
```
### Select Most Variable Genes
We first save the 'raw' expression values of the ACE2 and TMPRSS2 genes to use for labeling cells afterward. We will also store the expression of an epithelial marker gene (EPCAM).
```
%%time
tmp_norm = sparse_gpu_array.tocsc()
marker_genes_raw = {
("%s_raw" % marker): tmp_norm[:, genes[genes == marker].index[0]].todense().ravel()
for marker in markers
}
del tmp_norm
```
We identify the top 5000 variable genes using the `cellranger` method.
```
%%time
hvg = rapids_scanpy_funcs.highly_variable_genes(sparse_gpu_array, genes, n_top_genes=5000)
```
We filter the count matrix to retain only the 5000 most variable genes.
```
%%time
sparse_gpu_array = sparse_gpu_array[:, hvg]
genes = genes[hvg].reset_index(drop=True)
sparse_gpu_array.shape
```
### Regress out confounding factors (number of counts, ribosomal gene expression)
We can now perform regression on the count matrix to correct for confounding factors - for example purposes, we use the number of counts and the expression of ribosomal genes. Many workflows use the expression of mitochondrial genes (named starting with `MT-`).
We now calculate the total counts and the percentage of ribosomal counts for each cell.
```
%%time
ribo_genes = genes.str.startswith(RIBO_GENE_PREFIX)
n_counts = sparse_gpu_array.sum(axis=1)
percent_ribo = (sparse_gpu_array[:,ribo_genes].sum(axis=1) / n_counts).ravel()
n_counts = cp.array(n_counts).ravel()
percent_ribo = cp.array(percent_ribo).ravel()
```
And perform regression:
```
%%time
sparse_gpu_array = rapids_scanpy_funcs.regress_out(sparse_gpu_array, n_counts, percent_ribo)
```
### Scale
Finally, we scale the count matrix to obtain a z-score and apply a cutoff value of 10 standard deviations, obtaining the preprocessed count matrix.
```
%%time
sparse_gpu_array = StandardScaler().fit_transform(sparse_gpu_array).clip(a_max=10)
```
## Cluster & Visualize
We store the preprocessed count matrix as an AnnData object, which is currently in host memory.
We also add the barcodes of the filtered cells, and the expression levels of the marker genes, to the annData object.
```
%%time
adata = anndata.AnnData(sparse_gpu_array.get())
adata.var_names = genes.to_pandas()
adata.obs_names = barcodes.to_pandas()
for name, data in marker_genes_raw.items():
adata.obs[name] = data.get()
```
### Reduce
We use PCA to reduce the dimensionality of the matrix to its top 50 principal components.
```
%%time
adata.obsm["X_pca"] = PCA(n_components=n_components, output_type="numpy").fit_transform(adata.X)
```
### UMAP + clustering
We visualize the cells using the UMAP algorithm in Rapids. Before UMAP, we need to construct a k-nearest neighbors graph in which each cell is connected to its nearest neighbors. This can be done conveniently using rapids functionality already integrated into Scanpy.
Note that Scanpy uses an approximation to the nearest neighbors on the CPU while the GPU version performs an exact search. While both methods are known to yield useful results, some differences in the resulting visualization and clusters can be observed.
```
%%time
sc.pp.neighbors(adata, n_neighbors=n_neighbors, n_pcs=knn_n_pcs, method='rapids')
```
The UMAP function from Rapids is also integrated into Scanpy.
```
%%time
sc.tl.umap(adata, min_dist=umap_min_dist, spread=umap_spread, method='rapids')
```
Finally, we use the Leiden algorithm for graph-based clustering.
```
%%time
adata.obs['leiden'] = rapids_scanpy_funcs.leiden(adata, resolution=0.1)
```
We plot the cells using the UMAP visualization, using the Leiden clusters as labels.
```
sc.pl.umap(adata, color=["leiden"])
```
## Defining re-clustering function for interactive visualization
As we have shown above, the speed of RAPIDS allows us to run steps like dimension reduction, clustering and visualization in seconds or even less. In the sections below, we create an interactive visualization that takes advantage of this speed by allowing users to cluster and analyze selected groups of cells at the click of a button.
First, we create a function named `re_cluster`. This function can be called on selected groups of cells. According to the function defined below, PCA, KNN, UMAP and Leiden clustering will be re-computed on the selected cells. You can customize this function for your desired analysis.
```
def re_cluster(adata):
#### Function to repeat clustering and visualization on subsets of cells
#### Runs PCA, KNN, UMAP and Leiden clustering on selected cells.
adata.obsm["X_pca"] = PCA(n_components=n_components, output_type="numpy").fit_transform(adata.X)
sc.pp.neighbors(adata, n_neighbors=n_neighbors, n_pcs=knn_n_pcs, method='rapids')
sc.tl.umap(adata, min_dist=umap_min_dist, spread=umap_spread, method='rapids')
adata.obs['leiden'] = rapids_scanpy_funcs.leiden(adata)
return adata
```
## Creating an interactive visualization with Plotly Dash
<img src="https://github.com/clara-parabricks/rapids-single-cell-examples/blob/master/images/dashboard.png?raw=true" alt="Interactive Dashboard" width="400"/>
Below, we create the interactive visualization using the `adata` object and the re-clustering function defined above. To learn more about how this visualization is built, see `visualize.py`.
When you run the cell below, it returns a link. Click on this link to access the interactive visualization within your browser.
Once opened, click the `Directions` button for instructions.
```
import visualize
v = visualize.Visualization(adata, markers, re_cluster_callback=re_cluster)
v.start('0.0.0.0')
selected_cells = v.new_df
```
Within the dashboard, you can select cells using a variety of methods. You can then cluster, visualize and analyze the selected cells using the tools provided. Click on the `Directions` button for details.
To export the selected cells and the results of your analysis back to the notebook, click the `Export to Dataframe` button. This exports the results of your analysis back to this notebook, and closes the interactive dashboard.
See the next section for instructions on how to use the exported data.
## Exporting a selection of cells from the dashboard
If you exported a selection of cells from the interactive visualization, your selection will be available here as a data frame named `selected_cells`. The `labels` column of this dataframe contains the newly generated cluster labels assigned to these selected cells.
```
print(selected_cells.shape)
selected_cells.head()
```
You can link the selected cells to the original `adata` object using the cell barcodes.
```
adata_selected_cells = adata[selected_cells.barcode.to_array(),:]
adata_selected_cells
```
```
import numpy as np
import urdf2casadi.urdfparser as u2c
from urdf2casadi.geometry import plucker
from urdf_parser_py.urdf import URDF, Pose
import PyKDL as kdl
import kdl_parser_py.urdf as kdlurdf
from timeit import Timer, timeit, repeat
import rbdl
import pybullet as pb
def median(lst):
n = len(lst)
if n < 1:
return None
if n % 2 == 1:
return sorted(lst)[n//2]
else:
return sum(sorted(lst)[n//2-1:n//2+1])/2.0
def average(lst):
return sum(lst) / len(lst)
def ID_u2c_func():
for j in range(njoints):
q_none[j] = (qmax[j] - qmin[j])*np.random.rand()-(qmax[j] - qmin[j])/2
qdot_none[j] = (qmax[j] - qmin[j])*np.random.rand()-(qmax[j] - qmin[j])/2
qddot_none[j] = (qmax[j] - qmin[j])*np.random.rand()-(qmax[j] - qmin[j])/2
ID_u2c(q_none, qdot_none, qddot_none)
def ID_pb_func():
for j in range(njoints):
q_none[j] = (qmax[j] - qmin[j])*np.random.rand()-(qmax[j] - qmin[j])/2
qdot_none[j] = (qmax[j] - qmin[j])*np.random.rand()-(qmax[j] - qmin[j])/2
qddot_none[j] = (qmax[j] - qmin[j])*np.random.rand()-(qmax[j] - qmin[j])/2
ID_pb = pb.calculateInverseDynamics(pbmodel, q_none, qdot_none, qddot_none)
def ID_rbdl_func():
for j in range(njoints):
q_np[j] = (qmax[j] - qmin[j])*np.random.rand()-(qmax[j] - qmin[j])/2
qdot_np[j] = (qmax[j] - qmin[j])*np.random.rand()-(qmax[j] - qmin[j])/2
qddot_np[j] = (qmax[j] - qmin[j])*np.random.rand()-(qmax[j] - qmin[j])/2
rbdl.InverseDynamics(rbdlmodel, q_np, qdot_np, qddot_np, ID_rbdl)
#pbmodel = pb.loadURDF("pantilt.urdf")
sim = pb.connect(pb.DIRECT)
#pbmodel = pb.loadURDF("/urdf4timing/1dof.urdf")
ndofs = 60
urdf_nr = list(range(ndofs + 1))
urdf_nr.pop(0)
#storage for timing data
mediantime_kdl = [None]*ndofs
mediantime_u2c = [None]*ndofs
mediantime_rbdl = [None]*ndofs
mediantime_pb = [None]*ndofs
mintime_kdl = [None]*ndofs
mintime_u2c = [None]*ndofs
mintime_rbdl = [None]*ndofs
mintime_pb = [None]*ndofs
averagetime_kdl = [None]*ndofs
averagetime_u2c = [None]*ndofs
averagetime_rbdl = [None]*ndofs
averagetime_pb = [None]*ndofs
nitr = 1
nrepeat = 10
for i in range(ndofs):
print(i)
path_to_urdf = "/home/lmjohann/urdf2casadi/examples/timing/urdf4timing/" + str(urdf_nr[i]) + "dof.urdf"
#path_to_urdf = str(urdf_nr[i]) + "dof.urdf"
end_link = "link" + str(urdf_nr[i])
root = 'base_link'
tip = end_link
rbdlmodel = rbdl.loadModel(path_to_urdf)
pbmodel = pb.loadURDF(path_to_urdf, useFixedBase=True)
asd = u2c.URDFparser()
robot_desc = asd.from_file(path_to_urdf)
jlist, names, qmax, qmin = asd.get_joint_info(root, tip)
njoints = len(jlist)
gravity = [0, 0, -9.81]
#rbdl declarations
q_np = np.zeros(njoints)
qdot_np = np.zeros(njoints)
qddot_np = np.zeros(njoints)
ID_rbdl = np.zeros(njoints)
#u2c and pybullet declarations
q_none = [None]*njoints
qdot_none = [None]*njoints
qddot_none = [None]*njoints
ID_pb = [None]*njoints
ID_u2c = asd.get_inverse_dynamics_rnea(root, tip, gravity)
timeit_u2c = repeat("ID_u2c_func()", setup = "from __main__ import ID_u2c_func", repeat = nrepeat, number = nitr)
mintime_u2c[i] = min(timeit_u2c)
mediantime_u2c[i] = median(timeit_u2c)
averagetime_u2c[i] = average(timeit_u2c)
#timeit_pb = repeat("ID_pb_func()", setup = "from __main__ import ID_pb_func", repeat = nrepeat, number = nitr)
#mintime_pb[i] = min(timeit_pb)
#mediantime_pb[i] = median(timeit_pb)
#averagetime_pb[i] = average(timeit_pb)
#timeit_rbdl = repeat("ID_rbdl_func()", setup = "from __main__ import ID_rbdl_func", repeat = nrepeat, number = nitr)
#mintime_rbdl[i] = min(timeit_rbdl)
#mediantime_rbdl[i] = median(timeit_rbdl)
#averagetime_rbdl[i] = average(timeit_rbdl)
import matplotlib.pyplot as plt
import pandas as pd
for i in range(ndofs):
mediantime_u2c[i] = mediantime_u2c[i]/nitr*1000000
#mediantime_pb[i] = mediantime_pb[i]/nitr*1000000
# mediantime_rbdl[i] = mediantime_rbdl[i]/nitr*1000000
# mediantime_kdl[i] = mediantime_kdl[i]/nitr*1000000
joint_nr = list(range(ndofs + 1))
joint_nr.pop(0)
med_fig = plt.figure(figsize = (10, 10))
#plt.scatter(joint_nr, mediantime_kdl, c='lightseagreen', label='kdl')
#plt.scatter(joint_nr, mediantime_u2c, c='hotpink', label='u2c')
plt.scatter(joint_nr, mediantime_u2c, label='u2c')
plt.scatter(joint_nr, mediantime_rbdl, label='RBDL')
plt.scatter(joint_nr, mediantime_pb, label='pybullet')
plt.xlabel("number of joints")
plt.ylabel("time (sek)")
plt.legend(loc='upper left')
med_fig.show()
timing_scores = [mediantime_pb, mediantime_rbdl, mediantime_u2c]
names = ["pybullet", "RBDL", "U2C"]#, "RBDL", "pybullet"]
boxplot = plt.figure(figsize = (8, 8))
boxplot.suptitle("Library Timing Comparison")
ax = boxplot.add_subplot(111)
ax.boxplot(timing_scores)
#plt.boxplot(timing_scores)
ax.set_xticklabels(names)
print "mediantime_u2c = ",averagetime_u2c
print "mediantime_kdl = ",averagetime_kdl
print "mediantime_rbdl = ",averagetime_rbdl
print "mediantime_pb = ",averagetime_pb
```
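The timing methodology above (repeat a call many times, then summarize with min/median/average) can be reproduced with the standard library alone; `statistics` provides the summary functions the notebook hand-rolls:

```python
import statistics
import timeit

# Time a cheap stand-in workload: 10 repeats of 100 calls each
samples = timeit.repeat("sum(range(1000))", repeat=10, number=100)
per_call_us = [s / 100 * 1e6 for s in samples]  # microseconds per call

best = min(per_call_us)           # least-noisy estimate
med = statistics.median(per_call_us)
avg = statistics.mean(per_call_us)
print(f"min {best:.2f} µs, median {med:.2f} µs, mean {avg:.2f} µs")
```

The minimum is usually the most stable statistic, since timing noise only ever adds to a sample.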
Guilherme Andrade, Gabriel Ramos, Daniel Madeira, Rafael Sachetto, Renato Ferreira, Leonardo Rocha, G-DBSCAN: A GPU Accelerated Algorithm for Density-based Clustering, Procedia Computer Science, Volume 18, 2013, Pages 369-378, ISSN 1877-0509, http://dx.doi.org/10.1016/j.procs.2013.05.200.
(http://www.sciencedirect.com/science/article/pii/S1877050913003438)
Abstract: With the advent of Web 2.0, we see a new and differentiated scenario: there is more data than that can be effectively analyzed. Organizing this data has become one of the biggest problems in Computer Science. Many algorithms have been proposed for this purpose, highlighting those related to the Data Mining area, specifically the clustering algorithms. However, these algorithms are still a computational challenge because of the volume of data that needs to be processed. We found in the literature some proposals to make these algorithms feasible, and, recently, those related to parallelization on graphics processing units (GPUs) have presented good results. In this work we present the G-DBSCAN, a GPU parallel version of one of the most widely used clustering algorithms, the DBSCAN. Although there are other parallel versions of this algorithm, our technique distinguishes itself by the simplicity with which the data are indexed, using graphs, allowing various parallelization opportunities to be explored. In our evaluation we show that the G-DBSCAN using GPU, can be over 100x faster than its sequential version using CPU.
Keywords: Clustering; Dbscan; Parallel computing; GPU
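The graph representation that makes this parallelizable is a compressed adjacency layout: `Va` holds, per vertex, its neighbor count and the offset into a flat neighbor array `Ea`, where the offsets are a cumulative sum of the counts. A tiny NumPy illustration of the indexing scheme used in the code below:

```python
import numpy as np

degrees = np.array([2, 1, 3])           # neighbors per vertex (Va[:, 0])
offsets = np.cumsum(degrees) - degrees  # start of each vertex's slice (Va[:, 1]) -> [0 2 3]
Ea = np.array([1, 2, 0, 0, 1, 2])       # flattened neighbor lists

# Neighbors of vertex 2 live in Ea[offsets[2] : offsets[2] + degrees[2]]
print(Ea[offsets[2]: offsets[2] + degrees[2]])  # [0 1 2]
```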
```
import numpy as np
eps = 0.3
minpts = 10
from sklearn.datasets import make_blobs
centers = [[1, 1], [-1, -1], [1, -1]]
d, labels_true = make_blobs(n_samples=250, centers=centers, cluster_std=0.3,
random_state=0)
from sklearn.metrics.pairwise import euclidean_distances
core = np.zeros(d.shape[0])
Va = np.zeros( (d.shape[0], 2) )
for i in range(d.shape[0]):
num_negh = 0
for j in range(d.shape[0]):
dist = euclidean_distances(d[i].reshape(1, -1), d[j].reshape(1,-1))[0]
if dist < eps:
num_negh += 1
Va[i][0] = num_negh
if num_negh >= minpts:
core[i] = 1
Va[:,0]
Va[:,1] = np.cumsum(Va[:,0])-Va[:,0]
Ea = np.zeros( int(Va[:,1][-1]) + int(Va[:,0][-1]) )
for i in range(d.shape[0]):
ni = 0
for j in range(d.shape[0]):
        dist = euclidean_distances(d[i].reshape(1, -1), d[j].reshape(1,-1))[0, 0]
if dist < eps:
#print(i, j, Va[i], int(Va[i][1])+ni)
Ea[int(Va[i][1])+ni] = j
ni += 1
Ea
def BreadthFirstSearchKernel(j, Fa, Xa):
tid = j
if Fa[tid]:
#print("tid", tid, Fa[tid])
Fa[tid] = 0
Xa[tid] = 1
#print("rng from", int(Va[j][1]), "count", int(Va[j][0]))
for k in range(int(Va[j][1]), int(Va[j][1])+int(Va[j][0])):
nid = int(Ea[k])
if not Xa[nid]:
#print(k, nid, not Xa[nid])
Fa[nid] = 1
def BreadthFirstSearch(i, cluster, visited, labels):
Xa = np.zeros(d.shape[0])
Fa = np.zeros(d.shape[0])
Fa[i] = 1
while np.count_nonzero(Fa) > 0:
#print("Count nonzero", np.count_nonzero(Fa))
for j in range(d.shape[0]):
BreadthFirstSearchKernel(j, Fa, Xa)
print("Count nonzero", np.count_nonzero(Fa))
for j in range(d.shape[0]):
if Xa[j]:
visited[j] = 1
labels[j] = cluster
print("Cluster assign", j, cluster)
def IdentifyCluster():
cluster = 0
labels = np.full( d.shape[0], -1 )
visited = np.zeros(d.shape[0])
for i in range(d.shape[0]):
if visited[i]:
continue
if not core[i]:
continue
print("Core ", i)
visited[i] = 1
labels[i] = cluster
BreadthFirstSearch(i, cluster, visited, labels)
cluster += 1
return (cluster, labels)
(cluster, labels) = IdentifyCluster()
import pandas as pd
df = pd.DataFrame.from_records( list(
map( lambda i: ( d[i][0], d[i][1], labels[i]), range(d.shape[0])) ), columns=['x', 'y', 'class'] )
%matplotlib inline
import seaborn as sns
sns.pairplot(df, x_vars=['x'], y_vars=['y'], hue="class", size=7)
```
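As a sanity check, the clustering above can be compared against scikit-learn's reference `DBSCAN` on the same synthetic blobs. This is a sketch; note that scikit-learn uses `dist <= eps` and counts the point itself toward `min_samples`, so small boundary differences from the strict `dist < eps` test above are possible.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score

# Same synthetic data as above
centers = [[1, 1], [-1, -1], [1, -1]]
d, labels_true = make_blobs(n_samples=250, centers=centers,
                            cluster_std=0.3, random_state=0)

# Reference clustering with scikit-learn's DBSCAN
ref = DBSCAN(eps=0.3, min_samples=10).fit(d)

# Agreement with the ground-truth blob labels, up to label permutation
score = adjusted_rand_score(labels_true, ref.labels_)
print(score)
```

Comparing `ref.labels_` against the `labels` produced by `IdentifyCluster()` is a quick way to catch indexing mistakes in the hand-rolled graph construction.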
| github_jupyter |
# 12.15.1 Getting and Mapping the Tweets
### Get the `API` Object
```
from tweetutilities import get_API
api = get_API()
```
### Collections Required By `LocationListener`
```
tweets = []
counts = {'total_tweets': 0, 'locations': 0}
```
### Creating the `LocationListener`
```
from locationlistener import LocationListener
location_listener = LocationListener(api, counts_dict=counts,
tweets_list=tweets, topic='football', limit=50)
```
### Configure and Start the Stream of Tweets
```
import tweepy
stream = tweepy.Stream(auth=api.auth, listener=location_listener)
```
If you're not receiving any tweets, you might want to cancel the following operation and try a different search term. To terminate in a notebook, select **Kernel > Restart Kernel and Clear All Outputs...**
```
stream.filter(track=['football'], languages=['en'], is_async=False)
```
### Displaying the Location Statistics
```
counts['total_tweets']
counts['locations']
print(f'{counts["locations"] / counts["total_tweets"]:.1%}')
```
### Geocoding the Locations
```
from tweetutilities import get_geocodes
bad_locations = get_geocodes(tweets)
```
### Displaying the Bad Location Statistics
```
bad_locations
print(f'{bad_locations / counts["locations"]:.1%}')
```
### Cleaning the Data
```
import pandas as pd
df = pd.DataFrame(tweets)
df = df.dropna()
```
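`dropna()` discards any tweet whose geocoding failed, i.e. rows where `latitude`/`longitude` are `NaN`. A minimal illustration with hypothetical data:

```python
import numpy as np
import pandas as pd

# Hypothetical tweets: the second location could not be geocoded
sample = pd.DataFrame({
    'screen_name': ['a', 'b', 'c'],
    'latitude': [40.7, np.nan, 34.0],
    'longitude': [-74.0, np.nan, -118.2],
})

cleaned = sample.dropna()  # drops any row containing a NaN
print(len(cleaned))        # 2
```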
### Creating a Map with Folium
```
import folium
usmap = folium.Map(location=[39.8283, -98.5795],
zoom_start=5, detect_retina=True)
```
### Creating Popup Markers for the Tweet Locations
```
for t in df.itertuples():
text = ': '.join([t.screen_name, t.text])
popup = folium.Popup(text, parse_html=True)
marker = folium.Marker((t.latitude, t.longitude),
popup=popup)
marker.add_to(usmap)
```
### Saving the Map
```
usmap.save('tweet_map.html')
```
We added this cell to display the map in the notebook.
```
usmap
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
```
| github_jupyter |
# Building a Custom Model (Siamese)
> fastai proposes the DataBlock API as the way to define data. We look at what each of its arguments means and how they are applied in the official Siamese tutorial.
- author: "Chansung Park"
- toc: true
- image: images/datablock/siamese-model.png
- comments: true
- categories: [model, siamese, fastai]
- permalink: /model-siamese/
- badges: false
- search_exclude: true
```
#hide
!pip install fastai
!pip install nbdev
#hide
from fastai.vision.all import *
import nbdev
#hide
path = untar_data(URLs.PETS)
files = get_image_files(path/"images")
def category_extraction_func(filename):
    return re.match(r'^(.*)_\d+.jpg$', filename.name).groups()[0] # extract and return the part captured by the parentheses
categories = list(set(files.map(category_extraction_func)))
splits = RandomSplitter()(files)
splits_files = [files[splits[i]] for i in range(2)]
splits_sets = mapped(set, splits_files)
splbl2files = [{c: [f for f in s if category_extraction_func(f) == c] for c in categories} for s in splits_sets]
def get_split(filename):
for i, s in enumerate(splits_sets):
if filename in s: return i
    raise ValueError(f'File {filename} is not present in any split.')
def draw_other(filename):
given_category = category_extraction_func(filename)
split = get_split(filename)
is_same = random.random() < 0.5
if not is_same:
other_category = random.choice(L(category for category in categories if category != given_category))
else:
other_category = given_category
return random.choice(splbl2files[split][other_category]), is_same
def get_tuples(filenames):
return [[filename, *draw_other(filename)] for filename in filenames]
class ImageTuple(fastuple):
@classmethod
def create(cls, filenames):
return cls(tuple(PILImage.create(f) for f in filenames))
def show(self, ctx=None, **kwargs):
t1,t2 = self
if not isinstance(t1, Tensor) or \
not isinstance(t2, Tensor) or \
t1.shape != t2.shape:
return ctx
line = t1.new_zeros(t1.shape[0], t1.shape[1], 10)
return show_image(torch.cat([t1,line,t2], dim=2), ctx=ctx, **kwargs)
def ImageTupleBlock():
return TransformBlock(type_tfms=ImageTuple.create,
batch_tfms=IntToFloatTensor)
def get_x(t): return t[:2]
def get_y(t): return t[2]
def splitter(items):
def get_split_files(i):
return [j for j,(f1,f2,same) in enumerate(items) if get_split(f1)==i]
return get_split_files(0),get_split_files(1)
siamese = DataBlock(
    get_items=get_tuples, # function that gathers all the data items
    get_x=get_x, # function that selects the input from each loaded item
    get_y=get_y, # function that selects the target from each loaded item
    blocks=(ImageTupleBlock, CategoryBlock), # given as a tuple; more than two blocks are possible
    item_tfms=Resize(224), # per-item transforms
    batch_tfms=[Normalize.from_stats(*imagenet_stats)], # per-batch transforms
    splitter=splitter # function that splits the train/validation sets
)
#hide
siamese.summary(files)
```
This post covers how to build a model that can consume the **DataBlock** created in the previous post, "[How to Build a DataBlock (Siamese)](https://fast-ai-kr.github.io/tutorials/datablock-siamese/)". In principle it is enough to subclass PyTorch's **nn.Module**, but along the way we will look at a few convenient helpers that **fastai** provides.
## SiameseModel Overview
First, recall what a Siamese model does: it takes two images as input and predicts **True** if they belong to the same class and **False** otherwise.
The code below shows a simple module called **SiameseModel**. It subclasses fastai's **Module** instead of PyTorch's **nn.Module**. The only difference between the two is whether **__init__** needs to call the parent's **super().__init__**: if you are happy to call **super().__init__** yourself, **nn.Module** works just as well.
```
class SiameseModel(Module):
def __init__(self, encoder, head):
self.encoder,self.head = encoder,head
def forward(self, x1, x2):
filters = torch.cat([self.encoder(x1), self.encoder(x2)], dim=1)
return self.head(filters)
```
Check the docstring of **Module** below for the details.
```
nbdev.show_doc(Module)
```
Let's look at how **SiameseModel** is implemented. The **__init__** constructor takes two arguments, **encoder** and **head**. These correspond to the two familiar parts of a typical CNN: the **convolutional layers** that extract features, and the **fully connected layers** that perform classification.
The reason for taking these two pieces separately is **transfer learning**. Transfer learning generally reuses the weights of a pretrained model's **convolutional layers** as-is, inheriting the ability to extract diverse features learned from a huge number of images. That part corresponds to the **encoder**.
However, the problem the pretrained model originally solved and the problem you want to solve now are different: the number of categories to classify differs, and so do the categories themselves. So you design the final **head** to fit your own problem and attach it to the **encoder**.
**fastai** provides a convenient function, **create_body**, for extracting the **encoder** part from a pretrained model, along with **create_head**, which builds a **head** with a commonly used structure. In other words, you extract the **encoder** with **create_body**, then attach the part produced by **create_head** to it.

With this in mind, let's look at the **SiameseModel** implementation once more.
```
class SiameseModel(Module):
def __init__(self, encoder, head):
self.encoder,self.head = encoder,head
def forward(self, x1, x2):
filters = torch.cat([self.encoder(x1), self.encoder(x2)], dim=1)
return self.head(filters)
```
The **__init__** constructor simply takes **encoder** and **head** and stores them in instance variables. **forward** takes two arguments, **x1** and **x2**, the two images to be compared. Each image is passed through the **encoder**, and the two results are concatenated with **torch.cat**, so the output is twice the size of what the **encoder** alone produces. Although there are two outputs, they do not come from two different **encoder**s, so the weights are shared.
The concatenated result is then simply fed into the **head**, and **SiameseModel**'s job is done.
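To see the weight sharing and the doubled feature size concretely, the forward pass can be exercised with a plain PyTorch stand-in for the encoder and head. This is a toy sketch with linear layers, not the pretrained resnet34:

```python
import torch
import torch.nn as nn

class ToySiamese(nn.Module):
    def __init__(self, encoder, head):
        super().__init__()
        self.encoder, self.head = encoder, head

    def forward(self, x1, x2):
        # the same encoder (shared weights) embeds both inputs, so the
        # concatenated features are twice the encoder's output size
        filters = torch.cat([self.encoder(x1), self.encoder(x2)], dim=1)
        return self.head(filters)

encoder = nn.Linear(16, 8)   # stand-in encoder: 16 -> 8 features
head = nn.Linear(16, 2)      # head consumes 2 * 8 concatenated features
model = ToySiamese(encoder, head)

x1, x2 = torch.randn(4, 16), torch.randn(4, 16)
out = model(x1, x2)
print(out.shape)  # torch.Size([4, 2])
```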
## encoder (body) and head
Now our task is to build the **encoder** and the **head**. We could build an **encoder** ourselves and train it from scratch, but excellent models pretrained on ImageNet already exist, such as **ResNet**, **xResNet**, and **EfficientNet**.
**fastai** ships with **ResNet**, **xResNet**, **VGG**, **AlexNet**, **DenseNet**, and **SqueezeNet** out of the box. Among these, **ResNet** actually reuses the models provided by PyTorch, but models obtained through **fastai** come with additional metadata. Let's first see what that metadata means.
### Model Metadata
The following shows the metadata for **resnet34**. Each model's metadata can be looked up through the **model_meta** dictionary that **fastai** provides.
```
model_meta[resnet34]
```
As you can see, it holds information under three keys: **cut**, **split**, and **stats**. They mean the following:
- **cut**
    - The index at which the fully connected layers begin in the CNN. Conceptually, accessing something like **resnet34[-2]** would give you only the convolutional layers, with the fully connected layers stripped off. This is very useful for transfer learning, where you remove the pretrained model's fully connected layers and attach new ones suited to your own problem.
---
- **split**
    - **split** partitions the model's parameters into groups for discriminative learning rates, including the part to be **frozen** during transfer learning. **fastai**'s training system is designed to apply a different learning rate per layer group, and **split** carries the information about those groups.
---
- **stats**
    - The statistics (**mean**, **standard deviation**) of the data the model was pretrained on. If you know the distribution of the values in the pretraining data and transform your new data to match it, you can squeeze a bit more out of transfer learning. This information is inserted automatically into **batch_tfms** when a **DataBlock** is constructed.
### Building the encoder
Since we have the model's metadata, we can use it to cut out the **encoder** part (without it, we would have to inspect the model's structure ourselves and decide where to cut). In particular, given the **cut** information, fastai's **create_body** function makes the cut easy.
First, let's look at the signature of **create_body**.
```
nbdev.show_doc(create_body)
```
Here **arch** is the model itself, e.g. **resnet18** or **resnet50**. The argument to pay attention to is **cut**: this is where you pass the **cut** value from the model's metadata.
The following runs **create_body** on **resnet34** and takes the last element of the result. As the name suggests, the model has 34 layers, so this both avoids an overly long output and confirms that the final layer is a convolutional layer.
```
encoder = create_body(resnet34, cut=model_meta[resnet34]['cut'])
encoder[-1]
```
### Building the head
```
head = create_head(512*4, 2, ps=0.5)
nbdev.show_doc(create_head)
head
```
## Creating the SiameseModel and the Learner
```
model = SiameseModel(encoder, head)
def siamese_splitter(model):
return [params(model.encoder), params(model.head)]
def loss_func(out, targ):
return CrossEntropyLossFlat()(out, targ.long())
```
| github_jupyter |
In this notebook I am going to discuss:
1. deep learning
2. forward propagation
3. gradient descent
4. backward propagation
5. a basic deep learning model with Keras
### Deep Learning :
----
Deep learning is a machine learning approach in which an artificial neural network solves a particular problem. The network is built from *perceptrons*, units loosely modeled on neurons in the human brain. The idea behind deep learning is **trial and error**.
### Forward Propagation :
----
The basic idea of forward propagation is that, starting from input nodes with weights, we compute the values of the hidden nodes and then the output node.
Here is an example :
let's say we have two input nodes with values (2, 3), hidden-node weights (1, 1) and (-1, 1), and output weights (2, -1); we compute the hidden nodes and the output by
$(2*1)+(3*1) = 5$
$(2*-1)+(3*1) = 1$
$(5*2)+(1*-1) = 9$ // output
* forward propagation follows a multiply-and-add process
* each node value is a dot product
* forward propagation handles one data point at a time
* the output is the prediction for that data point

```
import numpy as np
input_data = np.array([2,3])
#store the weights of the nodes as dictionary
weights = {
'node_0': np.array([1,1]),
'node_1': np.array([-1,1]),
'output': np.array([2,-1])
}
node_0_value = (input_data * weights['node_0']).sum()
node_1_value = (input_data * weights['node_1']).sum()
hidden_layer_value = np.array([node_0_value, node_1_value])
output = (hidden_layer_value * weights['output']).sum()
print(hidden_layer_value)
print(output)
```
### Backward Propagation :
----
Most of the time the output of forward propagation is not close to the true value; backward propagation is used to minimize that error. Backward propagation updates the weights with respect to the error.
* start at random weights.
* use forward propagation to make a prediction.
* use backward propagation to calculate the slope of the loss function w.r.t. each weight.
* multiply that slope by the learning rate and subtract the result from the current weights.
* keep repeating that cycle until you get to a flat part.
> **Gradient Descent :** Gradient descent is an optimization algorithm used to minimize a loss function by iteratively moving in the direction of steepest descent, as defined by the negative of the gradient. In machine learning, we use gradient descent to update the parameters of our model.
```
import numpy as np
import matplotlib.pyplot as plt
#return slope of loss function
def get_slope(input_data, target, weights):
error = get_error(input_data, target, weights)
slope = 2 * input_data * error
return slope
#return mean square error
def get_mse(input_data, target, weights):
error = get_error(input_data, target, weights)
mse = np.mean(error **2)
return mse
#return prediction error
def get_error(input_data, target, weights):
preds = (input_data * weights).sum()
error = preds - target
return error
# The data point you will make a prediction for
input_data = np.array([4,2,3])
#target
target_actual = 0
# Sample weights
weights = np.array([10,1,2])
learning_rate = 0.01
mse_hist = []
#print(get_slope(input_data, target_actual, weights))
for i in range(20):
slope = get_slope(input_data, target_actual, weights)
weights = weights - learning_rate * slope
#print('iteration {0} weights : {1}'.format(i, weights))
mse = get_mse(input_data, target_actual, weights)
mse_hist.append(mse)
plt.plot(mse_hist)
plt.xlabel('Iterations')
plt.ylabel('Mean Squared Error')
plt.show()
```
### Keras Architecture :
---
Keras is a deep learning library that runs on top of TensorFlow.
Here is the basic keras architecture :
* specify architecture
* compile
* fit
* save model
* reload model
* predict
* evaluate
```
import keras
from keras.layers import Dense
from keras.models import Sequential
from keras.models import load_model
from keras.callbacks import EarlyStopping
import numpy as np
import pandas as pd
#import data
df = pd.read_csv('hourly_wages.csv')
target_df = df['wage_per_hour']
feature_df = df.drop(columns = ['wage_per_hour'])
#convert to numpy arrays
predictors = feature_df.values
target = target_df.values
#get the number of columns
n_cols = predictors.shape[1]
#print(len(predictors))
#specify model
model = Sequential()
#specify layers
#1st layer
model.add(Dense(50, activation='relu', input_shape=(n_cols,)))
#2nd layer
model.add(Dense(32, activation='relu'))
#output layer
model.add(Dense(1))
#compile (mean absolute error is a meaningful metric here; accuracy is not, since this is regression)
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
#stop when the model stops improving
early_stopping_monitor = EarlyStopping(patience=2)
#fit
model.fit(predictors, target, validation_split=0.3, epochs=20, callbacks=[early_stopping_monitor])
#save
model.save('hourly_wages.h5')
model.summary()
#reload
my_model = load_model('hourly_wages.h5')
# predict
```
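The final `# predict` cell was left empty above. With a trained model, prediction is a single call on a 2-D array of predictor values. The sketch below uses a freshly built toy model and hypothetical feature values, since `hourly_wages.csv` is not bundled here:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Dense

# toy stand-in with 3 predictor columns instead of the real n_cols
model = keras.Sequential([
    Dense(8, activation='relu', input_shape=(3,)),
    Dense(1),
])
model.compile(optimizer='adam', loss='mean_squared_error')

# predict expects shape (n_samples, n_features)
new_data = np.array([[5.0, 12.0, 35.0]])  # hypothetical predictor values
pred = model.predict(new_data)
print(pred.shape)
```

With the notebook's own saved model, the equivalent call would be `my_model.predict(predictors)` on an array with `n_cols` columns.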
| github_jupyter |
<a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/Course%201%20-%20Part%206%20-%20Lesson%202%20-%20Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
#Improving Computer Vision Accuracy using Convolutions
In the previous lessons you saw how to do fashion recognition using a Deep Neural Network (DNN) containing three layers -- the input layer (in the shape of the data), the output layer (in the shape of the desired output) and a hidden layer. You experimented with the impact of different sizes of hidden layer, number of training epochs etc on the final accuracy.
For convenience, here's the entire code again. Run it and take a note of the test accuracy that is printed out at the end.
```
import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images / 255.0
test_images=test_images / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
```
Your accuracy is probably about 89% on training and 87% on validation...not bad...But how do you make that even better? One way is to use something called Convolutions. I'm not going to go into detail on Convolutions here, but the ultimate concept is that they narrow down the content of the image to focus on specific, distinct details.
If you've ever done image processing using a filter (like this: https://en.wikipedia.org/wiki/Kernel_(image_processing)) then convolutions will look very familiar.
In short, you take an array (usually 3x3 or 5x5) and pass it over the image. By changing the underlying pixels based on the formula within that matrix, you can do things like edge detection. So, for example, if you look at the above link, you'll see a 3x3 that is defined for edge detection where the middle cell is 8, and all of its neighbors are -1. In this case, for each pixel, you would multiply its value by 8, then subtract the value of each neighbor. Do this for every pixel, and you'll end up with a new image that has the edges enhanced.
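The edge-detection kernel described above (center 8, all neighbors -1) can be applied directly with a few lines of NumPy; this sketch uses `scipy.signal.convolve2d` on a tiny synthetic image:

```python
import numpy as np
from scipy.signal import convolve2d

# 3x3 edge-detection kernel: center 8, all neighbors -1
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

# Toy image: flat left half (0), flat right half (9) -> one vertical edge
image = np.zeros((5, 6))
image[:, 3:] = 9

edges = convolve2d(image, kernel, mode='valid')
print(edges)
# Flat regions sum to exactly zero; only the columns straddling
# the edge produce a (large) nonzero response.
```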
This is perfect for computer vision, because often it's features that can get highlighted like this that distinguish one item from another, and the amount of information needed is then much less...because you'll just train on the highlighted features.
That's the concept of Convolutional Neural Networks. Add some layers to do convolution before you have the dense layers, and then the information going to the dense layers is more focussed, and possibly more accurate.
Run the below code -- this is the same neural network as earlier, but this time with Convolutional layers added first. It will take longer, but look at the impact on the accuracy:
```
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
```
It's likely gone up to about 93% on the training data and 91% on the validation data.
That's significant, and a step in the right direction!
Try running it for more epochs -- say about 20, and explore the results! But while the results might seem really good, the validation results may actually go down, due to something called 'overfitting' which will be discussed later.
(In a nutshell, 'overfitting' occurs when the network learns the data from the training set really well, but it's too specialised to only that data, and as a result is less effective at seeing *other* data. For example, if all your life you only saw red shoes, then when you see a red shoe you would be very good at identifying it, but blue suede shoes might confuse you...and you know you should never mess with my blue suede shoes.)
Then, look at the code again, and see, step by step how the Convolutions were built:
Step 1 is to gather the data. You'll notice that there's a bit of a change here in that the training data needed to be reshaped. That's because the first convolution expects a single tensor containing everything, so instead of 60,000 28x28x1 items in a list, we have a single 4D list that is 60,000x28x28x1, and the same for the test images. If you don't do this, you'll get an error when training as the Convolutions do not recognize the shape.
```
import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
```
Next is to define your model. Now instead of the input layer at the top, you're going to add a Convolution. The parameters are:
1. The number of convolutions you want to generate. Purely arbitrary, but good to start with something in the order of 32
2. The size of the Convolution, in this case a 3x3 grid
3. The activation function to use -- in this case we'll use relu, which you might recall is the equivalent of returning x when x>0, else returning 0
4. In the first layer, the shape of the input data.
You'll follow the Convolution with a MaxPooling layer, which is designed to compress the image while maintaining the content of the features that were highlighted by the convolution. By specifying (2,2) for the MaxPooling, the effect is to quarter the size of the image. Without going into too much detail here, the idea is that it creates a 2x2 array of pixels and picks the biggest one, thus turning 4 pixels into 1. It repeats this across the image, halving the number of horizontal pixels and halving the number of vertical pixels, reducing the image to 25% of its original size.
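The 2x2 max-pooling step can be checked by hand with NumPy (a sketch without Keras):

```python
import numpy as np

def max_pool_2x2(img):
    """Keep the largest value in each non-overlapping 2x2 block."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.array([[1, 3, 2, 0],
                [4, 2, 1, 5],
                [0, 1, 9, 2],
                [3, 2, 4, 1]])
print(max_pool_2x2(img))
# Each 2x2 block collapses to its maximum: 4 pixels become 1,
# so a 4x4 input shrinks to 2x2.
```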
You can call model.summary() to see the size and shape of the network, and you'll notice that after every MaxPooling layer, the image size is reduced in this way.
```
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
```
Add another convolution
```
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2)
```
Now flatten the output. After this you'll just have the same DNN structure as the non convolutional version
```
tf.keras.layers.Flatten(),
```
The same 128 dense layers, and 10 output layers as in the pre-convolution example:
```
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
```
Now compile the model, call the fit method to do the training, and evaluate the loss and accuracy from the test set.
```
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
```
# Visualizing the Convolutions and Pooling
This code will show us the convolutions graphically. The `print(test_labels[:100])` shows us the first 100 labels in the test set, and you can see that the ones at index 0, index 23 and index 28 are all the same value (9). They're all shoes. Let's take a look at the result of running the convolution on each, and you'll begin to see common features between them emerge. Now, when the DNN is training on that data, it's working with a lot less, and it's perhaps finding a commonality between shoes based on this convolution/pooling combination.
```
print(test_labels[:100])
import matplotlib.pyplot as plt
f, axarr = plt.subplots(3,4)
FIRST_IMAGE=0
SECOND_IMAGE=7
THIRD_IMAGE=26
CONVOLUTION_NUMBER = 1
from tensorflow.keras import models
layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)
for x in range(0,4):
f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[0,x].grid(False)
f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[1,x].grid(False)
f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[2,x].grid(False)
```
EXERCISES
1. Try editing the convolutions. Change the 32s to either 16 or 64. What impact will this have on accuracy and/or training time.
2. Remove the final Convolution. What impact will this have on accuracy or training time?
3. How about adding more Convolutions? What impact do you think this will have? Experiment with it.
4. Remove all Convolutions but the first. What impact do you think this will have? Experiment with it.
5. In the previous lesson you implemented a callback to check on the loss function and to cancel training once it hit a certain amount. See if you can implement that here!
```
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
```
| github_jupyter |
## Show the attention of VGG19
```
from keras.applications.mobilenet import MobileNet
from keras.applications.mobilenet import preprocess_input as MobileNet_preprocess_input
from keras.applications.vgg19 import VGG19
from keras.applications.vgg19 import preprocess_input as VGG19_preprocess_input
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.applications.inception_resnet_v2 import preprocess_input as InceptionResNetV2_preprocess_input
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_v3 import preprocess_input as InceptionV3_preprocess_input
from keras.applications.mobilenet_v2 import MobileNetV2
from keras.applications.mobilenet_v2 import preprocess_input as MobileNetV2_preprocess_input
from keras.applications.nasnet import NASNetLarge
from keras.applications.nasnet import preprocess_input as NASNetLarge_preprocess_input
import keras.backend as K
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import data_preparation
import params
import os
import reset
import gradient_accumulation
from utils import plot_train_metrics, save_model
from sklearn.metrics import roc_curve, auc
from train import create_data_generator, _create_base_model, create_simple_model, create_attention_model
metadata = data_preparation.load_metadata()
metadata, labels = data_preparation.preprocess_metadata(metadata)
train, valid = data_preparation.stratify_train_test_split(metadata)
# for these image sizes, we don't need gradient_accumulation to achieve BATCH_SIZE = 256
optimizer = 'adam'
if params.DEFAULT_OPTIMIZER != optimizer:
optimizer = gradient_accumulation.AdamAccumulate(
lr=params.LEARNING_RATE, accum_iters=params.ACCUMULATION_STEPS)
base_models = [
[VGG19, params.VGG19_IMG_SIZE, VGG19_preprocess_input],
#[MobileNet, params.MOBILENET_IMG_SIZE, MobileNet_preprocess_input],
#[InceptionResNetV2, params.INCEPTIONRESNETV2_IMG_SIZE,
# InceptionResNetV2_preprocess_input],
#[InceptionV3, params.INCEPTIONV3_IMG_SIZE, InceptionV3_preprocess_input],
#[MobileNetV2, params.MOBILENETV2_IMG_SIZE, MobileNetV2_preprocess_input],
#[NASNetLarge, params.NASNETLARGE_IMG_SIZE, NASNetLarge_preprocess_input],
]
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
def plot_attention_map(_Model, input_shape, transfer_learning,
preprocessing_function,
train, valid, labels,
extend_model_callback, optimizer,
name_prefix, weights="imagenet"):
test_X, test_Y = next(create_data_generator(
valid, labels, 100, None, target_size=input_shape))
baseModel = _create_base_model(_Model,
labels,
test_X.shape[1:],
trainable=False,
weights=None)
model = extend_model_callback(baseModel, labels, optimizer)
model_name = name_prefix+'_' + baseModel.name
weights = os.path.join(params.RESULTS_FOLDER,
model_name, 'weights.best.hdf5')
print('Loading '+weights)
model.load_weights(weights, by_name=True)
model.trainable = False
# Get the attention model first
for attention_model in model.layers:
c_shape = attention_model.get_output_shape_at(0)
if c_shape[-1]==13:
break
# now get the attention layer
for attention_layer in attention_model.layers:
c_shape = attention_layer.get_output_shape_at(0)
if len(c_shape)==4:
if c_shape[-1]==1:
break
rand_idx = np.random.choice(range(len(test_X)), size = 12)
attention_function = K.function(inputs = [attention_model.get_input_at(0), K.learning_phase()],
outputs = [attention_layer.get_output_at(0)]
)
fig, m_axs = plt.subplots(len(rand_idx), 2, figsize = (8, 4*len(rand_idx)))
[c_ax.axis('off') for c_ax in m_axs.flatten()]
labels = np.array(labels)
for c_idx, (img_ax, attn_ax) in zip(rand_idx, m_axs):
cur_img = test_X[c_idx:(c_idx+1)]
cur_features = baseModel.predict(cur_img)
attn_img = attention_function([cur_features, 0])[0]
img_ax.imshow(cur_img[0,:,:,0], cmap = 'bone')
attn_ax.imshow(attn_img[0, :, :, 0], cmap = 'viridis',
vmin = 0, vmax = 1,
interpolation = 'lanczos')
real_label = test_Y[c_idx]
indices=np.argwhere(np.array(real_label) > 0.5).ravel()
img_ax.set_title('Classes\n%s' % (labels[indices]))
pred_confidence = model.predict(cur_img)[0]
pred_confidence = np.array(pred_confidence)
pred_confidence = pred_confidence[:].astype(float)
pred_confidence = 100*pred_confidence
string_confidence = ''
for index in indices:
string_confidence = string_confidence + '%2.1f%% ' % (pred_confidence[index])
#print(string_confidence)
attn_ax.set_title('Attention Map\nConfidence:'+string_confidence)
attention_figure = os.path.join(params.RESULTS_FOLDER, model_name, 'attention_map.png')
fig.savefig(attention_figure, dpi = 300)
    print('Saved plot at ' + attention_figure)
metadata = data_preparation.load_metadata()
metadata, labels = data_preparation.preprocess_metadata(metadata)
train, valid = data_preparation.stratify_train_test_split(metadata)
# for these image sizes, we don't need gradient_accumulation to achieve BATCH_SIZE = 256
optimizer = 'adam'
if params.DEFAULT_OPTIMIZER != optimizer:
optimizer = gradient_accumulation.AdamAccumulate(
lr=params.LEARNING_RATE, accum_iters=params.ACCUMULATION_STEPS)
unfrozen = 'unfrozen_'
if True:
unfrozen = ''
custom_layers = [
    [create_attention_model, unfrozen+'latest_attention'],
    #[create_simple_model, unfrozen+'latest_simple'],
]
for [custom_layer, name_prefix] in custom_layers:
    for [_Model, input_shape, preprocess_input] in base_models:
        plot_attention_map(_Model, input_shape, True, preprocess_input,
                           train, valid, labels,
                           custom_layer, optimizer, name_prefix)
```
# Transformer
What is a Transformer?
A Transformer is a type of neural network architecture developed by Vaswani et al. in 2017.
Without going into too much detail, this model architecture consists of a multi-head self-attention mechanism combined with an encoder-decoder structure. It can achieve SOTA results that outperform various other models leveraging recurrent (RNN) or convolutional neural networks (CNN) both in terms of evaluation score (BLEU score) and training time.
The Transformer model structure has largely replaced other NLP model implementations such as RNNs.
The GPT model only uses the decoder of the Transformer structure (unidirectional), while **BERT** is based on the Transformer encoder (bidirectional).
Many Transformer-based NLP models were specifically created for transfer learning. Transfer learning describes an approach where a model is first pre-trained on large unlabeled text corpora using self-supervised learning.
While GPT used a standard language modeling objective which predicts the next word in a sentence, BERT was trained on Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). The RoBERTa model replicated the BERT model architecture but changed the pre-training using more data, training for longer, and removing the NSP objective.
The model checkpoints of the pre-trained models serve as the starting point for fine-tuning. A labeled dataset for a specific downstream task is used as training data. There are several different fine-tuning approaches, including the following:
* Training the entire model on the labeled data.
* Training only higher layers and freezing the lower layers.
* Freezing the entire model and training one or more additional layers added on top.
No matter the approach, a task-specific output layer usually needs to be attached to the model.
Source: [How to use transformer-based NLP models](https://towardsdatascience.com/how-to-use-transformer-based-nlp-models-a42adbc292e5)
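The three fine-tuning approaches above differ only in which parameters receive gradients. A minimal PyTorch sketch of the third approach (freeze the pretrained model, train only an added head) — the layer sizes and module names here are illustrative stand-ins, not the notebook's actual BERT model:

```
import torch.nn as nn

# Hypothetical stand-ins for a pretrained encoder and a new task head.
encoder = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
head = nn.Linear(8, 6)  # one logit per label for multilabel classification

# Freeze the entire pretrained model; only the added head stays trainable.
for param in encoder.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in head.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in encoder.parameters() if not p.requires_grad)
print(trainable, frozen)  # 54 trainable, 144 frozen
```

The other two approaches simply move the freeze boundary: freeze nothing to train the whole model, or freeze only the lower layers of the encoder.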
## Multilabel Classification with BERT
```
#!pip install simpletransformers
#!pip install gin-config
!pip install tensorflow-addons
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn
import torch
import wandb
from itertools import cycle
from simpletransformers.classification import MultiLabelClassificationModel
from sklearn.metrics import accuracy_score, auc, classification_report, confusion_matrix, ConfusionMatrixDisplay, roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split
# load data
df = pd.read_csv('../data/df_cleaned.csv')
# Remove new lines from comments
df['comment_text'] = df.comment_text.apply(lambda x: x.replace('\n', ' '))
# category list for plots
categories = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
# prepare dataframe for train test split. MultiLabelClassificationModel needs a text column and a labels column,
# which provides all categories as a list
new_df = pd.DataFrame()
new_df['id'] = df['id']
new_df['text'] = df['comment_text']
new_df['labels'] = df.iloc[:, 2:8].values.tolist()
def split(df):
train_df, eval_df = train_test_split(df, test_size=0.2, random_state=0)
return train_df, eval_df
# Create train and eval df for the model training and evaluation
train_df, eval_df = split(new_df)
# Model args
args = {
'logging_steps': 10,
'overwrite_output_dir':True,
'train_batch_size':2,
'gradient_accumulation_steps':16,
'learning_rate': 3e-5,
'num_train_epochs': 4,
'max_seq_length': 128,
'wandb_project': 'toxic-comment-classification',
"wandb_kwargs":
{"name": "bert-lr3e-5"},
}
# load pretrained model for the multilabel classification task
model = MultiLabelClassificationModel('bert', 'bert-base-uncased', num_labels=6, args=args)
# train the model with the train data
model.train_model(train_df = train_df)
# save model
torch.save(model, 'saved_models/bert_lr3e-5')
# load model
model = torch.load('saved_models/bert_lr3e-5')
# evaluate model on eval_df
result, model_outputs, wrong_predictions = model.eval_model(eval_df=eval_df, roc_auc_score=sklearn.metrics.roc_auc_score)
# make predictions
preds, outputs = model.predict(eval_df.text)
# define y_true for roc_auc plot and classification report
y_true = np.array(eval_df['labels'].values.tolist())
roc_aucs = []  # collect per-category AUC scores across the evaluation loop
def evaluate_roc(probs, y_true, category, color):
"""
- Print AUC and accuracy on the test set
- Plot ROC
    @params probs (np.array): an array of predicted probabilities with shape (len(y_true),)
@params y_true (np.array): an array of the true values with shape (len(y_true),)
"""
preds = probs
fpr, tpr, threshold = roc_curve(y_true, preds)
roc_auc = auc(fpr, tpr)
roc_aucs.append(roc_auc)
print(f'AUC: {roc_auc:.4f}')
# Get accuracy over the test set
y_pred = np.where(preds >= 0.3, 1, 0)
accuracy = accuracy_score(y_true, y_pred)
print(f'Accuracy: {accuracy*100:.2f}%')
# Plot ROC AUC
plt.title('Receiver Operating Characteristic')
    plt.plot(fpr, tpr, color=color, label="{0} (area = {1:0.5f})".format(category, roc_auc))
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'k--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.savefig('plots/roc_auc_curve.png')
# evaluate ROC AUC score and plot curves per category
colors = cycle(["aqua", "darkorange", "cornflowerblue"])
for i, color in zip(range(6), colors):
print('-----------')
print(categories[i])
print('-----------')
evaluate_roc(outputs[:, i].ravel(), y_true[:, i].ravel(), categories[i], color)
# Plot confusion matrix per category
y_test = np.array(eval_df['labels'].to_list())
preds = np.array(preds)
f, axes = plt.subplots(2, 3, figsize=(25, 15))
axes = axes.ravel()
for i in range(6):
disp = ConfusionMatrixDisplay(confusion_matrix(y_test[:, i],
preds[:, i]),
display_labels=[f'non {categories[i]}', categories[i]])
disp.plot(ax=axes[i], values_format='.4g')
disp.ax_.set_title(f'toxicity label:\n {categories[i]}', fontsize=20)
if i<3:
disp.ax_.set_xlabel('')
if i%3!=0:
disp.ax_.set_ylabel('')
disp.im_.colorbar.remove()
plt.subplots_adjust(wspace=0.8, hspace=0.01)
f.colorbar(disp.im_, ax=axes)
plt.show()
# Print classification report
print(f"Classification Report : \n\n{classification_report(y_test, preds)}")
# Create submission_file
test_df = pd.read_csv('data/test.csv')
comments = test_df.comment_text.apply(lambda x: x.replace('\n', ' ')).tolist()
preds, outputs = model.predict(comments)
submission = pd.DataFrame(outputs, columns=categories)
submission['id'] = test_df['id']
submission = submission[categories]
# write to csv and upload at Kaggle to get ROC AUC Scores for Kaggles testdata
submission.to_csv('/content/drive/MyDrive/data/submission_roberta_tuning_lr2e5.csv', index=False)
```
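The ROC curves above are summarized by the AUC, which can be read as the probability that a randomly chosen positive comment is scored above a randomly chosen negative one. A minimal NumPy-only sketch of that rank interpretation, using made-up labels and probabilities for illustration:

```
import numpy as np

def auc_score(y_true, scores):
    """AUC via the rank statistic: P(score of a positive > score of a negative)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Count pairwise wins; ties count as half a win.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical labels and predicted probabilities for one category
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

For real evaluation, `sklearn.metrics.roc_auc_score` (already imported above) computes the same quantity.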
```
from google.colab import drive
drive.mount('/content/drive')
```
### Dependencies
```
!unzip -q '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Data/train_images256x384.zip'
!unzip -q '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Data/validation_images256x384.zip'
!unzip -q '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Data/test_images256x384.zip'
!pip install keras-rectified-adam
!pip install segmentation-models
import os
import cv2
import math
import random
import shutil
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import albumentations as albu
from keras_radam import RAdam
import matplotlib.pyplot as plt
import segmentation_models as sm
from tensorflow import set_random_seed
from sklearn.model_selection import train_test_split
from keras import optimizers
from keras import backend as K
from keras.utils import Sequence
from keras.losses import binary_crossentropy
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint, Callback
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
set_random_seed(seed)
seed = 0
seed_everything(seed)
warnings.filterwarnings("ignore")
base_path = '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Data/'
model_base_path = '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Models/files/'
submission_base_path = '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Data/submissions/'
train_path = base_path + 'train.csv'
test_path = base_path + 'sample_submission.csv'
hold_out_set_path = base_path + 'hold-out.csv'
train_images_dest_path = 'train_images256x384/'
validation_images_dest_path = 'validation_images256x384/'
test_images_dest_path = 'test_images256x384/'
```
### Load data
```
train = pd.read_csv(train_path)
submission = pd.read_csv(test_path)
hold_out_set = pd.read_csv(hold_out_set_path)
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
print('Complete set samples:', len(train))
print('Train samples: ', len(X_train))
print('Validation samples: ', len(X_val))
print('Test samples:', len(submission))
# Preprocess data
train['image'] = train['Image_Label'].apply(lambda x: x.split('_')[0])
submission['image'] = submission['Image_Label'].apply(lambda x: x.split('_')[0])
test = pd.DataFrame(submission['image'].unique(), columns=['image'])
test['set'] = 'test'
display(X_train.head())
```
# Model parameters
```
BACKBONE = 'mobilenet'
BATCH_SIZE = 32
EPOCHS = 30
LEARNING_RATE = 3e-4
HEIGHT = 256
WIDTH = 384
CHANNELS = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
model_name = 'uNet_%s_%sx%s' % (BACKBONE, HEIGHT, WIDTH)
model_path = model_base_path + '%s.h5' % (model_name)
submission_path = submission_base_path + '%s_submission.csv' % (model_name)
submission_post_path = submission_base_path + '%s_submission_post.csv' % (model_name)
preprocessing = sm.backbones.get_preprocessing(BACKBONE)
augmentation = albu.Compose([albu.HorizontalFlip(p=0.5),
albu.VerticalFlip(p=0.5),
albu.GridDistortion(p=0.5),
albu.ShiftScaleRotate(scale_limit=0.5, rotate_limit=0,
shift_limit=0.1, border_mode=0, p=0.5),
albu.OpticalDistortion(p=0.5, distort_limit=2, shift_limit=0.5)
])
```
### Auxiliary functions
```
#@title
def np_resize(img, input_shape):
height, width = input_shape
return cv2.resize(img, (width, height))
def mask2rle(img):
pixels= img.T.flatten()
pixels = np.concatenate([[0], pixels, [0]])
runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
runs[1::2] -= runs[::2]
return ' '.join(str(x) for x in runs)
def build_rles(masks, reshape=None):
    height, width, depth = masks.shape
rles = []
for i in range(depth):
mask = masks[:, :, i]
if reshape:
mask = mask.astype(np.float32)
mask = np_resize(mask, reshape).astype(np.int64)
rle = mask2rle(mask)
rles.append(rle)
return rles
def build_masks(rles, input_shape, reshape=None):
depth = len(rles)
if reshape is None:
masks = np.zeros((*input_shape, depth))
else:
masks = np.zeros((*reshape, depth))
for i, rle in enumerate(rles):
if type(rle) is str:
if reshape is None:
masks[:, :, i] = rle2mask(rle, input_shape)
else:
mask = rle2mask(rle, input_shape)
reshaped_mask = np_resize(mask, reshape)
masks[:, :, i] = reshaped_mask
return masks
def rle2mask(rle, input_shape):
    width, height = input_shape[:2]
    mask = np.zeros(width * height).astype(np.uint8)
    array = np.asarray([int(x) for x in rle.split()])
    starts = array[0::2]
    lengths = array[1::2]
    for index, start in enumerate(starts):
        # RLE starts are 1-indexed (Kaggle convention)
        mask[int(start) - 1:int(start) - 1 + lengths[index]] = 1
    return mask.reshape(height, width).T
def dice_coefficient(y_true, y_pred):
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
intersection = np.logical_and(y_true, y_pred)
return (2. * intersection.sum()) / (y_true.sum() + y_pred.sum())
def dice_coef(y_true, y_pred, smooth=1):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
def post_process(probability, threshold=0.5, min_size=10000):
mask = cv2.threshold(probability, threshold, 1, cv2.THRESH_BINARY)[1]
num_component, component = cv2.connectedComponents(mask.astype(np.uint8))
predictions = np.zeros(probability.shape, np.float32)
for c in range(1, num_component):
p = (component == c)
if p.sum() > min_size:
predictions[p] = 1
return predictions
def get_metrics(model, df, df_images_dest_path, tresholds, min_mask_sizes, set_name='Complete set'):
class_names = ['Fish', 'Flower', 'Gravel', 'Sugar']
metrics = []
for class_name in class_names:
metrics.append([class_name, 0, 0])
metrics_df = pd.DataFrame(metrics, columns=['Class', 'Dice', 'Dice Post'])
for i in range(0, df.shape[0], 500):
batch_idx = list(range(i, min(df.shape[0], i + 500)))
batch_set = df[batch_idx[0]: batch_idx[-1]+1]
ratio = len(batch_set) / len(df)
generator = DataGenerator(
directory=df_images_dest_path,
dataframe=batch_set,
target_df=train,
batch_size=len(batch_set),
target_size=(HEIGHT, WIDTH),
n_channels=CHANNELS,
n_classes=N_CLASSES,
preprocessing=preprocessing,
seed=seed,
mode='fit',
shuffle=False)
x, y = generator.__getitem__(0)
preds = model.predict(x)
for class_index in range(N_CLASSES):
class_score = []
class_score_post = []
mask_class = y[..., class_index]
pred_class = preds[..., class_index]
for index in range(len(batch_idx)):
sample_mask = mask_class[index, ]
sample_pred = pred_class[index, ]
sample_pred_post = post_process(sample_pred, threshold=tresholds[class_index], min_size=min_mask_sizes[class_index])
if (sample_mask.sum() == 0) & (sample_pred.sum() == 0):
dice_score = 1.
else:
dice_score = dice_coefficient(sample_pred, sample_mask)
if (sample_mask.sum() == 0) & (sample_pred_post.sum() == 0):
dice_score_post = 1.
else:
dice_score_post = dice_coefficient(sample_pred_post, sample_mask)
class_score.append(dice_score)
class_score_post.append(dice_score_post)
metrics_df.loc[metrics_df['Class'] == class_names[class_index], 'Dice'] += np.mean(class_score) * ratio
metrics_df.loc[metrics_df['Class'] == class_names[class_index], 'Dice Post'] += np.mean(class_score_post) * ratio
metrics_df = metrics_df.append({'Class':set_name, 'Dice':np.mean(metrics_df['Dice'].values), 'Dice Post':np.mean(metrics_df['Dice Post'].values)}, ignore_index=True).set_index('Class')
return metrics_df
def plot_metrics(history):
fig, axes = plt.subplots(4, 1, sharex='col', figsize=(22, 14))
axes = axes.flatten()
axes[0].plot(history['loss'], label='Train loss')
axes[0].plot(history['val_loss'], label='Validation loss')
axes[0].legend(loc='best')
axes[0].set_title('Loss')
axes[1].plot(history['iou_score'], label='Train IOU Score')
axes[1].plot(history['val_iou_score'], label='Validation IOU Score')
axes[1].legend(loc='best')
axes[1].set_title('IOU Score')
axes[2].plot(history['dice_coef'], label='Train Dice coefficient')
axes[2].plot(history['val_dice_coef'], label='Validation Dice coefficient')
axes[2].legend(loc='best')
axes[2].set_title('Dice coefficient')
axes[3].plot(history['score'], label='Train F-Score')
axes[3].plot(history['val_score'], label='Validation F-Score')
axes[3].legend(loc='best')
axes[3].set_title('F-Score')
plt.xlabel('Epochs')
sns.despine()
plt.show()
def cosine_decay_with_warmup(global_step,
learning_rate_base,
total_steps,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0):
"""
Cosine decay schedule with warm up period.
In this schedule, the learning rate grows linearly from warmup_learning_rate
to learning_rate_base for warmup_steps, then transitions to a cosine decay
schedule.
:param global_step {int}: global step.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
:Returns : a float representing learning rate.
:Raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
"""
if total_steps < warmup_steps:
raise ValueError('total_steps must be larger or equal to warmup_steps.')
learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
np.pi *
(global_step - warmup_steps - hold_base_rate_steps
) / float(total_steps - warmup_steps - hold_base_rate_steps)))
if hold_base_rate_steps > 0:
learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
learning_rate, learning_rate_base)
if warmup_steps > 0:
if learning_rate_base < warmup_learning_rate:
            raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.')
slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
warmup_rate = slope * global_step + warmup_learning_rate
learning_rate = np.where(global_step < warmup_steps, warmup_rate,
learning_rate)
return np.where(global_step > total_steps, 0.0, learning_rate)
class WarmUpCosineDecayScheduler(Callback):
"""Cosine decay with warmup learning rate scheduler"""
def __init__(self,
learning_rate_base,
total_steps,
global_step_init=0,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0,
verbose=0):
"""
Constructor for cosine decay with warmup learning rate scheduler.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param global_step_init {int}: initial global step, e.g. from previous checkpoint.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
        :param verbose {int}: 0: quiet, 1: update messages. (default: {0}).
"""
super(WarmUpCosineDecayScheduler, self).__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.global_step = global_step_init
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.hold_base_rate_steps = hold_base_rate_steps
self.verbose = verbose
self.learning_rates = []
def on_batch_end(self, batch, logs=None):
self.global_step = self.global_step + 1
lr = K.get_value(self.model.optimizer.lr)
self.learning_rates.append(lr)
def on_batch_begin(self, batch, logs=None):
lr = cosine_decay_with_warmup(global_step=self.global_step,
learning_rate_base=self.learning_rate_base,
total_steps=self.total_steps,
warmup_learning_rate=self.warmup_learning_rate,
warmup_steps=self.warmup_steps,
hold_base_rate_steps=self.hold_base_rate_steps)
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
```
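`mask2rle` above encodes runs as 1-indexed `start length` pairs over the column-major flattened mask (the Kaggle convention). A self-contained round-trip sanity check of that encoding, re-implemented here so it runs on its own:

```
import numpy as np

def mask2rle(img):
    # Flatten in column-major order and pad with zeros so every run has an end.
    pixels = np.concatenate([[0], img.T.flatten(), [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1  # 1-indexed change points
    runs[1::2] -= runs[::2]                            # convert run ends to lengths
    return ' '.join(str(x) for x in runs)

def rle2mask(rle, shape):
    height, width = shape
    flat = np.zeros(height * width, dtype=np.uint8)
    values = [int(x) for x in rle.split()]
    for start, length in zip(values[0::2], values[1::2]):
        flat[start - 1:start - 1 + length] = 1         # starts are 1-indexed
    return flat.reshape(width, height).T

mask = np.array([[0, 1, 1],
                 [1, 1, 0]], dtype=np.uint8)
rle = mask2rle(mask)
print(rle)  # '2 4': one run starting at pixel 2, length 4
restored = rle2mask(rle, mask.shape)
print(np.array_equal(restored, mask))  # True
```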
### Data generator
```
#@title
class DataGenerator(Sequence):
def __init__(self, dataframe, target_df=None, mode='fit', directory=train_images_dest_path,
batch_size=BATCH_SIZE, n_channels=CHANNELS, target_size=(HEIGHT, WIDTH),
n_classes=N_CLASSES, seed=seed, shuffle=True, preprocessing=None, augmentation=None):
self.batch_size = batch_size
self.dataframe = dataframe
self.mode = mode
self.directory = directory
self.target_df = target_df
self.target_size = target_size
self.n_channels = n_channels
self.n_classes = n_classes
self.shuffle = shuffle
self.augmentation = augmentation
self.preprocessing = preprocessing
self.seed = seed
self.mask_shape = (1400, 2100)
self.list_IDs = self.dataframe.index
if self.seed is not None:
np.random.seed(self.seed)
self.on_epoch_end()
def __len__(self):
return len(self.list_IDs) // self.batch_size
def __getitem__(self, index):
indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
list_IDs_batch = [self.list_IDs[k] for k in indexes]
X = self.__generate_X(list_IDs_batch)
if self.mode == 'fit':
Y = self.__generate_Y(list_IDs_batch)
if self.augmentation:
X, Y = self.__augment_batch(X, Y)
return X, Y
elif self.mode == 'predict':
return X
def on_epoch_end(self):
self.indexes = np.arange(len(self.list_IDs))
if self.shuffle == True:
np.random.shuffle(self.indexes)
def __generate_X(self, list_IDs_batch):
X = np.empty((self.batch_size, *self.target_size, self.n_channels))
for i, ID in enumerate(list_IDs_batch):
img_name = self.dataframe['image'].loc[ID]
img_path = self.directory + img_name
img = cv2.imread(img_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if self.preprocessing:
img = self.preprocessing(img)
X[i,] = img
return X
def __generate_Y(self, list_IDs_batch):
Y = np.empty((self.batch_size, *self.target_size, self.n_classes), dtype=int)
for i, ID in enumerate(list_IDs_batch):
img_name = self.dataframe['image'].loc[ID]
image_df = self.target_df[self.target_df['image'] == img_name]
rles = image_df['EncodedPixels'].values
masks = build_masks(rles, input_shape=self.mask_shape, reshape=self.target_size)
Y[i, ] = masks
return Y
def __augment_batch(self, X_batch, Y_batch):
for i in range(X_batch.shape[0]):
X_batch[i, ], Y_batch[i, ] = self.__random_transform(X_batch[i, ], Y_batch[i, ])
return X_batch, Y_batch
def __random_transform(self, X, Y):
composed = self.augmentation(image=X, mask=Y)
X_aug = composed['image']
Y_aug = composed['mask']
return X_aug, Y_aug
train_generator = DataGenerator(
directory=train_images_dest_path,
dataframe=X_train,
target_df=train,
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
n_channels=CHANNELS,
n_classes=N_CLASSES,
preprocessing=preprocessing,
augmentation=augmentation,
seed=seed)
valid_generator = DataGenerator(
directory=validation_images_dest_path,
dataframe=X_val,
target_df=train,
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
n_channels=CHANNELS,
n_classes=N_CLASSES,
preprocessing=preprocessing,
seed=seed)
```
# Model
```
model = sm.Unet(backbone_name=BACKBONE,
encoder_weights='imagenet',
classes=N_CLASSES,
activation='sigmoid',
input_shape=(HEIGHT, WIDTH, CHANNELS))
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', save_best_only=True, save_weights_only=True)
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
metric_list = [dice_coef, sm.metrics.iou_score, sm.metrics.f1_score]
callback_list = [checkpoint, es, rlrop]
optimizer = RAdam(learning_rate=LEARNING_RATE, warmup_proportion=0.1)
model.compile(optimizer=optimizer, loss=sm.losses.bce_dice_loss, metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = len(X_train)//BATCH_SIZE
STEP_SIZE_VALID = len(X_val)//BATCH_SIZE
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
callbacks=callback_list,
epochs=EPOCHS,
verbose=1).history
```
## Model loss graph
```
#@title
plot_metrics(history)
```
# Threshold and mask size tuning
```
#@title
class_names = ['Fish ', 'Flower', 'Gravel', 'Sugar ']
mask_grid = [0, 500, 1000, 5000, 7500, 10000, 15000]
threshold_grid = np.arange(.5, 1, .05)
metrics = []
for class_index in range(N_CLASSES):
for threshold in threshold_grid:
for mask_size in mask_grid:
metrics.append([class_index, threshold, mask_size, 0])
metrics_df = pd.DataFrame(metrics, columns=['Class', 'Threshold', 'Mask size', 'Dice'])
for i in range(0, X_val.shape[0], 500):
batch_idx = list(range(i, min(X_val.shape[0], i + 500)))
batch_set = X_val[batch_idx[0]: batch_idx[-1]+1]
ratio = len(batch_set) / len(X_val)
generator = DataGenerator(
directory=validation_images_dest_path,
dataframe=batch_set,
target_df=train,
batch_size=len(batch_set),
target_size=(HEIGHT, WIDTH),
n_channels=CHANNELS,
n_classes=N_CLASSES,
preprocessing=preprocessing,
seed=seed,
mode='fit',
shuffle=False)
x, y = generator.__getitem__(0)
preds = model.predict(x)
for class_index in range(N_CLASSES):
class_score = []
label_class = y[..., class_index]
pred_class = preds[..., class_index]
for threshold in threshold_grid:
for mask_size in mask_grid:
mask_score = []
for index in range(len(batch_idx)):
label_mask = label_class[index, ]
pred_mask = pred_class[index, ]
pred_mask = post_process(pred_mask, threshold=threshold, min_size=mask_size)
dice_score = dice_coefficient(pred_mask, label_mask)
if (pred_mask.sum() == 0) & (label_mask.sum() == 0):
dice_score = 1.
mask_score.append(dice_score)
metrics_df.loc[(metrics_df['Class'] == class_index) & (metrics_df['Threshold'] == threshold) &
(metrics_df['Mask size'] == mask_size), 'Dice'] += np.mean(mask_score) * ratio
metrics_df_0 = metrics_df[metrics_df['Class'] == 0]
metrics_df_1 = metrics_df[metrics_df['Class'] == 1]
metrics_df_2 = metrics_df[metrics_df['Class'] == 2]
metrics_df_3 = metrics_df[metrics_df['Class'] == 3]
optimal_values_0 = metrics_df_0.loc[metrics_df_0['Dice'].idxmax()].values
optimal_values_1 = metrics_df_1.loc[metrics_df_1['Dice'].idxmax()].values
optimal_values_2 = metrics_df_2.loc[metrics_df_2['Dice'].idxmax()].values
optimal_values_3 = metrics_df_3.loc[metrics_df_3['Dice'].idxmax()].values
best_tresholds = [optimal_values_0[1], optimal_values_1[1], optimal_values_2[1], optimal_values_3[1]]
best_masks = [optimal_values_0[2], optimal_values_1[2], optimal_values_2[2], optimal_values_3[2]]
best_dices = [optimal_values_0[3], optimal_values_1[3], optimal_values_2[3], optimal_values_3[3]]
for index, name in enumerate(class_names):
    print('%s threshold=%.2f mask size=%d Dice=%.3f' % (name, best_tresholds[index], best_masks[index], best_dices[index]))
```
# Model evaluation
```
#@title
train_metrics = get_metrics(model, X_train, train_images_dest_path, best_tresholds, best_masks, 'Train')
display(train_metrics)
validation_metrics = get_metrics(model, X_val, validation_images_dest_path, best_tresholds, best_masks, 'Validation')
display(validation_metrics)
```
# Apply model to test set
```
#@title
test_df = []
for i in range(0, test.shape[0], 500):
batch_idx = list(range(i, min(test.shape[0], i + 500)))
batch_set = test[batch_idx[0]: batch_idx[-1]+1]
test_generator = DataGenerator(
directory=test_images_dest_path,
dataframe=batch_set,
target_df=submission,
batch_size=1,
target_size=(HEIGHT, WIDTH),
n_channels=CHANNELS,
n_classes=N_CLASSES,
preprocessing=preprocessing,
seed=seed,
mode='predict',
shuffle=False)
preds = model.predict_generator(test_generator)
for index, b in enumerate(batch_idx):
filename = test['image'].iloc[b]
image_df = submission[submission['image'] == filename].copy()
pred_masks = preds[index, ].round().astype(int)
pred_rles = build_rles(pred_masks, reshape=(350, 525))
image_df['EncodedPixels'] = pred_rles
        ### Post processing
pred_masks_post = preds[index, ].astype('float32')
for class_index in range(N_CLASSES):
pred_mask = pred_masks_post[...,class_index]
pred_mask = post_process(pred_mask, threshold=best_tresholds[class_index], min_size=best_masks[class_index])
pred_masks_post[...,class_index] = pred_mask
pred_rles_post = build_rles(pred_masks_post, reshape=(350, 525))
image_df['EncodedPixels_post'] = pred_rles_post
###
test_df.append(image_df)
sub_df = pd.concat(test_df)
```
### Regular submission
```
#@title
submission_df = sub_df[['Image_Label' ,'EncodedPixels']]
submission_df.to_csv(submission_path, index=False)
display(submission_df.head())
```
### Submission with post processing
```
#@title
submission_df_post = sub_df[['Image_Label' ,'EncodedPixels_post']]
submission_df_post.columns = ['Image_Label' ,'EncodedPixels']
submission_df_post.to_csv(submission_post_path, index=False)
display(submission_df_post.head())
```
```
%load_ext autoreload
%autoreload 2
import numpy as np
from IPython.display import HTML, Latex, Markdown, Pretty
from windIO.Plant import WTLayout
from fusedwake.WindFarm import WindFarm
from fusedwake.Plotting import circles
from fusedwake.gcl import GCL
import fusedwake.gcl.fortran as fgcl
import fusedwake.gcl.python as pygcl
%matplotlib inline
import matplotlib.pyplot as plt
#filename = 'middelgrunden.yml'
#filename = 'lillgrund.yml'
#filename = 'hornsrev.yml'
#filename = 'test_WF.yml'
#filename = 'test_WF_4.yml'
filename = 'test_WF_4Turbines.yml'
#wtl = WTLayout(filename)
wf = WindFarm(yml=filename)
# Fixed parameters
a1 = 0.435449861
a2 = 0.797853685
a3 = -0.124807893
a4 = 0.136821858
b1 = 15.6298
b2 = 1.0
# Variables
D = 80.0
CT = 0.98
TI = 0.10
print(np.allclose(fgcl.get_r96(D, CT, TI, a1, a2, a3, a4, b1, b2),
                  pygcl.get_r96(D, CT, TI, pars=[a1, a2, a3, a4, b1, b2])))
R = D/2.
x=4.*D
r=0.*R
print([np.allclose(fgcl.get_rw(x, D, CT, TI, a1, a2, a3, a4, b1, b2)[i],
                   pygcl.get_Rw(x, R, TI, CT, pars=[a1, a2, a3, a4, b1, b2])[i]) for i in range(3)])
print(np.allclose(fgcl.get_du(x, r, D, CT, TI, a1, a2, a3, a4, b1, b2),
                  pygcl.get_dU(x, r, R, CT, TI, pars=[a1, a2, a3, a4, b1, b2])))
R = D/2.
x=D*np.linspace(0.,10.,100)
r=0.*R*np.ones_like(x)
print([np.allclose(fgcl.get_rw(x, D, CT, TI, a1, a2, a3, a4, b1, b2)[i],
                   pygcl.get_Rw(x, R, TI, CT, pars=[a1, a2, a3, a4, b1, b2])[i]) for i in range(3)])
print(np.allclose(fgcl.get_du(x, r, D, CT, TI, a1, a2, a3, a4, b1, b2),
                  pygcl.get_dU(x, r, R, CT, TI, pars=[a1, a2, a3, a4, b1, b2])))
R = D/2.
x=D*np.linspace(0.,10.,100)
r=D*np.linspace(0.,2.,100)
print(np.allclose(fgcl.get_du(x, r, D, CT, TI, a1, a2, a3, a4, b1, b2),
                  pygcl.get_dU(x, r, R, CT, TI, pars=[a1, a2, a3, a4, b1, b2])))
dx = 7.*D
dy = 6.*D
dz = -2.*D
# Wake operating turbine
RT = 2.*R
DT = 2.*RT
print(np.allclose(fgcl.get_dueq(dx, dy, dz, DT, D, CT, TI, a1, a2, a3, a4, b1, b2),
                  pygcl.get_dUeq(dx, dy, dz, RT, R, CT, TI, pars=[a1, a2, a3, a4, b1, b2])))
dx = np.array([6.,10.,15.])*D
dy = np.array([1.,-5.,0.])*D
dz = np.array([-2,2.,1.])*D
# Wake operating turbines
Rop = np.array([1.,2.,.5])*D
Dop = 2.*Rop
print(np.allclose(fgcl.get_dueq(dx, dy, dz, Dop, D, CT, TI, a1, a2, a3, a4, b1, b2),
                  pygcl.get_dUeq(dx, dy, dz, Rop, R, CT, TI, pars=[a1, a2, a3, a4, b1, b2])))
gcl = GCL(WF=wf)
# Inputs
WS=10.0*np.ones([wf.nWT])
WD = 271.*np.ones([wf.nWT])  # alternative: np.random.normal(270., 30.)*np.ones([wf.nWT])
TI=0.1*np.ones([wf.nWT])
print(np.mean(WD))
fig = plt.figure(figsize=[6,5])
ax = fig.add_subplot(111)
circles(x=wf.xyz[0,:],
y=wf.xyz[1,:],
s=np.array(wf.R),
c=WS,
cmap=plt.cm.viridis,lw=0,
)
plt.colorbar()
for i in range(wf.nWT):
ax.annotate(str(wf.__getattr__('name')[i][-2:]).zfill(2),wf.xyz[[0,1],i]+0.7*np.array([wf.R[i],wf.R[i]]))
ax.arrow(x=wf.xyz[0,i],
y=wf.xyz[1,i],
dx=0.5*wf.R[1]*WS[i]*np.cos(np.deg2rad(270-WD[i])),
dy=0.5*wf.R[1]*WS[i]*np.sin(np.deg2rad(270-WD[i])),
head_length=2*wf.R[1],head_width=wf.R[1],
length_includes_head=True,
fc='k',ec='k',
)
ax.axis('equal')
ax.set_xlim([-50, 1000])
#ax.set_ylim([-400, 400])
gcl = GCL(WF=wf,WS=WS, WD=WD, TI=TI)
# Run the models
out_fort_gclm_s=gcl(version='fort_gclm')
#print gcl._get_kwargs(version='fort_gclm'),'\n'
print(gcl.version, ':', out_fort_gclm_s.p_wt.sum(), '\n')
# Run the models
out_py_gcl_v0=gcl(version='py_gcl_v0')
#print gcl._get_kwargs(version='py_gcl_v0'),'\n'
print(gcl.version, ':', out_py_gcl_v0.p_wt.sum(), '\n')
print(np.allclose(out_fort_gclm_s.p_wt, out_py_gcl_v0.p_wt))
print(np.allclose(out_fort_gclm_s.u_wt, out_py_gcl_v0.u_wt), '\n')
# Run the models
out_py_gcl_v1=gcl(version='py_gcl_v1')
#print gcl._get_kwargs(version='py_gcl_v1'),'\n'
print(gcl.version, ':', out_py_gcl_v1.p_wt.sum(), '\n')
print(np.allclose(out_fort_gclm_s.p_wt, out_py_gcl_v1.p_wt))
print(np.allclose(out_fort_gclm_s.u_wt, out_py_gcl_v1.u_wt), '\n')
varout=out_py_gcl_v0.u_wt#p_wt/1e6
fig = plt.figure(figsize=[6,5])
ax = fig.add_subplot(111)
circles(x=wf.xyz[0,:],
y=wf.xyz[1,:],
s=np.array(wf.R),
c=varout,
cmap=plt.cm.viridis,lw=0,
)
plt.colorbar()
for i in range(wf.nWT):
ax.annotate(str(wf.__getattr__('name')[i][-2:]).zfill(2),wf.xyz[[0,1],i]+0.7*np.array([wf.R[i],wf.R[i]]))
ax.axis('equal')
ax.set_xlim([-100, 1000])
#ax.set_ylim([-500, 500])
# Inputs
WS=10.0*np.ones([wf.nWT])+np.random.normal(loc=0.0, scale=0.25, size=[wf.nWT])
WD=285*np.ones([wf.nWT])+np.random.normal(loc=0.0, scale=3., size=[wf.nWT])
TI=0.1*np.ones([wf.nWT])+np.random.normal(loc=0.0, scale=0.02, size=[wf.nWT])
version = 'py_gcl_v0' #'fort_gclm_s' #
# Run the models
results=gcl(WS=WS, WD=WD, TI=TI, version=version)
results.p_wt.sum()
fig = plt.figure(figsize=[6,5])
ax = fig.add_subplot(111)
circles(x=wf.xyz[0,:],
y=wf.xyz[1,:],
s=np.array(wf.R),
c=WS,
cmap=plt.cm.viridis,lw=0,
)
plt.colorbar()
for i in range(wf.nWT):
ax.annotate(str(wf.__getattr__('name')[i][-2:]).zfill(2),wf.xyz[[0,1],i]+0.7*np.array([wf.R[i],wf.R[i]]))
ax.arrow(x=wf.xyz[0,i],
y=wf.xyz[1,i],
dx=0.5*wf.R[1]*WS[i]*np.cos(np.deg2rad(270-WD[i])),
dy=0.5*wf.R[1]*WS[i]*np.sin(np.deg2rad(270-WD[i])),
head_length=2*wf.R[1],head_width=wf.R[1],
length_includes_head=True,
fc='k',ec='k',
)
ax.axis('equal')
ax.set_xlim([-50, 1000])
#ax.set_ylim([-400, 400])
varout=results.u_wt#p_wt/1e6
fig = plt.figure(figsize=[6,5])
ax = fig.add_subplot(111)
circles(x=wf.xyz[0,:],
y=wf.xyz[1,:],
s=np.array(wf.R),
c=varout,
cmap=plt.cm.viridis,lw=0,
)
plt.colorbar()
for i in range(wf.nWT):
ax.annotate(str(wf.__getattr__('name')[i][-2:]).zfill(2),wf.xyz[[0,1],i]+0.7*np.array([wf.R[i],wf.R[i]]))
ax.axis('equal')
ax.set_xlim([-100, 1000])
#ax.set_ylim([-500, 500])
WD = (np.arange(-50,50)+270.)#np.linspace(-50,50,200)+270
WS = 10.
P_rat_py_v0 = []
P_rat_py_v1 = []
P_rat_fgclm_rdn = []
for wd in WD:
#out = gcl(WF=wf, WS=WS*np.ones([wf.nWT]), WD=wd+np.random.normal(loc=0.0, scale=2, size=[wf.nWT]),
# TI=0.1*np.ones([wf.nWT]), version='fort_gclm')
#P_rat_fgclm_rdn = np.append(P_rat_fgclm_rdn,out.p_wt[1]/out.p_wt[0])
out = gcl(WS=WS*np.ones([wf.nWT]), WD=wd*np.ones([wf.nWT]), TI=0.1*np.ones([wf.nWT]), version='py_gcl_v1')
P_rat_py_v1 = np.append(P_rat_py_v1,out.p_wt[1]/out.p_wt[0])
out = gcl(WF=wf, WS=WS*np.ones([wf.nWT]), WD=wd*np.ones([wf.nWT]), TI=0.1*np.ones([wf.nWT]), version='py_gcl_v0')
P_rat_py_v0 = np.append(P_rat_py_v0,out.p_wt[1]/out.p_wt[0])
out = gcl(WF=wf, WS=WS*np.ones_like(WD), WD=WD, TI=0.1*np.ones_like(WD), version='fort_gcl')
P_rat_fgcl = out.p_wt[:,1]/out.p_wt[:,0]
WDm = WD.reshape([-1,1])*np.ones([1,wf.nWT])
out = gcl(WF=wf, WS=WS*np.ones_like(WDm), WD=WDm, TI=0.1*np.ones_like(WDm), version='fort_gclm')
P_rat_fgclm = out.p_wt[:,1]/out.p_wt[:,0]
fig = plt.figure(figsize=[15,5])
ax = fig.add_subplot(111)
plt.plot(WD,P_rat_py_v1,'sk',ms=10,mfc='w',label ='py_gcl_v1')
#plt.plot(-WD,P_rat_py_v1,'x--',label ='py_gcl_v1_inv')
plt.plot(WD,P_rat_py_v0,'<b',ms=9,mfc='w',label ='py_gcl_v0')
#plt.plot(-WD,P_rat_py_v0,'-',label ='py_gcl_v0_inv')
plt.plot(WD,P_rat_fgcl,'-ok',ms=8,mec='k',mfc='w',label ='fort_gcl')
#plt.plot(-WD,P_rat_fgcl,'+--',label ='fort_gcl')
plt.plot(WD,P_rat_fgclm,'--+r',label ='fort_gclm')
#plt.plot(-WD,P_rat_fgclm_s,'.-',label ='fort_gclm')
#plt.plot(WD,P_rat_fgclm_rdn,'o',label ='fort_gclm_s_rdn')
plt.legend(loc=3)
ax = plt.gca()
WS_cases=np.arange(4,26)
WD_cases=np.arange(0,360,2)
WS_ms,WD_ms=np.meshgrid(WS_cases,WD_cases)
WS=WS_ms.flatten()
WD=WD_ms.flatten()
out = gcl(WF=wf, WS=WS, WD=WD, TI=0.1*np.ones_like(WD), version='fort_gcl')
out.p_wt.shape
WS_cases=np.arange(4,26)
WD_cases=np.arange(0,360,2)
WS_ms,WD_ms=np.meshgrid(WS_cases,WD_cases)
WS=WS_ms.reshape(-1,1)*np.ones([1,wf.nWT])
WS=WS+np.random.normal(0,0.5,size=WS.shape)
WD=WD_ms.reshape(-1,1)*np.ones([1,wf.nWT])
WD=WD+np.random.normal(0,3,size=WS.shape)
out = gcl(WF=wf, WS=WS, WD=WD, TI=0.1*np.ones_like(WD), version='fort_gclm')
out.p_wt.shape
```
## Smart Signatures with Transaction Groups
#### 06.3.5 Winter School on Smart Contracts
##### Peter Gruber (peter.gruber@usi.ch)
2022-01-22
* Smart Signatures with more than 1 transaction
* Combine conditions across transactions
## Setup
See notebook 04.1. The lines below will automatically load the functions in `algo_util.py`, the five accounts, and the Purestake credentials.
```
# Loading shared code and credentials
import sys, os
codepath = '..'+os.path.sep+'..'+os.path.sep+'sharedCode'
sys.path.append(codepath)
from algo_util import *
cred = load_credentials()
# Shortcuts to directly access the 5 accounts
MyAlgo = cred['MyAlgo']
Alice = cred['Alice']
Bob = cred['Bob']
Charlie = cred['Charlie']
Dina = cred['Dina']
from algosdk import account, mnemonic
from algosdk.v2client import algod
from algosdk.future import transaction
from algosdk.future.transaction import PaymentTxn
from algosdk.future.transaction import AssetConfigTxn, AssetTransferTxn, AssetFreezeTxn
from algosdk.future.transaction import LogicSig, LogicSigTransaction
import algosdk.error
import json
import base64
import pandas as pd
from pyteal import *
# Initialize the algod client (Testnet or Mainnet)
algod_client = algod.AlgodClient(algod_token='', algod_address=cred['algod_test'], headers=cred['purestake_token'])
print(Alice['public'])
print(Bob['public'])
print(Charlie['public'])
```
## The Split Payment
* Classical Smart Contract
* Example
* Two business partners agree to split all revenues in a fixed percentage
    * The smart contract is the business account, into which customers have to pay
* Both business partners can initiate a payout, but only in the fixed percentage
* Other examples
* Fixed tax rate
* Sales commission
#### Step 0: Get the status before the transaction
* Also fund accounts if need be
* https://bank.testnet.algorand.network
* https://testnet.algoexplorer.io/dispenser
```
# Get the holdings of Alice and Bob separately
alice_holding=asset_holdings_df(algod_client, Alice['public'])
bob_holding=asset_holdings_df(algod_client, Bob['public'])
# Merge in one data.frame using pandas merge
pd.merge(alice_holding, bob_holding, how="outer", on=["asset-id", "unit", "name", "decimals"], suffixes=['Alice','Bob'])
```
#### Step 1a: Write down the conditions as a PyTeal program
* Alice and Bob are business partners
* Alice gets 3/4 of the proceeds
* Bob gets 1/4 of the proceeds
```
split_cond = And(
Gtxn[0].sender() == Gtxn[1].sender(), # Both payments come from the same address
Gtxn[0].receiver() == Addr(Alice['public']), # Payment 0 to Alice
Gtxn[1].receiver() == Addr(Bob['public']), # Payment 1 to Bob
    Gtxn[0].amount() == Int(3) * (Gtxn[0].amount() + Gtxn[1].amount()) / Int(4)   # Alice_amount = 3/4 * Total_amount
)
fee_cond = And(
Gtxn[0].fee() <= Int(1000), # No fee attack
Gtxn[1].fee() <= Int(1000) # No fee attack
)
safety_cond = And(
Global.group_size() == Int(2), # Exactly 2 transactions
Gtxn[0].type_enum() == TxnType.Payment, # Both are PaymentTxn
Gtxn[1].type_enum() == TxnType.Payment,
    Gtxn[0].rekey_to() == Global.zero_address(),            # No rekey attack
Gtxn[1].rekey_to() == Global.zero_address(),
Gtxn[0].close_remainder_to() == Global.zero_address(), # No close_to attack
Gtxn[1].close_remainder_to() == Global.zero_address()
)
split_pyteal = And(
split_cond,
fee_cond,
safety_cond
)
```
#### Step 1b: Pyteal -> Teal
```
split_teal = compileTeal(split_pyteal, Mode.Signature, version=3)
print(split_teal)
```
#### Step 1c: Teal -> Bytecode for AVM
```
Split = algod_client.compile(split_teal)
Split
```
### The split payment is now ready
* We only need to communicate the hash to customers
#### Step 2: A customer makes a payment
* Dina buys something from the Alice_Bob_Company
* She pays 5 Algos into the company account
```
# Step 2.1: prepare transaction
sp = algod_client.suggested_params()
amt = int(5*1e6)
txn = transaction.PaymentTxn(sender=Dina['public'], sp=sp,
receiver=Split['hash'], amt=amt)
# Step 2.(2+3+4): sign and send and wait ...
stxn = txn.sign(Dina['private'])
txid = algod_client.send_transaction(stxn)
txinfo = wait_for_confirmation(algod_client, txid)
```
#### Step 3: Payout request
* Alice or Bob or anybody actually can make a payout request
* The only thing that matters is that 3/4 goes to Alice and 1/4 goes to Bob
* Consider the TX fees and min holdings in the contract
```
### Step 3.1: prepare and create TX group
sp = algod_client.suggested_params()
total_amt = 4.8                           # total withdrawal in algos
amt_1 = int(3/4 * total_amt * 1E6)        # Alice's share in microalgos
amt_2 = int(1/4 * total_amt * 1E6)        # Bob's share in microalgos
txn_1 = PaymentTxn(sender=Split['hash'],sp=sp,receiver=Alice['public'],amt=amt_1)
txn_2 = PaymentTxn(sender=Split['hash'],sp=sp,receiver=Bob['public'],amt=amt_2)
gid = transaction.calculate_group_id([txn_1, txn_2])
txn_1.group = gid
txn_2.group = gid
# Step 3.2a ask Smart Signature to sign txn_1
encodedProg = Split['result'].encode()
program = base64.decodebytes(encodedProg)
lsig = LogicSig(program)
stxn_1 = LogicSigTransaction(txn_1, lsig)
# Step 3.2b ask Smart Signature to sign txn_2
encodedProg = Split['result'].encode()
program = base64.decodebytes(encodedProg)
lsig = LogicSig(program)
stxn_2 = LogicSigTransaction(txn_2, lsig)
# Step 3.3: assemble transaction group and send
signed_group = [stxn_1, stxn_2]
txid = algod_client.send_transactions(signed_group)
# Step 3.4: wait for confirmation
txinfo = wait_for_confirmation(algod_client, txid)
```
## Appendix: how to calculate the splitting condition
The splitting condition is written in a strange order
```python
Gtxn[0].amount() == Int(3) * (Gtxn[0].amount()+Gtxn[1].amount()) / Int(4)
```
This (mathematically identical) version will not work
```python
Gtxn[0].amount() == Int(3)/Int(4) * (Gtxn[0].amount()+Gtxn[1].amount())
```
```
a = 750
b = 250
# Version 1
int(a) == int( int(3 * (int(a) + int(b)) ) / 4)
# Version 2
int(a) == int(3/4) * ( int(a) + int(b))
```
#### CONCLUSION: (Py)TEAL uses integer calculations for every single step!
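The comparison above can be distilled into a small Python helper. This is only an illustrative sketch — the name `teal_share` is ours, and `//` is used to mirror TEAL's truncating integer division — showing why the safe pattern is to multiply first and divide last:

```python
def teal_share(total, num, den):
    # Mirror TEAL semantics: multiply first, integer-divide last
    return (num * total) // den

total = 1000                       # microalgos paid into the contract
alice = teal_share(total, 3, 4)    # 750 -- correct 3/4 share
# Dividing first loses everything, since 3 // 4 == 0
wrong = (3 // 4) * total           # 0
print(alice, wrong)
```

The same ordering rule applies to any fraction expressed with integer numerator and denominator.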
## Exercise
* Change the Smart Signature (and the withdrawal transaction) so that Alice gets 80% and Bob gets 20%
## Exercise
* Discuss: How would we deal with fractions like 2/3 and 1/3, that cannot be easily expressed in percentages?
# Self Supervised Learning with Fastai
> Implementation of popular SOTA self-supervised learning algorithms as Fastai Callbacks.

[](https://pypi.org/project/self-supervised/#description)
[](https://zenodo.org/badge/latestdoi/295835009)
## Install
`pip install self-supervised`
## Documentation
Please read the documentation [here](https://keremturgutlu.github.io/self_supervised).
To go back to github repo please click [here](https://github.com/keremturgutlu/self_supervised/tree/master/).
## Algorithms
Please read the papers or blog posts before getting started with an algorithm, you may also check out documentation page of each algorithm to get a better understanding.
Here are the list of implemented **self_supervised.vision** algorithms:
- [SimCLR v1](https://arxiv.org/pdf/2002.05709.pdf) & [SimCLR v2](https://arxiv.org/pdf/2006.10029.pdf)
- [MoCo v1](https://arxiv.org/pdf/1911.05722.pdf) & [MoCo v2](https://arxiv.org/pdf/2003.04297.pdf)
- [BYOL](https://arxiv.org/pdf/2006.07733.pdf)
- [SwAV](https://arxiv.org/pdf/2006.09882.pdf)
- [Barlow Twins](https://arxiv.org/pdf/2103.03230.pdf)
- [DINO](https://arxiv.org/pdf/2104.14294.pdf)
Here are the list of implemented **self_supervised.multimodal** algorithms:
- [CLIP](https://arxiv.org/pdf/2103.00020.pdf)
- CLIP-MoCo (No paper, own idea)
For vision algorithms all models from [timm](https://github.com/rwightman/pytorch-image-models) and [fastai](https://github.com/fastai/fastai) can be used as encoders.
For multimodal training currently CLIP supports ViT-B/32 and ViT-L/14, following best architectures from the paper.
## Simple Usage
### Vision
#### SimCLR
```python
from self_supervised.vision.simclr import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_simclr_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_simclr_aug_pipelines(size=size)
learn = Learner(dls,model,cbs=[SimCLR(aug_pipelines, temp=0.07)])
learn.fit_flat_cos(100, 1e-2)
```
#### MoCo
```python
from self_supervised.vision.moco import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_moco_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_moco_aug_pipelines(size=size)
learn = Learner(dls, model,cbs=[MOCO(aug_pipelines=aug_pipelines, K=128)])
learn.fit_flat_cos(100, 1e-2)
```
#### BYOL
```python
from self_supervised.vision.byol import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_byol_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_byol_aug_pipelines(size=size)
learn = Learner(dls, model,cbs=[BYOL(aug_pipelines=aug_pipelines)])
learn.fit_flat_cos(100, 1e-2)
```
#### SWAV
```python
from self_supervised.vision.swav import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_swav_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_swav_aug_pipelines(num_crops=[2,6],
crop_sizes=[128,96],
min_scales=[0.25,0.05],
max_scales=[1.0,0.3])
learn = Learner(dls, model, cbs=[SWAV(aug_pipelines=aug_pipelines, crop_assgn_ids=[0,1], K=bs*2**6, queue_start_pct=0.5)])
learn.fit_flat_cos(100, 1e-2)
```
#### Barlow Twins
```python
from self_supervised.vision.simclr import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_barlow_twins_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_barlow_twins_aug_pipelines(size=size)
learn = Learner(dls,model,cbs=[BarlowTwins(aug_pipelines, lmb=5e-3)])
learn.fit_flat_cos(100, 1e-2)
```
#### DINO
```python
from self_supervised.models.vision_transformer import *
from self_supervised.vision.dino import *
dls = get_dls(resize, bs)
deits16 = MultiCropWrapper(deit_small(patch_size=16, drop_path_rate=0.1))
dino_head = DINOHead(deits16.encoder.embed_dim, 2**16, norm_last_layer=True)
student_model = nn.Sequential(deits16,dino_head)
deits16 = MultiCropWrapper(deit_small(patch_size=16))
dino_head = DINOHead(deits16.encoder.embed_dim, 2**16, norm_last_layer=True)
teacher_model = nn.Sequential(deits16,dino_head)
dino_model = DINOModel(student_model, teacher_model)
aug_pipelines = get_dino_aug_pipelines(num_crops=[2,6],
crop_sizes=[128,96],
min_scales=[0.25,0.05],
max_scales=[1.0,0.3])
learn = Learner(dls, dino_model, cbs=[DINO(aug_pipelines=aug_pipelines)])
learn.fit_flat_cos(100, 1e-2)
```
### Multimodal
#### CLIP
```python
from self_supervised.multimodal.clip import *
dls = get_dls(...)
clip_tokenizer = ClipTokenizer()
vitb32_config_dict = vitb32_config(224, clip_tokenizer.context_length, clip_tokenizer.vocab_size)
clip_model = CLIP(**vitb32_config_dict, checkpoint=False, checkpoint_nchunks=0)
learn = Learner(dls, clip_model, loss_func=noop, cbs=[CLIPTrainer()])
learn.fit_flat_cos(100, 1e-2)
```
#### CLIP-MoCo
```python
from self_supervised.multimodal.clip_moco import *
dls = get_dls(...)
clip_tokenizer = ClipTokenizer()
vitb32_config_dict = vitb32_config(224, clip_tokenizer.context_length, clip_tokenizer.vocab_size)
clip_model = CLIPMOCO(K=4096,m=0.999, **vitb32_config_dict, checkpoint=False, checkpoint_nchunks=0)
learn = Learner(dls, clip_model, loss_func=noop, cbs=[CLIPMOCOTrainer()])
learn.fit_flat_cos(100, 1e-2)
```
## ImageWang Benchmarks
All of the algorithms implemented in this library have been evaluated in [ImageWang Leaderboard](https://github.com/fastai/imagenette#image%E7%BD%91-leaderboard).
Overall, the algorithms rank as `SwAV > MoCo > BYOL > SimCLR` in most of the benchmarks. For details you may inspect the history of the [ImageWang Leaderboard](https://github.com/fastai/imagenette#image%E7%BD%91-leaderboard) on GitHub.
`BarlowTwins` is still under testing on ImageWang.
It should be noted that during these experiments no hyperparameter selection/tuning was done beyond using `learn.lr_find()` or making sanity checks over data augmentations by visualizing batches. So there is still room for improvement, and the overall rankings of the algorithms may change based on your setup. Yet, the overall rankings are on par with the papers.
## Contributing
Contributions and or requests for new self-supervised algorithms are welcome. This repo will try to keep itself up-to-date with recent SOTA self-supervised algorithms.
Before raising a PR please create a new branch with name `<self-supervised-algorithm>`. You may refer to previous notebooks before implementing your Callback.
Please refer to sections `Developers Guide, Abbreviations Guide, and Style Guide` from https://docs.fast.ai/dev-setup and note that same rules apply for this library.
# Aerospike Spark Connector Tutorial for Scala
## Tested with Spark connector 3.2.0, ASDB EE 5.7.0.7, Java 8, Apache Spark 3.0.2, Python 3.7 and Scala 2.12.11 and [Spylon]( https://pypi.org/project/spylon-kernel/)
#### Please download the appropriate Aerospike Connect for Spark from the [download page](https://enterprise.aerospike.com/enterprise/download/connectors/aerospike-spark/notes.html)
Set `launcher.jars` with path to the downloaded binary
```
%%init_spark
launcher.jars = ["aerospike-spark-assembly-3.2.0.jar"]
launcher.master = "local[*]"
//Specify the Seed Host of the Aerospike Server
val AS_HOST = "127.0.0.1:3000"
import scala.collection.mutable.ArrayBuffer
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SaveMode
import com.aerospike.spark.sql.AerospikeConnection
import org.apache.spark.sql.SparkSession
```
## Schema in the Spark Connector
- Aerospike is schemaless; however, Spark adheres to a schema. After the schema is decided upon (either inferred or given), data within the bins must honor the types.
- To infer the schema, the connector samples a set of records (configurable through `aerospike.schema.scan`) to decide the name of bins/columns and their types. This implies that the derived schema depends entirely upon sampled records.
- **Note that `__key` was not part of provided schema. So how can one query using `__key`? We can just add `__key` in provided schema with appropriate type. Similarly we can add `__gen` or `__ttl` etc.**
```
val schemaWithPK: StructType = new StructType(Array(
    StructField("__key", IntegerType, nullable = false),
    StructField("id", IntegerType, nullable = false),
    StructField("name", StringType, nullable = false),
    StructField("age", IntegerType, nullable = false),
    StructField("salary", IntegerType, nullable = false)))
```
- **We recommend that you provide schema for queries that involve [collection data types](https://docs.aerospike.com/docs/guide/cdt.html) such as lists, maps, and mixed types. Using schema inference for CDT may cause unexpected issues.**
## Create sample data and write it into Aerospike Database
```
//Create test data
val conf = sc.getConf.clone();
conf.set("aerospike.seedhost" , AS_HOST)
conf.set("aerospike.namespace", "test")
spark.close()
val spark2= SparkSession.builder().config(conf).master("local[2]").getOrCreate()
val num_records=1000
val rand = scala.util.Random
val schema: StructType = new StructType(
Array(
StructField("id", IntegerType, nullable = false),
StructField("name", StringType, nullable = false),
StructField("age", IntegerType, nullable = false),
StructField("salary",IntegerType, nullable = false)
))
val inputDF = {
val inputBuf= new ArrayBuffer[Row]()
for ( i <- 1 to num_records){
val name = "name" + i
val age = i%100
val salary = 50000 + rand.nextInt(50000)
val id = i
val r = Row(id, name, age,salary)
inputBuf.append(r)
}
val inputRDD = spark2.sparkContext.parallelize(inputBuf.toSeq)
spark2.createDataFrame(inputRDD,schema)
}
inputDF.show(10)
//Write the Sample Data to Aerospike
inputDF.write.mode(SaveMode.Overwrite)
.format("aerospike") //aerospike specific format
.option("aerospike.writeset", "scala_input_data") //write to this set
.option("aerospike.updateByKey", "id") //indicates which columns should be used for construction of primary key
.option("aerospike.sendKey", "true")
.save()
```
### Using Spark SQL syntax
```
/*
Aerospike DB needs a Primary key for record insertion. Hence, you must identify the primary key column
using for example .option(“aerospike.updateByKey”, “id”), where “id” is the name of the column that you’d
like to be the Primary key, while loading data from the DB.
*/
val insertDFWithSchema=spark2
.sqlContext
.read
.format("aerospike")
.schema(schema)
.option("aerospike.set", "scala_input_data")
.load()
val sqlView="inserttable"
insertDFWithSchema.createOrReplaceTempView(sqlView)
//
//V2 datasource doesn't allow insert into a view.
//
spark2.sql(s"select * from $sqlView").show()
```
## Load data into a DataFrame without specifying any schema i.e. using connector schema inference
```
// Create a Spark DataFrame by using the Connector Schema inference mechanism
val loadedDFWithoutSchema=spark2
.sqlContext
.read
.format("aerospike")
.option("aerospike.set", "scala_input_data") //read the data from this set
.load
loadedDFWithoutSchema.printSchema()
//Notice that schema of loaded data has some additional fields.
// When connector infers schema, it also adds internal metadata.
```
## Load data into a DataFrame with user specified schema
```
//Data can be loaded with known schema as well.
val loadedDFWithSchema=spark2
.sqlContext
.read
.format("aerospike")
.schema(schema)
.option("aerospike.set", "scala_input_data").load
loadedDFWithSchema.show(5)
```
## Writing Sample Collection Data Types (CDT) data into Aerospike
```
val complex_data_json="resources/nested_data.json"
val alias= StructType(List(
StructField("first_name",StringType, false),
StructField("last_name",StringType, false)))
val name= StructType(List(
StructField("first_name",StringType, false),
StructField("aliases",ArrayType(alias), false )
))
val street_adress= StructType(List(
StructField("street_name", StringType, false),
StructField("apt_number" , IntegerType, false)))
val address = StructType( List(
StructField ("zip" , LongType, false),
StructField("street", street_adress, false),
StructField("city", StringType, false)))
val workHistory = StructType(List(
StructField ("company_name" , StringType, false),
StructField( "company_address" , address, false),
StructField("worked_from", StringType, false)))
val person= StructType ( List(
StructField("name" , name, false, Metadata.empty),
StructField("SSN", StringType, false,Metadata.empty),
StructField("home_address", ArrayType(address), false),
StructField("work_history", ArrayType(workHistory), false)))
val cmplx_data_with_schema=spark2.read.schema(person).json(complex_data_json)
cmplx_data_with_schema.printSchema()
cmplx_data_with_schema.write.mode(SaveMode.Overwrite)
.format("aerospike") //aerospike specific format
.option("aerospike.seedhost", AS_HOST) //db hostname, can be added multiple hosts, delimited with ":"
.option("aerospike.namespace", "test") //use this namespace
.option("aerospike.writeset", "scala_complex_input_data") //write to this set
.option("aerospike.updateByKey", "SSN") //indicates which columns should be used for construction of primary key
.save()
```
## Load Complex Data Types (CDT) into a DataFrame with user specified schema
```
val loadedComplexDFWithSchema=spark2
.sqlContext
.read
.format("aerospike")
.option("aerospike.set", "scala_complex_input_data") //read the data from this set
.schema(person)
.load
loadedComplexDFWithSchema.show(2)
loadedComplexDFWithSchema.printSchema()
loadedComplexDFWithSchema.cache()
//Please note the difference in types of loaded data in both cases. With schema, we exactly infer complex types.
```
# Querying Aerospike Data using SparkSQL
### Things to keep in mind
1. Queries that involve the Primary Key or Digest in the predicate trigger [aerospike_batch_get()](https://www.aerospike.com/docs/client/c/usage/kvs/batch.html) and run extremely fast — for example, a query containing `__key` or `__digest` with no `OR` between two bins.
2. All other queries may entail a full scan of the Aerospike DB if they can’t be converted to Aerospike batchget.
## Queries that include Primary Key in the Predicate
In case of batchget queries we can also apply filters upon metadata columns like `__gen` or `__ttl` etc. To do so, these columns should be exposed through schema (if schema provided).
```
val batchGet1= spark2.sqlContext
.read
.format("aerospike")
.option("aerospike.set", "scala_input_data")
.option("aerospike.keyType", "int") //used to hint primary key(PK) type when schema is not provided.
.load.where("__key = 829")
batchGet1.show()
//Please be aware that the Aerospike database supports only equality tests with PKs in a primary key query.
//So a where clause with "__key > 10" would result in a scan query!
//In this query we are doing *OR* between PK subqueries
val somePrimaryKeys= 1.to(10).toSeq
val someMoreKeys= 12.to(14).toSeq
val batchGet2= spark2.sqlContext
.read
.format("aerospike")
.option("aerospike.set", "scala_input_data")
.option("aerospike.keyType", "int") //used to hint primary key(PK) type when inferred without schema.
.load.where((col("__key") isin (somePrimaryKeys:_*)) || ( col("__key") isin (someMoreKeys:_*) ))
batchGet2.show(15)
//We should get 13 records in total.
```
## Queries that do not include Primary Key in the Predicate
```
val somePrimaryKeys= 1.to(10).toSeq
val scanQuery1= spark2.sqlContext
.read
.format("aerospike")
.option("aerospike.set", "scala_input_data")
.option("aerospike.keyType", "int") //used to hint primary key(PK) type when inferred without schema.
.load.where((col("__key") isin (somePrimaryKeys:_*)) || ( col("age") >50 ))
scanQuery1.show()
//Since there is an OR between PKs and a bin, it will be treated as a scan query.
//Primary keys are not stored in bins (by default), hence only filters corresponding to bins are honored.
```
## Sampling from Aerospike DB
- Sample a specified number of records from Aerospike to considerably reduce data movement between Aerospike and the Spark clusters. Depending on the `aerospike.partition.factor` setting, you may get more records than desired. Please use this property in conjunction with the Spark `limit()` function to get the specified number of records. The sample read is not randomized, so sample more than you need and use the Spark `sample()` function to randomize if you see fit. You can use it in conjunction with `aerospike.recordspersecond` to control the load on the Aerospike server while sampling.
- For more information, please see [documentation](https://docs.aerospike.com/docs/connect/processing/spark/configuration.html) page.
```
//number_of_spark_partitions (num_sp)=2^{aerospike.partition.factor}
//total number of records = Math.ceil((float)aerospike.sample.size/num_sp) * (num_sp)
//use lower partition factor for more accurate sampling
val setname="scala_input_data"
val sample_size=101
val df3=spark2.read.format("aerospike")
.option("aerospike.partition.factor","2")
.option("aerospike.set",setname)
.option("aerospike.sample.size","101") //allows sampling of approximately the specified number of records.
.load()
val df4=spark2.read.format("aerospike")
.option("aerospike.partition.factor","6")
.option("aerospike.set",setname)
.option("aerospike.sample.size","101") //allows sampling of approximately the specified number of records.
.load()
//Notice that more records were read than requested due to the underlying partitioning logic related to the partition factor as described earlier, hence we use Spark limit() function additionally to return the desired number of records.
val count3=df3.count()
val count4=df4.count()
//Note how limit() returns only 101 records from df4, which has 128 records.
val dfWithLimit=df4.limit(101)
val limitCount=dfWithLimit.count()
```
## Pushdown [Aerospike Expressions](https://docs.aerospike.com/docs/guide/expressions/) from within a Spark API.
- Make sure that you do not use the WHERE clause or Spark filters while querying
- See [Aerospike Expressions](https://docs.aerospike.com/docs/guide/expressions/) for more information on how to construct expressions.
- Constructed expressions must be converted to Base64 before using them in the Spark API
- Arbitrary expressions can be dynamically constructed with the unshaded connector jar.
```
val pushdownset="scala_input_data" // we are using this set created above
import com.aerospike.spark.utility.AerospikePushdownExpressions
//We can construct dynamic expressions only when the library is unshaded.
// id % 5 == 0
// Equivalent Exp: Exp.eq(Exp.mod(Exp.intBin("a"), Exp.`val`(5)), Exp.`val`(0))
// These can only be done with the unshaded connector
// val expIntBin=AerospikePushdownExpressions.intBin("id") // id is the name of column
// val expMODIntBinEqualToZero=AerospikePushdownExpressions.eq(
// AerospikePushdownExpressions.mod(expIntBin, AerospikePushdownExpressions.`val`(5)),
// AerospikePushdownExpressions.`val`(0))
// val expMODIntBinToBase64= AerospikePushdownExpressions.build(expMODIntBinEqualToZero).getBase64
// convert to base64 Expression object
val expMODIntBinToBase64= "kwGTGpNRAqJpZAUA"
val pushDownDF =spark2.sqlContext
.read
.format("aerospike")
.schema(schema)
.option("aerospike.set", pushdownset)
.option("aerospike.pushdown.expressions", expMODIntBinToBase64)
.load()
pushDownDF.count() //note this should return 200, because there are 200 records whose id bin is divisible by 5
```
## aerolookup
aerolookup allows you to look up records corresponding to a set of keys stored in a Spark DF, streaming or otherwise. It supports:
- [Aerospike CDT](https://docs.aerospike.com/docs/guide/cdt.html)
- Quota and retry (these configurations are extracted from sparkconf)
- [Flexible schema](https://docs.aerospike.com/docs/connect/processing/spark/configuration.html#flexible-schemas). To enable, set `aerospike.schema.flexible` to true in the SparkConf object.
- Aerospike Expressions Pushdown (Note: This must be specified through SparkConf object.)
```
val outputSchema= StructType(
List(StructField("name", name, false),
StructField("SSN", StringType, false),
StructField("home_address", ArrayType(address), false))
)
import spark2.implicits._
//Create a set of PKs whose records you'd like to look up in the Aerospike database
val ssns = Seq("825-55-3247", "289-18-1554", "756-46-4088", "525-31-0299", "456-45-2200", "200-71-7765")
val ssnDF = ssns.toDF("SSN")
import com.aerospike.spark._ // to import aerojoin functionality
//scala_complex_input_data is the set in Aerospike database that you are using to look up the keys stored in ssnDF
val outputDF=aerolookup(ssnDF,"SSN", "scala_complex_input_data",outputSchema, "test")
outputDF.show(100)
```
| github_jupyter |
# bioptim #1 - InitialGuess
This tutorial is a simple example of how to manage InitialGuess with bioptim. It is designed to show how one can change the InitialGuess of a problem when needed.
An InitialGuess lets the solver start its calculation from a given point; the goal is to make this initial guess as close as possible to the actual solution.
In our example, we will use the musculoskeletal model illustrated just below.
<p style="text-align: center;">
<img src="../doc/wu_model_pic.png" width="400" height="400" /></p>
First, import the libraries, including the InitialGuess and InitialGuessList classes:
```
import biorbd_casadi as biorbd
from bioptim import (InitialGuess, InitialGuessList)
```
Then, we load the model:
```
model_name = "../models/wu_converted_definitif.bioMod"
# giving the path
biorbd_model = biorbd.Model(model_name)
```
The InitialGuess allows us to define the initial values of the degrees of freedom of our model. The simplest problem is a torque-driven problem, which includes states and controls. Controls are denoted by $\boldsymbol{u}$ and contain torques. States are denoted by $\boldsymbol{x}$ and contain generalized coordinates $\boldsymbol{q}$ and generalized velocities $\boldsymbol{\dot{q}}$.
In this example, all initial guesses are initialized to 0.
```
# State Initial Guess
n_q = biorbd_model.nbQ()
n_qdot = biorbd_model.nbQdot()
x_init = InitialGuess([0] * (n_q + n_qdot))
# Control Initial guess
n_tau = biorbd_model.nbGeneralizedTorque() # We get the number of torque in our model
u_init = InitialGuess([0] * n_tau)
```
An InitialGuess object has two attributes, init and type.
Here, the type of the InitialGuess is constant and the initial values are set to 0 all over the phase.
x_init.init corresponds to the initial values of the generalized coordinates and generalized velocities.
```
x_init.init
```
x_init.type describes the InitialGuess that is set over the phase.
```
x_init.type
```
If you want to give different initial values to specific states and/or controls, this is how we proceed:
```
dof0 = 1
dof1 = 0.005
dof25 = 2
control0 = 100
control1 = 200
control12 = -300
```
And we add these values to a list.
```
x_init_list = [0] * (biorbd_model.nbQ() + biorbd_model.nbQdot()) # We created a list of 0 that will become the InitialGuessList
x_init_list[0:2] = [dof0, dof1]
x_init_list[25] = dof25
print("x_init",x_init_list)
u_init_list = ([0] * n_tau)
u_init_list[0:2] = [control0, control1]
u_init_list[12] = control12
print("u_init",u_init_list)
```
And finally we put our list in the InitialGuess.
```
x_init = InitialGuess(x_init_list)
u_init = InitialGuess(u_init_list)
```
## Do It Yourself
Now that you saw an example, let's apply what you learned to another example. You have to change the following InitialGuess.
```
# Complete the following line
# We want to change states 5, 8 and 9
# Initial guess
n_q = biorbd_model.nbQ()
n_qdot = biorbd_model.nbQdot()
x_init = ... () # we created x_init as a list of 0
dof5 = 1
dof8 = 0.005
dof9 = 2
x_init = InitialGuess(...) # Finally we created the list of InitialGuess
```
Let's check what we've done!
```
x_init.init
```
# Considering a multiphase ocp
Some ocp are made of several phases.
We will now use InitialGuessList to have an InitialGuess for each phase.
```
x_init = InitialGuessList()
u_init = InitialGuessList()
```
Then we define the values for each phase.
```
dof0_phase1 = 1
dof1_phase1 = 0.005
dof25_phase1 = 2
dof0_phase2 = 0.4
dof1_phase2 = 2
dof25_phase2 = 0.008
control0_phase1 = 100
control1_phase1 = 200
control11_phase1 = -300
control0_phase2 = 200
control1_phase2 = -300
control11_phase2 = 100
```
When we have more than one phase, we have to create a list for each phase and add these lists to the InitialGuessList through the .add method.
```
x_init_list_phase1 = [0] * (biorbd_model.nbQ() + biorbd_model.nbQdot())
x_init_list_phase2 = [0] * (biorbd_model.nbQ() + biorbd_model.nbQdot())
x_init_list_phase1[0:2] = [dof0_phase1, dof1_phase1]
x_init_list_phase1[25] = dof25_phase1
x_init_list_phase2[0:2] = [dof0_phase2, dof1_phase2]
x_init_list_phase2[25] = dof25_phase2
x_init.add(x_init_list_phase1)
x_init.add(x_init_list_phase2)
```
Now, the InitialGuessList for states contains two lists. So let's check!
```
x_init[0].init
x_init[1].init
```
In our case, our problem has multiple phases, so x_init[i].type describes the fact that the InitialGuess is set as constant over phase i.
```
x_init[0].type
x_init[1].type
```
## Do It Yourself
Now that you saw an example, let's apply what you learned to another example. You have to change the following InitialGuess.
```
# Complete the following line
# We want to change states 5, 8 and 9
# Initial guess
n_q = biorbd_model.nbQ()
n_qdot = biorbd_model.nbQdot()
x_init = ... () # we created x_init as a list of InitialGuess that is initially empty
x_init = InitialGuessList()
dof5_phase1 = 1
dof8_phase1 = 0.005
dof9_phase1 = 2
dof5_phase2 = 0.005
dof8_phase2 = 1
dof9_phase2 = 3
x_init.add(...) # Finally we add each list to the InitialGuessList
x_init.add(...)
```
Let's check what you've done!
```
x_init[0].init
x_init[1].init
```
# Congratulations! You went from zero to hero in your understanding of InitialGuess!
| github_jupyter |
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import scipy.signal
import time
import cv2
import matplotlib.pyplot as plt
tf.config.list_physical_devices("GPU")
import tensorflow as tf
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.InteractiveSession(config=config)
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
num_actions = 3
observation_dimensions = (128, 128, 3)
def Model(acc_actions, steer_actions):
inp = keras.layers.Input( shape=observation_dimensions )
x = keras.layers.Conv2D(filters=32, kernel_size=(3,3), kernel_initializer='he_normal',
padding='same', activation="relu")(inp)
x = keras.layers.AveragePooling2D( (2,2) )(x)
x = keras.layers.Conv2D(filters=8, kernel_size=(3,3), kernel_initializer='he_normal',
padding='same', activation="relu")(x)
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(128, activation="relu")(x)
x1 = keras.layers.Dense(64, activation="relu")(x)
x1 = keras.layers.Dense(acc_actions, activation="softmax", name="throttle")(x1)
x2 = keras.layers.Dense(64, activation="relu")(x)
x2 = keras.layers.Dense(steer_actions, activation="softmax", name='steer')(x2)
return keras.models.Model( inp, [x1, x2] )
model = Model(2, 3)
model.compile( optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"] )
model.summary()
import os
data = pd.read_csv("action_took.csv").drop_duplicates()
print(data.shape)
data = data[ data.iloc[:,0] != "0" ]
data.shape
up_down_target = data.iloc[:,1]
up_down_target.loc[data.iloc[:,1] == -1] = 0  # use .loc here: boolean-Series masks are not valid with .iloc
up_down_target.value_counts()
left_right_target = data.iloc[:,2]
left_right_target.value_counts()
images = []
for img in data.iloc[:,0]:
images.append(cv2.imread(img).reshape((-1,128,128,3)) )
images = np.concatenate(images, axis=0)
images.shape
up_down_target = (up_down_target).values
left_right_target = (left_right_target+1).values
target_ud = np.zeros( (len(images), 2) )
target_lr = np.zeros( (len(images), 3) )
target_ud.shape, target_lr.shape
for i in range(len(up_down_target)):
target_ud[i][ up_down_target[i] ] = 1
for i in range(len(left_right_target)):
target_lr[i][ left_right_target[i] ] = 1
target_ud.sum(axis=0), target_lr.sum(axis=0)
history = model.fit(images/255, [target_ud, target_lr], validation_split=0.2, batch_size=128, epochs=5)
import gym
import gym_carla
import carla
#WetCloudyNoon, WetSunset, HardRainNoon
params = {
'number_of_vehicles': 40,
'number_of_walkers': 30,
'display_size': 250, # screen size of bird-eye render
'display_height' : 512,
'display_main': True,
'weather': "WetSunset",
'max_past_step': 1, # the number of past steps to draw
'dt': 0.1, # time interval between two frames
'discrete': False, # whether to use discrete control space
'discrete_acc': [1.0, 0.0, 1.0], # discrete value of accelerations
'discrete_steer': [-1, 0, 1], # discrete value of steering angles
'continuous_accel_range': [-3.0, 3.0], # continuous acceleration range
'continuous_steer_range': [-0.2, 0.2], # continuous steering angle range
'ego_vehicle_filter': 'vehicle.tesla.model3', # filter for defining ego vehicle
'address': 'localhost',
'port': 2000, # connection port
'town': 'Town02', # which town to simulate
'task_mode': 'random', # mode of the task, [random, roundabout (only for Town03)]
'max_time_episode': 5000, # maximum timesteps per episode
'max_waypt': 12, # maximum number of waypoints
'obs_range': 32, # observation range (meter)
'lidar_bin': 0.125, # bin size of lidar sensor (meter)
'd_behind': 12, # distance behind the ego vehicle (meter)
'out_lane_thres': 5.0, # threshold for out of lane
'desired_speed': 8, # desired speed (m/s)
'max_ego_spawn_times': 200, # maximum times to spawn ego vehicle
'display_route': True, # whether to render the desired route50
'pixor_size': 64, # size of the pixor labels
'pixor': False, # whether to output PIXOR observation
}
def read_transform(img):
return img[76:204,76:204,:]/255
env = gym.make('carla-v0', params=params)
observation = env.reset()
for _ in range(20): observation, _, _, _ = env.step([1.25,0])
done = False
while not done:
#action_ud = np.argmax(up_down_model.predict( read_transform(observation['birdeye']).reshape( (1, 128,128,3) ))[0])
#action_lr = np.argmax(left_right_model.predict( read_transform(observation['birdeye']).reshape( (1, 128,128,3) ))[0])-1
action_ud, action_lr = model.predict( read_transform(observation['birdeye']).reshape( (1, 128,128,3) ))
action_ud, action_lr = np.argmax(action_ud[0]), np.argmax(action_lr[0])-1
observation, _, done, _ = env.step( [1.25*action_ud if action_ud == 1 else -1, action_lr] )
```
| github_jupyter |
# COVID-19 Time Series Prediction Using Temporal Fusion Transformers
## Bernhard Kaindl
**DISCLAIMER:** This project is part of Udacity's [Data Scientist Nanodegree](https://classroom.udacity.com/nanodegrees/nd025/dashboard/overview). The model shipped with this version of the project is to be understood as a _proof of concept_ or, at most, a starting point for further model improvements.
Please refer to official information and guidance provided by your local authorities concerning the COVID-19 pandemic rather than trusting some stranger on the internet and their private coding project.
## Abstract - About the Project
In this project, we are going to train a [Temporal Fusion Transformer](https://arxiv.org/pdf/1912.09363.pdf) (TFT) for time series forecasting. We will use a dataset provided by _Our World in Data_ (OWID) and Jan Breitner's package [`pytorch-forecasting`](https://github.com/jdb78/pytorch-forecasting), which contains an easy-to-use implementation of a TFT model based on `torch`.
The goal of this project is to provide an example of a pipeline for training and prediction, together with a minimal dashboard application using `dash` to provide an interactive interface to retrieve predictions made by the model on a per-country basis.
## 1. On Temporal Fusion Transformers
Predictions of time series comprise a rather complex tier of prediction problems with a wide array of use cases in many fields. With advancements in machine learning, research has begun exploring the use of deep neural network algorithms for tackling time series predictions, yielding approaches involving both recurrent neural networks and attention-based architectures, which have proven useful for problems in image classification and natural language processing.
TFTs as proposed by [Lim et al. (2020)](https://arxiv.org/pdf/1912.09363.pdf) employ attention mechanisms to enable probabilistic multi-horizon forecasting of time-series involving complex inputs, such as past covariates and known - or assumed - future inputs. Furthermore, while providing high-performance forecasts, TFTs also allow for interpretation of the results, as will be demonstrated on the trained model.
## 2. Problem Domain and Method
This project is intended as a proof of concept to demonstrate the usefulness of TFTs for time series forecasting in areas outside of purely business-focused settings. To this end, this project examines pandemic data on COVID-19. As such, forecasting accuracy is secondary, also because training TFTs on large datasets is resource-intensive.
The training run for the final model chosen in this project needed about 4.5 hours to finish (only for fitting) with eight 2.8 GHz CPU cores and 8 GB GPU RAM.
Nonetheless, this project should provide the interested practitioner with a starting point either to improve upon this model or to use `pytorch-forecasting` and TFTs for their own use case.
The following will be covered in this notebook:
- Data Retrieval & Preprocessing
- Configuring WANDB for Monitoring and Tuning of the Model
- Interpretation/Discussion of Attention and Variable Importance
For the prediction pipeline, you can have a look at the Dash application coming in the same repository.
## 3. Data
There are several methods for retrieving the data. The most straightforward one is to download the data directly from 'https://covid.ourworldindata.org/data/owid-covid-data.json'. However, as the data there is updated on a daily basis, we need some way to keep track of which dataset we are using in order to make models comparable.
One could download and save the data using a timestamp, for example. I personally used [WANDB's artifact system](https://docs.wandb.ai/artifacts), which versions datasets and allows a good deal of control over which model is trained on which data.
For this example, however, I have already prepared the dataset I used to train the final model. I will leave a snippet commented out to demonstrate how to generate a dataset artifact using `wandb`.
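As a quick illustration of the timestamp alternative mentioned above, one could build versioned file names like this (the prefix and extension are just examples, not names used elsewhere in this project):

```
from datetime import date

def timestamped_filename(prefix, ext="json"):
    """Build a versioned file name such as 'owid-covid-20210308.json'."""
    stamp = date.today().strftime("%Y%m%d")
    return f"{prefix}-{stamp}.{ext}"

# usage sketch: df.to_json(timestamped_filename("owid-covid"), orient="index")
```

This keeps one file per retrieval day, which is enough to tell models apart, but it lacks the checksumming and usage logging that WANDB artifacts provide.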
```
import torch
import json
import numpy as np
import pandas as pd
import requests
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer, Baseline, DeepAR
from pytorch_forecasting.data import GroupNormalizer
from pytorch_forecasting.metrics import PoissonLoss, QuantileLoss, SMAPE
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters
from utils.data_retrieval import retrieve_data_from_url, prepare_data
from datetime import date as dt
import wandb
# initialize offline run to load configuration file
# for online syncing change mode to online and adjust project parameter and run name
run = wandb.init(project="capstone",
name="submission",
mode="offline",
job_type="data_retrieval",
reinit=True,
config='run_config.yml')
# # optional: download data and create new artifact
# URL = 'https://covid.ourworldindata.org/data/owid-covid-data.json'
# df = retrieve_data_from_url(URL)
# # the data in the raw json contains a column with the time-series data in a nested structure
# # run prepare_data on raw output to get one row with all the data per country & per day
# data, x_df, info = prepare_data(df)
# # record timestamp to add retrieval date to artifact metadata
# timestamp = dt.strftime(dt.today(), "%Y%m%d")
# # register artifact with URL to move version number and to log usage
# data_artifact = wandb.Artifact("covid-data", type="dataset", metadata={"retrieval" : timestamp})
# # data_artifact.add_file('cleaned')
# data_artifact.add_reference(URL, name="covid_json", checksum=True)
# run.log_artifact(data_artifact)
```
For the final model, I used [`covid-data:v4`](https://wandb.ai/kaiharuto/capstone/artifacts/dataset/covid-data/961a07a07d5586309848) as training data. As the data artifacts in WANDB only contain the URL as reference, we need to do the same preparation in `prepare_data` as in the cell above.
```
#load raw data from file and prepare
df = pd.read_json("data/covid-data:v4/covid_json", orient="index")
data, x_df, info = prepare_data(df)
data.head()
data.date.min(), data.date.max()
import matplotlib.pyplot as plt
# plot nans in xs
x_nans_sums = info["x_nan_sums"]
x_ax_labels = [k for k,v in x_nans_sums.items()]
x_ax_values = [v for k,v in x_nans_sums.items()]
plt.bar(x_ax_labels,x_ax_values)
plt.title("Missing Values in (Exogenous) Covariates")
plt.xticks(rotation = 90);
```
The graph above already shows the first challenge we are facing with this dataset: for quite a few exogenous variables, we have a problem with missing values. In fact, when experimenting to find a good configuration for model training, trying out different approaches for dealing with NaNs was quite important.
As we will see, for the final model, replacing the NaNs with zeroes yielded acceptable results, but keeping this in mind for further improvements on the model might help.
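A minimal pandas sketch of this zero-fill idea, combined with indicator columns for what was imputed (the column values here are toy numbers, not the real dataset):

```
import numpy as np
import pandas as pd

df = pd.DataFrame({"new_tests_smoothed": [10.0, np.nan, 30.0],
                   "stringency_index": [np.nan, 55.0, 60.0]})

# flag which entries were missing before imputation ...
for col in ["new_tests_smoothed", "stringency_index"]:
    if df[col].isnull().any():
        df[col + "_was_missing"] = df[col].isnull()

# ... then fill the gaps with 0 so training can proceed
df[["new_tests_smoothed", "stringency_index"]] = \
    df[["new_tests_smoothed", "stringency_index"]].fillna(0)
```

The indicator columns let the model distinguish a true zero from an imputed one.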
```
y_nans_sums = info["y_nan_sums"]
y_ax_labels = [k for k,v in y_nans_sums.items()]
y_ax_values = [v for k,v in y_nans_sums.items()]
plt.bar(y_ax_labels,y_ax_values)
plt.title("Missing Values in (Endogenous) Covariates")
plt.xticks(rotation = 90);
```
This chart shows that we might have a similar problem for many of our potential target variables. Luckily, in our case, where we use `new_cases_smoothed` as our target, we might not lose too many observations.
```
# plot histograms for all (potential) ys
from pandas.api.types import is_numeric_dtype
fig = plt.figure(figsize=(50,50))
# ax.set_title("Distribution of Covariates")
i = 0
ys = info["y_nan_sums"].keys()
for x in ys:
if(x in ['location', 'date']):
continue
else:
i += 1
ax = fig.add_subplot(6,7,i)
if is_numeric_dtype(data[x]):
ax.set_title(x)
ax.hist(data[x],density=False)
else:
ax.bar(data.groupby(x).size().index, data.groupby(x).size())
ax.set_title(x)
fig;
```
When looking at the distributions of the endogenous variables, we can quickly tell that most of them are left-skewed, except for the `stringency_index` describing the strictness of regulatory measures, which is slightly skewed to the right. This makes sense, as very few governments have not imposed any kind of restriction during the pandemic.
```
# plot histograms for all (potential) xs
from pandas.api.types import is_numeric_dtype
fig = plt.figure(figsize=(50,50))
# ax.set_title("Distribution of Covariates")
i = 0
xs = info["x_nan_sums"].keys()
for x in xs:
if(x in ['location', 'date']):
continue
else:
i += 1
ax = fig.add_subplot(4,4,i)
if is_numeric_dtype(data[x]):
ax.set_title(x)
ax.hist(data[x],density=False)
else:
ax.bar(data.groupby(x).size().index, data.groupby(x).size())
ax.set_title(x)
fig;
```
Looking at the distributions of the exogenous and time-invariant variables is not very informative. The most interesting observation here is the distribution of `handwashing_facilities`, where there seems to be quite some variance among the few countries for which we actually do have values for this variable.
Idiosyncratic (i.e. within-country) errors can luckily be neglected, as `pytorch forecasting` applies a [by-group normalization](https://pytorch-forecasting.readthedocs.io/en/latest/api/pytorch_forecasting.data.encoders.GroupNormalizer.html?highlight=group#groupnormalizer), alleviating effects that we would see from strong variance within one country over time.
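The effect of by-group normalization can be sketched in plain pandas (toy data; the actual `GroupNormalizer` additionally applies the configured transformation and re-injects the scales as features):

```
import pandas as pd

df = pd.DataFrame({"location": ["A", "A", "B", "B"],
                   "target":   [100.0, 300.0, 1.0, 3.0]})

# standardize the target within each country, so the large scale difference
# between country A and country B disappears
grouped = df.groupby("location")["target"]
df["target_norm"] = (df["target"] - grouped.transform("mean")) / grouped.transform("std")
```

After this step, two countries with identical relative dynamics produce identical normalized series regardless of their absolute case counts.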
Now, we can have a look at our model setup. Since we use `wandb`, most parameters can be configured in `run_config.yml` (with one exception). It is still instructive to have a look at the parameters. The YAML file provides a short definition for each of them.
## 4. Configuring the Training Run
```
print("Final Parameters used")
for param, value in run.config.items():
print(param + ": " + str(value))
```
Note that `max_pred_length` and `max_encoder_length` are both given in days. Also, `time_varying_known_reals` might not actually be known for the forecast period, but can be assumed and set during prediction for "counterfactual" forecasts.
Another point to note is that `impute_strategy`, `n_neighbors`, and `weights` are only used if imputers from sklearn are used to replace missing values. Below is the code for setting these values without a YAML file and for setting up WANDB for logging during training.
Also note that `training_cutoff` cannot be specified in the YAML as it is a computed parameter!
The parameters and their comments are gratefully taken from [this tutorial on `pytorch-forecasting`](https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html#Create-dataset-and-dataloaders)
```
# set up wandb-based logging for prototype
wandb_logger = WandbLogger(name='submission' + "_training",project="capstone")
# set model parameters for wandb
config = wandb.config
# model configs for wandb
# config.max_pred_length = 7 # predict at most two weeks
# config.max_encoder_length = 180 # use at most ~6 months as input
config.training_cutoff = data['time_idx'].max() - config.max_pred_length
# config.max_epochs = 50
# config.gradient_clip_val = 0.03
# config.learning_rate = .08
# config.hidden_size = 16
# # number of attention heads. Set to up to 4 for large datasets
# config.attention_head_size = 4
# config.dropout = 0.25 # between 0.1 and 0.3 are good values
# config.hidden_continuous_size = 4 # set to <= hidden_size
# config.output_size = 7 # 7 quantiles by default
# # reduce learning rate if no improvement in validation loss after x epochs
# config.reduce_on_plateau_patience = 4
# config.targets = 'new_cases_smoothed'
# config.static_reals = ['population','population_density','median_age','aged_65_older','aged_70_older',
# 'gdp_per_capita','cardiovasc_death_rate', 'diabetes_prevalence', 'handwashing_facilities',
# 'hospital_beds_per_thousand', 'life_expectancy', 'human_development_index', 'extreme_poverty', 'female_smokers','male_smokers']
# config.time_varying_known_reals = ['time_idx','stringency_index', 'new_tests_smoothed',"new_vaccinations_smoothed",
# "new_deaths_smoothed"
# ] # in reality unknown but could be used for conditional forecasts
# config.transformation = "softplus"
# config.impute_strategy = "median"
# config.n_neighbors = 8
# config.weights = "distance"
```
Now comes the point where we need to deal with the missing values, as we need to construct our training dataset. I decided to simply fill them with 0, which during experimentation worked out better than more sophisticated imputation strategies.
In the model, this led to predictions being excessively high if a country had very low case numbers over the last 180 days, as this could be interpreted as the respective country underreporting numbers.
To offset this effect slightly, I added dummy variables for missing observations.
```
# see https://github.com/jdb78/pytorch-forecasting/issues/187#issuecomment-743797144
# simple imputation following https://www.kaggle.com/dansbecker/handling-missing-values
train_data = data.copy()
# drop "FSM" (Micronesia) Columns that only appeared as of 2021/01/21
# this removal needs to be maintained as the loaded model in this example was trained
# without these entries.
train_data.drop(index=data.loc[data.location == "FSM"].index, inplace=True)
# make new columns indicating what will be imputed
cols_with_missing = (col for col in [*config.static_reals, *config.time_varying_known_reals,
config.targets]
if train_data[col].isnull().any())
for col in cols_with_missing:
train_data[col + '_was_missing'] = train_data[col].isnull()
train_data[[*config.static_reals, *config.time_varying_known_reals, config.targets]] = train_data[[*config.static_reals, *config.time_varying_known_reals, config.targets]].fillna(0)
impute_dummies = [col for
col in train_data.columns if col.endswith("_was_missing")]
train_data[impute_dummies] = train_data[impute_dummies].astype("str").astype("category")
# if the dataset still contains missing values for the target, count them and drop them
missing_targets = train_data.loc[train_data[config.targets].isna()][["location","date"]].copy(deep=True)
train_data.drop(index=missing_targets.index, inplace=True)
training = TimeSeriesDataSet(
train_data[lambda x: x.time_idx <= config.training_cutoff],
time_idx='time_idx',
target=config.targets,
group_ids=['location'],
min_encoder_length=int(config.max_encoder_length // 2),
max_encoder_length=config.max_encoder_length,
min_prediction_length=1,
max_prediction_length=config.max_pred_length,
static_categoricals=['location', 'continent', 'tests_units'],
static_reals = config.static_reals,
time_varying_known_categoricals=['month', *impute_dummies], #allow for missings to be flagged on country-level over time - needs to be assumed in forecasts
time_varying_known_reals=config.time_varying_known_reals,
target_normalizer=GroupNormalizer(groups=['location'], transformation=config.transformation),
add_relative_time_idx=True,
add_target_scales=True,
add_encoder_length=True,
allow_missings=True
)
# create validation set (predict=True) which means to predict the last max_prediction_length points in time for each series
validation = TimeSeriesDataSet.from_dataset(training, train_data, predict=True, stop_randomization=True)
# create dataloaders for model
batch_size = 128 # set this between 32 to 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=4)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size * 10, num_workers=4)
print("Dropping the following for missing target values: \n\n ", missing_targets)
```
As the warnings tell us, we have quite a few countries that do not have enough data for us to make predictions. We will account for those during prediction on new data, which will be briefly discussed in this notebook to give an overview of how the prediction pipeline works in Dash.
### Baseline and Metrics for Performance Assessment
To compare performance across models, we will use the mean absolute error (MAE). The most important reason for doing so is its straightforward interpretation: As it measures the error in terms of our target variable, it tells us directly **by how much, on average, we can expect our predictions to be off.** This allows us to have an intuitive measure to take into account when using our predictions, as an MAE of _x_ means that our model's prediction will be off by _x_ cases on average.
A disadvantage of this approach is that the MAE is **sensitive to outliers**, which makes it difficult to use for comparing models on time series that exhibit large variance. This also has implications for the choice of our target variable. As the **raw number of new cases** on a daily basis can exhibit quite some variation and is prone to outliers, even within one location, this metric is better suited for models predicting the smoother seven-day average case numbers.
`pytorch-forecasting` provides a method to compute a simple baseline based on the MAE of a model that simply forward fills the target.
As easy to beat as this might sound, for time series that are well-described by an autoregressive model, taking the own past values of the series can actually be a decent estimator if they are stationary.
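In plain numpy, this forward-fill baseline amounts to repeating the last observed value at every horizon step (toy numbers, not the actual data):

```
import numpy as np

history = np.array([120.0, 130.0, 125.0])  # last observed values of the series
actuals = np.array([128.0, 131.0, 127.0])  # true values over the forecast horizon

# naive baseline: predict the last historical value for every future step
baseline_pred = np.full_like(actuals, history[-1])
mae = np.abs(actuals - baseline_pred).mean()  # MAE of the naive forecast
```

The library's `Baseline().predict(...)` call used below computes exactly this kind of naive forecast, just over the whole validation dataloader.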
```
# calculate baseline mean absolute error, i.e. predict next value as the last available value from the historical values
actuals = torch.cat([y for x, y in iter(val_dataloader)])
baseline_predictions = Baseline().predict(val_dataloader)
baseline = (actuals - baseline_predictions).abs().mean().item()
baseline
```
This is the MAE we will be looking to beat with our model.
Another thing to note is the loss metric we use for optimization during fitting. As you can see in the setup of the model object `tft` in the code block below, we use the `QuantileLoss()` function provided by `pytorch-forecasting`. As temporal fusion transformers are used to produce _probabilistic forecasts_, we want to take the prediction quantiles into account during model fitting, so that we do not only optimize for the average of the forecast values but also account for _prediction uncertainty_.
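The quantile (pinball) loss behind `QuantileLoss` can be written down in a few lines; this is a generic sketch of the idea, not the library's exact implementation:

```
def quantile_loss(y_true, y_pred, q):
    """Pinball loss: under-prediction is penalized by q, over-prediction by (1 - q)."""
    err = y_true - y_pred
    return max(q * err, (q - 1) * err)

# For q = 0.9, under-predicting by 10 is penalized far more heavily than
# over-predicting by 10, which pushes the 0.9-quantile forecast upward.
under = quantile_loss(100.0, 90.0, 0.9)   # err = +10 -> penalty 9
over  = quantile_loss(100.0, 110.0, 0.9)  # err = -10 -> penalty ~1
```

Summing this loss over several quantiles (seven by default here) trains the model to output a whole predictive distribution instead of a single point forecast.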
### Model Setup and Training.
Now, as mentioned above, training the model takes quite some time. That's why I will leave the code for training here commented out and focus on discussing the experimentation during training (together with graphs from the training process), before importing the final model to showcase some results.
```
# # configure network and trainer
# pl.seed_everything(42)
# trainer = pl.Trainer(
# gpus=1,
# num_processes=8,
# # clipping gradients is a hyperparameter and important to prevent divergance
# # of the gradient for recurrent neural networks
# gradient_clip_val=config.gradient_clip_val,
# max_epochs=config.max_epochs,
# logger=wandb_logger
# )
# tft = TemporalFusionTransformer.from_dataset(
# training,
# # not meaningful for finding the learning rate but otherwise very important
# learning_rate=config.learning_rate,
# hidden_size=config.hidden_size, # most important hyperparameter apart from learning rate
# # number of attention heads. Set to up to 4 for large datasets
# attention_head_size=config.attention_head_size,
# dropout=config.dropout, # between 0.1 and 0.3 are good values
# hidden_continuous_size=config.hidden_continuous_size, # set to <= hidden_size
# output_size=config.output_size, # 7 quantiles by default
# loss=QuantileLoss(),
# # reduce learning rate if no improvement in validation loss after x epochs
# reduce_on_plateau_patience=config.reduce_on_plateau_patience
# )
# # print(f"Number of parameters in network: {tft.size()/1e3:.1f}")
# trainer.fit(
# tft,
# train_dataloader=train_dataloader,
# val_dataloaders=val_dataloader)
# load the best model according to the validation loss
# (given that we use early stopping, this is not necessarily the last epoch)
# best_model_path = trainer.checkpoint_callback.best_model_path
# best_tft = TemporalFusionTransformer.load_from_checkpoint(best_model_path)
```
The last block triggers the actual fitting. One comment on the hyperparameter tuning tools mentioned in the tutorial for `pytorch-forecasting`:
They ended up not being super helpful. What was helpful, however, was manual tuning in conjunction with WANDB. This was, admittedly, a quite cumbersome and long process of adjusting parameters, observing the performance during training, readjusting, and repeating.
The most important runs are summarized in [this Report](https://wandb.ai/kaiharuto/capstone/reports/Snapshot-Mar-8-2021-8-19pm--Vmlldzo1MTM1NjE).
Initially, I ran the model with the parameters copied from [Jan Breitner's Tutorial](https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html) and simply filling NaNs with 0s to try out how this works. At that point, the model was also being trained to forecast the raw number `new_cases`, rather than the 7-day avg `new_cases_smoothed`. This was the model `cov_model_prototype`.
After weeks of tuning parameters and trying different imputation strategies, `cov_model_prototype` was still the best model, with a validation MAE of close to 1,500, which was nowhere near the baseline.
Switching to `new_cases_smoothed` (hence the name for the run series `smooth_jazz`) pretty quickly delivered very promising results. However, as you can see in the report linked above, both `smooth_jazz_3` and its successor `smooth_jazz_3_2` started out as very promising runs but flattened out and stabilized at high error and loss levels.
This indicated that they got onto "the right path" at the beginning but then settled and were no longer adjusting to their errors enough.
`smooth_jazz_5` solved this with a higher `reduce_on_plateau_patience`, a parameter giving the number of epochs after which the model will reduce the learning rate if the loss does not decrease. This parameter gave it more "room" to experiment for longer, and indeed it ended up settling at a level that not only beat its predecessors, but also the naive baseline model.
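The mechanism behind `reduce_on_plateau_patience` can be sketched in a few lines. This is an illustrative toy scheduler, not the actual implementation (`pytorch-forecasting` delegates this to PyTorch's `ReduceLROnPlateau`); the function name and values below are made up:

```python
# Toy sketch of a reduce-on-plateau policy: if the loss has not improved
# for more than `patience` epochs, shrink the learning rate by `factor`.
def schedule_lr(losses, lr=0.03, patience=4, factor=0.1):
    best = float("inf")
    stale = 0
    history = []
    for loss in losses:
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale > patience:
                lr *= factor  # the plateau lasted too long: reduce LR
                stale = 0
        history.append(lr)
    return history

# A loss curve that improves for three epochs and then flattens out:
lrs = schedule_lr([1.0, 0.6, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
```

A larger `patience` keeps the learning rate high for longer before the first reduction, which is exactly the extra "room" described above.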
We can see this by loading this model from its best checkpoint.
```
trained_model = TemporalFusionTransformer.load_from_checkpoint('models/smooth_jazz_5.ckpt')
#compute MAE
actuals = torch.cat([y for x, y in iter(val_dataloader)])
predictions = trained_model.predict(val_dataloader)
(actuals - predictions).abs().mean().item()
```
Now, admittedly, it does not beat the baseline by a huge stretch, but, as mentioned in the introduction, it provides a starting point to test the model by running predictions for a few countries. The quantile loss at the best-performing model checkpoint (epoch 23) was at 149.71 (see Weights & Biases report linked above).
## 4. Predicting
To predict on new data, we will download the most recent available data, prepare it like our training data and add the preparation steps described in [this step of the TFT tutorial](https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html#Predict-on-new-data).
We will download the most current dataset from OWID's repository and use it for predictions. To keep things simple, we will simply forward fill values for our covariates, effectively predicting under _ceteris paribus_ assumptions, i.e. assuming that the future unknown covariates will not change.
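The forward-fill step can be illustrated on a toy frame (the column names here are stand-ins, not the actual OWID schema):

```python
import numpy as np
import pandas as pd

# Carry the last known covariate value forward into future rows, per country.
toy = pd.DataFrame({
    "location": ["DEU", "DEU", "DEU", "AUT", "AUT", "AUT"],
    "date": list(pd.date_range("2021-03-01", periods=3)) * 2,
    "stringency_index": [75.0, 80.0, np.nan, 60.0, np.nan, np.nan],
})
toy["stringency_index"] = toy.groupby("location")["stringency_index"].ffill()
```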
```
#download most recent data
URL = 'https://covid.ourworldindata.org/data/owid-covid-data.json'
df = retrieve_data_from_url(URL)
data, x_df, info = prepare_data(df)
predict_data = data.copy(deep=True)
```
As we saw in the warnings above, there might be countries for which we cannot provide predictions. As a result, we will need to adjust the location indices by dropping countries for which we don't have sufficient data, so that we can later retrieve the remaining ones by their embedded ID.
```
indices_to_drop = predict_data.groupby("location").agg("size").loc[
    (predict_data.groupby("location").agg("size") < run.config.max_encoder_length)
    | (predict_data.groupby("location").agg("size") < run.config.max_pred_length)
].index.to_list()
indices_to_drop
```
Now we prepare the data in the same way as for the training step, except that we drop all countries appearing in `indices_to_drop`.
```
#set training cutoff programmatically
run.config.training_cutoff = data['time_idx'].max() - run.config.max_pred_length
# adjust data to match training format
predict_data.drop(index=predict_data.loc[predict_data.location.isin(indices_to_drop)].index, inplace=True)
predict_data = predict_data.sort_values('location')
# make new columns indicating what will be imputed
cols_with_missing = (col for col in [*run.config.static_reals,
                                     *run.config.time_varying_known_reals,
                                     run.config.targets]
                     if predict_data[col].isnull().any())
for col in cols_with_missing:
    predict_data[col + '_was_missing'] = predict_data[col].isnull()
predict_data[[*run.config.static_reals, *run.config.time_varying_known_reals, run.config.targets]] = predict_data[[*run.config.static_reals, *run.config.time_varying_known_reals, run.config.targets]].fillna(0)
impute_dummies = [col for col in predict_data.columns if col.endswith("_was_missing")]
predict_data[impute_dummies] = predict_data[impute_dummies].astype("str").astype("category")
# if the dataset still contains missing values for the target, count them and drop them
missing_targets = predict_data.loc[predict_data[run.config.targets].isna()][["location","date"]].copy(deep=True)
predict_data.drop(index=missing_targets.index, inplace=True)
predict_data.date.max()
```
Dropping the above indices leads to a shift in the embedding labels of our model, which we need to correct for by building a new index to draw our country observations from.
```
new_index = dict(tuple(zip(predict_data.location.unique(), range(0, predict_data.location.nunique()))))
```
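As a toy illustration of why the rebuilt index is needed: dropping one country shifts every later embedding index down by one (the country codes here are arbitrary examples):

```python
# If "BTN" had too little data and was dropped, "DEU" and "KOR" move up one slot.
original = ["AUT", "BTN", "DEU", "KOR"]
kept = [c for c in original if c != "BTN"]
new_index = dict(zip(kept, range(len(kept))))
```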
Now we follow the steps given in the tutorial to build our dataset for making new predictions.
```
# from pytorch tutorial
# select last 180 days
encoder_data = predict_data[lambda x: x.time_idx > x.time_idx.max() - run.config.max_encoder_length]
last_data = predict_data[lambda x: x.time_idx == x.time_idx.max()]
decoder_data = pd.concat(
    [last_data.assign(date=lambda x: x.date + pd.DateOffset(days=i)) for i in range(1, run.config.max_pred_length + 1)],
    ignore_index=True,
)
# add time index consistent with "data"
decoder_data["time_idx"] = decoder_data["date"].dt.year * 365 + decoder_data["date"].dt.dayofyear  # monotone daily index; rebased in the next line
decoder_data["time_idx"] += encoder_data["time_idx"].max() + 1 - decoder_data["time_idx"].min()
# adjust additional time feature(s)
decoder_data["month"] = decoder_data.date.dt.month.astype(str).astype("category") # categories have to be strings
# combine encoder and decoder data
new_prediction_data = pd.concat([encoder_data, decoder_data], ignore_index=True)
#make time series
pred_ts = TimeSeriesDataSet(
new_prediction_data,
group_ids=["location"],
time_idx="time_idx",
static_categoricals=['location', 'continent', 'tests_units'],
static_reals = run.config.static_reals,
time_varying_known_categoricals=['month', *impute_dummies], #allow for missings to be flagged on country-level over time - needs to be assumed in forecasts
time_varying_known_reals=run.config.time_varying_known_reals,
target_normalizer=GroupNormalizer(groups=['location'], transformation=run.config.transformation),
add_relative_time_idx=True,
add_target_scales=True,
add_encoder_length=True,
target= run.config.targets,
max_encoder_length=run.config.max_encoder_length,
max_prediction_length=run.config.max_pred_length,
predict_mode=True,
allow_missings=True,
)
new_predictions, new_x = trained_model.predict(pred_ts, mode="raw", return_x=True)
```
Now we can pick predictions by getting the corresponding index for each country's ISO3 code. In the dashboard application, these codes will also be mapped to the full country names, and we will use `plotly` to build the plots rather than `matplotlib`. But in this example, we will demonstrate the method for plotting predictions as it comes in `pytorch-forecasting`.
```
# plotting 7-day averages of new cases, attention, prediction,
# and quantiles for (South) Korea, Japan, Germany, and Austria
for country in ("KOR", "JPN", "DEU", "AUT"):
    trained_model.plot_prediction(new_x, new_predictions, idx=new_index[country], show_future_observed=False);
```
The grey lines show the attention the model pays to certain points in time when making its predictions. As the graphs show, these periods have considerable weight in the resulting seven-day forecasts and drive the model to, in some cases, not foresee more recent jumps, especially if they were not included in the model training.
Also note that the time here is given as a time index in days from the last date in the new dataset.
## 6. Interpreting Attention & Variable Importance
One big advantage of TFTs is that they provide us with attention values and variable importances, allowing us to do some inference.
```
raw_predictions, new_x = trained_model.predict(pred_ts, mode="raw", return_x=True)
interpretation = trained_model.interpret_output(raw_predictions, reduction="sum")
trained_model.plot_interpretation(interpretation)
```
The attention the model pays to each period tells us which points in time are considered the most important in making new forecasts. As we can see, the model considers values from six to four weeks before the most recent date as the most important ones. Given that we have a little more than two weeks between the most recent points in the training and the prediction data, we would conclude that values from four to two weeks prior to model training contribute the most to our estimates. This is consistent with the time it might take for someone who has contracted COVID-19 to show symptoms and be diagnosed, i.e. to show up in the data.
This also shows how important it is to frequently re-train a TFT in order to keep predictions consistent with most recent developments.
For the variable importance statistics, it is not surprising that location and the value range into which previous case numbers fall are considered the most important factors for predictions by the model. In fact, allowing for spatial interaction in the model might even yield more precise forecasts.
In the second graph, we also see that the month in which a measurement is taken is of considerable importance for prediction, providing evidence for the seasonal pattern that we saw with COVID-19 in the past year and that we would also expect for other seasonal respiratory viruses, such as the common flu.
In the last graph, we can also see the role vaccination numbers play in the dynamics driving the data.
Finally, we can also see in the last graph that the model attributes considerable importance to our dummies flagging missing values, suggesting that whether numbers are reported or not already allows some conclusions to be drawn about the actually realized values.
## 7. Justification
In this project we demonstrated how to implement a Temporal Fusion Transformer model to forecast 7-day averages of new COVID-19 infections, using `pytorch-forecasting` as an easy-to-use implementation of this technique. In addition, the code that comes with this project provides a simple implementation of a web application written in `Dash`, which can be used to load trained models together with their training dataset and configuration, making predictions available to end users.
We saw how to train a model beating the `pytorch-forecasting` forward-filling baseline model in terms of MAE and how to produce probabilistic forecasts using the trained model. The `plotting` module in this application also shows how to retrieve forecasts, model properties, and quantiles to recreate and style prediction graphs on a country-level to make them accessible to users.
Although the model might not yet yield results that are ready for "production" use-cases, i.e. by policy decision-makers or for informing the wider public, its ease of implementation and the setup with a focus on configurability should allow practitioners and experts to reproduce the results of this first version, as well as to improve upon them by using this setup to reconfigure the model, extend the feature set used during training and enrich the data.
The difficulties we encountered were not much different from what is to be expected when training any machine learning model, and were lower than what one might expect for tackling time-series prediction at such a scale, which usually tends to be a rather involved and complex problem.
I am confident that the model outlined here, together with the application proof of concept, provides practitioners with a basis to build even more performant models on this setup.
## 8. Where to Go From Here - Future Directions for Improvement
As mentioned above, this model is far from done. The quite time-intensive training process put some pragmatic restrictions on this project, which was primarily meant to showcase how TFTs could be used for predicting the dynamics of pandemic events. Nonetheless, the model results, given the rather simplistic setup and ease of configuration, show great promise.
Improving this model and putting in place sustainable pipelines for frequent re-training, as well as using more elaborate imputation strategies, might be enough to make this model useful for informing political decision-makers and the general public of the situation and to help implement forward-looking policies, rather than having to rely on data that is already "outdated" by the time it is analyzed due to the long time COVID-19 cases go unreported.
A few suggestions for improvements might be:
- Allowing for spatial interaction by incorporating geographical data.
- Making counterfactual forecasts by simulating different scenarios when forward-filling values for covariates before making predictions.
- Using more sophisticated approaches to impute missing values.
I am convinced that applying machine learning techniques to time series forecasting opens up a wide array of possibilities that could help in decision making in times of crisis.
```
class MFInput:
    def __init__(self, name, x, y, x0):
        self.name = name
        # membership function stored as a list of (x, y) points
        self.points = [(x[i], y[i]) for i in range(len(x))]
        self.mi = self.getMi(x0)

    def getY(self, x1, y1, x2, y2, x0):
        # linear interpolation on the segment between (x1, y1) and (x2, y2)
        if y1 == y2:
            return y1
        if y1 < y2:
            return (x0 - x1) / (x2 - x1)
        return (x2 - x0) / (x2 - x1)

    def getMi(self, x0):
        if x0 < self.points[0][0]:
            return self.points[0][1]
        if x0 > self.points[-1][0]:
            return self.points[-1][1]
        for i in range(1, len(self.points)):
            x1 = self.points[i - 1][0]
            x2 = self.points[i][0]
            if x1 <= x0 < x2:
                y1 = self.points[i - 1][1]
                y2 = self.points[i][1]
                return self.getY(x1, y1, x2, y2, x0)
        return -1


class MFOutput:
    def __init__(self, name, x, y):
        self.name = name
        sumX = 0
        nb1 = 0
        self.points = []
        for i in range(len(x)):
            self.points.append((x[i], y[i]))
            if y[i] == 1:
                sumX += x[i]
                nb1 += 1
        self.mi = 0
        # representative output value: mean of the x's where membership equals 1
        self.value = sumX / nb1


from enum import Enum, unique

@unique
class Logic(Enum):
    OR = 0
    AND = 1


class Rule:
    def __init__(self, mfo):
        self.mfOutput = mfo

    def addInputs(self, mfi1, mfi2, logic):
        self.mfInput1 = mfi1
        self.mfInput2 = mfi2
        if logic == Logic.OR:
            self.mfOutput.mi = max(self.mfOutput.mi, max(self.mfInput1.mi, self.mfInput2.mi))
        else:
            self.mfOutput.mi = max(self.mfOutput.mi, min(self.mfInput1.mi, self.mfInput2.mi))

    def addInput(self, mfi):
        self.mfInput = mfi
        self.mfOutput.mi = max(self.mfOutput.mi, self.mfInput.mi)


def main():
    # inputs: food quality (kvalitet) and service (usluga); output: tip (napojnica)
    kvalitet = []
    kvalitet.append(MFInput("los", [0, 5], [1, 0], 6.5))
    kvalitet.append(MFInput("prosek", [0, 5, 10], [0, 1, 0], 6.5))
    kvalitet.append(MFInput("dobar", [5, 10], [0, 1], 6.5))
    usluga = []
    usluga.append(MFInput("losa", [0, 5], [1, 0], 9.8))
    usluga.append(MFInput("prosek", [0, 5, 10], [0, 1, 0], 9.8))
    usluga.append(MFInput("dobra", [5, 10], [0, 1], 9.8))
    napojnica = []
    napojnica.append(MFOutput("mala", [0, 13], [1, 0]))
    napojnica.append(MFOutput("srednja", [0, 13, 25], [0, 1, 0]))
    napojnica.append(MFOutput("velika", [13, 25], [0, 1]))
    # addInputs/addInput return None, so build each rule first and collect it afterwards
    rule1 = Rule(napojnica[0])
    rule1.addInputs(kvalitet[0], usluga[0], Logic.OR)
    rule2 = Rule(napojnica[1])
    rule2.addInput(usluga[1])
    rule3 = Rule(napojnica[2])
    rule3.addInputs(kvalitet[2], usluga[2], Logic.OR)
    rules = [rule1, rule2, rule3]
    # height defuzzification: weighted average of the representative output values
    numerator = 0
    denominator = 0
    for mfo in napojnica:
        numerator += mfo.mi * mfo.value
        denominator += mfo.mi
    solution = numerator / denominator
    print("Quality (kvalitet):")
    for mfi in kvalitet:
        print(mfi.name, mfi.mi)
    print("Service (usluga):")
    for mfi in usluga:
        print(mfi.name, mfi.mi)
    print("Tip (napojnica):")
    for mfo in napojnica:
        print(mfo.name, mfo.mi, mfo.value)
    print("The solution is {}".format(solution))

main()
```
```
#importing important libraries
#libraries for reading dataset
import numpy as np
import pandas as pd
#libraries for data visualisation
import matplotlib.pyplot as plt
import seaborn as sns
#libraries for model building and understanding
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
#library to deal with warning
import warnings
warnings.filterwarnings('ignore')
#to display all the columns in the dataset
pd.set_option('display.max_columns', 500)
```
## Reading Data
```
cp = pd.read_csv('CarPrice_Assignment.csv')
```
### Understanding the data
```
cp.head()
cp.shape
cp.info()
cp.describe()
#symboling is a categorical data so we convert to the specified type
cp['symboling'] = cp['symboling'].apply(str)
```
#### There are 11 categorical data and remaining are the numerical data
```
#as per the business requirements we only need the name of the car company and not the car model
#so we drop the model and only keep carCompany
cp['CarName'] = cp['CarName'].str.lower()
cp['carCompany'] = cp['CarName'].str.split(' ').str[0]
cp = cp.drop('CarName',axis = 1)
cp.head()
```
## Data visualization and understanding using EDA
### Price is a dependent variable
#### Visualising numerical data
```
#Finding correlation
cor = cp.corr()
cor
# visulaising correlation using heatmap
plt.subplots(figsize=(20, 20))
plt.title('Correlation between each data variable')
sns.heatmap(cor, xticklabels=cor.columns.values,
yticklabels=cor.columns.values,annot= True,linecolor="black",linewidths=2, cmap="viridis")
plt.show()
```
`citympg` and `highwaympg` have a negative correlation with price
`carlength`, `carwidth`, `curbweight`, `enginesize` and `horsepower` have a positive correlation with price
```
#scatter plot for numerical data with yaxis fixed as price
plt.figure(figsize=[18,12])
plt.subplot(3,3,1)
plt.scatter(cp.wheelbase, cp.price)
plt.title('wheelbase vs price')
plt.subplot(3,3,2)
plt.scatter(cp.peakrpm, cp.price)
plt.title('peakrpm vs price')
plt.subplot(3,3,3)
plt.scatter(cp.carheight, cp.price)
plt.title('carheight vs price')
plt.subplot(3,3,4)
plt.scatter(cp.compressionratio, cp.price)
plt.title('compressionratio vs price')
plt.subplot(3,3,5)
plt.scatter(cp.stroke, cp.price)
plt.title('Stroke vs price')
plt.subplot(3,3,6)
plt.scatter(cp.boreratio, cp.price)
plt.title('boreratio vs price')
plt.subplot(3,3,7)
plt.scatter(cp.enginesize, cp.price)
plt.title('enginesize vs price')
plt.subplot(3,3,8)
plt.scatter(cp.horsepower, cp.price)
plt.title('horsepower vs price')
plt.subplot(3,3,9)
plt.scatter(cp.curbweight, cp.price)
plt.title('curbweight vs price')
plt.show()
plt.figure(figsize=[15,8])
plt.subplot(2,2,1)
plt.scatter(cp.carlength, cp.price)
plt.title('carlength vs price')
plt.subplot(2,2,2)
plt.scatter(cp.carwidth, cp.price)
plt.title('carwidth vs price')
plt.subplot(2,2,3)
plt.scatter(cp.citympg, cp.price)
plt.title('citympg vs price')
plt.subplot(2,2,4)
plt.scatter(cp.highwaympg, cp.price)
plt.title('highwaympg vs price')
plt.show()
print(np.corrcoef(cp['carlength'], cp['carwidth'])[0, 1])
print(np.corrcoef(cp['citympg'], cp['highwaympg'])[0, 1])
```
### Deriving new feature `mielage`
```
#citympg and highwaympg both have a similar negative correlation with price, so we derive a new metric, mielage,
#from these two features, which represents their joint relation with price as one feature
cp['mielage'] = (0.70*cp['highwaympg'])+(0.30*cp['citympg'])
cp['mielage'].corr(cp['price'])
```
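The derived feature is simply a 70/30 weighted average of the two mpg columns; on a toy frame:

```python
import pandas as pd

# Weighted blend of highway (70%) and city (30%) mpg, mirroring `mielage` above.
toy = pd.DataFrame({"highwaympg": [30, 40], "citympg": [20, 30]})
toy["mielage"] = 0.70 * toy["highwaympg"] + 0.30 * toy["citympg"]
# row 0: 0.7*30 + 0.3*20 = 27.0 ; row 1: 0.7*40 + 0.3*30 = 37.0
```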
#### Visualising categorical data
```
plt.figure(figsize=(20, 15))
plt.subplot(3,3,1)
sns.countplot(cp.fueltype)
plt.subplot(3,3,2)
sns.countplot(cp.aspiration)
plt.subplot(3,3,3)
sns.countplot(cp.doornumber)
plt.subplot(3,3,4)
sns.countplot(cp.drivewheel)
plt.subplot(3,3,5)
sns.countplot(cp.carbody)
plt.subplot(3,3,6)
sns.countplot(cp.enginelocation)
plt.subplot(3,3,7)
sns.countplot(cp.enginetype)
plt.subplot(3,3,8)
sns.countplot(cp.cylindernumber)
plt.subplot(3,3,9)
sns.countplot(cp.symboling)
plt.show()
```
`ohc` is the most preferred enginetype
most cars have `four cylinders`
`sedan` and `hatchback` are most common carbody
most cars prefer `gas` fueltype
```
plt.figure(figsize=(30,25))
plt.subplot(2,1,1)
sns.countplot(cp.fuelsystem)
plt.subplot(2,1,2)
sns.countplot(cp.carCompany)
plt.show()
```
`mpfi` and `2bbl` are most common fuelsystem
`Toyota` is most favoured carcompany
##### we can observe that numerous carcompanies are misspelled
```
# replacing misspelled car company names with the correct spellings
cp['carCompany'] = cp['carCompany'].str.replace('vok','volk')
cp['carCompany'] = cp['carCompany'].str.replace('ou','o')
cp['carCompany'] = cp['carCompany'].str.replace('cshce','sche')
cp['carCompany'] = cp['carCompany'].str.replace('vw','volkswagen')
cp['carCompany'] = cp['carCompany'].str.replace('xd','zd')
# visualising categorical data vs price
plt.figure(figsize = (25,15))
plt.subplot(3,3,1)
sns.boxplot(x = 'fueltype',y='price', data = cp)
plt.subplot(3,3,2)
sns.boxplot(x = 'symboling',y='price', data = cp)
plt.subplot(3,3,3)
sns.boxplot(x = 'aspiration',y='price', data = cp)
plt.subplot(3,3,4)
sns.boxplot(x = 'doornumber',y='price', data = cp)
plt.subplot(3,3,5)
sns.boxplot(x = 'carbody',y='price', data = cp)
plt.subplot(3,3,6)
sns.boxplot(x = 'drivewheel',y='price', data = cp)
plt.subplot(3,3,7)
sns.boxplot(x = 'enginelocation',y='price', data = cp)
plt.subplot(3,3,8)
sns.boxplot(x = 'enginetype',y='price', data = cp)
plt.subplot(3,3,9)
sns.boxplot(x = 'cylindernumber',y='price', data = cp)
plt.show()
```
`ohcv` are most expensive of the enginetype
`doornumber` don't have much impact on the price
`hardtop` and `convertible` are most expensive among the carbody
cars that are `rwd` have higher price
```
plt.figure(figsize=(30,25))
plt.subplot(2,1,1)
sns.boxplot(x = 'fuelsystem',y='price', data = cp)
plt.subplot(2,1,2)
sns.boxplot(x = 'carCompany',y='price', data = cp)
plt.show()
```
`buick`, `jaguar`, `porsche` and `bmw` are most expensive carCompany
`mpfi` and `idi` having the highest price range.
### Encoding categorical data
```
#defining a function for binary encoding of features with only 2 categories
def number_map(x):
    return x.map({'gas': 1, 'diesel': 0, 'std': 0, 'turbo': 1, 'two': 0, 'four': 1, 'front': 0, 'rear': 1})
cp[['aspiration']] =cp[['aspiration']].apply(number_map)
cp[['doornumber']] =cp[['doornumber']].apply(number_map)
cp[['fueltype']] =cp[['fueltype']].apply(number_map)
cp[['enginelocation']] =cp[['enginelocation']].apply(number_map)
#creating dummies for categorical data by defining a function
def dummies(x, df):
    sp = pd.get_dummies(df[x], drop_first=True)
    df = pd.concat([df, sp], axis=1)
    return df
#applying the dummies function to the features
cp = dummies('symboling',cp)
cp = dummies('carbody',cp)
cp = dummies('drivewheel',cp)
cp = dummies('enginetype',cp)
cp = dummies('cylindernumber',cp)
cp = dummies('fuelsystem',cp)
cp = dummies('carCompany',cp)
#dropping original columns after creating dummies
#and dropping columns that are not required for the analysis
cp = cp.drop(['carCompany','car_ID','symboling','carbody','drivewheel','enginetype','cylindernumber','fuelsystem'],axis = 1)
cp.head()
cp.info()
```
### Splitting data into test and train datasets
```
# Splitting data into test and train sets
df_train,df_test = train_test_split(cp ,train_size = 0.7, random_state = 100)
print(df_train.shape)
print(df_test.shape)
```
#### scaling the train dataset
```
scaler = MinMaxScaler()
#applying MinMax scaling to all the numerical data
num_var = ['boreratio','stroke','mielage','carlength','carwidth','wheelbase','curbweight','carheight','enginesize','price','peakrpm','horsepower','compressionratio']
df_train[num_var] = scaler.fit_transform(df_train[num_var])
df_train.head()
cp.columns
```
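One detail worth emphasizing: the scaler learns its min/max from the training data only, and those same parameters are reused on the test set later (with `scaler.transform`, never `fit_transform`). A minimal sketch with made-up numbers:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[10.0], [20.0], [30.0]])
test = np.array([[25.0], [40.0]])

scaler = MinMaxScaler()
train_scaled = scaler.fit_transform(train)  # learns min=10, max=30 from train only
test_scaled = scaler.transform(test)        # reuses train min/max; values may fall outside [0, 1]
```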
#### Dividing into X and y for model building
```
y_train = df_train.pop('price')
X_train = df_train
```
### Building model
#### We use RFE (Recursive Feature Elimination) for building our model because we have many features
#### and it is not feasible to build a model by adding them one by one
## RFE
```
# Running RFE with the output number of the variable equal to 14
lm = LinearRegression()
lm.fit(X_train, y_train)
rfe = RFE(lm, 14) # running RFE
rfe = rfe.fit(X_train, y_train)
#ranking of all the features
list(zip(X_train.columns,rfe.support_,rfe.ranking_))
#14 selected features
col = X_train.columns[rfe.support_]
col
#rejected features
X_train.columns[~rfe.support_]
```
### Building model using statsmodel, for the detailed statistics
```
# Creating X_test dataframe with RFE selected variables
X_train_rfe = X_train[col]
#Adding the constant
X_train_lr = sm.add_constant(X_train_rfe)
lr_1 = sm.OLS(y_train,X_train_lr).fit() # Running the linear model
print(lr_1.summary())
#defining a function for calculating VIF
def get_vif(X):
    vif = pd.DataFrame()
    vif['Features'] = X.columns
    vif['VIF'] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
    vif['VIF'] = round(vif['VIF'], 2)
    vif = vif.sort_values(by="VIF", ascending=False)
    return vif
get_vif(X_train_lr)
```
##### threshold for p-value = 0.030
```
#feature rotor has the highest VIF, so we drop it as it is highly multicollinear
X_train_new = X_train_rfe.drop(['rotor'], axis=1)
```
Rebuilding the model without `rotor`
```
X_train_lm = sm.add_constant(X_train_new)
lr_2 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_2.summary())
get_vif(X_train_lm)
#feature enginesize has the highest vif value so we drop this feature
X_train_new = X_train_new.drop(['enginesize'], axis=1)
```
Rebuilding the model without `enginesize`
```
X_train_lm = sm.add_constant(X_train_new)
lr_3 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_3.summary())
get_vif(X_train_lm)
#feature two has very high p-value thus it is highly insignificant
X_train_new = X_train_new.drop(['two'], axis=1)
```
Rebuilding the model without `two`
```
X_train_lm = sm.add_constant(X_train_new)
lr_4 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_4.summary())
#feature stroke has very high p-value thus it is highly insignificant
X_train_new = X_train_new.drop(['stroke'], axis=1)
```
Rebuilding the model without `stroke`
```
X_train_lm = sm.add_constant(X_train_new)
lr_5 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_5.summary())
#feature five has very high p-value thus it is highly insignificant
X_train_new = X_train_new.drop(['five'], axis=1)
```
Rebuilding the model without `five`
```
X_train_lm = sm.add_constant(X_train_new)
lr_6 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_6.summary())
get_vif(X_train_lm)
#feature boreratio has very high p-value thus it is highly insignificant
X_train_new = X_train_new.drop(['boreratio'], axis=1)
```
Rebuilding the model without `boreratio`
```
X_train_lm = sm.add_constant(X_train_new)
lr_7 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_7.summary())
get_vif(X_train_lm)
#feature three has very high p-value thus it is highly insignificant
X_train_new = X_train_new.drop(['three'], axis=1)
```
Rebuilding the model without `three`
```
X_train_lm = sm.add_constant(X_train_new)
lr_8 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_8.summary())
# Calculate the VIFs for the new model
get_vif(X_train_lm)
#feature curbweight has a high VIF, thus it is multicollinear
X_train_new = X_train_new.drop(['curbweight'], axis=1)
```
Rebuilding the model without `curbweight`
```
X_train_lm = sm.add_constant(X_train_new)
lr_9 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_9.summary())
# Calculate the VIFs for the new model
get_vif(X_train_lm)
#feature porsche has high p-value thus it is insignificant
X_train_new = X_train_new.drop(['porsche'], axis=1)
```
Rebuilding the model without `porsche`
```
X_train_lm = sm.add_constant(X_train_new)
lr_10 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_10.summary())
get_vif(X_train_lm)
#checking model after removing twelve as it has very few datapoints
X_train_new = X_train_new.drop(['twelve'], axis=1)
```
Rebuilding the model without `twelve`
```
X_train_lm = sm.add_constant(X_train_new)
lr_11 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_11.summary())
# Calculate the VIFs for the new model
get_vif(X_train_lm)
```
## Residual Analysis of the train data
To check whether the error terms are normally distributed.
```
y_train_price = lr_11.predict(X_train_lm)
# Plot the histogram of the error terms
fig = plt.figure()
sns.distplot((y_train - y_train_price), bins = 20)
fig.suptitle('Error Terms', fontsize = 20) # Plot heading
plt.xlabel('Errors', fontsize = 18) # X-label
plt.show()
```
#### The graph above shows that our assumption of normally distributed errors is satisfied
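Beyond eyeballing the histogram, one could add a formal normality check, e.g. a Shapiro–Wilk test from `scipy` (sketched here on synthetic residuals, since the notebook variables are not available in isolation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
residuals = rng.normal(loc=0.0, scale=0.1, size=140)  # stand-in for (y_train - y_train_price)

stat, p_value = stats.shapiro(residuals)
# A large p-value means no evidence against normality of the error terms.
print(f"W = {stat:.3f}, p = {p_value:.3f}")
```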
## Making Predictions
#### Scaling the test sets
```
#applying MinMax scaling to all the numerical data
num_vars = ['boreratio','stroke','mielage','carlength','carwidth','wheelbase','curbweight','carheight','enginesize','price','peakrpm','horsepower','compressionratio']
df_test[num_vars] = scaler.transform(df_test[num_vars])
```
#### Dividing into X_test and y_test
```
y_test = df_test.pop('price')
X_test = df_test
# checking model to make predictions.
# Creating X_test_new dataframe by dropping variables from X_test
X_test_new = X_test[X_train_new.columns]
# Adding a constant variable
X_test_new = sm.add_constant(X_test_new)
# Making predictions
y_pred = lr_11.predict(X_test_new)
```
## Model Evaluation
```
# Plotting y_test and y_pred to understand the spread.
fig = plt.figure()
plt.scatter(y_test,y_pred)
fig.suptitle('y_test vs y_pred', fontsize=20) # Plot heading
plt.xlabel('y_test', fontsize=18) # X-label
plt.ylabel('y_pred', fontsize=16) # Y-label
#calculating the final r2 score for the test data
from sklearn.metrics import r2_score
r2_score(y_test, y_pred)
```
The test data has an `r2 score` of `0.80`, i.e. it explains `80%` of the variance.
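`r2_score` is just $1 - SS_{res}/SS_{tot}$, which is easy to verify by hand on a toy example:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_hat = np.array([2.5, 5.0, 7.5, 9.0])

ss_res = np.sum((y_true - y_hat) ** 2)          # residual sum of squares = 0.5
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares = 20.0
manual_r2 = 1 - ss_res / ss_tot                 # = 0.975
```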
```
# The final features for building model are -
# Features coefficients p-value vif
# enginelocation 0.5619 0.000 1.04
# carwidth 0.7678 0.000 1.45
# four -0.1170 0.000 1.59
# bmw 0.2762 0.000 1.09
```
1. The R-squared and Adjusted R-squared are 0.836 and 0.831, i.e. 83% of the variance can be explained.
2. The F-statistic and Prob(F-statistic) are 175.7 and 4.17e-53 (approx. 0.0), i.e. the 83% variance explained is not by chance.
3. The p-values of all the coefficients are less than the significance level of 0.03, i.e. all the predictors are significant.
### Model lr_11 satisfies all the requirements, thus it is the best-fit model
The equation of our best fitted line is:
$ price = 0.561 \times enginelocation + 0.767 \times carwidth + 0.272 \times carCompany_{bmw} - 0.117 \times cylindernumber_{four} $
<img src="https://rhyme.com/assets/img/logo-dark.png" align="center"> <h2 align="center">Logistic Regression: A Sentiment Analysis Case Study</h2>
### Introduction
___
- IMDB movie reviews dataset
- http://ai.stanford.edu/~amaas/data/sentiment
- Contains 25000 positive and 25000 negative reviews
<img src="https://i.imgur.com/lQNnqgi.png" align="center">
- Contains at most reviews per movie
- At least 7 stars out of 10 $\rightarrow$ positive (label = 1)
- At most 4 stars out of 10 $\rightarrow$ negative (label = 0)
- 50/50 train/test split
- Evaluation: accuracy
<b>Features: bag of 1-grams with TF-IDF values</b>:
- Extremely sparse feature matrix - close to 97% are zeros
<b>Model: Logistic regression</b>
- $p(y = 1|x) = \sigma(w^{T}x)$
- Linear classification model
- Can handle sparse data
- Fast to train
- Weights can be interpreted
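A minimal end-to-end sketch of this setup, on a few made-up reviews rather than the IMDB data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["great movie, loved it", "terrible film, awful acting",
        "loved the acting, great film", "awful movie, terrible"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF bag-of-1-grams features feeding a logistic regression classifier
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)
preds = clf.predict(["loved it, great", "terrible awful"])
```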
<img src="https://i.imgur.com/VieM41f.png" align="center" width=500 height=500>
### Task 1: Loading the dataset
---
```
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(5)
```
<h2 align="center">Bag of words / Bag of N-grams model</h2>
### Task 2: Transforming documents into feature vectors
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
Below, we will call the fit_transform method on CountVectorizer. This will construct the vocabulary of the bag-of-words model and transform the following three sentences into sparse feature vectors:
1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
```
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
print(count.vocabulary_)
print(bag.toarray())
```
Raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*
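A minimal sketch of that raw term frequency using only the standard library, on the third sentence from above (tokenization here is a simple whitespace split, cruder than CountVectorizer's):

```
from collections import Counter

doc = 'the sun is shining the weather is sweet and one and one is two'
# tf(t, d): number of times term t occurs in document d
tf = Counter(doc.lower().split())
print(tf['is'])   # 'is' occurs 3 times in this document
```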
### Task 3: Word relevancy using term frequency-inverse document frequency
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$
$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$
where $n_d$ is the total number of documents, and df(d, t) is the number of documents d that contain the term t.
```
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
```
The equations for the idf and tf-idf that are implemented in scikit-learn are:
$$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$
The tf-idf equation that is implemented in scikit-learn is as follows:
$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$
$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$
### Example:
$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$
Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:
$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
### Task 4: Calculate tf-idf of the term *is*:
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
```
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
```
$$\text{tf-idf}_{\text{norm}} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$$$\Rightarrow \text{tf-idf}_{\text{norm}}(\text{"is"}, d3) = 0.45$$
```
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
```
### Task 5: Data Preparation
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
```
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
```
### Task 6: Tokenization of documents
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
```
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
```
### Task 7: Document classification via a logistic regression model
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
```
'''param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l2'],
'clf__C': [1.0, 10.0, 100.0]},
]'''
```
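The `vect__`/`clf__` prefixes in the (commented-out) grid above are how scikit-learn routes each setting to a pipeline step: the text before the double underscore names the step, the remainder names the parameter. A minimal sketch of that routing, independent of scikit-learn:

```
def route_params(param_grid):
    # split 'step__param' keys into {step: {param: values}}
    routed = {}
    for key, values in param_grid.items():
        step, _, param = key.partition('__')
        routed.setdefault(step, {})[param] = values
    return routed

grid = {'vect__ngram_range': [(1, 1)],
        'clf__penalty': ['l2'],
        'clf__C': [1.0, 10.0, 100.0]}
routed = route_params(grid)
print(routed)
```

In the real pipeline, `GridSearchCV` performs this routing itself and fits one model per combination of values.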
### Task 8: Load saved model from disk
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
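The notebook's own loading code is not included here; a typical approach is to persist the fitted estimator with `pickle`. A minimal sketch with a stand-in object (for a real scikit-learn model you would pickle the estimator itself, and the `saved_model.pkl` filename is an assumption):

```
import io
import pickle

model = {'weights': [0.5, -1.0, 2.0], 'bias': 0.1}  # stand-in for a fitted estimator

# serialize to bytes (on disk this would be: open('saved_model.pkl', 'wb'))
buf = io.BytesIO()
pickle.dump(model, buf)

# ...later, load it back
buf.seek(0)
restored = pickle.load(buf)
print(restored == model)   # the round-trip preserves the object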
### Task 9: Model accuracy
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
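The evaluation metric stated in the introduction is accuracy: the fraction of predictions that match the true labels. A minimal sketch with made-up labels:

```
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1])

# accuracy = (number of correct predictions) / (number of predictions)
accuracy = np.mean(y_true == y_pred)   # 5 of 6 correct
print(accuracy)
```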
```
import sys
if not '..' in sys.path:
sys.path.append('..')
from draw_workflow import draw_workflow
```
# Noodles
_Easy_ concurrent programming <s>in</s> using Python
Johan Hidding, Thursday 19-11-2015 @ NLeSC
```
from noodles import schedule, run, run_parallel, gather
```
## But, why?
* save the _user's_ time
* be flexible
## Alternatives
* What we discussed: Taverna, KNIME, Pegasus etc.
* Celery
* IPyParallel
* Fireworks
* Hadoop / Spark
## Noodles parable (thank you Oscar!)

### start with example
We start with a few functions that happen to exist somewhere out there
```
@schedule
def add(a, b):
return a+b
@schedule
def sub(a, b):
return a-b
@schedule
def mul(a, b):
return a*b
```
Our fledgling Python script kiddie then enters the following code
```
u = add(5, 4)
v = sub(u, 3)
w = sub(u, 2)
x = mul(v, w)
draw_workflow('callgraph1.png', x._workflow)
```
resulting in this __workflow__:

We may run this in parallel!
```
run_parallel(x, n_threads = 2)
```
## How does it work?
* Decorate functions to build a workflow
* Use _any_ back-end to run on
## The decorator
```
def schedule(f):
@wraps(f)
def wrapped(*args, **kwargs):
bound_args = signature(f).bind(*args, **kwargs)
bound_args.apply_defaults()
return PromisedObject(merge_workflow(f, bound_args))
return wrapped
```
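The decorator above defers evaluation by wrapping each call in a `PromisedObject`. A stripped-down sketch of the same idea, with deferred calls recorded as nodes and evaluated on demand (the names here are illustrative, not Noodles' actual API):

```
from functools import wraps

class Deferred:
    # a call-graph node: a function plus (possibly deferred) arguments
    def __init__(self, f, args):
        self.f, self.args = f, args

def schedule(f):
    @wraps(f)
    def wrapped(*args):
        return Deferred(f, args)   # record the call instead of running it
    return wrapped

def run(node):
    # recursively evaluate deferred arguments, then apply the function
    if not isinstance(node, Deferred):
        return node
    return node.f(*(run(a) for a in node.args))

@schedule
def add(a, b): return a + b

@schedule
def mul(a, b): return a * b

u = add(5, 4)          # nothing computed yet
x = mul(u, add(u, 1))  # a small workflow: (5 + 4) * ((5 + 4) + 1)
print(run(x))          # 90
```

Unlike this sketch, the real implementation builds an explicit `Workflow` of nodes and links, which is what makes parallel scheduling possible.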
### Mocking a 'real' Python object
```
class PromisedObject:
def __init__(self, workflow):
self._workflow = workflow
def __call__(self, *args, **kwargs):
return _do_call(self._workflow, *args, **kwargs)
def __getattr__(self, attr):
if attr[0] == '_':
return self.__dict__[attr]
return _getattr(self._workflow, attr)
def __setattr__(self, attr, value):
if attr[0] == '_':
self.__dict__[attr] = value
return
self._workflow = get_workflow(_setattr(self._workflow, attr, value))
```
### Merging workflows into a function call
```
def merge_workflow(f, bound_args):
variadic = next((x.name for x in bound_args.signature.parameters.values()
if x.kind == Parameter.VAR_POSITIONAL), None)
if variadic:
bound_args.arguments[variadic] = list(bound_args.arguments[variadic])
node = FunctionNode(f, bound_args)
idx = id(node)
nodes = {idx: node}
links = {idx: set()}
for address in serialize_arguments(bound_args):
workflow = get_workflow(
ref_argument(bound_args, address))
if not workflow:
continue
set_argument(bound_args, address, Parameter.empty)
for n in workflow.nodes:
if n not in nodes:
nodes[n] = workflow.nodes[n]
links[n] = set()
links[n].update(workflow.links[n])
links[workflow.top].add((idx, address))
return Workflow(id(node), nodes, links)
```
## eeeehm, What can we do (sort of)?
* embarrassingly parallel loops
* embedded workflows
* empirical member assignment
### loops
```
from noodles import schedule, run, run_parallel, gather
@schedule
def sum(a, buildin_sum = sum):
return buildin_sum(a)
r1 = add(1, 1)
r2 = sub(3, r1)
def foo(a, b, c):
return mul(add(a, b), c)
multiples = [foo(i, r2, r1) for i in range(6)]
r5 = sum(gather(*multiples))
draw_workflow('callgraph2.png', r5._workflow)
```

```
run_parallel(r5, n_threads = 4)
```
### embedded workflows
```
@schedule
def sqr(a):
return a*a
@schedule
def map(f, lst):
return gather(*[f(x) for x in lst])
@schedule
def num_range(a, b):
return range(a, b)
wf = sum(map(sqr, num_range(0, 1000)))
draw_workflow('callgraph3.png', wf._workflow)
```

```
run_parallel(wf, n_threads=4)
```
## Using objects
### Golden rule
* if you change something, return it
```
@schedule
class A:
def __init__(self, value):
self.value = value
def multiply(self, factor):
self.value *= factor
a = A(5)
a.multiply(10)
a.second = 7
draw_workflow("callgraph4.png", a._workflow)
```

```
@schedule
class A:
def __init__(self, value):
self.value = value
def multiply(self, factor):
self.value *= factor
return self
a = A(5)
a = a.multiply(10)
a.second = 7
draw_workflow("callgraph5.png", a._workflow)
```

```
result = run_parallel(a, n_threads=4)
print(result.value, result.second)
```
# Questions / Suggestions
# Table of Contents
<p><div class="lev1 toc-item"><a href="#-MIDS---w261-Machine-Learning-At-Scale-" data-toc-modified-id="-MIDS---w261-Machine-Learning-At-Scale--1"><span class="toc-item-num">1 </span> MIDS - w261 Machine Learning At Scale </a></div><div class="lev2 toc-item"><a href="#Assignment---HW11" data-toc-modified-id="Assignment---HW11-11"><span class="toc-item-num">1.1 </span>Assignment - HW11</a></div><div class="lev3 toc-item"><a href="#INSTRUCTIONS-for-SUBMISSION" data-toc-modified-id="INSTRUCTIONS-for-SUBMISSION-111"><span class="toc-item-num">1.1.1 </span>INSTRUCTIONS for SUBMISSION</a></div><div class="lev3 toc-item"><a href="#USEFUL-REFERENCES" data-toc-modified-id="USEFUL-REFERENCES-112"><span class="toc-item-num">1.1.2 </span>USEFUL REFERENCES</a></div><div class="lev3 toc-item"><a href="#CONFIGURATION" data-toc-modified-id="CONFIGURATION-113"><span class="toc-item-num">1.1.3 </span>CONFIGURATION</a></div><div class="lev3 toc-item"><a href="#DATASETS" data-toc-modified-id="DATASETS-114"><span class="toc-item-num">1.1.4 </span>DATASETS</a></div><div class="lev1 toc-item"><a href="#HW-Problems" data-toc-modified-id="HW-Problems-2"><span class="toc-item-num">2 </span>HW Problems</a></div><div class="lev2 toc-item"><a href="#HW11.0:-Broadcast-versus-Caching-in-Spark" data-toc-modified-id="HW11.0:-Broadcast-versus-Caching-in-Spark-21"><span class="toc-item-num">2.1 </span>HW11.0: Broadcast versus Caching in Spark</a></div><div class="lev2 toc-item"><a href="#HW11.1-Loss-Functions" data-toc-modified-id="HW11.1-Loss-Functions-22"><span class="toc-item-num">2.2 </span>HW11.1 Loss Functions</a></div><div class="lev2 toc-item"><a href="#HW11.2-Gradient-descent" data-toc-modified-id="HW11.2-Gradient-descent-23"><span class="toc-item-num">2.3 </span>HW11.2 Gradient descent</a></div><div class="lev2 toc-item"><a href="#HW11.3-Logistic-Regression" data-toc-modified-id="HW11.3-Logistic-Regression-24"><span class="toc-item-num">2.4 </span>HW11.3 Logistic Regression</a></div><div 
class="lev2 toc-item"><a href="#HW11.4-SVMs" data-toc-modified-id="HW11.4-SVMs-25"><span class="toc-item-num">2.5 </span>HW11.4 SVMs</a></div><div class="lev2 toc-item"><a href="#HW11.5--[OPTIONAL]-Distributed-Perceptron-algorithm." data-toc-modified-id="HW11.5--[OPTIONAL]-Distributed-Perceptron-algorithm.-26"><span class="toc-item-num">2.6 </span>HW11.5 [OPTIONAL] Distributed Perceptron algorithm.</a></div><div class="lev2 toc-item"><a href="#HW11.6-[OPTIONAL:-consider-doing-this-in-a-group]--Evalution-of-perceptron-algorihtms-on-PennTreeBank-POS-corpus" data-toc-modified-id="HW11.6-[OPTIONAL:-consider-doing-this-in-a-group]--Evalution-of-perceptron-algorihtms-on-PennTreeBank-POS-corpus-27"><span class="toc-item-num">2.7 </span>HW11.6 [OPTIONAL: consider doing this in a group] Evalution of perceptron algorihtms on PennTreeBank POS corpus</a></div><div class="lev2 toc-item"><a href="#HW11.7-[OPTIONAL:-consider-doing-this-in-a-group]-Kernel-Adatron" data-toc-modified-id="HW11.7-[OPTIONAL:-consider-doing-this-in-a-group]-Kernel-Adatron-28"><span class="toc-item-num">2.8 </span>HW11.7 [OPTIONAL: consider doing this in a group] Kernel Adatron</a></div><div class="lev2 toc-item"><a href="#HW11.8-[OPTIONAL]-Create-an-animation-of-gradient-descent-for-the-Perceptron-learning-or-for-the-logistic-regression" data-toc-modified-id="HW11.8-[OPTIONAL]-Create-an-animation-of-gradient-descent-for-the-Perceptron-learning-or-for-the-logistic-regression-29"><span class="toc-item-num">2.9 </span>HW11.8 [OPTIONAL] Create an animation of gradient descent for the Perceptron learning or for the logistic regression</a></div><div class="lev1 toc-item"><a href="#---------ALTERNATIVE-HOWEWORK---------" data-toc-modified-id="---------ALTERNATIVE-HOWEWORK----------3"><span class="toc-item-num">3 </span>------- ALTERNATIVE HOWEWORK --------</a></div>
<h1> MIDS - w261 Machine Learning At Scale </h1>
__Course Lead:__ Dr James G. Shanahan (__email__ Jimi via James.Shanahan _AT_ gmail.com)
<h2>Assignment - HW11</h2>
---
__Name:__ *Your Name Goes Here*
__Class:__ MIDS w261 (Section *Your Section Goes Here*, e.g., Fall 2016 Group 1)
__Email:__ *Your UC Berkeley Email Goes Here*@iSchool.Berkeley.edu
__StudentId__ 123457 __End of StudentId__
__Week:__ 11
__NOTE:__ please replace `1234567` with your student id above
### INSTRUCTIONS for SUBMISSION
This homework can be completed locally on your computer. __Please submit your notebook to your classroom github repository 24 hours prior to the next live session.__
### USEFUL REFERENCES
* Karau, Holden, Konwinski, Andy, Wendell, Patrick, & Zaharia, Matei. (2015). Learning Spark: Lightning-fast big data analysis. Sebastopol, CA: O’Reilly Publishers.
* Hastie, Trevor, Tibshirani, Robert, & Friedman, Jerome. (2009). The elements of statistical learning: Data mining, inference, and prediction (2nd ed.). Stanford, CA: Springer Science+Business Media. (Download for free [here](http://statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf))
* https://www.dropbox.com/s/ngomebw1koujs2d/classificationISLBook-Logistic-Regression-LDA-NaiveBayes.pdf?dl=0
### CONFIGURATION
Before starting your homework run the following cells to confirm your setup.
```
# general imports
import os
import re
import sys
import numpy as np
import matplotlib.pyplot as plt
# tell matplotlib not to open a new window
%matplotlib inline
# automatically reload modules
%reload_ext autoreload
%autoreload 2
```
__[OPTIONAL]:__ Fix chrome formatting. _The cell below implements a quick hack based on [this stackoverflow thread](http://stackoverflow.com/questions/34277967/chrome-rendering-mathjax-equations-with-a-trailing-vertical-line) to fix [this known issue](https://github.com/mathjax/MathJax/issues/1300) with Mathjax formatting in Chrome (a rounding issue adds a border to the right of mathjax markup)._
```
%%javascript
$('.math>span').css("border-left-color","transparent")
```
### DATASETS
The only data we'll use in the required portion of this assignment is some fake data that you will generate by running provided code.
# HW Problems
## HW11.0: Broadcast versus Caching in Spark
a) __What is the difference between broadcasting and caching data in Spark? Give an example (in the context of machine learning) of each mechanism (at a high level). Feel free to cut and paste code examples from the lectures to support your answer.__
> Type your answer here!
b) __Review the following Spark-notebook-based implementation of KMeans and use the broadcast pattern to make this implementation more efficient. Please describe your changes in English first, then implement, comment your code, and highlight your changes (write all your code in this notebook):__
* Notebook
https://www.dropbox.com/s/41q9lgyqhy8ed5g/EM-Kmeans.ipynb?dl=0
* Notebook via NBViewer
http://nbviewer.ipython.org/urls/dl.dropbox.com/s/41q9lgyqhy8ed5g/EM-Kmeans.ipynb
> Type your answer here!
```
# START STUDENT CODE 11.0
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.0
```
## HW11.1 Loss Functions
a) __In the context of binary classification problems, does the linear SVM learning algorithm yield the same result as a L2 penalized logistic regression learning algorithm?__ In your response, please discuss the loss functions, the learnt models, and the separating surfaces between the two classes.
> Type your answer here!
b) __In the context of binary classification problems, does the linear SVM learning algorithm yield the same result as a perceptron learning algorithm?__
> Type your answer here!
c) __[OPTIONAL]: Generate an artificial binary classification dataset with 2 input features and plot the learnt separating surface for both a linear SVM and for logistic regression. Comment on the learnt surfaces.__ Please feel free to do this in Python (no need to use Spark)
```
# START STUDENT CODE 11.1.c
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.1.c
```
## HW11.2 Gradient descent
a) __In the context of logistic regression describe and define three flavors of penalized loss functions. Are these all supported in Spark MLLib (include online references to support your answers)__?
> Type your answer here!
b) __Describe probabilistic interpretations of the L1 and L2 priors for penalized logistic regression (HINT: see synchronous slides for week 11 for details)__
> Type your answer here!
## HW11.3 Logistic Regression
a) __Generate 2 sets of linearly separable data with 100 data points each using the data generation code provided below and plot each in separate plots. Call one the training set and the other the testing set.__
```
# part a - provided code
def generateData(n):
"""
Generates a 2D linearly separable dataset with n samples.
The third element of the sample is the label.
The fourth element is an integer {0,1,2} representing the
source of the hypothetical data point (See part d)
"""
xb = (np.random.randn(n)*2-1)/2-0.5
yb = (np.random.randn(n)*2-1)/2+0.5
xr = (np.random.randn(n)*2-1)/2+0.5
yr = (np.random.randn(n)*2-1)/2-0.5
source = np.random.choice(3,n)
inputs = []
for i in range(len(xb)):
inputs.append([xb[i], yb[i], 1, source[i]])
inputs.append([xr[i], yr[i], -1, source[i]])
return inputs
# START STUDENT CODE 11.3.a
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.3.a
```
b) __Write your own data generation function (based on the one provided in part a) to generate non-linearly separable training and testing datasets (with approximately 10% of the data falling on the wrong side of the separating hyperplane). Plot the resulting datasets.__ NOTE: For the remainder of this problem please use the non-linearly separable training and testing datasets. Make sure that your simulated data have a 4th field representing the 'source' as does the example code -- we'll use this for weighted regression in part d similar to what we did in HW6.
```
# START STUDENT CODE 11.3.b
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.3.b
```
c) __Using MLLib train up a LASSO logistic regression model with the training dataset and evaluate with the testing set. What is a good number of iterations for training the logistic regression model? Justify with plots and words.__
```
# START STUDENT CODE 11.3.c
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.3.c
```
__ HW11.3 Part C Discussion:__
> Type your response here!
d) __Derive and implement in Spark a weighted LASSO logistic regression using the weighting scheme from HW6. Implement a convergence test of your choice to check for termination within your training algorithm.__
* Evaluate your homegrown weighted LASSO logistic regression on the test dataset.
* Report misclassification error (1 - Accuracy) and how many iterations it took to converge.
* Does Spark MLLib have a weighted LASSO logistic regression implementation? If so use it and report your findings on the weighted training set and test set.
__Weighting Scheme Reminder:__ The `source` field contains a $0$, $1$ or $2$ for each data point. For this demo, lets suppose that our data come from three sources which we trust to varying degrees (2 = most reliable, 0 = least reliable). We'll weight our linear regression to pay more attention to the more reliable data points. Use the following weighting scheme:
* if source = 0, weight: 1
* if source = 1, weight: 2
* if source = 2, weight: 3
Effectively, we're saying that data points from the most reliable source are three times as important as data points from the least reliable source.
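The mapping itself is a one-liner; a minimal sketch of the weighting scheme (the weighted regression itself is left as the exercise):

```
SOURCE_WEIGHT = {0: 1, 1: 2, 2: 3}   # source reliability -> regression weight

def point_weight(point):
    # a data point is [x, y, label, source]; weight it by its 4th field
    return SOURCE_WEIGHT[point[3]]

print(point_weight([0.2, -0.4, 1, 2]))   # a source-2 point gets weight 3
```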
```
# START STUDENT CODE 11.3.d
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.3.d
```
__ HW11.3 Part D Discussion:__
> Type your response here!
## HW11.4 SVMs
Use the non-linearly separable training and testing datasets from HW11.3 in this problem.
a) __Using MLLib train up a soft SVM model with the training dataset and evaluate with the testing set. What is a good number of iterations for training the SVM model? Justify with plots and words.__
```
# START STUDENT CODE 11.4.a
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.4.a
```
__ HW11.4 Part A Discussion:__
> Type your response here!
b) __[Optional] Derive and Implement in Spark a weighted hard linear svm classification learning algorithm.__
* Feel free to use the following notebook as a starting point: [SVM Notebook](http://nbviewer.jupyter.org/urls/dl.dropbox.com/s/dm2l73iznde7y4f/SVM-Notebook-Linear-Kernel-2015-06-19.ipynb).
* Evaluate your homegrown weighted linear svm classification learning algorithm on the weighted training dataset and test dataset from HW11.3 (linearly separable dataset). Report misclassification error (1 - Accuracy) and how many iterations it took to converge. How many support vectors do you end up with?
* Does Spark MLLib have a weighted soft SVM learner? If so use it and report your findings on the weighted training set and test set.
__ HW11.4 Part B Discussion:__
> Type your response here!
```
# START STUDENT CODE 11.4.b
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.4.b
```
c) __[Optional]Repeat HW11.4.b using a soft SVM and a nonlinearly separable datasets. Compare the error rates that you get here with the error rates you achieve using MLLib's soft SVM. Report the number of support vectors in both cases (may not be available the MLLib implementation).__
```
# START STUDENT CODE 11.4.c
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.4.c
```
__ HW11.4 Part C Discussion:__
> Type your response here!
## HW11.5 [OPTIONAL] Distributed Perceptron algorithm.
[Back to Table of Contents](#TOC)
Using the following papers as background:
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en//pubs/archive/36266.pdf
https://www.dropbox.com/s/a5pdcp0r8ptudgj/gesmundo-tomeh-eacl-2012.pdf?dl=0
http://www.slideshare.net/matsubaray/distributed-perceptron
Implement each of the following flavors of perceptron learning algorithm:
1. Serial (All Data): This is the classifier returned if trained serially on all the available data. On a single computer for example (Mistake driven)
2. Serial (Sub Sampling): Shard the data, select one shard randomly and train serially.
3. Parallel (Parameter Mix): Learn a perceptron locally on each shard:
Once learning is complete, combine the learnt perceptrons using uniform weighting
4. Parallel (Iterative Parameter Mix) as described in the above papers.
```
# START STUDENT CODE 11.5
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.5
```
## HW11.6 [OPTIONAL: consider doing this in a group] Evaluation of perceptron algorithms on PennTreeBank POS corpus
[Back to Table of Contents](#TOC)
Reproduce the experiments reported in the following paper:
*Prediction with MapReduce - Andrea Gesmundo and Nadi Tomeh*
http://www.aclweb.org/anthology/E12-2020
These experiments focus on the prediction accuracy on a part-of-speech
(POS) task using the PennTreeBank corpus. They use sections 0-18 of the Wall
Street Journal for training, and sections 22-24 for testing.
```
# START STUDENT CODE 11.6
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.6
```
## HW11.7 [OPTIONAL: consider doing this in a group] Kernel Adatron
Implement the Kernel Adatron in Spark (contact Jimi for details)
```
# START STUDENT CODE 11.7
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.7
```
## HW11.8 [OPTIONAL] Create an animation of gradient descent for the Perceptron learning or for the logistic regression
Learning with the following 3 training examples. Present the progress in terms of the 2 dimensional input space in terms of a contour plot and also in terms of the 3D surface plot. See Live slides for an example.
Here is a sample training dataset that can be used:
```
-2, 3, +1
-1, -1, -1
2, -3, 1
```
Please feel free to use
+ R (yes R!)
+ d3
+ https://plot.ly/python/
+ Matplotlib
I am happy for folks to collaborate on HW11.8 also.
It would be great to get the 3D surface and contour lines (with solution region and label normalized data) all in the same graph
```
# START STUDENT CODE 11.8
# (ADD CELLS AS NEEDED)
# END STUDENT CODE 11.8
```
# ------- ALTERNATIVE HOMEWORK --------
- Implement a scalable softmax classifier (aka multinomial logistic regression) in Spark (regularized and non-regularized)
- Run experiments on MNIST dataset and for CIFAR dataset
- And compare to MLLib implementations (accuracy, CPU times)
```
from keras.layers import Dense, Activation, Dropout, Reshape, concatenate, ReLU, Input
from keras.models import Model, Sequential
from keras.regularizers import l2, l1_l2
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
from keras.layers.normalization import BatchNormalization
from keras.constraints import unit_norm
from keras import optimizers
from keras import regularizers
from keras import initializers
import keras.backend as K
from sklearn.model_selection import train_test_split
from sklearn.utils import class_weight
from scipy.linalg import fractional_matrix_power
import tensorflow as tf
import numpy as np
from utils import *
from dfnets_optimizer import *
from dfnets_layer import DFNets
import warnings
warnings.filterwarnings('ignore')
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""
#Read data.
A, X, Y_train, Y_val, Y_test, train_idx, val_idx, test_idx, train_mask, val_mask, test_mask, Y = load_data('cora')
A = np.array(A.todense())
X = X.todense()
X /= X.sum(1).reshape(-1, 1)
X = np.array(X)
labels = np.argmax(Y, axis=1) + 1
labels_train = np.zeros(labels.shape)
labels_train[train_idx] = labels[train_idx]
#Identity matrix for self loop.
I = np.matrix(np.eye(A.shape[0]))
A_hat = A + I
#Degree matrix.
D_hat = np.array(np.sum(A_hat, axis=0))[0]
D_hat = np.matrix(np.diag(D_hat))
#Laplacian matrix.
L = I - (fractional_matrix_power(D_hat, -0.5) * A_hat * fractional_matrix_power(D_hat, -0.5))
L = L - ((lmax(L)/2) * I)
lambda_cut = 0.5
def step(x, a):
for index in range(len(x)):
if(x[index] >= a):
x[index] = float(1)
else:
x[index] = float(0)
return x
response = lambda x: step(x, lmax(L)/2 - lambda_cut)
#Since the eigenvalues might change, sample eigenvalue domain uniformly.
mu = np.linspace(0, lmax(L), 200)
#AR filter order.
Ka = 5
#MA filter order.
Kb = 3
#The parameter 'radius' controls the tradeoff between convergence efficiency and approximation accuracy.
#A higher value of 'radius' can lead to slower convergence but better accuracy.
radius = 0.90
b, a, rARMA, error = dfnets_coefficients_optimizer(mu, response, Kb, Ka, radius)
h_zero = np.zeros(L.shape[0])
def L_mult_numerator(coef):
y = coef.item(0) * np.linalg.matrix_power(L, 0)
for i in range(1, len(coef)):
x = np.linalg.matrix_power(L, i)
y = y + coef.item(i) * x
return y
def L_mult_denominator(coef):
y_d = h_zero
for i in range(0, len(coef)):
x_d = np.linalg.matrix_power(L, i+1)
y_d = y_d + coef.item(i) * x_d
return y_d
poly_num = L_mult_numerator(b)
poly_denom = L_mult_denominator(a)
arma_conv_AR = K.constant(poly_denom)
arma_conv_MA = K.constant(poly_num)
def dense_factor(inputs, input_signal, num_nodes, dropout):
    h_1 = BatchNormalization()(inputs)
    h_1 = DFNets(num_nodes,
                 arma_conv_AR,
                 arma_conv_MA,
                 input_signal,
                 kernel_initializer=initializers.glorot_normal(seed=1),
                 kernel_regularizer=l2(9e-2),
                 kernel_constraint=unit_norm(),
                 use_bias=True,
                 bias_initializer=initializers.glorot_normal(seed=1),
                 bias_constraint=unit_norm())(h_1)
    h_1 = ReLU()(h_1)
    output = Dropout(dropout)(h_1)
    return output
def dense_block(inputs):
    concatenated_inputs = inputs
    num_nodes = [8, 16, 32, 64, 128]
    dropout = [0.9, 0.9, 0.9, 0.9, 0.9]
    for i in range(5):
        x = dense_factor(concatenated_inputs, inputs, num_nodes[i], dropout[i])
        concatenated_inputs = concatenate([concatenated_inputs, x], axis=1)
    return concatenated_inputs
def dense_block_model(x_train):
inputs = Input((x_train.shape[1],))
x = dense_block(inputs)
predictions = Dense(7, kernel_initializer=initializers.glorot_normal(seed=1),
kernel_regularizer=regularizers.l2(1e-10),
kernel_constraint=unit_norm(),
activity_regularizer=regularizers.l2(1e-10),
use_bias=True,
bias_initializer=initializers.glorot_normal(seed=1),
bias_constraint=unit_norm(),
activation='softmax', name='fc_'+str(1))(x)
model = Model(inputs=inputs, outputs=predictions)
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.002), metrics=['acc'])
return model
model_dense_block = dense_block_model(X)
model_dense_block.summary()
nb_epochs = 200
class_weight = class_weight.compute_class_weight('balanced', np.unique(labels_train), labels_train)
class_weight_dic = dict(enumerate(class_weight))
for epoch in range(nb_epochs):
model_dense_block.fit(X, Y_train, sample_weight=train_mask, batch_size=A.shape[0], epochs=1, shuffle=False,
class_weight=class_weight_dic, verbose=0)
Y_pred = model_dense_block.predict(X, batch_size=A.shape[0])
_, train_acc = evaluate_preds(Y_pred, [Y_train], [train_idx])
_, val_acc = evaluate_preds(Y_pred, [Y_val], [val_idx])
_, test_acc = evaluate_preds(Y_pred, [Y_test], [test_idx])
print("Epoch: {:04d}".format(epoch), "train_acc= {:.4f}".format(train_acc[0]), "test_acc= {:.4f}".format(test_acc[0]))
```
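The AR/MA coefficients returned by the optimizer define a rational spectral response $h(\mu) = B(\mu)/A(\mu)$ that approximates the desired step response over the Laplacian's eigenvalue range. A toy check of that idea with made-up coefficients (the real `b`, `a` come from `dfnets_coefficients_optimizer`, and the exact denominator convention it uses may differ from this sketch):

```
import numpy as np

def arma_response(mu, b, a):
    # h(mu) = B(mu) / (1 + mu * A(mu)) — one common ARMA graph-filter convention,
    # mirroring the code above where the AR polynomial has no constant term
    num = np.polyval(b[::-1], mu)              # B(mu) = sum_j b_j mu^j
    den = 1.0 + mu * np.polyval(a[::-1], mu)   # denominator, constant term fixed to 1
    return num / den

mu = np.linspace(0, 2, 5)    # sampled eigenvalue domain
b = np.array([1.0, 0.5])     # made-up MA coefficients
a = np.array([0.25])         # made-up AR coefficient
h = arma_response(mu, b, a)
print(h[0])                  # at mu = 0 the response is b[0] = 1.0
```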
## Python Closures and Generators
## Closures - binding variables from outer function in the inner function
## Technically - a function gets stored with its environment (bound variables)
### Can also think of preserving certain state
```
# remember this function?
def add_factory(x):
def add(y):
return y + x
return add # upon return free variable x gets bound in the add function
add5 = add_factory(5)
# 5 is bound inside add5 now,
add5(10)
type(add5.__closure__)
[x for x in add5.__closure__]
len(add5.__closure__)
# int(add5.__closure__[0])  # raises TypeError: a cell object is not an int
dir(add5.__closure__[0])
add5.__closure__[0].cell_contents
## Voila!! We get what we expected to get!
## Remember __closure__ is a tuple so we do not get to mutate this!
# So how about more values stored?
def add2_fact(x, y):
return lambda z: z+x+y
a10n20 = add2_fact(10,20)
a10n20(40)
len(a10n20.__closure__)
[x.cell_contents for x in a10n20.__closure__]
# One last closure example:
def outer(x):
a = 20
def inner(y):
print(f'x:{x}')
print(f'a:{a}')
print(f'y:{y}')
## x += 15 # We can't change the argument that was bound from outside argument
## a += 15 # We can't change the a that was bound from outside function
return a+x+y
return inner
axy = outer(10)
axy(5)
axy(5)
[x.cell_contents for x in axy.__closure__]
```
## What if we want rebind(assign new values) to variables coming from outer scope?
### In languages like Javascript you can do it, so Python should be able to, right?
### Solution: Python3 nonlocal modifier inside inner function
```
# https://docs.python.org/3/reference/simple_stmts.html#the-nonlocal-statement
# 7.13. The nonlocal statement
# The nonlocal statement causes the listed identifiers to refer to previously bound variables in the nearest enclosing scope excluding globals.
# This is important because the default behavior for binding is to search the local namespace first. The statement allows encapsulated code to rebind variables outside of the local scope besides the global (module) scope.
# Names listed in a nonlocal statement, unlike those listed in a global statement, must refer to pre-existing bindings in an enclosing scope (the scope in which a new binding should be created cannot be determined unambiguously).
# Names listed in a nonlocal statement must not collide with pre-existing bindings in the local scope.
def makeCounter():
count = 5
def f():
nonlocal count
count +=1
def h():
nonlocal count
count +=2
return count
return h()
return f
a = makeCounter()
a()
dir(a)
a()
print(a(),a(),a())
[a() for x in range(10)]
dir(a)
def makeAdjCounter(x):
count = 0
def f():
nonlocal count # without nonlocal we could reference count but couldn't modify it!
count += x
return count
return f
b = makeAdjCounter(2)
c = makeAdjCounter(3)
print(b(),b(),b(), c(), c(), c())
print(c(),c(),c(),c())
[x.cell_contents for x in c.__closure__]
# Result count is hidden from us, but by calling function we can modify its value.
## An older way was to create a mutable structure (list, class, dictionary) inside the outer function whose members could be modified by the inner function
def makeAdjList():
holder=[1,0,0,3] # old method not recommended anymore!
def f():
holder[0] +=1
print(holder)
return holder[0]
return f
d = makeAdjList()
print(d(),d(),d())
```
### The most Pythonic answer is to use generators to persist some sort of iterable state
## What the heck is a Generator ?
### A Python generator is a function which returns a generator iterator (just an object we can iterate over) because its body uses yield
* KEY POINT: generator functions use **yield** instead of **return**
* in Python 3 we use next(generatorName) to obtain next value
```
def makeNextGen(current):
while True: ##This means our generator will never run out of values...
current += 1
yield current
numGen = makeNextGen(30)
mybyte = b'\x31\x32\x13\x07'
print(mybyte.decode('ascii'))
len(mybyte)
int.from_bytes(mybyte, byteorder='little')
int.from_bytes(mybyte, byteorder='big')
type(mybyte)
len(mybyte)
print(mybyte)
type(makeNextGen)
dir(makeNextGen)
type(range)
for i in range(20):
print(i)
for i in range(15):
print(next(numGen)) # This is for Python 3.x , in Python 2.x it was numGen.next()
## Do not do this!! (numGen is infinite, so this loop would never terminate)
#for el in numGen:
# print(el)
## We can do even better and make an adjustable increment
def makeNextGenInc(current, inc):
while True:
current += inc
yield current
numGen = makeNextGenInc(20,5)
next(numGen)
def smallYield():
yield 1
yield 2
yield 99
yield 5
smallGen = smallYield()
next(smallGen)
# list(numGen)  # DANGER: numGen is infinite; list() would never return
list(smallGen)
numGen10 = makeNextGenInc(200, 10)
[next(numGen10) for x in range(15)]
## Now the above is Pythonic approach to the problem!
### Then there is a generator expression
## The whole point is lazy evaluation (i.e. no need to hold everything in memory at once)
gen = (i+10 for i in range(10))
for g in gen:
print(g)
list(gen)
## list(i+10 for i in range(10)) == [i+10 for i in range(10)]
type(gen)
list(gen)
gen = (i+10 for i in range(10))
for g in gen:
print(g)
for g in gen:
print(g)
# You see what is going on?!
gen_exp = (x ** 2 for x in range(10) if x % 2 == 0)
type(gen_exp)
for x in gen_exp:
print(x)
glist = list(gen)
glist
gen = (i+10 for i in range(10))
[next(gen) for x in range(5)]
yes_expr = ('yes' for _ in range(10))
def my_yes_gen():
for _ in range(10):
yield('yes')
#infinite generator
def my_yes_gen():
while True:
yield('yes')
myg = my_yes_gen()
# list(myg)  # DANGER: myg is now infinite; list() would never return
list(yes_expr)
## Challenge how to make an infinite generator with a generator expression?
import itertools
genX = (i*5 for i in itertools.count(start=0, step=1))
[next(genX) for x in range(10)]
[next(genX) for x in range(35)]
import random
gendice = (random.randrange(1,7) for _ in itertools.count(start=0, step=1))
[next(gendice) for x in range(20)]
genY = (i*10+random.randrange(10) for i in itertools.count(start=0, step=1))
[next(genY) for x in range(10)]
## Be very careful with infinite generators, calling list on infinite generator not recommended!
## Of course we should generally have a pretty good idea of maximum number of generations needed
```
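A safer pattern for taking a finite prefix of an infinite generator is `itertools.islice`, which stops lazily after the requested number of items. A small sketch:

```python
import itertools

# squares is infinite: never call list() on it directly
squares = (x * x for x in itertools.count())

# islice lazily consumes only the first five items, so it is safe here
first_five = list(itertools.islice(squares, 5))
print(first_five)  # -> [0, 1, 4, 9, 16]
```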
### Difference between Python's Generators and Iterators
* iterator is the more general concept: every generator is an iterator, but not vice versa
From Official Docs: Python’s generators provide a convenient way to implement the iterator protocol. If a container object’s __iter__() method is implemented as a generator, it will automatically return an iterator object (technically, a generator object) supplying the __iter__() and __next__() methods.
## A Generator is an Iterator
### Specifically, generator is a subtype of iterator.
Conceptually:
Iterators are about the various ways to loop over data; generators produce the data on the fly.
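To make the distinction concrete, here is a small sketch of the same bounded counter written twice: once as a class implementing the iterator protocol by hand, and once as a generator function that gets the protocol for free:

```python
class CountUpTo:
    """Iterator protocol implemented by hand via __iter__ and __next__."""
    def __init__(self, limit):
        self.current = 0
        self.limit = limit
    def __iter__(self):
        return self
    def __next__(self):
        if self.current >= self.limit:
            raise StopIteration
        self.current += 1
        return self.current

def count_up_to(limit):
    """Same behavior as a generator: the protocol methods come for free."""
    current = 0
    while current < limit:
        current += 1
        yield current

print(list(CountUpTo(3)))    # -> [1, 2, 3]
print(list(count_up_to(3)))  # -> [1, 2, 3]
```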
```
import itertools
dir(itertools)
help(itertools.product)
list(itertools.product(range(10),list('ABCDE')))
```
## Homework
### Write a generator to yield cubes (forever!)
### Write a generator to yield Fibonacci numbers (first 1000)
* Generator Functions ok to use here
```
def fib():
a, b = 0, 1
while True:
a, b = b, a+b
yield b
def fib1000():
a, b = 0, 1
for x in range(1000):
a, b = b, a+b
yield b
f1k = fib1000()
[next(f1k) for _ in range(10)]
f = fib()
[next(f) for _ in range(10)]
def cubes(current):
while True:
#print(current**3)
current+=1
cube = current**3
yield cube
g3 = cubes(1)
next(g3)
cubesforever = (x**3 for x in itertools.count(start=0, step=1))
c30 = [next(cubesforever) for _ in range(30)]
c30
# Hint use yield
## Extra Credit! write generator expression for first 500 cubes that are made from even numbers
g500 = (x**3 for x in range(1,501) if x % 2 == 0)
a10 = [next(g500) for x in range(10)]
a10
a = list(g500)
a[:10]
next(g500)
# f10 = list(g500[:10])  # TypeError: generators do not support slicing; use itertools.islice(g500, 10) instead
```
# Lab 01 : Deep Q-Learning (DQN) - demo
```
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/gdrive')
file_name = 'DQN_demo.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
print(path_to_file)
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
torch.manual_seed(torch.randint(10000,())) # random seed for pytorch random generator
import time
import numpy as np
import os
import pickle
import gym
import matplotlib
import matplotlib.pyplot as plt
from IPython import display
from collections import namedtuple
import random
from itertools import count
```
# Dataset
```
#Env parameters
env_seed = 1
render = True # display on
render = False # display off
#Initialize the environment with the same seed/initialization value
env = gym.make('CartPole-v0')
env.seed(env_seed)
#Reset the environment
state = env.reset()
print('init state:',state)
#Rollout one episode until it finishes
for t in count():
action = torch.LongTensor(1).random_(0,2).item() # randomly generated action=a in {0,1}
state, reward, done, _ = env.step(action) # receive next state=s' and reward=r
print('t=',t, 'action=',action, 'state=',np.array_str(state, precision=5), 'reward=',reward, 'done=',done )
if render:
env.render() # see the state
if done:
break
```
# Replay Memory
```
Transition = namedtuple( 'Transition', ('state', 'action', 'next_state', 'reward', 'done') )
# class of replay memory/experience
class ReplayMemory(object):
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def write(self, *args): # store transitions (s, a, s', r, done)
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
self.position = (self.position + 1) % self.capacity
def read(self, batch_size): # select a random batch of transitions
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
# Initialize the replay memory
memory = ReplayMemory(10000)
#Write/store the transition in memory
state = torch.FloatTensor(env.reset())
action = torch.tensor(0).long()
next_state = torch.FloatTensor(env.reset())
reward = torch.tensor(1).float()
done = torch.tensor(0).long()
memory.write(state,action,next_state,reward,done)
print('memory size',memory.__len__())
#Read a batch of transitions (s, a, s', r) from replay memory
batch_size = 1
batch_transitions = memory.read(batch_size)
print('batch_transitions',batch_transitions)
```
# Define the policy network
```
# class of policy network
class Q_NN(nn.Module):
def __init__(self, net_parameters):
super(Q_NN, self).__init__()
input_dim = net_parameters['input_dim']
hidden_dim = net_parameters['hidden_dim']
output_dim = net_parameters['output_dim']
self.fc1 = nn.Linear(input_dim, hidden_dim, bias=True)
self.fc2 = nn.Linear(hidden_dim, output_dim, bias=True)
def forward(self, x):
x = torch.relu(self.fc1(x))
Q_scores = self.fc2(x) # scores over actions
return Q_scores
def select_action(self, state, rand_act_pr): # select action w/ Q network
Q_scores = self.forward(state) # Q(a|s) scores of action a in state s
coin = random.random()
if coin < rand_act_pr: # (state,action) exploration process parametrized by rand_act_pr
action = torch.randint(0,2,()).item()
else:
action = Q_scores.argmax().item()
return action
def loss(self, memory, baseline_Q_net, opt_parameters):
batch_size = opt_parameters['batch_size']
gamma = opt_parameters['gamma']
if memory.__len__()>=batch_size: # read a batch of transitions (s,a,s',r) in replay memory
batch_transitions = Transition(*zip(*memory.read(batch_size)))
else:
batch_transitions = Transition(*zip(*memory.read(memory.__len__())))
batch_states = torch.stack([x for x in batch_transitions.state]).float() # state=s, size=B x 4
batch_next_states = torch.stack([x for x in batch_transitions.next_state]).float() # next_state=s', size=B x 4
batch_rewards = torch.stack([x for x in batch_transitions.reward]).float() # reward=r, size=B
batch_actions = torch.stack([x for x in batch_transitions.action]).long() # action=a, size=B
batch_dones = torch.stack([x for x in batch_transitions.done]).float() # done, size=B
Q = self.forward(batch_states).gather(dim=1,index=batch_actions.unsqueeze(1)) # Q_W(a|s), size=B x 1
max_baseline_Q_net = baseline_Q_net.forward(batch_next_states).max(dim=1)[0].detach() * batch_dones
Q_target = batch_rewards.unsqueeze(1) + \
gamma * max_baseline_Q_net.unsqueeze(1) # Q_target = r + gamma . max_a' Q_W^BL(a'|s'), size=B x 1
loss = nn.MSELoss()(Q,Q_target) # MSE_Loss(error = Q_target - Q_W)
return loss
# class of rollout episodes
class Rollout_Episodes():
def __init__(self):
super(Rollout_Episodes, self).__init__()
def rollout_batch_episodes(self, env, memory, opt_parameters, Q_Net, write_memory=True):
nb_episodes_per_batch = opt_parameters['nb_episodes_per_batch']
env_seeds = opt_parameters['env_seed']
rand_act_pr = opt_parameters['rand_act_pr']
batch_episode_lengths = []
for episode in range(nb_episodes_per_batch):
env.seed(env_seeds[episode].item()) # start with random seed
state = env.reset() # initial state
for t in range(1000): # rollout one episode until it finishes
state_pytorch = torch.from_numpy(state).float().unsqueeze(0) # state=s
action = Q_Net.select_action(state_pytorch, rand_act_pr) # select action=a from state=s
next_state, reward, done, _ = env.step(action) # receive next_state=s' and reward=r
done_mask = 0.0 if done else 1.0
if write_memory:
memory.write(torch.tensor(state),torch.tensor(action),torch.tensor(next_state),
torch.tensor(reward),torch.tensor(done_mask))
state = next_state
if done:
batch_episode_lengths.append(t)
break
return batch_episode_lengths
# network parameters
net_parameters = {}
net_parameters['input_dim'] = 4
net_parameters['hidden_dim'] = 128
net_parameters['output_dim'] = 2
# instantiate networks
Qnet = Q_NN(net_parameters)
print(Qnet)
baseline_Qnet = Q_NN(net_parameters).eval()
baseline_Qnet.load_state_dict(Qnet.state_dict())
print(baseline_Qnet)
# instantiate rollout
rollout_Qnet = Rollout_Episodes()
memory = ReplayMemory(10000)
print('memory size',memory.__len__())
# optimization parameters
opt_parameters = {}
opt_parameters['nb_episodes_per_batch'] = 3
opt_parameters['env_seed'] = torch.LongTensor(opt_parameters['nb_episodes_per_batch']).random_(1,10000)
opt_parameters['rand_act_pr'] = 0.01
env = gym.make('CartPole-v0')
batch_episode_lengths = rollout_Qnet.rollout_batch_episodes(env, memory, opt_parameters, Qnet)
print('batch_episode_lengths:',batch_episode_lengths)
print('memory size',memory.__len__())
```
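In symbols, the `loss` method above minimizes the mean squared one-step Bellman error, where the bootstrap term comes from the frozen baseline network and is zeroed at terminal transitions (the `batch_dones` mask):

$$
L(W) = \mathbb{E}\left[\left(r + \gamma \, m \, \max_{a'} Q_{W^{BL}}(a' \mid s') - Q_W(a \mid s)\right)^2\right],
\quad m = \begin{cases} 0 & \text{if } s' \text{ is terminal} \\ 1 & \text{otherwise} \end{cases}
$$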
# Test forward pass
```
# instantiate memory
memory = ReplayMemory(10000)
# optimization parameters
opt_parameters = {}
opt_parameters['lr'] = 0.001
opt_parameters['nb_episodes_per_batch'] = 3
opt_parameters['nb_batches_per_epoch'] = 10
opt_parameters['env_seed'] = torch.LongTensor(opt_parameters['nb_episodes_per_batch']).random_(1,10000)
opt_parameters['batch_size'] = 10
opt_parameters['gamma'] = 0.999
opt_parameters['rand_act_pr'] = 0.01
batch_episode_lengths = Rollout_Episodes().rollout_batch_episodes(env, memory, opt_parameters, Qnet)
print('batch_episode_lengths:',batch_episode_lengths)
print('memory size',memory.__len__())
```
# Test backward pass
```
# Loss
loss = Qnet.loss(memory, baseline_Qnet, opt_parameters)
print('loss:',loss)
# Backward pass
lr = opt_parameters['lr']
optimizer = torch.optim.Adam(Qnet.parameters(), lr=lr)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
# Train one epoch
```
def train_one_epoch(env, memory, Q_net, baseline_Q_net, opt_parameters, optimizer):
Q_net.train()
baseline_Q_net.eval()
rollout_Q_net = Rollout_Episodes()
epoch_loss = 0
nb_data = 0
epoch_episode_length = 0
epoch_episode_lengths = []
nb_batches_per_epoch = opt_parameters['nb_batches_per_epoch']
for iter in range(nb_batches_per_epoch):
opt_parameters['env_seed'] = torch.LongTensor(opt_parameters['nb_episodes_per_batch']).random_(1,10000)
batch_episode_lengths = rollout_Q_net.rollout_batch_episodes(env, memory, opt_parameters, Q_net)
loss = Q_net.loss(memory, baseline_Q_net, opt_parameters)
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_loss += loss.detach().item()
nb_data += len(batch_episode_lengths)
epoch_episode_length += torch.tensor(batch_episode_lengths).float().sum()
epoch_episode_lengths.append(epoch_episode_length)
epoch_loss /= nb_data
epoch_episode_length /= nb_data
return epoch_loss, epoch_episode_length, epoch_episode_lengths
```
# Train NN
```
# network parameters
net_parameters = {}
net_parameters['input_dim'] = 4
net_parameters['hidden_dim'] = 256
net_parameters['output_dim'] = 2
# instantiate network
Q_net = Q_NN(net_parameters)
baseline_Q_net = Q_NN(net_parameters).eval()
baseline_Q_net.load_state_dict(Q_net.state_dict())
print(Q_net)
print(baseline_Q_net)
# instantiate memory
memory = ReplayMemory(50000)
# optimization parameters
opt_parameters = {}
opt_parameters['lr'] = 0.0005
opt_parameters['nb_episodes_per_batch'] = 1
opt_parameters['nb_batches_per_epoch'] = 50
opt_parameters['env_seed'] = torch.LongTensor(opt_parameters['nb_episodes_per_batch']).random_(1,10000)
opt_parameters['batch_size'] = 128
opt_parameters['gamma'] = 0.999
opt_parameters['baseline_update'] = 1
init_rand_act_pr = 0.2 # starting random action prob
opt_parameters['rand_act_pr'] = init_rand_act_pr
opt_parameters_baseline = opt_parameters.copy()
opt_parameters_baseline['nb_episodes_per_batch'] = 5
opt_parameters_baseline['rand_act_pr'] = 0.0
optimizer = torch.optim.Adam(Q_net.parameters(), lr=opt_parameters['lr'])
# select maximum episode length to learn
env = gym.make('CartPole-v0')
env._max_episode_steps = 400 # 200 400
env.spec.reward_threshold = 0.975* env._max_episode_steps
print('env._max_episode_steps',env._max_episode_steps)
# train loop
running_length = 10
all_epoch_lengths = []
batch_episode_lengths_update = 0
start = time.time()
for epoch in range(200):
# train one epoch
epoch_train_loss, epoch_episode_length, epoch_episode_lengths = \
train_one_epoch(env, memory, Q_net, baseline_Q_net, opt_parameters, optimizer)
# update random_action_prob (epsilon), linear annealing from init_rand_act_pr down to 1%
opt_parameters['rand_act_pr'] = max(0.01, init_rand_act_pr - init_rand_act_pr*(epoch/float(100)))
# update baseline if current policy better (use greedy mode for evaluation)
if not epoch%opt_parameters['baseline_update']:
opt_parameters_baseline['env_seed'] = torch.LongTensor(opt_parameters_baseline['nb_episodes_per_batch']).random_(1,10000)
opt_parameters_baseline['rand_act_pr'] = opt_parameters['rand_act_pr']
batch_episode_lengths_update = Rollout_Episodes().rollout_batch_episodes(env, memory, opt_parameters_baseline, Q_net, False)
batch_episode_lengths_update_baseline = Rollout_Episodes().rollout_batch_episodes(env, memory, opt_parameters_baseline, baseline_Q_net, False) # evaluate the baseline network, not Q_net
if torch.Tensor(batch_episode_lengths_update).mean() > torch.Tensor(batch_episode_lengths_update_baseline).mean():
print('UPDATE BASELINE - epoch:',epoch)
baseline_Q_net.load_state_dict(Q_net.state_dict())
else:
print('NO UPDATE BASELINE - epoch:',epoch)
# stop training when reward is high
if epoch_episode_length > env.spec.reward_threshold:
print('Training done.')
print("Last episode length is {}, epoch is {}, random_action_prob is {}".
format(epoch_episode_length, epoch, opt_parameters['rand_act_pr']))
break
# print intermediate info
if not epoch%1:
print('Epoch: {}, rand_act_pr: {:.4f}, time: {:.4f}, train_loss: {:.4f}, episode_length: {:.4f}'.format(epoch, opt_parameters['rand_act_pr'], time.time()-start, epoch_train_loss, epoch_episode_length))
print(' memory size: {}, Qnet eval: {:.4f}, Qnet baseline eval: {:.4f}'.format(memory.__len__(), torch.Tensor(batch_episode_lengths_update).mean().item(), torch.Tensor(batch_episode_lengths_update_baseline).mean().item() ))
# plot all epochs
all_epoch_lengths.append(epoch_episode_length)
if not epoch%1:
plt.figure(2)
plt.title('Training...')
plt.xlabel('Epochs')
plt.ylabel('Length of episodes batch')
plt.plot(torch.Tensor(all_epoch_lengths).numpy())
plt.pause(0.001)
display.clear_output(wait=True)
# Final plot
plt.figure(2)
plt.title('Training...')
plt.xlabel('Epochs')
plt.ylabel('Length of episodes batch')
plt.plot(torch.Tensor(all_epoch_lengths).numpy())
print("Last episode length is {}, epoch is {} and rand_act_pr {}".format(epoch_episode_length, epoch, opt_parameters['rand_act_pr']))
```
# Run it longer
```
env._max_episode_steps = 5000
state = env.reset() # reset environment
for t in range(env._max_episode_steps): # rollout one episode until it finishes, or stop after env._max_episode_steps steps
state_pytorch = torch.from_numpy(state).float().unsqueeze(0) # state=s
action = Q_net.eval().select_action(state_pytorch, 0.0) # select action=a from state=s
state, reward, done, _ = env.step(action) # receive next state=s' and reward=r
env.render() # visualize state
if done:
print(t)
break
```
```
import torch
import torch.nn as nn
import os
import numpy as np
from string import punctuation
char_to_int = {"'": 1, ',': 2, 'e': 3, 'a': 4, 'r': 5, 'i': 6, 's': 7, 'n': 8, 'o': 9, 't': 10, 'l': 11, 'c': 12, 'd': 13, 'm': 14, 'u': 15, 'h': 16, 'g': 17, 'p': 18, 'b': 19, 'k': 20, 'y': 21, '"': 22, 'f': 23, 'w': 24, 'v': 25, 'z': 26, 'j': 27, 'x': 28, 'q': 29, '-': 30, '.': 31, '[': 32, '1': 33, ']': 34}
train_on_gpu = torch.cuda.is_available()
print(train_on_gpu)
def load_model(filename):
save_filename = os.path.splitext(os.path.basename(filename))[0] + '.pt'
return torch.load(save_filename)
def pad_words(words, length):
features = np.zeros((len(words), length), dtype=int)
for i, row in enumerate(words):
features[i, -len(row):] = np.array(row)[:length]
return features
def tokenize_word(words):
test_ints = []
for word in words:
word = word.lower()
test_ints.append([char_to_int[char] for char in word])
test_ints = pad_words(test_ints, 28)
return test_ints
def predict_words(net, words):
net.eval()
test_ints = tokenize_word(words)
features = np.array(test_ints)
feature_tensor = torch.from_numpy(features)
batch_size = feature_tensor.size(0)
h = net.init_hidden(batch_size)
if (train_on_gpu):
feature_tensor = feature_tensor.cuda()
# get the output from the model
output, h = net(feature_tensor, h)
pred = torch.round(output.squeeze() / 1000)
return pred.cpu().data.numpy()
def predict_text(net):
print('Enter your paragraph: ')
text = str(input())
text = ''.join([c for c in text if c not in punctuation])
words = text.split(' ')
return predict_words(net, words), np.sum(predict_words(net, words))
class LSTM(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers):
"""
Initialize the model by setting up the layers.
"""
super(LSTM, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, batch_first=True)
#dropout layers
self.dropout = nn.Dropout(0.2)
# linear layers
self.fc1 = nn.Linear(hidden_dim, 256)
self.fc2 = nn.Linear(256, output_size)
def forward(self, x, hidden):
"""
Perform a forward pass of our model on some input and hidden state.
"""
batch_size = x.size(0)
# embeddings and lstm_out
x = x.long()
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(lstm_out)
# fully-connected layer
out = nn.functional.relu(self.fc1(out)) # feed the dropout output, not lstm_out, so the dropout layer is not bypassed
out = self.dropout(out)
out = self.fc2(out)
# reshape to be batch_size first
out = out.view(batch_size, -1)
out = out[:, -1] # get last batch of labels
return out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
net_lstm = torch.load('./trained_lstm_phase1.pt', map_location=torch.device("cpu"))
predict_text(net_lstm)
predict_text(net_lstm)
predict_text(net_lstm)
```
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Shape Constraints with Tensorflow Lattice
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/shape_constraints"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/lattice/blob/master/docs/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Overview
This tutorial is an overview of the constraints and regularizers provided by the TensorFlow Lattice (TFL) library. Here we use TFL canned estimators on synthetic datasets, but note that everything in this tutorial can also be done with models constructed from TFL Keras layers.
Before proceeding, make sure your runtime has all required packages installed (as imported in the code cells below).
## Setup
Installing TF Lattice package:
```
#@test {"skip": true}
!pip install tensorflow-lattice
```
Importing required packages:
```
import tensorflow as tf
from IPython.core.pylabtools import figsize
import itertools
import logging
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
```
Default values used in this guide:
```
NUM_EPOCHS = 1000
BATCH_SIZE = 64
LEARNING_RATE=0.01
```
## Training Dataset for Ranking Restaurants
Imagine a simplified scenario where we want to determine whether or not users will click on a restaurant search result. The task is to predict the clickthrough rate (CTR) given input features:
- Average rating (`avg_rating`): a numeric feature with values in the range [1,5].
- Number of reviews (`num_reviews`): a numeric feature with values capped at 200, which we use as a measure of trendiness.
- Dollar rating (`dollar_rating`): a categorical feature with string values in the set {"D", "DD", "DDD", "DDDD"}.
Here we create a synthetic dataset where the true CTR is given by the formula:
$$
CTR = 1 / (1 + exp\{\mbox{b(dollar_rating)}-\mbox{avg_rating}\times \log(1 + \mbox{num_reviews}) /4 \})
$$
where $b(\cdot)$ translates each `dollar_rating` to a baseline value:
$$
\mbox{D}\to 3,\ \mbox{DD}\to 2,\ \mbox{DDD}\to 4,\ \mbox{DDDD}\to 4.5.
$$
This formula reflects typical user patterns: e.g., given everything else fixed, users prefer restaurants with higher star ratings, and "\\$\\$" restaurants receive more clicks than "\\$", followed by "\\$\\$\\$" and "\\$\\$\\$\\$".
```
def click_through_rate(avg_ratings, num_reviews, dollar_ratings):
dollar_rating_baseline = {"D": 3, "DD": 2, "DDD": 4, "DDDD": 4.5}
return 1 / (1 + np.exp(
np.array([dollar_rating_baseline[d] for d in dollar_ratings]) -
avg_ratings * np.log1p(num_reviews) / 4))
```
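As a quick sanity check of the formula, here is one worked evaluation. The specific restaurant values below (a "DD" restaurant, so baseline $b=2$, with a 4.0 rating and 100 reviews) are illustrative choices for this sketch, not from the tutorial:

```python
import numpy as np

# Illustrative example: dollar_rating="DD" -> baseline b=2,
# avg_rating=4.0, num_reviews=100
baseline = 2.0
avg_rating, num_reviews = 4.0, 100.0

# Same expression as click_through_rate above, for a single restaurant
ctr = 1.0 / (1.0 + np.exp(baseline - avg_rating * np.log1p(num_reviews) / 4.0))
print(round(float(ctr), 3))  # -> 0.932
```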
Let's take a look at the contour plots of this CTR function.
```
def color_bar():
bar = matplotlib.cm.ScalarMappable(
norm=matplotlib.colors.Normalize(0, 1, True),
cmap="viridis",
)
bar.set_array([0, 1])
return bar
def plot_fns(fns, split_by_dollar=False, res=25):
"""Generates contour plots for a list of (name, fn) functions."""
num_reviews, avg_ratings = np.meshgrid(
np.linspace(0, 200, num=res),
np.linspace(1, 5, num=res),
)
if split_by_dollar:
dollar_rating_splits = ["D", "DD", "DDD", "DDDD"]
else:
dollar_rating_splits = [None]
if len(fns) == 1:
fig, axes = plt.subplots(2, 2, sharey=True, tight_layout=False)
else:
fig, axes = plt.subplots(
len(dollar_rating_splits), len(fns), sharey=True, tight_layout=False)
axes = axes.flatten()
axes_index = 0
for dollar_rating_split in dollar_rating_splits:
for title, fn in fns:
if dollar_rating_split is not None:
dollar_ratings = np.repeat(dollar_rating_split, res**2)
values = fn(avg_ratings.flatten(), num_reviews.flatten(),
dollar_ratings)
title = "{}: dollar_rating={}".format(title, dollar_rating_split)
else:
values = fn(avg_ratings.flatten(), num_reviews.flatten())
subplot = axes[axes_index]
axes_index += 1
subplot.contourf(
avg_ratings,
num_reviews,
np.reshape(values, (res, res)),
vmin=0,
vmax=1)
subplot.title.set_text(title)
subplot.set(xlabel="Average Rating")
subplot.set(ylabel="Number of Reviews")
subplot.set(xlim=(1, 5))
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
figsize(11, 11)
plot_fns([("CTR", click_through_rate)], split_by_dollar=True)
```
### Preparing Data
We now need to create our synthetic datasets. We start by generating a simulated dataset of restaurants and their features.
```
def sample_restaurants(n):
avg_ratings = np.random.uniform(1.0, 5.0, n)
num_reviews = np.round(np.exp(np.random.uniform(0.0, np.log(200), n)))
dollar_ratings = np.random.choice(["D", "DD", "DDD", "DDDD"], n)
ctr_labels = click_through_rate(avg_ratings, num_reviews, dollar_ratings)
return avg_ratings, num_reviews, dollar_ratings, ctr_labels
np.random.seed(42)
avg_ratings, num_reviews, dollar_ratings, ctr_labels = sample_restaurants(2000)
figsize(5, 5)
fig, axs = plt.subplots(1, 1, sharey=False, tight_layout=False)
for rating, marker in [("D", "o"), ("DD", "^"), ("DDD", "+"), ("DDDD", "x")]:
plt.scatter(
x=avg_ratings[np.where(dollar_ratings == rating)],
y=num_reviews[np.where(dollar_ratings == rating)],
c=ctr_labels[np.where(dollar_ratings == rating)],
vmin=0,
vmax=1,
marker=marker,
label=rating)
plt.xlabel("Average Rating")
plt.ylabel("Number of Reviews")
plt.legend()
plt.xlim((1, 5))
plt.title("Distribution of restaurants")
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
```
Let's produce the training, validation and testing datasets. When a restaurant is viewed in the search results, we can record the user's engagement (click or no click) as a sample point.
In practice, users often do not go through all search results. This means that users will likely only see restaurants already considered "good" by the current ranking model in use. As a result, "good" restaurants are more frequently impressed and over-represented in the training datasets. When using more features, the training dataset can have large gaps in "bad" parts of the feature space.
When the model is used for ranking, it is often evaluated on all relevant results with a more uniform distribution that is not well-represented by the training dataset. A flexible and complicated model might fail in this case due to overfitting the over-represented data points and thus lack generalizability. We handle this issue by applying domain knowledge to add *shape constraints* that guide the model to make reasonable predictions when it cannot pick them up from the training dataset.
In this example, the training dataset mostly consists of user interactions with good and popular restaurants. The testing dataset has a uniform distribution to simulate the evaluation setting discussed above. Note that such a testing dataset will not be available in a real problem setting.
```
def sample_dataset(n, testing_set):
(avg_ratings, num_reviews, dollar_ratings, ctr_labels) = sample_restaurants(n)
if testing_set:
# Testing has a more uniform distribution over all restaurants.
num_views = np.random.poisson(lam=3, size=n)
else:
# Training/validation datasets have more views on popular restaurants.
num_views = np.random.poisson(lam=ctr_labels * num_reviews / 50.0, size=n)
return pd.DataFrame({
"avg_rating": np.repeat(avg_ratings, num_views),
"num_reviews": np.repeat(num_reviews, num_views),
"dollar_rating": np.repeat(dollar_ratings, num_views),
"clicked": np.random.binomial(n=1, p=np.repeat(ctr_labels, num_views))
})
# Generate datasets.
np.random.seed(42)
data_train = sample_dataset(500, testing_set=False)
data_val = sample_dataset(500, testing_set=False)
data_test = sample_dataset(500, testing_set=True)
# Plotting dataset densities.
figsize(12, 5)
fig, axs = plt.subplots(1, 2, sharey=False, tight_layout=False)
for ax, data, title in [(axs[0], data_train, "training"),
(axs[1], data_test, "testing")]:
_, _, _, density = ax.hist2d(
x=data["avg_rating"],
y=data["num_reviews"],
bins=(np.linspace(1, 5, num=21), np.linspace(0, 200, num=21)),
density=True,
cmap="Blues",
)
ax.set(xlim=(1, 5))
ax.set(ylim=(0, 200))
ax.set(xlabel="Average Rating")
ax.set(ylabel="Number of Reviews")
ax.title.set_text("Density of {} examples".format(title))
_ = fig.colorbar(density, ax=ax)
```
Defining input_fns used for training and evaluation:
```
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
shuffle=False,
)
# feature_analysis_input_fn is used for TF Lattice estimators.
feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
val_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_val,
y=data_val["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_test,
y=data_test["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
```
## Fitting Gradient Boosted Trees
Let's start off with only two features: `avg_rating` and `num_reviews`.
We create a few auxiliary functions for plotting and for calculating validation and test metrics.
```
def analyze_two_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def two_d_pred(avg_ratings, num_reviews):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
def two_d_click_through_rate(avg_ratings, num_reviews):
return np.mean([
click_through_rate(avg_ratings, num_reviews,
np.repeat(d, len(avg_ratings)))
for d in ["D", "DD", "DDD", "DDDD"]
],
axis=0)
figsize(11, 5)
plot_fns([("{} Estimated CTR".format(name), two_d_pred),
("CTR", two_d_click_through_rate)],
split_by_dollar=False)
```
We can fit TensorFlow gradient boosted decision trees on the dataset:
```
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
gbt_estimator = tf.estimator.BoostedTreesClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
n_batches_per_layer=1,
max_depth=2,
n_trees=50,
learning_rate=0.05,
config=tf.estimator.RunConfig(tf_random_seed=42),
)
gbt_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(gbt_estimator, "GBT")
```
Even though the model has captured the general shape of the true CTR and has decent validation metrics, it has counter-intuitive behavior in several parts of the input space: the estimated CTR decreases as the average rating or the number of reviews increases. This is due to a lack of sample points in areas not well covered by the training dataset. The model simply has no way to deduce the correct behaviour solely from the data.
To solve this issue, we enforce the shape constraint that the model must output values monotonically increasing with respect to both the average rating and the number of reviews. We will later see how to implement this in TFL.
## Fitting a DNN
We can repeat the same steps with a DNN classifier, and we observe a similar pattern: not having enough sample points with a small number of reviews results in nonsensical extrapolation. Note that even though the validation metric is better than the tree solution, the testing metric is much worse.
```
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
hidden_units=[16, 8, 8],
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
dnn_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(dnn_estimator, "DNN")
```
## Shape Constraints
TensorFlow Lattice (TFL) is focused on enforcing shape constraints to safeguard model behavior beyond the training data. These shape constraints are applied to TFL Keras layers. Their details can be found in [our JMLR paper](http://jmlr.org/papers/volume17/15-243/15-243.pdf).
In this tutorial we use TF canned estimators to cover various shape constraints, but note that all these steps can be done with models created from TFL Keras layers.
As with any other TensorFlow estimator, TFL canned estimators use [feature columns](https://www.tensorflow.org/api_docs/python/tf/feature_column) to define the input format and use a training input_fn to pass in the data.
Using TFL canned estimators also requires:
- a *model config*: defining the model architecture and per-feature shape constraints and regularizers.
- a *feature analysis input_fn*: a TF input_fn passing data for TFL initialization.
For a more thorough description, please refer to the canned estimators tutorial or the API docs.
### Monotonicity
We first address the monotonicity concerns by adding monotonicity shape constraints to both features.
To instruct TFL to enforce shape constraints, we specify the constraints in the *feature configs*. The following code shows how we can require the output to be monotonically increasing with respect to both `num_reviews` and `avg_rating` by setting `monotonicity="increasing"`.
```
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
```
Using a `CalibratedLatticeConfig` creates a canned classifier that first applies a *calibrator* to each input (a piece-wise linear function for numeric features) followed by a *lattice* layer to non-linearly fuse the calibrated features. We can use `tfl.visualization` to visualize the model. In particular, the following plot shows the two trained calibrators included in the canned classifier.
```
def save_and_visualize_lattice(tfl_estimator):
saved_model_path = tfl_estimator.export_saved_model(
"/tmp/TensorFlow_Lattice_101/",
tf.estimator.export.build_parsing_serving_input_receiver_fn(
feature_spec=tf.feature_column.make_parse_example_spec(
feature_columns)))
model_graph = tfl.estimators.get_model_graph(saved_model_path)
figsize(8, 8)
tfl.visualization.draw_model_graph(model_graph)
return model_graph
_ = save_and_visualize_lattice(tfl_estimator)
```
With the constraints added, the estimated CTR will always increase as the average rating increases or the number of reviews increases. This is done by making sure that the calibrators and the lattice are monotonic.
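The enforced monotonicity can also be sanity-checked numerically by evaluating predictions on a grid and verifying they never decrease along either axis. A minimal sketch, using a hypothetical monotone `ctr` function as a stand-in for the trained estimator's predictions:

```python
import numpy as np

def is_monotone_increasing(predict_fn, grid_a, grid_b):
    """Return True if predictions never decrease along either input axis."""
    preds = np.array([[predict_fn(a, b) for b in grid_b] for a in grid_a])
    return bool(np.all(np.diff(preds, axis=0) >= 0) and
                np.all(np.diff(preds, axis=1) >= 0))

# Stand-in for the trained estimator: a function monotone in both inputs.
def ctr(avg_rating, num_reviews):
    return 1.0 / (1.0 + np.exp(-(avg_rating + np.log1p(num_reviews))))

print(is_monotone_increasing(ctr,
                             np.linspace(1, 5, 20),
                             np.linspace(0, 200, 20)))
```

Running the same check on the unconstrained GBT or DNN predictions above would expose the violations visible in the contour plots.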
### Diminishing Returns
[Diminishing returns](https://en.wikipedia.org/wiki/Diminishing_returns) means that the marginal gain of increasing a certain feature value will decrease as we increase the value. In our case we expect that the `num_reviews` feature follows this pattern, so we can configure its calibrator accordingly. Notice that we can decompose diminishing returns into two sufficient conditions:
- the calibrator is monotonically increasing, and
- the calibrator is concave.
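These two conditions can be checked directly on a piecewise-linear calibrator's keypoints: the outputs must be non-decreasing and the segment slopes non-increasing. A small sketch (the `satisfies_diminishing_returns` helper and its inputs are illustrative, not part of TFL):

```python
import numpy as np

def satisfies_diminishing_returns(keypoint_inputs, keypoint_outputs):
    """A PWL calibrator shows diminishing returns iff its keypoint outputs
    are non-decreasing and its segment slopes are non-increasing (concave)."""
    x = np.asarray(keypoint_inputs, dtype=float)
    y = np.asarray(keypoint_outputs, dtype=float)
    slopes = np.diff(y) / np.diff(x)
    return bool(np.all(np.diff(y) >= 0) and np.all(np.diff(slopes) <= 1e-12))

x = np.linspace(0, 200, 21)
print(satisfies_diminishing_returns(x, np.sqrt(x)))  # increasing and concave
print(satisfies_diminishing_returns(x, x ** 2))      # increasing but convex
```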
```
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
```
Notice how the testing metric improves by adding the concavity constraint. The prediction plot also better resembles the ground truth.
### 2D Shape Constraint: Trust
A 5-star rating for a restaurant with only one or two reviews is likely an unreliable rating (the restaurant might not actually be good), whereas a 4-star rating for a restaurant with hundreds of reviews is much more reliable (the restaurant is likely good in this case). We can see that the number of reviews of a restaurant affects how much trust we place in its average rating.
We can use TFL trust constraints to inform the model that a larger (or smaller) value of one feature implies more reliance on, or trust in, another feature. This is done by setting the `reflects_trust_in` configuration in the feature config.
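The intuition behind trust (an illustration only, not how TFL implements the constraint) resembles Bayesian shrinkage: with few reviews the rating is pulled toward a global prior, and with many reviews it is taken closer to face value. The `prior` and `strength` values below are arbitrary:

```python
import numpy as np

def shrunk_rating(avg_rating, num_reviews, prior=3.0, strength=20.0):
    """Blend the observed rating with a global prior; more reviews give more
    weight to the observed rating (trust illustration, not TFL's mechanism)."""
    w = num_reviews / (num_reviews + strength)
    return w * avg_rating + (1 - w) * prior

print(shrunk_rating(5.0, 2))    # few reviews: pulled toward the prior
print(shrunk_rating(4.0, 500))  # many reviews: close to the observed 4.0
```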
```
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
# Larger num_reviews indicating more trust in avg_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
model_graph = save_and_visualize_lattice(tfl_estimator)
```
The following plot presents the trained lattice function. Due to the trust constraint, we expect that larger values of calibrated `num_reviews` will force a steeper slope with respect to calibrated `avg_rating`, resulting in a more significant change in the lattice output.
```
lat_mesh_n = 12
lat_mesh_x, lat_mesh_y = tfl.test_utils.two_dim_mesh_grid(
lat_mesh_n**2, 0, 0, 1, 1)
lat_mesh_fn = tfl.test_utils.get_hypercube_interpolation_fn(
model_graph.output_node.weights.flatten())
lat_mesh_z = [
lat_mesh_fn([lat_mesh_x.flatten()[i],
lat_mesh_y.flatten()[i]]) for i in range(lat_mesh_n**2)
]
trust_plt = tfl.visualization.plot_outputs(
(lat_mesh_x, lat_mesh_y),
{"Lattice Lookup": lat_mesh_z},
figsize=(6, 6),
)
trust_plt.title("Trust")
trust_plt.xlabel("Calibrated avg_rating")
trust_plt.ylabel("Calibrated num_reviews")
trust_plt.show()
```
### Smoothing Calibrators
Let's now take a look at the calibrator of `avg_rating`. Though it is monotonically increasing, the changes in its slopes are abrupt and hard to interpret. That suggests we might want to consider smoothing this calibrator using a regularizer setup in the `regularizer_configs`.
Here we apply a `wrinkle` regularizer to reduce changes in the curvature. You can also use the `laplacian` regularizer to flatten the calibrator and the `hessian` regularizer to make it more linear.
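Roughly speaking, these regularizers can be thought of as L2 penalties on successive finite differences of the calibrator's keypoint outputs: first differences (`laplacian`, flattening), second differences (`hessian`, linearizing), and third differences (`wrinkle`, smoothing curvature). A sketch of the idea (TFL's exact formulation and scaling may differ):

```python
import numpy as np

def calib_penalties(keypoint_outputs, l2=1.0):
    """Sketch: calibrator regularizers as L2 norms of successive finite
    differences of the keypoint outputs (not TFL's exact formulas)."""
    y = np.asarray(keypoint_outputs, dtype=float)
    return {
        "laplacian": l2 * np.sum(np.diff(y, n=1) ** 2),  # pushes toward flat
        "hessian":   l2 * np.sum(np.diff(y, n=2) ** 2),  # pushes toward linear
        "wrinkle":   l2 * np.sum(np.diff(y, n=3) ** 2),  # smooths curvature
    }

x = np.linspace(0, 1, 20)
print(calib_penalties(2 * x))          # linear: only laplacian is nonzero
print(calib_penalties(np.sin(4 * x)))  # wiggly: all three are nonzero
```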
```
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
```
The calibrators are now smooth, and the overall estimated CTR better matches the ground truth. This is reflected both in the testing metric and in the contour plots.
### Partial Monotonicity for Categorical Calibration
So far we have been using only two of the numeric features in the model. Here we will add a third feature using a categorical calibration layer. Again we start by setting up helper functions for plotting and metric calculation.
```
def analyze_three_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def three_d_pred(avg_ratings, num_reviews, dollar_rating):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
"dollar_rating": dollar_rating,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
figsize(11, 22)
plot_fns([("{} Estimated CTR".format(name), three_d_pred),
("CTR", click_through_rate)],
split_by_dollar=True)
```
To add the third feature, `dollar_rating`, recall that categorical features require a slightly different treatment in TFL, both as a feature column and as a feature config. Here we enforce the partial monotonicity constraint that, with all other inputs fixed, the output for "DD" restaurants should be larger than the output for "D" restaurants. This is done using the `monotonicity` setting in the feature config.
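Partial monotonicity only constrains the listed category pairs; the remaining categories are left free. A tiny sketch of what the constraint means, using hypothetical categorical-calibrator outputs:

```python
def check_partial_monotonicity(calib_outputs, pairs):
    """Verify each (lower, higher) category pair satisfies
    calib_outputs[lower] <= calib_outputs[higher]; other categories are free."""
    return all(calib_outputs[lo] <= calib_outputs[hi] for lo, hi in pairs)

# Hypothetical learned outputs -- only the D <= DD ordering is required.
outputs = {"D": 0.2, "DD": 0.8, "DDD": 0.5, "DDDD": 0.1}
print(check_partial_monotonicity(outputs, [("D", "DD")]))
```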
```
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
# `D` restaurants have smaller values than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
```
This categorical calibrator shows the preference ordering of the model output: DD > D > DDD > DDDD, which is consistent with our setup. Notice there is also a column for missing values. Though no features are missing in our training and testing data, the model provides an imputation for missing values should they occur during downstream model serving.
Here we also plot the predicted CTR of this model conditioned on `dollar_rating`. Notice that all the constraints we required are fulfilled in each of the slices.
### Output Calibration
For all the TFL models we have trained so far, the lattice layer (indicated as "Lattice" in the model graph) directly outputs the model prediction. Sometimes the lattice output is not on the right scale to serve directly as the model output, for example when:
- the features are $\log$ counts while the labels are counts.
- the lattice is configured to have very few vertices but the label distribution is relatively complicated.
In those cases we can add another calibrator between the lattice output and the model output to increase model flexibility. Here let's add a calibrator layer with 5 keypoints to the model we just built. We also add a regularizer for the output calibrator to keep the function smooth.
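An output calibrator is itself just another piecewise-linear map applied to the lattice output. A sketch using `np.interp` with illustrative (not learned) keypoints:

```python
import numpy as np

# Illustrative keypoints: lattice outputs on the x-axis, calibrated
# predictions on the y-axis. Real keypoint values are learned in training.
keypoint_inputs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
keypoint_outputs = np.array([0.05, 0.1, 0.3, 0.7, 0.9])

def output_calibrator(lattice_output):
    """Piecewise-linear interpolation between the keypoints."""
    return np.interp(lattice_output, keypoint_inputs, keypoint_outputs)

print(output_calibrator(0.5))  # hits a keypoint exactly
print(output_calibrator(0.6))  # linear interpolation between keypoints
```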
```
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
output_calibration=True,
output_calibration_num_keypoints=5,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="output_calib_wrinkle", l2=0.1),
],
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
# `D` restaurants have smaller values than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
```
The final testing metric and plots show how using common-sense constraints can help the model avoid unexpected behaviour and extrapolate better to the entire input space.
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from pandas.io.json import json_normalize
import json
import matplotlib.pyplot as plt
import lightgbm as lgb
import datetime
import seaborn as sns
from bayes_opt import BayesianOptimization
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
import gc
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import KFold,train_test_split
import warnings
warnings.filterwarnings('ignore')
# Any results you write to the current directory are saved as output.
def load_df(csv_path='../input/train.csv', nrows=None):
JSON_COLUMNS = ['device', 'geoNetwork', 'totals', 'trafficSource']
df = pd.read_csv(csv_path,
converters={column: json.loads for column in JSON_COLUMNS},
dtype={'fullVisitorId': 'str'}, # Important!!
nrows=nrows)
for column in JSON_COLUMNS:
column_as_df = json_normalize(df[column])
column_as_df.columns = [f"{column}.{subcolumn}" for subcolumn in column_as_df.columns]
df = df.drop(column, axis=1).merge(column_as_df, right_index=True, left_index=True)
print(f"Loaded {os.path.basename(csv_path)}. Shape: {df.shape}")
return df
def add_time_features(df):
df['date'] = pd.to_datetime(df['date'], format='%Y%m%d', errors='ignore')
df['year'] = df['date'].apply(lambda x: x.year)
df['month'] = df['date'].apply(lambda x: x.month)
df['day'] = df['date'].apply(lambda x: x.day)
df['weekday'] = df['date'].apply(lambda x: x.weekday())
df['visitStartTime_'] = pd.to_datetime(df['visitStartTime'],unit="s")
df['visitStartTime_year'] = df['visitStartTime_'].apply(lambda x: x.year)
df['visitStartTime_month'] = df['visitStartTime_'].apply(lambda x: x.month)
df['visitStartTime_day'] = df['visitStartTime_'].apply(lambda x: x.day)
df['visitStartTime_weekday'] = df['visitStartTime_'].apply(lambda x: x.weekday())
return df
date_features = [#"year","month","day","weekday",'visitStartTime_year',
"visitStartTime_month","visitStartTime_day","visitStartTime_weekday"]
%%time
train_df = load_df("../input/train.csv")
test_df = load_df("../input/test.csv")
```
# Glimpse of data
## Samples of train data
```
train_df.head()
```
## Data type
```
train_df.dtypes
test_df.info()
pd.value_counts(train_df.dtypes).plot(kind="bar")
plt.title("type of train data")
def bar_plot(column,**args):
pd.value_counts(train_df[column]).plot(kind="bar",**args)
```
# Remove constant column
```
constant_column = [col for col in train_df.columns if len(train_df[col].unique()) == 1]
print(list(constant_column))
train_df.drop(columns=constant_column,inplace=True)
num_col = ["totals.hits", "totals.pageviews", "visitNumber",
'totals.bounces', 'totals.newVisits']
for col in num_col:
train_df[col] = train_df[col].fillna("0").astype("int32")
test_df[col] = test_df[col].fillna("0").astype("int32")
train_df.dtypes
train_df.head()
```
# Univariate variable analysis
## ChannelGrouping
The channel via which the user came to the Store.
```
bar_plot("channelGrouping")
```
## device.browser
```
bar_plot("device.browser",figsize=(12,10))
```
## device.isMobile
```
bar_plot("device.isMobile")
```
## device.deviceCategory
```
bar_plot("device.deviceCategory")
```
# Feature engineering
```
new_features = ["hits_per_pageviews"]
new_category_features = ["is_high_hits"]
def feature_engineering(df):
line = 4
df['hits_per_pageviews'] = (df["totals.hits"]/(df["totals.pageviews"])).apply(lambda x: 0 if np.isinf(x) else x)
df['is_high_hits'] = np.logical_or(df["totals.hits"]>line,df["totals.pageviews"]>line).astype(np.int32)  # use df, not train_df
feature_engineering(train_df)
feature_engineering(test_df)
add_time_features(train_df)
_ = add_time_features(test_df)
category_features = ["channelGrouping", "device.browser",
"device.deviceCategory", "device.operatingSystem",
"geoNetwork.city", "geoNetwork.continent",
"geoNetwork.country", "geoNetwork.metro",
"geoNetwork.networkDomain", "geoNetwork.region",
"geoNetwork.subContinent",
#"trafficSource.adContent",
#"trafficSource.adwordsClickInfo.adNetworkType",
#"trafficSource.adwordsClickInfo.gclId",
#"trafficSource.adwordsClickInfo.page",
#"trafficSource.adwordsClickInfo.slot",
#"trafficSource.campaign",
#"trafficSource.keyword",
"trafficSource.medium",
#"trafficSource.referralPath",
"trafficSource.source",
#'trafficSource.adwordsClickInfo.isVideoAd',
'trafficSource.isTrueDirect',
#"filtered_keyword"
] + date_features
target = 'totals.transactionRevenue'
useless_col = ["trafficSource.adContent",
"trafficSource.adwordsClickInfo.adNetworkType",
"trafficSource.adwordsClickInfo.page",
"trafficSource.adwordsClickInfo.slot",
"trafficSource.campaign",
"trafficSource.referralPath",
'trafficSource.adwordsClickInfo.isVideoAd',
"trafficSource.adwordsClickInfo.gclId",
"trafficSource.keyword"]
train_df.head()
```
## Useless features
```
useless_df = train_df[useless_col]
useless_df.info()
for col in useless_col:
print("-"*10,col,"-"*10)
print("unique value numbers:",len(useless_df[col].unique()))
print("null rate:",useless_df[col].isna().sum()/len(useless_df[col]))
for col in category_features:
print("-"*10,col,"-"*10)
print("unique value numbers:",len(train_df[col].unique()))
print("null rate:",train_df[col].isna().sum()/len(train_df[col]))
train_df[target] = train_df[target].fillna("0").astype("int32")
all_features = category_features+num_col+new_features+new_category_features
all_features
# dev_df = train_df[train_df['date']<=pd.to_datetime('20170531', format='%Y%m%d')]
# val_df = train_df[train_df['date']>pd.to_datetime('20170531', format='%Y%m%d')]
# dev_x = dev_df[all_features]
# dev_y = dev_df[target]
# val_x = val_df[all_features]
# val_y = val_df[target]
# test_x = test_df[all_features]
# for col in category_features:
# print("transform column {}".format(col))
# lbe = LabelEncoder()
# lbe.fit(pd.concat([train_df[col],test_x[col]]).astype("str"))
# dev_x[col] = lbe.transform(dev_x[col].astype("str"))
# val_x[col] = lbe.transform(val_x[col].astype("str"))
# test_x[col] = lbe.transform(test_x[col].astype("str"))
```
# Hits and Pageviews
## totals.hits
```
train_df["totals.hits"].describe()
sns.distplot(train_df["totals.hits"],kde=False)
```
Most hits values are less than 4.
## totals.pageviews
```
train_df["totals.pageviews"].describe()
sns.distplot(train_df["totals.pageviews"],kde=False)
```
Pageviews has a similar distribution to hits.
```
sns.jointplot("totals.pageviews","totals.hits",data=train_df)
sns.jointplot("totals.pageviews",target,data=train_df)
sns.jointplot("totals.hits",target,data=train_df)
```
**Filter out high hits or pageviews record**
```
line = 4
high_hits_pageviews_df = train_df[np.logical_or(train_df["totals.hits"]>line,train_df["totals.pageviews"]>line)]
low_hits_pageviews_df = train_df[np.logical_and(train_df["totals.hits"]<=line,train_df["totals.pageviews"]<=line)]
print("high rate :",high_hits_pageviews_df.shape[0]/train_df.shape[0])
print("low rate :",low_hits_pageviews_df.shape[0]/train_df.shape[0])
high_hits_pageviews_df[target].describe()
low_hits_pageviews_df[target].describe()
fig,axes = plt.subplots(nrows=1,ncols=2,figsize=(16,8))
sns.distplot(high_hits_pageviews_df[target],kde=False,ax=axes[0])
axes[0].set_title("distribution of high hits transactionRevenue")
sns.distplot(low_hits_pageviews_df[target],kde=False,ax=axes[1])
axes[1].set_title("distribution of low hits transactionRevenue")
print("zero rate of transactionRevenue:",(train_df[target]==0).sum()/train_df.shape[0])
print("zero rate of high hits transactionRevenue:",(high_hits_pageviews_df[target]==0).sum()/ high_hits_pageviews_df.shape[0])
print("zero rate of low hits transactionRevenue:",(low_hits_pageviews_df[target]==0).sum()/ low_hits_pageviews_df.shape[0])
```
Records with high hits and pageviews have a low zero-revenue rate, while records with low hits and pageviews have a high zero-revenue rate.
**How about hits per pageview?**
```
train_df["hits_per_pageviews"].describe()
sns.jointplot("hits_per_pageviews",target,data=train_df)
```
**Records with high `hits_per_pageviews` have zero revenue, so this may be a useful feature.**
# visitStartTime
```
visitStartTime_df = train_df[["visitStartTime",'visitStartTime_year',"visitStartTime_month","visitStartTime_day","visitStartTime_weekday",target]]
visitStartTime_df["visitStartTime"] = pd.to_datetime(visitStartTime_df["visitStartTime"],unit="s")
visitStartTime_df["visitStartDate"] = visitStartTime_df["visitStartTime"].apply(lambda x: x.date())
def plot_dist_date(col,kind="bar"):
fig,axes = plt.subplots(nrows=1,ncols=2,figsize=(12,6))
visitStartTime_df.groupby(col)[target].agg(["sum"]).plot(kind=kind,title="sum of transactionRevenue:"+col,ax=axes[0])
visitStartTime_df.groupby(col)[target].agg(["count"]).plot(kind=kind,title="count of transactionRevenue:"+col,ax=axes[1])
plt.tight_layout()
plot_dist_date("visitStartDate",kind="line")
```
When I convert `visitStartTime` to a numerical type, the feature gets high importance, which does not make sense. However, when I tried removing the feature, the LB score decreased.
**What is happening?**
```
test_visitStartTime_df = test_df[["visitStartTime",'visitStartTime_year',"visitStartTime_month","visitStartTime_day","visitStartTime_weekday"]]
test_visitStartTime_df["visitStartTime"] = pd.to_datetime(test_visitStartTime_df["visitStartTime"],unit="s")
test_visitStartTime_df["visitStartDate"] = test_visitStartTime_df["visitStartTime"].apply(lambda x: x.date())
test_visitStartTime_df.groupby("visitStartDate")["visitStartTime"].agg("count").plot(figsize=(8,6),title="count of test")
```
train_data: (2016-08, 2017-08)
test_data: (2017-08, 2018-05)
The count distribution of the test data is different from that of the train data:
* the year feature may be useless
```
plot_dist_date("visitStartTime_year")
plot_dist_date("visitStartTime_month")
plot_dist_date("visitStartTime_day")
```
Transaction counts are roughly uniform across days of the month, except for the 31st, since many months have no 31st day.
Transaction revenue is not uniformly distributed, so **visitStartTime_day may be a useful feature.**
```
plot_dist_date("visitStartTime_weekday")
```
# trafficSource.keyword
Transaction revenue must be related to the goods purchased, and the search keyword may be related to the goods.
Let's analyse it further.
```
col = "trafficSource.cleanedkeyword"
train_df[col] = train_df["trafficSource.keyword"].apply(lambda x: x if isinstance(x,float) and np.isnan(x) else x.lower().replace("+", ""))
test_df[col] = test_df["trafficSource.keyword"].apply(lambda x: x if isinstance(x,float) and np.isnan(x) else x.lower().replace("+", ""))  # same cleanup as train
print("-"*10,"train","-"*10)
print("unique value numbers:",len(train_df[col].unique()))
print("null rate:",train_df[col].isna().sum()/len(train_df[col]))
print("-"*10,"test","-"*10)
print("unique value numbers:",len(test_df[col].unique()))
print("null rate:",test_df[col].isna().sum()/len(test_df[col]))
pd.value_counts(train_df[col]).sort_values(ascending=False)[0:20]
pd.value_counts(test_df[col]).sort_values(ascending=False)[0:20]
train_df.groupby(col)[target].agg("sum").sort_values(ascending=False)[0:29].apply(lambda x: np.log1p(x)).plot(kind="bar",figsize=(20,5),title="sum revenue of keyword")
```
Only 3 keywords have non-zero revenue.
Do these keywords occur in the test data?
How should I encode this feature?
```
none_zero_keywords= set(train_df.groupby(col)[target].agg("sum").sort_values(ascending=False)[0:28].index)
test_keywords_set = set(test_df[col].unique())
intersection_keyword = none_zero_keywords.intersection(test_keywords_set)
print("len:",len(intersection_keyword))
intersection_keyword
train_df.groupby(col)[target].agg("sum").sort_values(ascending=False)[0:29].apply(lambda x: np.log1p(x)).plot(kind="bar",figsize=(20,5),title="zero revenue of keyword")
def add_keyword_feature(df):
col_name ="filtered_keyword"
sets = intersection_keyword.difference({'(automatic matching)',
'(not provided)',
'(remarketing/content targeting)'})
df[col_name] = df[col].apply(lambda x: x if x in sets else "other")
add_keyword_feature(train_df)
add_keyword_feature(test_df)
# no improvement, something is wrong
none_zero_keywords.difference(test_keywords_set)
train_df.groupby(col)[target].agg("count").sort_values(ascending=False)[0:40].apply(lambda x: np.log1p(x)).plot(kind="bar",figsize=(20,5),title="count revenue of keyword")
```
# LGBM
```
train_x = train_df[all_features]
train_y = train_df[target]
test_x = test_df[all_features]
for col in category_features:
print("transform column {}".format(col))
lbe = LabelEncoder()
lbe.fit(pd.concat([train_df[col],test_x[col]]).astype("str"))
train_x[col] = lbe.transform(train_x[col].astype("str"))
test_x[col] = lbe.transform(test_x[col].astype("str"))
def lgb_eval(num_leaves,max_depth,lambda_l2,lambda_l1,min_child_samples,bagging_fraction,feature_fraction):
params = {
"objective" : "regression",
"metric" : "rmse",
"num_leaves" : int(num_leaves),
"max_depth" : int(max_depth),
"lambda_l2" : lambda_l2,
"lambda_l1" : lambda_l1,
"num_threads" : 4,
"min_child_samples" : int(min_child_samples),
"learning_rate" : 0.03,
"bagging_fraction" : bagging_fraction,
"feature_fraction" : feature_fraction,
"subsample_freq" : 5,
"bagging_seed" : 42,
"verbosity" : -1
}
lgtrain = lgb.Dataset(train_x, label=np.log1p(train_y.apply(lambda x : 0 if x < 0 else x)),categorical_feature=category_features)
cv_result = lgb.cv(params,
lgtrain,
10000,
categorical_feature=category_features,
early_stopping_rounds=100,
stratified=False,
nfold=5)
return -cv_result['rmse-mean'][-1]
def lgb_train(num_leaves,max_depth,lambda_l2,lambda_l1,min_child_samples,bagging_fraction,feature_fraction):
params = {
"objective" : "regression",
"metric" : "rmse",
"num_leaves" : int(num_leaves),
"max_depth" : int(max_depth),
"lambda_l2" : lambda_l2,
"lambda_l1" : lambda_l1,
"num_threads" : 4,
"min_child_samples" : int(min_child_samples),
"learning_rate" : 0.01,
"bagging_fraction" : bagging_fraction,
"feature_fraction" : feature_fraction,
"subsample_freq" : 5,
"bagging_seed" : 42,
"verbosity" : -1
}
t_x,v_x,t_y,v_y = train_test_split(train_x,train_y,test_size=0.2)
lgtrain = lgb.Dataset(t_x, label=np.log1p(t_y.apply(lambda x : 0 if x < 0 else x)),categorical_feature=category_features)
lgvalid = lgb.Dataset(v_x, label=np.log1p(v_y.apply(lambda x : 0 if x < 0 else x)),categorical_feature=category_features)
model = lgb.train(params, lgtrain, 2000, valid_sets=[lgvalid], early_stopping_rounds=100, verbose_eval=100)
pred_test_y = model.predict(test_x, num_iteration=model.best_iteration)
return pred_test_y, model
def param_tuning(init_points,num_iter,**args):
lgbBO = BayesianOptimization(lgb_eval, {'num_leaves': (25, 50),
'max_depth': (5, 15),
'lambda_l2': (0.0, 0.05),
'lambda_l1': (0.0, 0.05),
'bagging_fraction': (0.5, 0.8),
'feature_fraction': (0.5, 0.8),
'min_child_samples': (20, 50),
})
lgbBO.maximize(init_points=init_points, n_iter=num_iter,**args)
return lgbBO
result = param_tuning(5,15)
result.res['max']['max_params']
prediction1,model1 = lgb_train(**result.res['max']['max_params'])
prediction2,model2 = lgb_train(**result.res['max']['max_params'])
prediction3,model3 = lgb_train(**result.res['max']['max_params'])
# param = {'num_leaves': 45.61216380347129,
# 'max_depth': 11.578579827303919,
# 'lambda_l2': 0.0107663924764632,
# 'lambda_l1': 0.046541310399201855,
# 'bagging_fraction': 0.7851516324443661,
# 'feature_fraction': 0.7944881085591733,
# 'min_child_samples': 28.5601473698899}
# prediction,model = lgb_train(**param)
test_df['PredictedLogRevenue'] = (np.expm1(prediction1)+np.expm1(prediction2)+np.expm1(prediction3))/3
submit = test_df[['fullVisitorId', 'PredictedLogRevenue']].groupby('fullVisitorId').sum()['PredictedLogRevenue'].apply(np.log1p).fillna(0).reset_index()
submit.to_csv('submission.csv', index=False)
# feature importance of the first trained model
lgb.plot_importance(model1, figsize=(15, 10), height=0.8)
plt.show()
lgb.plot_importance(model1, figsize=(15, 10), height=0.8, importance_type="gain")
plt.show()
```
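A side note on the blending above, with made-up numbers: the three predictions are converted back to revenue space with `expm1` before averaging, because averaging log-space predictions and then exponentiating gives a different (smaller) result:

```python
import numpy as np

log_preds = np.array([1.0, 2.0, 3.0])              # three models' log1p-revenue predictions
avg_in_revenue_space = np.expm1(log_preds).mean()  # convert each, then average (as above)
avg_in_log_space = np.expm1(log_preds.mean())      # average first, then convert
print(avg_in_revenue_space > avg_in_log_space)     # → True (expm1 is convex)
```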
## If this kernel is useful to you, please give it a vote. This is my first in-depth EDA kernel. Your vote is my inspiration.
# Day 2, session 3: Detecting features in Hi-C maps
In this session we will be looking at ways to automatically find regions with features of interest.
This includes both supervised and unsupervised methods depending on the question.
## Unsupervised detection
### Differential contacts
The classic approach, much like in differential gene expression analysis, is to look at regions with differing contact counts.
There are some well-established tools for this type of analysis, like diffHic or ACCOST.
### Structural changes
A more recent approach, implemented in CHESS (Python package chess-hic), uses the notion of structural changes. It is also unsupervised, but attempts to find differential features as "vignettes" in the map.
Those features can then be clustered by similarity so that the user can identify what they represent (loops, stripes, borders, ...)
More info about CHESS in the official docs: https://chess-hic.readthedocs.io/en/latest/?badge=latest
## Supervised pattern detection
There are many methods to detect a specific type of pattern, especially loops and TADs.
Most of these tools are listed here: https://github.com/mdozmorov/HiC_tools#loop-callers
### Chromosight
Much like other tools, Chromosight can detect patterns in Hi-C contact maps. Instead of being limited to loops or TADs, it uses template matching (a computer vision algorithm) to detect various patterns.
More info about chromosight in the official docs: https://chromosight.readthedocs.io/en/latest/
```
%%bash
chromosight detect --pattern=loops -p 0.4 --min-dist 5000 --max-dist 200_000 \
--perc-undetected=50 --perc-zero=10 data/G1_2kb.cool data/G1_loops_p05
chromosight detect --pattern=loops -p 0.4 --min-dist 5000 --max-dist 200_000 \
--perc-undetected=50 --perc-zero=10 data/M_2kb.cool data/M_loops_p05
import cooler
import chromosight.utils.preprocessing as cup
import os
import pandas as pd
bank = "M"
chrom = "chr15"
clr = cooler.Cooler("data/" + bank + "_2kb.cool")
M = clr.matrix(sparse=True, balance=True).fetch(chrom)
M_det = cup.detrend(M).toarray()
M = M.toarray()
data = pd.read_csv("data/" + bank + "_loops_p05.tsv", sep='\t')
data = data[data["chrom1"] == chrom]
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
mat = M
plt.subplots(figsize=(10,10))
plt.imshow(mat, vmax=np.nanpercentile(mat, 99.), cmap="Reds")
plt.colorbar()
plt.scatter(data.start2 / 2000, data.start1 / 2000, color='none', edgecolors='yellow')
```
## Using external tracks
Often instead of detecting those features solely from the Hi-C signal, we use external tracks such as ChIP-seq of proteins of interest (cohesin for example) to find the regions.
This can be used for example to train machine learning methods such as [Peakachu](https://github.com/tariks/peakachu)
There are also helpful tools like [coolpuppy](https://github.com/open2c/coolpuppy) which can do 2D aggregation of Hi-C maps using 1D signals.
```
%%script false --no-raise-error
%%bash
coolpup.py --maxdist 200000 \
--mindist 5000 \
data/M_2kb.cool \
data/scc1_chip_seq.bed
```
Chromosight can also quantify correlation scores with a given pattern at 2D positions given by a BED2D file.
```
%%bash
# We first need to get 1D positions into 2D combinations
# E.g. to generate all combinations of positions spaced by more than 5kb but less than 200kb:
MINDIST=5000
MAXDIST=200000
BED=data/scc1_chip_seq.bed
bedtools window -a $BED -b $BED -w $MAXDIST |
awk -vmd=$MINDIST '$1 == $4 && ($5 - $2) >= md {print}' |
sort -k1,1 -k2,2n -k4,4 -k5,5n > data/scc1_chip.bed2d
```
Now we can quantify loop scores at those combination of Scc1 sites between the two conditions:
```
%%bash
SCC1=data/scc1_chip.bed2d
G1=data/G1_2kb.cool
M=data/M_2kb.cool
chromosight quantify -w npy --pattern=loops -z 50 $SCC1 $G1 data/loops_scc1_g1
chromosight quantify -w npy --pattern=loops -z 50 $SCC1 $M data/loops_scc1_m
```
To load the tabular output, we can use pandas dataframes:
```
loops_scc1_g1 = pd.read_csv('data/loops_scc1_g1.tsv', sep='\t')
loops_scc1_m = pd.read_csv('data/loops_scc1_m.tsv', sep='\t')
loops_scc1_m.head()
#%matplotlib notebook
fig, ax = plt.subplots(2, 1, sharex=True)
_ = ax[0].hist(loops_scc1_g1.score, 100)
_ = ax[1].hist(loops_scc1_m.score, 100)
ax[0].set_title("G1 loop scores")
ax[1].set_title("M loop scores")
plt.suptitle("Loop scores at combinations of nanog-bound sites")
```
Comparing the distributions of loop scores does not reveal major differences.
Let's import the individual matrix slices around those combinations to inspect them separately at different distances.
```
scc1_g1_imgs = np.load('data/loops_scc1_g1.npy')
scc1_m_imgs = np.load('data/loops_scc1_m.npy')
#%matplotlib notebook
dist_ranges = [5000, 50_000, 100_000, 500_000]
loop_sizes = np.abs(loops_scc1_g1.start2 - loops_scc1_g1.start1)
pileup = lambda arr: np.apply_along_axis(np.nanmedian, 0, arr)
fig, ax = plt.subplots(2, len(dist_ranges)-1, figsize=(10,10))
for i in range(len(dist_ranges)-1):
# Check which loops are in the distance range
in_range = (loop_sizes >= dist_ranges[i]) & (loop_sizes < dist_ranges[i+1])
    # Compute pileups (per-pixel median across the image stacks)
g1_pu = pileup(scc1_g1_imgs[in_range, :, :])
m_pu = pileup(scc1_m_imgs[in_range, :, :])
# Plot heatmaps for each distance range
ax[0, i].imshow(g1_pu, cmap='seismic', vmin=0.5, vmax=1)
ax[1, i].imshow(m_pu, cmap='seismic', vmin=0.7, vmax=1.15)
# Add title to heatmaps
ax[0, i].set_title(f'G1, {int(dist_ranges[i]/1000)}kb')
ax[1, i].set_title(f'M, {int(dist_ranges[i]/1000)}kb')
```
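A note on the `pileup` helper used above: taking the per-pixel `nanmedian` along axis 0 collapses the stack of loop windows into one median image, and is equivalent to `np.nanmedian(stack, axis=0)`. A toy stack for illustration:

```python
import numpy as np

stack = np.array([[[1.0, 2.0], [3.0, 4.0]],
                  [[5.0, 6.0], [np.nan, 8.0]]])  # 2 windows of shape 2x2
pileup = lambda arr: np.apply_along_axis(np.nanmedian, 0, arr)
# NaNs (masked/unbalanced pixels) are ignored pixel-by-pixel
print(pileup(stack))  # same result as np.nanmedian(stack, axis=0)
```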
Here, we do not see a strong loop score or loop pattern associated with the ChIP-seq peaks. This might be because the ChIP-seq comes from yeast arrested in G1 and then released for 60 minutes to reach M phase, whereas the Hi-C matrices come from an experiment where the yeast was arrested in metaphase with nocodazole.
## Import modules
```
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import layers
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import json
from tqdm import tqdm
```
## Visualization function
```
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string], '')
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
```
## Define training data paths
```
DATA_IN_PATH = './data_in/'
DATA_OUT_PATH = './data_out/'
INPUT_TRAIN_DATA = 'nsmc_train_input.npy'
LABEL_TRAIN_DATA = 'nsmc_train_label.npy'
DATA_CONFIGS = 'data_configs.json'
```
## Fix the random seed
```
SEED_NUM = 1234
tf.random.set_seed(SEED_NUM)
```
## Load files
```
train_input = np.load(open(DATA_IN_PATH + INPUT_TRAIN_DATA, 'rb'))
train_label = np.load(open(DATA_IN_PATH + LABEL_TRAIN_DATA, 'rb'))
prepro_configs = json.load(open(DATA_IN_PATH + DATA_CONFIGS, 'r'))
```
## Define model hyperparameters
```
model_name = 'cnn_classifier_kr'
BATCH_SIZE = 512
NUM_EPOCHS = 10
VALID_SPLIT = 0.1
MAX_LEN = train_input.shape[1]
kargs = {'model_name': model_name,
'vocab_size': prepro_configs['vocab_size'],
'embedding_size': 128,
'num_filters': 100,
'dropout_rate': 0.5,
'hidden_dimension': 250,
'output_dimension':1}
```
## Declare and compile the model
```
class CNNClassifier(tf.keras.Model):
def __init__(self, **kargs):
super(CNNClassifier, self).__init__(name=kargs['model_name'])
self.embedding = layers.Embedding(input_dim=kargs['vocab_size'],
output_dim=kargs['embedding_size'])
self.conv_list = [layers.Conv1D(filters=kargs['num_filters'],
kernel_size=kernel_size,
padding='valid',
activation=tf.keras.activations.relu,
kernel_constraint=tf.keras.constraints.MaxNorm(max_value=3.))
for kernel_size in [3,4,5]]
self.pooling = layers.GlobalMaxPooling1D()
self.dropout = layers.Dropout(kargs['dropout_rate'])
self.fc1 = layers.Dense(units=kargs['hidden_dimension'],
activation=tf.keras.activations.relu,
kernel_constraint=tf.keras.constraints.MaxNorm(max_value=3.))
self.fc2 = layers.Dense(units=kargs['output_dimension'],
activation=tf.keras.activations.sigmoid,
kernel_constraint=tf.keras.constraints.MaxNorm(max_value=3.))
def call(self, x):
x = self.embedding(x)
x = self.dropout(x)
x = tf.concat([self.pooling(conv(x)) for conv in self.conv_list], axis=-1)
x = self.fc1(x)
x = self.fc2(x)
return x
model = CNNClassifier(**kargs)
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.BinaryAccuracy(name='accuracy')])
```
## Declare callbacks
```
# add early stopping to prevent overfitting
earlystop_callback = EarlyStopping(monitor='val_accuracy', min_delta=0.0001, patience=2)
# min_delta: the threshold that triggers termination (accuracy should improve by at least 0.0001)
# patience: number of epochs with no improvement to wait before stopping (patience=1 stops after 1 epoch without improvement)
checkpoint_path = DATA_OUT_PATH + model_name + '/weights.h5'
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create the checkpoint directory if it does not exist
if os.path.exists(checkpoint_dir):
    print("{} -- Folder already exists \n".format(checkpoint_dir))
else:
    os.makedirs(checkpoint_dir, exist_ok=True)
    print("{} -- Folder created \n".format(checkpoint_dir))
cp_callback = ModelCheckpoint(
checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, save_weights_only=True)
```
## Train the model
```
history = model.fit(train_input, train_label, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS,
validation_split=VALID_SPLIT, callbacks=[earlystop_callback, cp_callback])
```
## Plot results
```
plot_graphs(history, 'loss')
plot_graphs(history, 'accuracy')
```
## Evaluate results
```
DATA_OUT_PATH = './data_out/'
INPUT_TEST_DATA = 'nsmc_test_input.npy'
LABEL_TEST_DATA = 'nsmc_test_label.npy'
SAVE_FILE_NM = 'weights.h5' # name of the saved best model
test_input = np.load(open(DATA_IN_PATH + INPUT_TEST_DATA, 'rb'))
test_input = pad_sequences(test_input, maxlen=test_input.shape[1])
test_label_data = np.load(open(DATA_IN_PATH + LABEL_TEST_DATA, 'rb'))
model.load_weights(os.path.join(DATA_OUT_PATH, model_name, SAVE_FILE_NM))
model.evaluate(test_input, test_label_data)
```
<center>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# K-Nearest Neighbors
Estimated time needed: **25** minutes
## Objectives
After completing this lab you will be able to:
- Use K Nearest neighbors to classify data
In this Lab you will load a customer dataset, fit the data, and use K-Nearest Neighbors to predict a data point. But what is **K-Nearest Neighbors**?
**K-Nearest Neighbors** is an algorithm for supervised learning, in which the data is 'trained' with data points corresponding to their classification. To predict a new point, the algorithm considers the 'K' nearest points to it to determine its classification.
### Here's a visualization of the K-Nearest Neighbors algorithm.
<img src="https://ibm.box.com/shared/static/mgkn92xck0z05v7yjq8pqziukxvc2461.png">
In this case, we have data points of Class A and B. We want to predict what the star (test data point) is. If we consider a k value of 3 (3 nearest data points) we will obtain a prediction of Class B. Yet if we consider a k value of 6, we will obtain a prediction of Class A.
In this sense, it is important to consider the value of k. But hopefully from this diagram, you should get a sense of what the K-Nearest Neighbors algorithm is. It considers the 'K' Nearest Neighbors (points) when it predicts the classification of the test point.
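The voting idea described above can be sketched as a minimal k-nearest-neighbors classifier. This is an illustrative sketch with made-up points, not the scikit-learn implementation used later in this lab:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to each training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority class among the neighbors

# Two classes on a line: a query at 0.9 sits among the Class B points
X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]])
y = np.array(['A', 'A', 'A', 'B', 'B', 'B'])
print(knn_predict(X, y, np.array([0.9]), k=3))  # → B
```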
<h1>Table of contents</h1>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ol>
<li><a href="#about_dataset">About the dataset</a></li>
<li><a href="#visualization_analysis">Data Visualization and Analysis</a></li>
<li><a href="#classification">Classification</a></li>
</ol>
</div>
<br>
<hr>
Let's load the required libraries
```
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import matplotlib.ticker as ticker
from sklearn import preprocessing
%matplotlib inline
```
<div id="about_dataset">
<h2>About the dataset</h2>
</div>
Imagine a telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. If demographic data can be used to predict group membership, the company can customize offers for individual prospective customers. It is a classification problem. That is, given the dataset, with predefined labels, we need to build a model to be used to predict the class of a new or unknown case.
The example focuses on using demographic data, such as region, age, and marital status, to predict usage patterns.
The target field, called **custcat**, has four possible values that correspond to the four customer groups, as follows:
1- Basic Service
2- E-Service
3- Plus Service
4- Total Service
Our objective is to build a classifier, to predict the class of unknown cases. We will use a specific type of classification called K nearest neighbour.
Let's download the dataset. We will use urllib to download it from IBM Object Storage.
```
import urllib.request
url = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/teleCust1000t.csv'
filename = 'teleCust1000t.csv'
urllib.request.urlretrieve(url, filename)
```
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
### Load Data From CSV File
```
df = pd.read_csv('teleCust1000t.csv')
df.head()
```
<div id="visualization_analysis">
<h2>Data Visualization and Analysis</h2>
</div>
#### Let’s see how many of each class is in our data set
```
df['custcat'].value_counts()
```
#### 281 Plus Service, 266 Basic-service, 236 Total Service, and 217 E-Service customers
You can easily explore your data using visualization techniques:
```
df.hist(column='income', bins=50)
```
### Feature set
Let's define our feature set, X:
```
df.columns
```
To use the scikit-learn library, we have to convert the Pandas data frame to a Numpy array:
```
X = df[['region', 'tenure','age', 'marital', 'address', 'income', 'ed', 'employ','retire', 'gender', 'reside']].values #.astype(float)
X[0:5]
```
What are our labels?
```
y = df['custcat'].values
y[0:5]
```
## Normalize Data
Data standardization gives the data zero mean and unit variance. It is good practice, especially for distance-based algorithms such as KNN:
```
X = preprocessing.StandardScaler().fit(X).transform(X.astype(float))
X[0:5]
```
### Train Test Split
Out-of-sample accuracy is the percentage of correct predictions that the model makes on data that the model has NOT been trained on. Doing a train and test on the same dataset will most likely yield low out-of-sample accuracy, due to the likelihood of over-fitting.
It is important that our models have high out-of-sample accuracy, because the purpose of any model is to make correct predictions on unknown data. So how can we improve out-of-sample accuracy? One way is to use an evaluation approach called Train/Test Split.
Train/Test Split involves splitting the dataset into training and testing sets, which are mutually exclusive. You then train with the training set and test with the testing set.
This provides a more accurate evaluation of out-of-sample accuracy, because the testing set is not part of the data used to train the model. It is more realistic for real-world problems.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
```
<div id="classification">
<h2>Classification</h2>
</div>
<h3>K nearest neighbor (KNN)</h3>
#### Import library
Classifier implementing the k-nearest neighbors vote.
```
from sklearn.neighbors import KNeighborsClassifier
```
### Training
Let's start the algorithm with k=4 for now:
```
k = 4
#Train Model and Predict
neigh = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)
neigh
```
### Predicting
We can use the model to predict the test set:
```
yhat = neigh.predict(X_test)
yhat[0:5]
```
### Accuracy evaluation
In multilabel classification, the **accuracy classification score** is a function that computes subset accuracy. This function is equivalent to the jaccard_similarity_score function. Essentially, it calculates how closely the actual labels and the predicted labels match in the test set.
```
from sklearn import metrics
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat))
```
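Subset accuracy, as computed by `accuracy_score` above, is simply the fraction of samples whose predicted label exactly matches the true label. A hand-rolled version with toy labels:

```python
import numpy as np

y_true = np.array([1, 2, 3, 4, 2])  # toy true labels
y_pred = np.array([1, 2, 4, 4, 2])  # toy predicted labels
accuracy = np.mean(y_true == y_pred)  # fraction of exact matches
print(accuracy)  # → 0.8
```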
## Practice
Can you build the model again, but this time with k=6?
```
# write your code here
```
Double-click **here** for the solution.
<!-- Your answer is below:
k = 6
neigh6 = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)
yhat6 = neigh6.predict(X_test)
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh6.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat6))
-->
#### What about other K?
K in KNN is the number of nearest neighbors to examine, and it must be specified by the user. So, how can we choose the right value for K?
The general solution is to reserve a part of your data for testing the accuracy of the model. Then choose k=1, use the training part for modeling, and calculate the accuracy of prediction using all samples in your test set. Repeat this process, increasing k, and see which k is best for your model.
We can calculate the accuracy of KNN for different Ks.
```
Ks = 10
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
for n in range(1,Ks):
#Train Model and Predict
neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
yhat=neigh.predict(X_test)
mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)
std_acc[n-1]=np.std(yhat==y_test)/np.sqrt(yhat.shape[0])
mean_acc
```
#### Plot model accuracy for Different number of Neighbors
```
plt.plot(range(1,Ks),mean_acc,'g')
plt.fill_between(range(1,Ks),mean_acc - 1 * std_acc,mean_acc + 1 * std_acc, alpha=0.10)
plt.legend(('Accuracy ', '+/- 1 std'))
plt.ylabel('Accuracy ')
plt.xlabel('Number of Neighbors (K)')
plt.tight_layout()
plt.show()
print( "The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1)
```
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="https://www.ibm.com/analytics/spss-statistics-software">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://www.ibm.com/cloud/watson-studio">Watson Studio</a>
### Thank you for completing this lab!
## Author
Saeed Aghabozorgi
### Other Contributors
<a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a>
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ---------------------------------- |
| 2020-08-27 | 0.1 | Lavanya | Moved lab to course repo in GitLab |
| | | | |
| | | | |
<h3 align="center"> © IBM Corporation 2020. All rights reserved. </h3>
# Clean Slate: Estimating offenses eligible for expungement under varying conditions
> Prepared by [Laura Feeney](https://github.com/laurafeeney) for Code for Boston's [Clean Slate project](https://github.com/codeforboston/clean-slate).
## Summary
This notebook takes somewhat processed data from the Middlesex DA and attempts to answer how many individuals may be eligible for expungement under varying conditions.
This dataset does not contain any information to identify specific individuals across multiple cases. We can see what charges are heard in Juvenile court, but we do not otherwise have an indicator of age.
So, we can provide a count and % of incidents heard in Juvenile court that are expungeable.
### Original Questions
1. How many people (under age 21) are eligible for expungement today? This would be people with only **one charge** that is not part of the list of ineligible offenses (per section 100J).
2. How many people (under age 21) would be eligible based on only having **one incident** (which could include multiple charges) that are not part of the list of ineligible offenses?
- How many people (under age 21) would be eligible based on only having **one incident** if only sex-based offenses or murder were excluded from expungement?
3. How many people (under age 21) would be eligible based on who has **not been found guilty** (given current offenses that are eligible for expungement)?
- How many people (under age 21) would be eligible based on who has **not been found guilty** for all offenses except for murder or sex-based offenses?
-----
### Step 0
Import data, programs, etc.
-----
```
import pandas as pd
pd.set_option("display.max_rows", 200)
import numpy as np
import regex as re
import glob, os
import datetime
from datetime import date
from collections import defaultdict, Counter
# processed individual-level data from MS district with expungability.
ms = pd.read_csv('../../data/processed/merged_ms.csv', encoding='utf8',
dtype={'Analysis notes':str, 'extra_criteria':str, 'Expungeable': str}, low_memory=False)
ms_original = ms
ms.loc[ms.Expungeable =='m', 'Expungeable'] = None
print("Middlesex Expungement Counts")
a = ms['Expungeable'].value_counts(dropna=False).rename_axis('count').to_frame('counts')
b = ms['Expungeable'].value_counts(dropna=False, normalize = True).rename_axis('percent').to_frame('percent')
exp_stats = pd.concat([a, b], axis=1)
exp_stats.style.format({ 'counts' : '{:,}', 'percent' : '{:,.1%}'})
ms['offenses_per_case']=ms.groupby('Case Number')['Case Number'].transform('count')
ms.info()
# Only indication of juvenile is if tried in juvenile court. Looks like no cases are heard in 2 courts (presumably would get
# a different case number)
ms['juvenile'] = ms.groupby('Case Number')['JuvenileC'].transform('max')
pd.crosstab(ms['JuvenileC'], ms['juvenile'])
```
## Step 0.5 - Prepare data
- Drop CMR offenses
- Prepare dates, date since offense
- Generate indicators for incidents, and code incidents as expungeable, sex-related etc
- Generate indicator for found guilty / not found guilty
**CMR** : There are many offenses that are violations of the Code of Massachusetts Regulations (CMR) rather than a criminal offense. These include things like some driving or boating infractions (e.g., not having headlights on), or not having a hunting/fishing license. Per conversations with Sana, dropping all CMR offenses.
```
### dates ###
# The file source said, "The following is data from our Damion Case Management
# System pertaining to prosecution statistics for the time period from
# January 1, 2014, through January 1, 2020."
reference_date = datetime.date(2020, 9, 1) # fixed reference date; date.today() would not be reproducible
ms['Offense Date'] = pd.to_datetime(ms['Offense Date']).dt.date
ms = ms[~ms['Offense Date'].isnull()]
offenses_2014_2019 = ms['Offense Date'].loc[
(ms['Offense Date'] >= datetime.date(2014, 1, 1)) &
(ms['Offense Date'] <= datetime.date(2019, 12, 31))].count()
Percent_14_19 = "{:.1%}".format(offenses_2014_2019/ms['Offense Date'].count())
print(Percent_14_19, 'of offenses are between Jan 1, 2014 and Dec 31, 2019')
print("The earliest offense date is", min(ms['Offense Date']))
print("The max offense date is", max(ms['Offense Date']), "\n")
# years between the offense and the reference date
ms['years_since_offense'] = (reference_date - ms['Offense Date']).apply(lambda d: d.days / 365.25)
print(ms['years_since_offense'].describe())
# CMR offenses -- Drop all CMR offenses and Drop CMR-related columns
print(f'There are {ms.shape[0]} total offenses including CMR.')
ms = ms.loc[ms['CMRoffense'] == False]
ms = ms.drop(columns = ['CMRoffense'])
print(f'After we drop CMR, there are {ms.shape[0]} total offenses.')
# Check that the 'expungeable' column no longer has CMRs
print("Middlesex Expungement Counts")
a = ms['Expungeable'].value_counts(dropna=False).rename_axis('count').to_frame('counts')
b = ms['Expungeable'].value_counts(dropna=False, normalize = True).rename_axis('percent').to_frame('percent')
exp_stats = pd.concat([a, b], axis=1)
exp_stats.style.format({ 'counts' : '{:,}', 'percent' : '{:,.1%}'})
#Data prep.
# We only have Case Number, and cases are all for an offense on the same date.
# If an incident includes one offense that is not expungeable, we mark the entire incident as not expungeable.
#Attempts *are not* considered expungeable in this one.
ms['Exp'] = ms['Expungeable']=="Yes"
ms['Inc_Expungeable_Attempts_Not'] = ms.groupby(['Case Number'])['Exp'].transform('min')
# If an incident includes one offense that is not expungeable, we mark the entire incident as not expungeable.
#Attempts *are* considered expungeable in this one.
ms['ExpAtt'] = (ms['Expungeable']=="Yes") | (ms['Expungeable']=="Attempt")
ms['Inc_Expungeable_Attempts_Are'] = ms.groupby(['Case Number'])['ExpAtt'].transform('min')
# If an incident includes an offense that is a murder and/or sex crime, we code the whole incident as regarding
# murder and/or sex.
ms['sm'] = (ms['sex'] == 1) | (ms['murder'] ==1)
ms['Incident_Murder_Sex'] = ms.groupby(['Case Number'])['sm'].transform('max')
#unneeded calculation columns
ms = ms.drop(columns=['Exp', 'ExpAtt', 'sm'])
sorted(ms['Disposition Description'].unique())
guilty_dispos = ['DELINQUENT BENCH TRIAL', 'DELINQUENT CHANGE OF PLEA',
'DELINQUENT CHANGE OF PLEA LESSER OFFENSE', 'DELINQUENT JURY TRIAL',
'GUILTY BENCH TRIAL', 'GUILTY BENCH TRIAL LESSER INCLUDED',
'GUILTY CHANGE OF PLEA', 'GUILTY CHANGE OF PLEA LESSER OFFENSE',
'GUILTY FILED', 'GUILTY FINES', 'GUILTY JURY TRIAL',
'GUILTY JURY TRIAL LESSER INCLUDED',
'Guilty Jury Trial (and Bench) Lesser Included', 'RESPONSIBLE']
ms['guilty'] = ms['Disposition Description'].isin(guilty_dispos)
# there are no 'missing' values for guilty or dispo description, so no need to recode missing as 2 or -1 as in Suff & NW
assert ms['guilty'].count() == len(ms) == ms['Disposition Description'].count()
ms['Incident_Guilty'] = ms.groupby(['Case Number', 'Offense Date'])['guilty'].transform('max')
```
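The incident-level roll-ups above lean on a pandas idiom worth noting: on boolean columns, `groupby(...).transform('min')` acts like a per-group `all()` (every offense in the case must be expungeable) and `transform('max')` acts like a per-group `any()` (a single murder/sex offense marks the whole incident). A toy illustration with hypothetical case numbers:

```python
import pandas as pd

df = pd.DataFrame({
    'Case Number': ['C1', 'C1', 'C2', 'C2'],
    'Exp': [True, False, True, True],    # offense-level expungeable flag
    'sm': [False, True, False, False],   # offense-level sex/murder flag
})
# min over booleans == all(): the incident is expungeable only if every offense is
df['Inc_Exp'] = df.groupby('Case Number')['Exp'].transform('min')
# max over booleans == any(): one flagged offense marks the whole incident
df['Inc_sm'] = df.groupby('Case Number')['sm'].transform('max')
print(df[['Case Number', 'Inc_Exp', 'Inc_sm']].drop_duplicates())
```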
## Step 1
Summary stats
```
# distribution of # of charges
print(ms['offenses_per_case'].describe())
print('\n cutting off top 1%: \n', ms['offenses_per_case'].loc[ms['offenses_per_case']< ms['offenses_per_case'].quantile(.99)].describe())
### No indicator for unique individuals. The only proxy is case number, which means any estimates will be substantial overestimates.
# offenses and incidents (cases)
Nu_tot = ms['Charge'].count()
Incidents_tot = ms['Case Number'].nunique()
one_off = ms[ (ms['offenses_per_case']==1)]['Case Number'].nunique()
# offenses related to sex or murder
Nu_sex = ms[ms['sex'] == 1]['sex'].count()
Nu_murder = ms[ms['murder'] == 1]['murder'].count()
Nu_sex_murder = ms[ms['Incident_Murder_Sex'] == 1]['Incident_Murder_Sex'].count()
Nu_sex_murder_inc = ms[ms['Incident_Murder_Sex'] == 1].groupby(['Case Number']).ngroups
# Juvenile stats
Number_Cases_Juvenile = ms[ms['juvenile']==True]['Case Number'].nunique()
Number_Off_Juvenile = ms[ms['juvenile']==True]['Case Number'].count()
Juvenile_one_off = ms[ (ms['offenses_per_case']==1) &
(ms['juvenile']==True)]['Case Number'].nunique()
offenses = ['Total offenses', Nu_tot, '','']
total = ['Total incidents (cases)', Incidents_tot, '','' ]
oneoff = ['Incidents with a single offense', one_off, '{:,.2%}'.format(one_off/Nu_tot), '{:,.2%}'.format(one_off/Incidents_tot)]
juv_header = ['Juvenile stats', 0, '', '']
juvenile_off = ['Total juvenile offenses', Number_Off_Juvenile, '{:,.2%}'.format(Number_Off_Juvenile/Nu_tot), '']
juvenile_inc = ['Total juvenile incidents', Number_Cases_Juvenile, '', '{:,.2%}'.format(Number_Cases_Juvenile/Incidents_tot)]
juv_one = ['Juvenile incidents with a single offense', Juvenile_one_off, '{:,.2%}'.format(Juvenile_one_off/Nu_tot), '{:,.2%}'.format(Juvenile_one_off/Incidents_tot)]
sm_header = ['Sex and murder stats (all ages)', 0, '', '']
sex_offenses = ['Sex offenses', Nu_sex, '{:,.2%}'.format(Nu_sex / Nu_tot),'']
murder = ['Murder offenses', Nu_murder, '{:,.2%}'.format(Nu_murder / Nu_tot),'']
sex_murder = ['Incidents with sex or murder', Nu_sex_murder_inc, '', '{:,.2%}'.format(Nu_sex_murder_inc / Incidents_tot)]
stats = [offenses, total, oneoff, juv_header, juvenile_off, juvenile_inc, juv_one, sm_header, sex_offenses, murder, sex_murder]
statsdf = pd.DataFrame(stats, columns = ['Question', 'Number', '% total offenses', '% total incidents'])
statsdf = statsdf.set_index('Question')
statsdf.style.format({'Number' : '{:,}'})
```
**Dispositions and Guilty**
We reference this sheet to determine which dispositions to code as found guilty vs. not found guilty:
https://docs.google.com/spreadsheets/d/1axzGGxgQFPwpTw7EbBlC519L43fOkqC5/edit#gid=487812267
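For reference, a minimal sketch of how dispositions might be coded as a guilty flag; the disposition strings below are illustrative assumptions, not the authoritative list from the linked sheet.

```python
# Sketch of coding disposition descriptions as guilty / not guilty.
# The set below is a placeholder; the real mapping comes from the
# referenced Google Sheet.
GUILTY_DISPOSITIONS = {
    "Guilty Change of Plea",
    "Delinquent Change of Plea",
}

def code_guilty(disposition):
    """Return True if a disposition string counts as a guilty finding."""
    return disposition in GUILTY_DISPOSITIONS

print(code_guilty("Guilty Change of Plea"))  # True
print(code_guilty("Dismissed"))              # False
```

In the notebook this predicate would be applied per offense (e.g. with `ms['Disposition Description'].isin(...)`) to build the `guilty` column.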
```
print("Top 10 dispositions - all cases")
a = ms['Disposition Description'].value_counts().rename_axis('Dispositions').to_frame('counts')
b = ms['Disposition Description'].value_counts(normalize=True).rename_axis('Dispositions').to_frame('percent')
disp_stats = pd.concat([a, b], axis=1)
disp_stats['cumulative percent'] = disp_stats.percent.cumsum()
disp_stats[0:10].style.format({ 'counts' : '{:,}', 'percent' : '{:,.1%}', 'cumulative percent' : '{:,.1%}'})
a = ms['Disposition Description'].loc[ms['juvenile']==True].value_counts().rename_axis('Dispositions').to_frame('counts')
b = ms['Disposition Description'].loc[ms['juvenile']==True].value_counts(normalize=True).rename_axis('Dispositions').to_frame('percent')
disp_stats = pd.concat([a, b], axis=1)
disp_stats['cumulative percent'] = disp_stats.percent.cumsum()
print('top 10 dispositions for all cases in juvenile court')
disp_stats[0:10].style.format({ 'counts' : '{:,}', 'percent' : '{:,.1%}', 'cumulative percent' : '{:,.1%}'})
print('Guilty dispositions')
a = ms['guilty'].value_counts(normalize=True).rename_axis('Found Guilty').to_frame('Percent')
b = ms['Incident_Guilty'].value_counts(normalize=True).rename_axis('Found Guilty').to_frame('Percent')
guilty_stats = pd.concat([a, b], keys=['Offenses', 'Incidents'])
guilty_stats.style.format({ 'Percent' : '{:,.1%}'})
```
## Question 1
Original: 1. How many people (under age 21) are eligible for expungement today? This would be people with only **one charge** that is not part of the list of ineligible offenses (per section 100J).
What we can answer:
- How many cases include only 1 offense, heard in a Juvenile court, and the charge is not part of the list of ineligible offenses from section 100J.
----
```
def date_range(x):
greater3 = x.loc[(x['years_since_offense'] > 3)]['Case Number'].nunique()
greater7 = x.loc[(x['years_since_offense'] > 7)]['Case Number'].nunique()
    print(greater3, "occurred more than 3 years before", reference_date)
    print(greater7, "occurred more than 7 years before", reference_date)
def eligible_juvs(y):
People_eligible = y['Case Number'].nunique()
pct_juv = '{:.2%}'.format(People_eligible/Number_Cases_Juvenile)
return People_eligible , pct_juv
def eligible_all_ages(y):
People_eligible = y['Case Number'].nunique()
pct_tot = '{:.2%}'.format(People_eligible/Incidents_tot)
return People_eligible, pct_tot
x = ms.loc[
(ms['offenses_per_case']==1) &
(ms['Expungeable'] != 'No') &
(ms['juvenile'] == True)
]
q1 = 'q1', 'Incidents with a single offense: no offense ineligible', eligible_juvs(x)
print(q1)
#all ages
x = ms.loc[
(ms['offenses_per_case']==1) &
(ms['Expungeable'] != 'No')
]
q1 = q1, eligible_all_ages(x)
#date_range(x)
#print(x['Disposition Description'].value_counts(dropna=False)[0:10])
```
## Question 2
Original: How many people (under age 21) would be eligible based on only having one incident (which could include multiple charges) that are not part of the list of ineligible offenses?
*We cannot answer this -- we do not have a person-level identifier or any proxy for an identifier.*
- How many incidents are heard in juvenile court where no offenses are on the list of ineligible offenses
```
x = ms.loc[
(ms['Inc_Expungeable_Attempts_Are'] == True) &
(ms['juvenile'] == True)
]
q2 = 'q2', 'Incidents: no offenses ineligible', eligible_juvs(x)
print(q2)
#date_range(x)
print(x['Disposition Description'].value_counts(dropna=False)[0:10])
#all ages
x = ms.loc[
(ms['Inc_Expungeable_Attempts_Are'] == True)
]
q2 = q2, eligible_all_ages(x)
```
## Verdict: Disposition / guilty
Original Q3: How many people (under age 21) would be eligible based on who has not been found guilty (given current offenses that are eligible for expungement)?
Guilty is defined above in section 0.5
*Because we do not have an individual identifier, this is just a subset of the previous question. This will remove any incidents where at least one offense had a disposition indicating guilty (mostly a delinquent change of plea or a guilty change of plea). If we had an indicator of individuals across offenses, this might increase the number of people eligible for expungement, because it would waive the single offense/incident criterion. In this case, it reduces the number of incidents eligible, because it restricts to only those not found guilty.*
- How many incidents have no offenses ineligible and no offenses with a guilty verdict
```
x = ms.loc[
(ms['Inc_Expungeable_Attempts_Are']) &
(ms['juvenile'] == True) &
(ms['Incident_Guilty'] != True)
]
q3 = 'q3', 'Incidents: no offenses ineligible, no guilty dispositions', eligible_juvs(x)
print(q3)
#date_range(x)
x['Disposition Description'].value_counts(dropna=False)[0:10]
#all ages
x = ms.loc[
(ms['Inc_Expungeable_Attempts_Are']) &
(ms['Incident_Guilty'] != True)
]
q3 = q3, eligible_all_ages(x)
```
## Sex or murder related
Original: How many people (under age 21) would be eligible based on only having one incident if only sex-based offenses or murder were excluded from expungement?
*We cannot answer this -- we do not have a person-level identifier or any proxy for an identifier.*
- Incidents heard in juvenile court where no charges are related to sex or murder
```
x = ms.loc[
(ms['Incident_Murder_Sex'] == False) &
(ms['juvenile'] == True)
]
q4 = 'q4', 'Incidents: no offenses related to sex or murder', eligible_juvs(x)
print(q4)
date_range(x)
x['Disposition Description'].value_counts(dropna=False)[0:10]
#all ages
x = ms.loc[
(ms['Incident_Murder_Sex'] == False)
]
q4 = q4, eligible_all_ages(x)
```
### No Sex or Murder and not found Guilty
Original: How many people (under age 21) would be eligible based on who has not been found guilty for all offenses except for murder or sex-based offenses?
*Because we do not have an individual identifier, this is just a subset of Question 2b. This will remove any incidents where at least one offense had a disposition indicating guilty (mostly a delinquent change of plea or a guilty change of plea). If we had an indicator of individuals across offenses, this might increase the number of people eligible for expungement, because it would waive the single offense/incident criterion. In this case, it reduces the number of incidents eligible, because it restricts to only those not found guilty.*
- Incidents where no charges are related to sex or murder and no offenses have a guilty disposition
```
x = ms.loc[
(ms['Incident_Murder_Sex'] == False) &
(ms['juvenile'] == True) &
(ms['Incident_Guilty'] != True)
]
q5 = 'q5','Incidents: no offenses related to sex or murder, no guilty dispositions', eligible_juvs(x)
print(q5)
#date_range(x)
x['Disposition Description'].value_counts(dropna=False)[0:10]
# all ages
x = ms.loc[
(ms['Incident_Murder_Sex'] == False) &
(ms['Incident_Guilty'] != True)
]
q5 = q5, eligible_all_ages(x)
a = [q1, q2, q3, q4, q5]
ans = pd.DataFrame(a , columns = ['A', 'B'])
ans[['q', 'Question', 'Juv']] = pd.DataFrame(ans['A'].tolist())
ans[['# Juv Incidents', '% Juv']] = pd.DataFrame(ans['Juv'].tolist())
ans[['# All Age Incidents', '% All Ages']] = pd.DataFrame(ans['B'].tolist())
ans = ans[['q', 'Question', '# Juv Incidents', '% Juv', '# All Age Incidents', '% All Ages']].set_index('q')
ans.style.format({'# Juv Incidents':'{:,}', '# All Age Incidents':'{:,}'})
```
TSG035 - Spark History logs
===========================
Description
-----------
Retrieve the tail of the Spark History server logs in a Big Data Cluster, scan for WARN and ERROR entries, and suggest relevant troubleshooting guides.
Steps
-----
### Parameters
```
import re
tail_lines = 2000
pod = None # All
container='hadoop-livy-sparkhistory'
log_files = [ "/var/log/supervisor/log/sparkhistory*" ]
expressions_to_analyze = [
re.compile(".{23} WARN "),
re.compile(".{23} ERROR ")
]
```
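As a sanity check on these patterns: `.{23}` is intended to match the 23-character timestamp prefix that precedes the log level in each line. A small self-contained illustration (the log lines are invented examples):

```python
import re

# Same expressions as the parameters cell above.
expressions_to_analyze = [
    re.compile(".{23} WARN "),
    re.compile(".{23} ERROR "),
]

# A line in the expected layout: 23-char timestamp, space, level, space.
line = "2021-01-01 12:00:00,123 WARN  ResourceManager low on memory"
matched = any(expr.match(line) for expr in expressions_to_analyze)
print(matched)  # True
```

INFO-level lines, or lines with a different timestamp width, will not match and are skipped by the analysis below.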
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
from kubernetes import client, config
from kubernetes.stream import stream
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG112 - App-Deploy Proxy Nginx Logs](../log-analyzers/tsg112-get-approxy-nginx-logs.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
except ImportError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting
Azure Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Get tail for log
```
# Display the last 'tail_lines' of files in 'log_files' list
pods = api.list_namespaced_pod(namespace)
entries_for_analysis = []
for p in pods.items:
if pod is None or p.metadata.name == pod:
for c in p.spec.containers:
if container is None or c.name == container:
for log_file in log_files:
print (f"- LOGS: '{log_file}' for CONTAINER: '{c.name}' in POD: '{p.metadata.name}'")
try:
output = stream(api.connect_get_namespaced_pod_exec, p.metadata.name, namespace, command=['/bin/sh', '-c', f'tail -n {tail_lines} {log_file}'], container=c.name, stderr=True, stdout=True)
except Exception:
print (f"FAILED to get LOGS for CONTAINER: {c.name} in POD: {p.metadata.name}")
else:
for line in output.split('\n'):
for expression in expressions_to_analyze:
if expression.match(line):
entries_for_analysis.append(line)
print(line)
print("")
print(f"{len(entries_for_analysis)} log entries found for further analysis.")
```
### Analyze log entries and suggest relevant Troubleshooting Guides
```
# Analyze log entries and suggest further relevant troubleshooting guides
from IPython.display import Markdown
import os
import json
import requests
import ipykernel
import datetime
from urllib.parse import urljoin
from notebook import notebookapp
def get_notebook_name():
"""Return the full path of the jupyter notebook. Some runtimes (e.g. ADS)
have the kernel_id in the filename of the connection file. If so, the
notebook name at runtime can be determined using `list_running_servers`.
Other runtimes (e.g. azdata) do not have the kernel_id in the filename of
    the connection file, so the notebook filename cannot be established at runtime.
"""
connection_file = os.path.basename(ipykernel.get_connection_file())
# If the runtime has the kernel_id in the connection filename, use it to
# get the real notebook name at runtime, otherwise, use the notebook
# filename from build time.
try:
kernel_id = connection_file.split('-', 1)[1].split('.')[0]
except:
pass
else:
for servers in list(notebookapp.list_running_servers()):
try:
response = requests.get(urljoin(servers['url'], 'api/sessions'), params={'token': servers.get('token', '')}, timeout=.01)
except:
pass
else:
for nn in json.loads(response.text):
if nn['kernel']['id'] == kernel_id:
return nn['path']
def load_json(filename):
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def get_notebook_rules():
"""Load the notebook rules from the metadata of this notebook (in the .ipynb file)"""
file_name = get_notebook_name()
    if file_name is None:
return None
else:
j = load_json(file_name)
if "azdata" not in j["metadata"] or \
"expert" not in j["metadata"]["azdata"] or \
"log_analyzer_rules" not in j["metadata"]["azdata"]["expert"]:
return []
else:
return j["metadata"]["azdata"]["expert"]["log_analyzer_rules"]
rules = get_notebook_rules()
if rules is None:
print("")
print(f"Log Analysis only available when run in Azure Data Studio. Not available when run in azdata.")
else:
hints = 0
if len(rules) > 0:
for entry in entries_for_analysis:
for rule in rules:
if entry.find(rule[0]) != -1:
print (entry)
display(Markdown(f'HINT: Use [{rule[2]}]({rule[3]}) to resolve this issue.'))
hints = hints + 1
print("")
print(f"{len(entries_for_analysis)} log entries analyzed (using {len(rules)} rules). {hints} further troubleshooting hints made inline.")
print('Notebook execution complete.')
```
```
import numpy as np
import cv2
import time
import matplotlib.pyplot as plt
img_left_color=cv2.imread('Left/ImageL1.png')
img_right_color=cv2.imread('Right/ImageR1.png')
# cv2.imread returns BGR images, so convert with COLOR_BGR2GRAY
img_left_bw = cv2.blur(cv2.cvtColor(img_left_color, cv2.COLOR_BGR2GRAY), (5, 5))
img_right_bw = cv2.blur(cv2.cvtColor(img_right_color, cv2.COLOR_BGR2GRAY), (5, 5))
import numpy as np
import cv2
# Filtering
kernel = np.ones((3, 3), np.uint8)
# Termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
criteria_stereo = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# Prepare object points
objp = np.zeros((8 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:8, 0:6].T.reshape(-1, 2)
# Arrays to store object points and image points from all images
objpoints = [] # 3d points in real world space
imgpointsR = [] # 2d points in image plane
imgpointsL = []
# Start calibration from the camera
print('Starting calibration for the 2 cameras... ')
# Call all saved images
for i in range(0, 76):  # Put the number of calibration pictures you have taken in range(0, ?) when starting from image number 0
#print(i)
t = str(i)
ChessImaR = cv2.imread('Right/chessboard-R' + t + '.png', 0) # Right side
ChessImaL = cv2.imread('Left/chessboard-L' + t + '.png', 0) # Left side
    retR, cornersR = cv2.findChessboardCorners(ChessImaR, (8, 6), None)  # Define the number of chessboard corners we are looking for
retL, cornersL = cv2.findChessboardCorners(ChessImaL, (8, 6), None) # Left side
if (True == retR) & (True == retL):
objpoints.append(objp)
cornersR = cv2.cornerSubPix(ChessImaR, cornersR, (11, 11), (-1, -1), criteria)
cornersL = cv2.cornerSubPix(ChessImaL, cornersL, (11, 11), (-1, -1), criteria)
imgpointsR.append(cornersR)
imgpointsL.append(cornersL)
# Right Side
retR, mtxR, distR, rvecsR, tvecsR = cv2.calibrateCamera(objpoints,
imgpointsR,
ChessImaR.shape[::-1], None, None)
hR, wR = ChessImaR.shape[:2]
OmtxR, roiR = cv2.getOptimalNewCameraMatrix(mtxR, distR,
(wR, hR), 1, (wR, hR))
# Left Side
retL, mtxL, distL, rvecsL, tvecsL = cv2.calibrateCamera(objpoints,
imgpointsL,
ChessImaL.shape[::-1], None, None)
hL, wL = ChessImaL.shape[:2]
OmtxL, roiL = cv2.getOptimalNewCameraMatrix(mtxL, distL, (wL, hL), 1, (wL, hL))
# StereoCalibrate function
flags = 0
flags |= cv2.CALIB_FIX_INTRINSIC
retS, MLS, dLS, MRS, dRS, R, T, E, F = cv2.stereoCalibrate(objpoints,
imgpointsL,
imgpointsR,
mtxL,
distL,
mtxR,
distR,
ChessImaR.shape[::-1],
criteria_stereo,
flags)
# StereoRectify function
rectify_scale = 0  # if 0 the image is cropped, if 1 the image is not cropped
RL, RR, PL, PR, Q, roiL, roiR = cv2.stereoRectify(MLS, dLS, MRS, dRS,
ChessImaR.shape[::-1], R, T,
rectify_scale,
                                                  (0, 0))  # alpha (rectify_scale above): 0 = cropped, 1 = not cropped
# initUndistortRectifyMap function
Left_Stereo_Map = cv2.initUndistortRectifyMap(MLS, dLS, RL, PL,
ChessImaR.shape[::-1],
                                              cv2.CV_16SC2)  # cv2.CV_16SC2: this format lets the program run faster
Right_Stereo_Map = cv2.initUndistortRectifyMap(MRS, dRS, RR, PR,
ChessImaR.shape[::-1], cv2.CV_16SC2)
Left1=Left_Stereo_Map[1].reshape((480,640,1))
Left=np.concatenate((Left_Stereo_Map[0],Left1),axis=2)
LFTdata=Left.reshape(Left.shape[0],-1)
Right1=Right_Stereo_Map[1].reshape((480,640,1))
Right=np.concatenate((Right_Stereo_Map[0],Right1),axis=2)
RGTdata=Right.reshape(Right.shape[0],-1)
np.savetxt('Left_Stereo0.txt', LFTdata)
np.savetxt('Right_Stereo0.txt', RGTdata)
def showImg(img):
plt.imshow(cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
def write_ply(fn, verts, colors):
ply_header = '''ply
format ascii 1.0
element vertex %(vert_num)d
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
'''
out_colors = colors.copy()
verts = verts.reshape(-1, 3)
verts = np.hstack([verts, out_colors])
with open(fn, 'wb') as f:
f.write((ply_header % dict(vert_num=len(verts))).encode('utf-8'))
np.savetxt(f, verts, fmt='%f %f %f %d %d %d ')
plt.imshow(img_right_bw, cmap='gray')
#stereo = cv2.StereoBM_create(numDisparities=96, blockSize=7)
window_size = 3
min_disp = 2
num_disp = 130-min_disp
stereo = cv2.StereoSGBM_create(minDisparity = min_disp,
numDisparities = num_disp,
blockSize = window_size,
uniquenessRatio = 10,
speckleWindowSize = 100,
speckleRange = 32,
disp12MaxDiff = 5,
P1 = 8*3*window_size**2,
P2 = 32*3*window_size**2)
# Used for the filtered image
stereoR=cv2.ximgproc.createRightMatcher(stereo) # Create another stereo for right this time
kernel= np.ones((3,3),np.uint8)
# WLS FILTER Parameters
lmbda = 80000
sigma = 0.5
visual_multiplier = 1.0
wls_filter = cv2.ximgproc.createDisparityWLSFilter(matcher_left=stereo)
wls_filter.setLambda(lmbda)
wls_filter.setSigmaColor(sigma)
# StereoVision
frameL=cv2.imread('LeftNew/ImageobjectL3.png')
frameR=cv2.imread('RightNew/ImageobjectR3.png')
# Rectify the images on rotation and alignement
Left_nice = cv2.remap(frameL, Left_Stereo_Map[0], Left_Stereo_Map[1], cv2.INTER_LANCZOS4, cv2.BORDER_CONSTANT, 0)  # Rectify the image using the calibration parameters found during initialization
Right_nice= cv2.remap(frameR,Right_Stereo_Map[0],Right_Stereo_Map[1], cv2.INTER_LANCZOS4, cv2.BORDER_CONSTANT, 0)
grayR = cv2.cvtColor(Right_nice, cv2.COLOR_BGR2GRAY)
grayL = cv2.cvtColor(Left_nice, cv2.COLOR_BGR2GRAY)
# Compute the 2 images for the Depth_image
disp= stereo.compute(grayL,grayR)#.astype(np.float32)/ 16
dispL= disp
dispR= stereoR.compute(grayR,grayL)
dispL= np.int16(dispL)
dispR= np.int16(dispR)
# Using the WLS filter
filteredImg= wls_filter.filter(dispL,grayL,None,dispR)
filteredImg = cv2.normalize(src=filteredImg, dst=filteredImg, alpha=255, beta=0, norm_type=cv2.NORM_MINMAX)
filteredImg = np.uint8(filteredImg)
#cv2.imshow('Disparity Map', filteredImg)
disp = ((disp.astype(np.float32) / 16) - min_disp) / num_disp  # Rescale so that 0 corresponds to the most distant detectable object
## # Resize the image for faster executions
## dispR= cv2.resize(disp,None,fx=0.7, fy=0.7, interpolation = cv2.INTER_AREA)
# Filtering the Results with a closing filter
closing = cv2.morphologyEx(disp, cv2.MORPH_CLOSE, kernel)  # Apply a morphological closing filter to fill small "black" holes in the picture (noise removal)
# Colors map
dispc= (closing-closing.min())*255
dispC= dispc.astype(np.uint8) # Convert the type of the matrix from float32 to uint8, this way you can show the results with the function cv2.imshow()
disp_Color= cv2.applyColorMap(dispC,cv2.COLORMAP_OCEAN) # Change the Color of the Picture into an Ocean Color_Map
#filt_Color= cv2.applyColorMap(filteredImg,cv2.COLORMAP_OCEAN)
# Show the result for the Depth_image
#cv2.imshow('Disparity', disp)
#cv2.imshow('Closing',closing)
#cv2.imshow('Color Depth',disp_Color)
#cv2.imshow('Filtered Color Depth',filt_Color)
#disparity = stereo.compute(img_left_bw,img_right_bw)
img = dispc.copy()
plt.imshow(disp_Color)
calib_matrix_1=np.loadtxt("Left_Stereo0.txt",dtype=np.float32).reshape(3,-1)
calib_matrix_2=np.loadtxt("Right_Stereo0.txt",dtype=np.float32).reshape(3,-1)
print(calib_matrix_2.shape)
# Calculate depth-to-disparity
cam1 = calib_matrix_1[:,:3] # left image - P2
cam2 = calib_matrix_2[:,:3] # right image - P3
print(cam1.shape)
Tmat = np.array([0.10, 0., 0.])
rev_proj_matrix = np.zeros((4,4))
cv2.stereoRectify(cameraMatrix1 = cam1,cameraMatrix2 = cam2, \
distCoeffs1 = 0, distCoeffs2 = 0, \
imageSize = img_left_color.shape[:2], \
R = np.identity(3), T = Tmat, \
R1 = None, R2 = None, \
P1 = None, P2 = None, Q = rev_proj_matrix);
points = cv2.reprojectImageTo3D(img, rev_proj_matrix)
#reflect on x axis
reflect_matrix = np.identity(3)
reflect_matrix[0] *= -1
points = np.matmul(points,reflect_matrix)
#extract colors from image
colors = cv2.cvtColor(img_left_color, cv2.COLOR_BGR2RGB)
#filter by min disparity
mask = img > img.min()
out_points = points[mask]
out_colors = colors[mask]
#filter by dimension
idx = np.fabs(out_points[:,0]) < 4.5
out_points = out_points[idx]
out_colors = out_colors.reshape(-1, 3)
out_colors = out_colors[idx]
write_ply('out.ply', out_points, out_colors)
print('%s saved' % 'out.ply')
print(points.shape)
reflected_pts = np.matmul(out_points, reflect_matrix)
print(np.array([0., 0., 0.]).shape)
projected_img,_ = cv2.projectPoints(reflected_pts, np.array([0., 0., 0.]), np.array([0., 0., 0.]), cam2[:3,:3], np.array([0., 0., 0., 0.]))
print(projected_img.shape)
#projected_img = projected_img.reshape(-1, 3)
blank_img = np.zeros(img_left_color.shape, 'uint8')
img_colors = img_right_color[mask][idx].reshape(-1,3)
for i, pt in enumerate(projected_img):
pt_x = int(pt[0][0])
pt_y = int(pt[0][1])
if pt_x > 0 and pt_y > 0:
# use the BGR format to match the original image type
col = (int(img_colors[i, 2]), int(img_colors[i, 1]), int(img_colors[i, 0]))
cv2.circle(blank_img, (pt_x, pt_y), 1, col)
showImg(blank_img)
```
# Functions
Functions are key to creating reusable software, testing, and working in teams.
This lecture motivates the use of functions, discusses how functions are defined in python, and
introduces a workflow that starts with exploratory code and produces a function.
**Topics**
- Creating reusable software components
- Motivating example
- Python function syntax
- Name scoping in functions
- Keyword arguments
- Exercise
- Function Driven Workflow
## Creating Reusable Software Components
- What makes a component reusable?
- Signature of a software component
- Inputs
- How they are "passed"
- Data types
- Semantics
- Outputs
- Side effects
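To make these terms concrete, a small invented example of a component whose signature has two inputs, one output, and a side effect on module-level state:

```python
call_count = 0  # module-level state touched by the side effect

def scale(values, factor=2):
    """Inputs: a list of numbers (passed by reference) and an int factor.
    Output: a new list. Side effect: increments the global call counter."""
    global call_count
    call_count += 1
    return [v * factor for v in values]

result = scale([1, 2, 3], factor=10)
print(result)      # [10, 20, 30]
print(call_count)  # 1
```

When documenting a reusable component, each of these pieces — argument types, how they are passed, what is returned, and any side effects — is part of its contract.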
## Motivating Example
```
# Our prime number example from week 1
N = 10
for candidate in range(2, N):
# n is candidate prime. Check if n is prime
is_prime = True
for m in range(2, candidate):
if (candidate % m) == 0:
is_prime = False
break
if is_prime:
print("%d is prime!" % candidate)
```
Issues with making a function:
1. What does it do?
   - Finds all primes less than N
1. What are the inputs?
- Input 1: start range (int)
- Input 2: end range (int)
1. What are the outputs?
- Output: list-int
```
# Our prime number example from week 1
N = 10
result = []
for candidate in range(2, N):
# n is candidate prime. Check if n is prime
is_prime = True
for m in range(2, candidate):
if (candidate % m) == 0:
is_prime = False
break
if is_prime:
result.append(candidate)
print(result)
```
Some questions
1. How can we recast this script as a component?
- Inputs
- Outputs
2. Should the component itself be recast as having another reusable component?
## Python Function Syntax
Transform the above script into a python function.
1. Function definition
1. Formal arguments
1. Calling a function
```
def identify_primes(start_range, end_range):
result = []
for candidate in range(start_range, end_range):
# n is candidate prime. Check if n is prime
is_prime = True
for m in range(2, candidate):
if (candidate % m) == 0:
is_prime = False
break
if is_prime:
result.append(candidate)
return(result)
identify_primes(2, 10)
identify_primes(5, 10)
```
## Name Scoping in Functions
```
# Example 1: function invocation vs. formal arguments
def add_one(a):
b = 10
return a + 1
#
a = 1
b = 2
print("add_one(a): %d" % add_one(a))
print("add_one(b): %d" % add_one(b))
b
```
Why is ``add_one(b)`` equal to 3 when the function is defined in terms of ``a``, which equals 1?
```
# Example 2: formal argument vs. global variable
def func(a):
y = a + 1
return y
#
# Does calling func(2) change the global y? Why or why not?
y = 23
func(2)
print("After call value of y: %d" % y)
```
Why didn't the value of ``y`` change? Shouldn't it be ``y = 3``?
```
# Example 3: manipulation of global variables
x = 5
def func(a):
global x
x = 2*a
#
#print("Before call value of x = %d" % x)
func(2)
print(x)
#print("After call value of x = %d" % x)
```
Why did the value of ``x`` change this time, when ``y`` did not in Example 2?
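A minimal side-by-side sketch of the two behaviors — assignment inside a function binds a local name unless ``global`` is declared:

```python
x = 5

def set_local(a):
    x = 2 * a   # binds a new local x; the global x is untouched
    return x

def set_global(a):
    global x    # now assignment targets the module-level x
    x = 2 * a

set_local(3)
print(x)  # 5
set_global(3)
print(x)  # 6
```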
# Refactoring a Function
```
def identify_primes2(start_range, end_range):
result = []
for candidate in range(start_range, end_range):
# n is candidate prime. Check if n is prime
if is_prime(candidate):
result.append(candidate)
return(result)
identify_primes2(2, 10)
# Return True if number is prime
def is_prime(candidate):
is_prime = True
for m in range(2, candidate):
if (candidate % m) == 0:
            is_prime = False
            break
return is_prime
# Test cases
print("Should be prime %d" % is_prime(53))      # should be prime
print("Should not be prime %d" % is_prime(52))  # should not be prime
print("0 input %d" % is_prime(0))
```
## Keyword Arguments
Functions evolve over time, so it is natural that you'll want to add arguments. But when you do, you "break" existing code that doesn't include those arguments.
Python has a great capability to deal with this -- keyword arguments.
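As a smaller sketch of the same idea (the ``greet`` function here is invented for illustration): adding a keyword argument with a default value leaves existing call sites working unchanged.

```python
# Originally greet(name). Later we add 'punctuation' with a default,
# so old calls like greet("Ada") still behave exactly as before.
def greet(name, punctuation="!"):
    return "Hello, " + name + punctuation

print(greet("Ada"))                   # Hello, Ada!
print(greet("Ada", punctuation="?"))  # Hello, Ada?
```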
```
# Optionally check for negative values
def identify_primes3(start_range, end_range, is_check=True):
if is_check:
if start_range < 0:
print("Bad")
return
result = []
for candidate in range(start_range, end_range):
# n is candidate prime. Check if n is prime
if is_prime(candidate):
result.append(candidate)
return(result)
identify_primes3(-2, 10, is_check=False)
# Extend find_primes to return None if argument is negative.
```
## Exercise
1. Find which substrings are present in a string.
1. For example, consider the string "The lazy brown fox jumped over the fence." Which of the following substrings is present: "ow", "low", "row" and how many occurrences are there?
1. Write a function that produces the desired result for this example.
```
a_string = "The lazy brown fox jumped over the fence."
a_string.index("lazy")
def find_substrings(base_string, substrings):
result = []
for stg in substrings:
if base_string.find(stg) >= 0:
result.append(stg)
return result
find_substrings("The lazy brown fox jumped over the fence.", ["azy", "jumped", "hopped"])
```
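The exercise also asks how many occurrences there are, which ``find_substrings`` above does not report. One possible sketch using ``str.count``:

```python
def count_substrings(base_string, substrings):
    """Map each substring that occurs to its number of occurrences."""
    return {s: base_string.count(s) for s in substrings if s in base_string}

counts = count_substrings("The lazy brown fox jumped over the fence.",
                          ["ow", "low", "row"])
print(counts)  # {'ow': 1, 'row': 1}
```

Note that "row" is found inside "brown", and "low" does not occur at all, so it is omitted from the result.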
## Function Driven Workflow
- Script in a notebook
- Create functions from scripts
- Copy functions in a python module
- Replace functions in notebook with use of functions in module
- To use a function inside a notebook, you must ``import`` its containing module.
```
!ls
import identify_prime
identify_prime.identify_primes(2, 20)
```