```
# Create figures in Python that handle LaTeX, and save images to files in my
# preferred formatting. I typically place this code in the root of each of my
# projects, and import using:
# from latexify import *
# which will also run the latexify() function on the import.
# Based on code from https://nipunbatra.github.io/blog/2014/latexify.html
import matplotlib
import matplotlib.pyplot as plt
from math import sqrt
# Which back-end to use depends on the system.
from matplotlib.backends.backend_pgf import FigureCanvasPgf
matplotlib.backend_bases.register_backend('pdf', FigureCanvasPgf)
# matplotlib.use('pgf')
# from matplotlib.backends.backend_pgf import FigureCanvasPgf
# matplotlib.backend_bases.register_backend('ps', FigureCanvasPgf)
import seaborn as sns
sns.set_style("white")
#my preferred palette. From
#https://seaborn.pydata.org/tutorial/color_palettes.html: "The cubehelix color
#palette system makes sequential palettes with a linear increase or decrease in
#brightness and some variation in hue. This means that the information in your
#colormap will be preserved when converted to black and white (for printing) or
#when viewed by a colorblind individual."
# I typically set the number of colors (below, 8) to the distinct colors I need
# in a given plot, so as to use the full range.
sns.set_palette(sns.color_palette("cubehelix", 8))
# The following is the latexify function. It allows you to create 2 column or 1
# column figures. You may also wish to alter the height or width of the figure.
# The default settings are good for most cases. You may also change the
# parameters such as labelsize and fontsize based on your classfile.
def latexify(fig_width=None, fig_height=None, columns=1, ticksize=8):
    """Set up matplotlib's RC params for LaTeX plotting.

    Call this before plotting a figure.

    Parameters
    ----------
    fig_width : float, optional, inches
    fig_height : float, optional, inches
    columns : {1, 2}
    """
    # code adapted from http://www.scipy.org/Cookbook/Matplotlib/LaTeX_Examples
    # Width and max height in inches for IEEE journals taken from
    # computer.org/cms/Computer.org/Journal%20templates/transactions_art_guide.pdf
    assert columns in [1, 2]
    if fig_width is None:
        fig_width = 6.9 if columns == 1 else 13.8  # width in inches #3.39
    if fig_height is None:
        golden_mean = (sqrt(5) - 1.0) / 2.0  # aesthetic golden ratio
        fig_height = fig_width * golden_mean  # height in inches
    MAX_HEIGHT_INCHES = 16.0
    if fig_height > MAX_HEIGHT_INCHES:
        print("WARNING: fig_height too large: {:.2f}, reducing to {} inches.".format(
            fig_height, MAX_HEIGHT_INCHES))
        fig_height = MAX_HEIGHT_INCHES
    params = {
        # 'backend': 'ps',
        # 'pgf.rcfonts': False,
        # 'pgf.preamble': ['\\usepackage{gensymb}', '\\usepackage[dvipsnames]{xcolor}'],
        # "pgf.texsystem": "pdflatex",
        # 'text.latex.preamble': ['\\usepackage{gensymb}', '\\usepackage[dvipsnames]{xcolor}'],
        'text.latex.preamble': '\\usepackage{mathptmx}',
        # The values below are useful defaults; individual plot font sizes are
        # modified as necessary.
        'axes.labelsize': 8,  # fontsize for x and y labels
        'axes.titlesize': 8,
        'font.size': 8,
        'legend.fontsize': 8,
        'xtick.labelsize': ticksize,
        'ytick.labelsize': ticksize,
        'text.usetex': True,
        'figure.figsize': [fig_width, fig_height],
        'font.family': 'DejaVu Sans',
        'font.serif': 'Times',
        'lines.linewidth': 1.5,
        'lines.markersize': 1,
        'xtick.major.pad': 2,
        'ytick.major.pad': 2,
        'axes.xmargin': 0.0,  # x margin; see `axes.Axes.margins`
        'axes.ymargin': 0.0,  # y margin; see `axes.Axes.margins`
    }
    matplotlib.rcParams.update(params)
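# Quick sanity check of the sizing defaults above (a hypothetical example, not
# part of the original pipeline): a one-column figure defaults to 6.9 in wide,
# with its height set by the golden ratio, so roughly 4.26 in tall.
from math import sqrt
_w = 6.9
_h = _w * (sqrt(5) - 1.0) / 2.0
assert abs(_h - 4.2644) < 1e-3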
def saveimage(name, fig=plt, extension='pdf', folder='plots/'):
    sns.despine()
    # Minor ticks are off by default in matplotlib:
    # plt.minorticks_off()
    # The grid is off by default in the seaborn "white" style, so no need for:
    # plt.grid(False, axis="x")
    # plt.grid(False, axis="y")
    fig.savefig('{}{}.{}'.format(folder, name, extension), bbox_inches='tight')
latexify()
import numpy as np
import getdist
from getdist import plots
import corner
from chainconsumer import ChainConsumer
import os
import imnn
import imnn.lfi
import tensorflow_probability.substrates.jax as tfp
import matplotlib.patches as mpatches
import matplotlib.colors as mcolors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mpl_toolkits.axes_grid1.inset_locator import inset_axes, zoomed_inset_axes, InsetPosition, mark_inset
from matplotlib.legend_handler import HandlerTuple
from matplotlib import animation
plt.rcParams.update({'lines.linewidth': 2})
plt.rcParams.update({'text.usetex': True})
plt.rcParams.update({'text.latex.preamble': r"\usepackage{amsmath}\usepackage{upgreek}"})
plt.rcParams.update({'font.family': 'serif'})
plt.rcParams.update({'font.size': 20})
colorsDict = {
# Match pygtc up to v0.2.4
'blues_old' : ('#4c72b0','#7fa5e3','#b2d8ff'),
'greens_old' : ('#55a868','#88db9b','#bbffce'),
'yellows_old' : ('#f5964f','#ffc982','#fffcb5'),
'reds_old' : ('#c44e52','#f78185','#ffb4b8'),
'purples_old' : ('#8172b2','#b4a5e5','#37d8ff'),
# New color scheme, dark colors match matplotlib v2
'blues' : ('#1f77b4','#52aae7','#85ddff'),
'oranges' : ('#ff7f0e','#ffb241','#ffe574'),
'greens' : ('#2ca02c','#5fd35f','#92ff92'),
'reds' : ('#d62728','#ff5a5b','#ff8d8e'),
'purples' : ('#9467bd','#c79af0','#facdff'),
'browns' : ('#8c564b','#bf897e','#f2bcb1'),
'pinks' : ('#e377c2','#ffaaf5','#ffddff'),
'grays' : ('#7f7f7f','#b2b2b2','#e5e5e5'),
'yellows' : ('#bcbd22','#eff055','#ffff88'),
'cyans' : ('#17becf','#4af1ff','#7dffff'),
}
defaultColorsOrder = ['blues', 'oranges','greens', 'reds', 'purples',
'browns', 'pinks', 'grays', 'yellows', 'cyans']
colorsOrder = defaultColorsOrder
colors = [colorsDict[cs] for cs in colorsOrder]
num_fields = 4
N = 20
n_levels = 2
alphamin = 0.
alphamax = 2.0
betamin = 0.2
betamax = 0.8
nalpha = 80
nbeta = nalpha
# set directories
outdir = './plots/'
datadir = './LN-data/'
marg_dirs = ['/Users/lucas/Datasets/imnn-ln-delfi/posteriors_field_%d_90_45/'%(i+1) for i in range(num_fields)]
# load in field data
field_data = [np.load(datadir + 'toy_LN_field_90_45_%d.npy'%(i+1)) for i in range(num_fields)]
# load in all marginals
# for BHM
BHM_posts = [np.load(marg_dirs[i] + 'post_chains_bhm.npy') for i in range(num_fields)]
# for all MAFs
MAF_posts = [np.load(marg_dirs[i] + 'post_chains_maf_super.npy') for i in range(num_fields)]
#MAF_posts[2] = np.load('/Users/lucas/Datasets/imnn-ln-delfi/posteriors_field_3_90_45_new/' + 'post_chains_maf_super.npy')
# pull in fisher at target
fisher_analytic = np.load(marg_dirs[1] + 'fisher_analytic.npy')
# pull in MAF training histories
train_losses_all = [np.load(marg_dirs[i] + 'maf_train_losses.npy') for i in range(num_fields)]
val_losses_all = [np.load(marg_dirs[i] + 'maf_val_losses.npy') for i in range(num_fields)]
def triangle_plot(samples=None, weights=None, truths=None,
                  savefig=False, filename=None, names=None, labels=None,
                  ranges=None, fontsize=14, legend_labels=None):
    # Build MCSamples objects, with per-sample weights if provided.
    if weights is None:
        mc_samples = [getdist.MCSamples(samples=samples[i], weights=None, names=names,
                                        labels=labels, ranges=ranges)
                      for i in range(len(samples))]
    else:
        mc_samples = [getdist.MCSamples(samples=samples[i], weights=weights[i], names=names,
                                        labels=labels, ranges=ranges)
                      for i in range(len(samples))]
    # Triangle plot
    plt.close()
    with matplotlib.rc_context():
        g = plots.getSubplotPlotter(width_inch=12)
        g.settings.figure_legend_frame = False
        g.settings.alpha_filled_add = 0.6
        g.settings.axes_fontsize = fontsize
        g.settings.legend_fontsize = fontsize
        g.settings.lab_fontsize = fontsize
        g.triangle_plot(mc_samples, filled_compare=True, normalized=True,
                        legend_labels=legend_labels)
        if truths is not None:
            # Mark the true parameter values on each off-diagonal panel.
            for column in range(0, len(samples[0][0, :]) - 1):
                for row in range(column + 1, len(samples[0][0, :])):
                    ax = g.subplots[row, column]
                    for t in range(len(truths)):
                        ax.scatter(np.array([truths[t][column]]),
                                   np.array([truths[t][row]]),
                                   marker='x', color='black')
        plt.subplots_adjust(hspace=0, wspace=0)
        if savefig:
            plt.savefig(filename)
        plt.show()
        plt.close()
alphas=np.linspace(alphamin,alphamax,nalpha)
betas=np.linspace(betamin,betamax,nbeta)
levs = getdist.densities.getContourLevels(MAF_posts[0])
dataid = 0
colors = ['#52aae7', '#FF8D33']
legend_labels = ['DELFI + IMNN', 'BHM evaluation']
bhm_mcsamples = getdist.MCSamples(samples=np.array(BHM_posts[dataid]),
names=['alpha', 'beta'],
labels=['\\alpha', '\\beta'])
delfi_mcsamples = getdist.MCSamples(samples=np.array(MAF_posts[dataid]),
names=['alpha', 'beta'],
labels=['\\alpha', '\\beta'])
mcsamples = [delfi_mcsamples, bhm_mcsamples]
#fig,ax = plt.subplots(ncols=2, nrows=2)
g = plots.get_subplot_plotter(subplot_size=3)
g.triangle_plot(mcsamples, filled=True, legend_labels=legend_labels,
contour_colors=colors)
# plot_contours(-analytic_F_target, ax=g.subplots[1,0], pos=np.array([target["α"], target["β"]]),
# set_lims=False, color='k', alpha=0.7)
#patch1 = mpatches.Patch(color='k', alpha=0.7, label='Analytic Fisher at target')
#plt.legend(handles=[patch1], bbox_to_anchor=(0.627, 0.03))
θ_target = np.array([0.9, 0.45])
g.subplots[1, 0].axhline(θ_target[1], color='gray', linestyle='--', lw=0.8)
g.subplots[1, 0].axvline(θ_target[0], color='gray', linestyle='--', lw=0.8)
params = [r"$\alpha$", r"$\beta$"]
corner_colors = [None, None, 'k']
Finv_analytic = (-np.linalg.inv(fisher_analytic))
dataid = 3
c = ChainConsumer()
# c.configure(color_params="$z$")
c.add_chain(MAF_posts[dataid], parameters=params, name='DELFI + IMNN', color=corner_colors[0])
c.add_chain(BHM_posts[dataid], parameters=params, name='BHM', color=corner_colors[1])
c.add_covariance(θ_target, -Finv_analytic, parameters=params, name="Analytic Fisher", color=corner_colors[2])
c.configure(linestyles=["-", "-", "--"], linewidths=[1.0, 1.0, 1.0,],
shade=[True, True, False], shade_alpha=[0.7, 0.6, 0.],
tick_font_size=8,
legend_kwargs={"loc": "upper left", "fontsize": 8},
legend_color_text=False, legend_location=(0, 0))
fig = c.plotter.plot(figsize="column", truth=[0.90, 0.45], filename=outdir + 'field_%d_inference_comp'%(dataid + 1))
COLOR = 'white'
plt.rcParams['text.color'] = COLOR
plt.rcParams['axes.labelcolor'] = COLOR
plt.rcParams['xtick.color'] = COLOR
plt.rcParams['ytick.color'] = COLOR
θ_target = np.array([0.9, 0.45])
params = [r"$\alpha$", r"$\beta$"]
corner_colors = [None, None, 'k']
Finv_analytic = (-np.linalg.inv(fisher_analytic))
dataid = 3
c = ChainConsumer()
# c.configure(color_params="$z$")
c.add_chain(MAF_posts[dataid], parameters=params, name='DELFI + IMNN', color=corner_colors[0])
c.add_chain(BHM_posts[dataid], parameters=params, name='BHM', color=corner_colors[1])
c.add_covariance(θ_target, -Finv_analytic, parameters=params, name="Analytic Fisher", color=corner_colors[2])
c.configure(linestyles=["-", "-", "-"], linewidths=[1.0, 1.0, 1.0,],
shade=[True, True, False], shade_alpha=[0.9, 0.6, 0.],
tick_font_size=8,
legend_kwargs={"loc": "upper left", "fontsize": 8},
legend_color_text=True, legend_location=(0, 0))
fig = c.plotter.plot(figsize="column", truth=[0.90, 0.45], filename=outdir + 'white_field_%d_inference_comp'%(dataid + 1))
# do all fields' 2D plots
fig,ax = plt.subplots(nrows=2, ncols=4, figsize=(7.058, 3.41*1.), #figsize=(25, 13.5)) #
gridspec_kw={'height_ratios': [1, 1], 'width_ratios':[1,1,1,1]})
latexify(3.41*2)
for i in range(num_fields):
    if i == 0:
        im = ax[0, i].imshow(field_data[i].reshape(N, N), cmap='viridis')
        # vmin=0, vmax=6, interpolation='spline16'
    else:
        ax[0, i].imshow(field_data[i].reshape(N, N), cmap='viridis')
    ax[0, i].set_xticks([])
    ax[0, i].set_yticks([])
    divider = make_axes_locatable(ax[0, i])
    cax = divider.append_axes('bottom', size='5%', pad=0.05)
    fig.colorbar(im, cax=cax, orientation='horizontal')
    cs = ChainConsumer()
    cs.add_chain(MAF_posts[i], parameters=params, name='DELFI + IMNN', color=corner_colors[0])
    cs.add_chain(BHM_posts[i], parameters=params, name='BHM', color=corner_colors[1])
    cs.add_covariance(θ_target, -Finv_analytic, parameters=params, name="Analytic Fisher",
                      color=corner_colors[2])
    cs.configure(linestyles=["-", "-", "-"], linewidths=[1.0, 1.0, 1.0],
                 shade=[True, True, False], shade_alpha=[0.7, 0.6, 0.], tick_font_size=8)
    cs.plotter.plot_contour(ax[1, i], r"$\alpha$", r"$\beta$")
    ax[1, i].axvline(θ_target[0], linestyle=':', linewidth=1)
    ax[1, i].axhline(θ_target[1], linestyle=':', linewidth=1)
    ax[1, i].set_xlabel(r'$\alpha$', fontsize=8)
    ax[1, i].set_ylabel(r'$\beta$', fontsize=8)
    ax[0, i].set_title('field %d' % (i + 1), fontsize=8)
    ax[1, i].set_ylim([0.3, 0.65])
    ax[1, i].set_xlim([0.65, 1.15])
# Off-screen proxy line used only for the legend:
line1, = ax[1, i].plot(np.ones(1) * -45, np.ones(1) * -45, linestyle='solid', color='k',
                       label="Analytic Fisher Contours")
patch1 = mpatches.Patch(color=colors[0][1], label='DELFI + IMNN implicit likelihood inference')
patch2 = mpatches.Patch(color=colors[2][1], label='full-field Bayesian Hierarchical Model')
#patch3 = mpatches.Patch(color=colors[1][1], label='Full field, data assimilation')
fig.legend(handles=[patch1,patch2, line1],bbox_to_anchor=(0.77, 0.12), fontsize=8, ncol=2, frameon=False,)
plt.subplots_adjust(wspace=0.45, hspace=0.2, bottom=0.17)
#plt.tight_layout()
#ax1 = plt.subplot(111)
plt.savefig(outdir + 'four-LN-field-comparison', dpi=800, bbox_inches='tight')
from scipy.stats import multivariate_normal as mv
data = mv.rvs(mean=[5, 6], cov=[[1, 0.9], [0.9, 1]], size=10000)
fig, axes = plt.subplots(nrows=2, figsize=(4, 6), sharex=True)
axes[0].scatter(data[:, 0], data[:, 1], s=1, alpha=0.1)
c = ChainConsumer()
c.add_chain(data, parameters=["a", "b"])
c.configure(linestyles=["-", "-", "-"], usetex=False, linewidths=[1.0, 1.0, 1.0],
shade=[True, True, False], shade_alpha=[0.7, 0.6, 0.], tick_font_size=8)
c.plotter.plot_contour(axes[1], "a", "b")
for ax in axes:
    ax.axvline(5)
    ax.axhline(6)
F_IMNN = np.array([[ 338.58255, -355.37054],
[-355.37054, 1134.3672 ]])
Finv_IMNN = np.linalg.inv(F_IMNN)
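# A common use of the inverse Fisher matrix above (a standard forecasting
# sketch, not part of the original analysis): the Gaussian-approximation
# 1-sigma parameter errors are the square roots of its diagonal.
import numpy as np
_F = np.array([[338.58255, -355.37054],
               [-355.37054, 1134.3672]])
_Finv = np.linalg.inv(_F)
_sigmas = np.sqrt(np.diag(_Finv))
assert _sigmas.shape == (2,) and np.all(_sigmas > 0)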
np.linalg.det(fisher_analytic)
train_losses_all[0].shape
sns.set()
#sns.set_style('darkgrid')
fig,axs = plt.subplots(nrows=1, ncols=4, figsize=(7.058,2.5))
dataid = 2
train_losses = train_losses_all[dataid]
val_losses = val_losses_all[dataid]
for m in range(4):
    ax = axs[m]
    ax.plot(np.array(train_losses).T[m], label='train')
    ax.plot(np.array(val_losses).T[m], label='val')
    ax.set_ylim(-4.3, 1)
    if m == 0:
        ax.set_ylabel(r'$p(\textbf{x}\ |\ {\vartheta}; w)$')
    if m == 3:
        ax.legend()
plt.subplots_adjust(wspace=0.1, hspace=0.17, bottom=0.17)
fig.text(0.5, 0.018, r'\# epochs', ha='center')
plt.tight_layout()
plt.savefig(outdir + 'maf-training', dpi=400)
#import seaborn as sns
%matplotlib inline
#fig,axs = plt.subplots(nrows=1, ncols=4)
dataid = 2
train_losses = train_losses_all[dataid]
val_losses = val_losses_all[dataid]
sns.set()
plt.figure(figsize=(7.058,2.5))
plt.subplot(141)
plt.plot(np.array(train_losses).T[0], label='train')
plt.plot(np.array(val_losses).T[0], label='val')
plt.ylabel(r'$p(t\ |\ \vartheta; w)$')
plt.xlabel(r'\# epochs')
plt.ylim(-4.3, 1)
plt.subplot(142)
plt.plot(np.array(train_losses).T[1], label='train')
plt.plot(np.array(val_losses).T[1], label='val')
plt.xlabel(r'\# epochs')
#plt.ylabel(r'$p(t\ |\ \vartheta; w)$')
plt.ylim(-4.3, 1)
plt.yticks([])
plt.subplot(143)
plt.plot(np.array(train_losses).T[2], label='train')
plt.plot(np.array(val_losses).T[2], label='val')
plt.xlabel(r'\# epochs')
#plt.ylabel(r'$p(t\ |\ \vartheta; w)$')
plt.ylim(-4.3, 1)
plt.yticks([])
#plt.legend()
plt.subplot(144)
plt.plot(np.array(train_losses).T[3], label='train')
plt.plot(np.array(val_losses).T[3], label='val')
plt.xlabel(r'\# epochs')
#plt.ylabel(r'$p(t\ |\ \vartheta; w)$')
plt.yticks([])
plt.ylim(-4.3, 1)
plt.legend()
plt.subplots_adjust(wspace=0.1, hspace=0.17, bottom=0.17)
#plt.text(0.5, 0.04, r'\# epochs')
plt.tight_layout()
plt.savefig(outdir + 'maf-training', dpi=400)
plt.show()
import cloudpickle as pickle
def save_obj(obj, name):
    with open(name + '.pkl', 'wb') as f:
        pickle.dump(obj, f)

def load_obj(name):
    with open(name + '.pkl', 'rb') as f:
        return pickle.load(f)
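# Minimal round-trip sketch of the helpers above, written with the
# standard-library pickle so it runs even without cloudpickle installed
# (cloudpickle exposes the same dump/load interface):
import os
import pickle as _stdlib_pickle
import tempfile

_name = os.path.join(tempfile.gettempdir(), 'delfi_demo')
with open(_name + '.pkl', 'wb') as _f:
    _stdlib_pickle.dump({'theta': [0.9, 0.45]}, _f)
with open(_name + '.pkl', 'rb') as _f:
    _restored = _stdlib_pickle.load(_f)
assert _restored['theta'] == [0.9, 0.45]
os.remove(_name + '.pkl')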
```
# load results for the cosmo field analysis
```
mylist = [1,2,3,4]
mylist.insert(2, 9)
mylist
DELFIs = [load_obj('/Users/lucas/Datasets/imnn-ln-delfi/final_cosmo_analysis/delfi_cosmo_field_%d'%(i+1))
for i in [0,1,3]]
third_delfi = load_obj('/Users/lucas/Datasets/imnn-ln-delfi/cosmo_analysis/run_3/delfi_cosmo_field_3')
DELFIs.insert(2, third_delfi)
DELFIs = [load_obj('/Users/lucas/Datasets/imnn-ln-delfi/cosmo_analysis/run_3/delfi_cosmo_field_%d'%(i+1))
for i in range(4)]
cosmo_estimates = [[0.2885032, 0.7778493], [0.3440842, 0.7171489],
[0.27968222, 0.7649215 ], [0.29220793, 0.7358161 ]]
for i, d in enumerate(DELFIs):
    d['F_IMNN'] = np.array([[2991.7769, 1740.3038], [1740.304, 1120.6669]])
    d['estimates'] = cosmo_estimates[i]
    d['θ_target'] = np.array([0.2589, 0.8159])
    d['θ_fid_new'] = np.array([0.142019, 0.80442715])
pst_dir = '/Users/lucas/Datasets/imnn-ln-delfi/final_cosmo_analysis/'
ABC_posts = [np.load(pst_dir + 'ABC_accepted_field_%d.npy'%(i+1)) for i in range(4)]
ABC_dists = [np.load(pst_dir + 'ABC_distances_field_%d.npy'%(i+1)) for i in range(4)]
delfi['super_post'].shape
# do all fields' 2D plots
# fig,ax = plt.subplots(nrows=2, ncols=4, figsize=(2*7.058, 3.41*2.), #figsize=(25, 13.5)) #
# gridspec_kw={'height_ratios': [1, 1], 'width_ratios':[1,1,1,1]})
fig,ax = plt.subplots(nrows=2, ncols=4, figsize=(7.058, 3.41*1.5)) #figsize=(25, 13.5)) #
#gridspec_kw={'height_ratios': [1, 1], 'width_ratios':[1,1,1,1]})
#latexify(3.41*2)
for i, delfi in enumerate(DELFIs):
    if i == 0:
        im = ax[0, i].imshow(delfi['target_data'].reshape(128, 128),
                             cmap='viridis', vmin=0, vmax=6,
                             interpolation='spline16')
    else:
        ax[0, i].imshow(delfi['target_data'].reshape(128, 128),
                        cmap='viridis', vmin=0, vmax=6,
                        interpolation='spline16')
    ax[0, i].set_xticks([])
    ax[0, i].set_yticks([])
    divider = make_axes_locatable(ax[0, i])
    cax = divider.append_axes('bottom', size='5%', pad=0.05)
    fig.colorbar(im, cax=cax, orientation='horizontal')
    cs = ChainConsumer()
    cs.add_chain(delfi['super_post'][500::90], parameters=params, name='DELFI + IMNN')
    # add the Gaussian-approximation (GA) covariance from the IMNN Fisher matrix
    cs.add_covariance(delfi['estimates'], np.linalg.inv(delfi['F_IMNN']),
                      parameters=params, name="GA Estimate", color='k')
    cs.configure(smooth=1.5, linestyles=["-", "-"], linewidths=[1.0, 1.0],
                 shade=[True, False], shade_alpha=[0.5, 0.0], tick_font_size=8)
    cs.plotter.plot_contour(ax[1, i], r"$\Omega_c$", r"$\sigma_8$")
    abc_handle = ax[1, i].scatter(ABC_posts[i][:, 0], ABC_posts[i][:, 1], s=8, alpha=0.6,
                                  c=np.log(ABC_dists[i]), cmap='inferno',
                                  edgecolors=None, linewidths=0, marker='.', zorder=10,
                                  label="ABC Estimate")
    ax[1, i].axvline(delfi['θ_target'][0], linestyle=':', linewidth=1)
    ax[1, i].axhline(delfi['θ_target'][1], linestyle=':', linewidth=1)
    point1 = ax[1, i].scatter(0.6, 0.6, marker='o', s=30, alpha=1., label=r'$\theta_{\rm fid,1}$',
                              facecolors='none', edgecolors='k', linewidth=0.7)
    point2 = ax[1, i].scatter(delfi['θ_fid_new'][0], delfi['θ_fid_new'][1],
                              marker='*', s=30, label=r'$\theta_{\rm fid,2}$',
                              facecolors='none', edgecolors='k', linewidth=0.7)
    ax[1, i].set_xlabel(r'$\Omega_c$', fontsize=8)
    ax[1, i].set_ylabel(r'$\sigma_8$', fontsize=8)
    ax[0, i].set_title('field %d' % (i + 1), fontsize=8)
    ax[1, i].set_ylim([0.35, 1.3])
    ax[1, i].set_xlim([0.0, 0.8])
# Off-screen proxy line used only for the legend:
line2, = ax[1, i].plot(np.ones(1) * -45, np.ones(1) * -45, linestyle='solid', color='k',
                       label="Gaussian Approximation Contours")
patch1 = mpatches.Patch(color=colors[0][1], label='DELFI + IMNN implicit likelihood inference')
patch2 = mpatches.Patch(color='orange', label=r'Approximate Bayesian Computation $\varepsilon = 0.05$', alpha=0.86)
fig.legend(handles=[patch1, patch2, line2, point1,point2],bbox_to_anchor=(0.68, 0.12), fontsize=8, ncol=2, frameon=False,)
plt.subplots_adjust(wspace=0.55, hspace=0.1, bottom=0.21)
plt.savefig(outdir + 'new-four-cosmo-field-comparison', dpi=1200, bbox_inches='tight', rasterized=True)
len(delfi['super_post'])
cosmo_estimates[0]
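# Minimal rejection-ABC sketch (illustrative only; the model, distance, and ε
# below are hypothetical toy choices, not the IMNN-summary pipeline used
# above): draw parameters from the prior, simulate a summary for each, and
# keep the draws whose summary falls within ε of the observed one.
import numpy as np
_rng = np.random.default_rng(0)
_x_obs = 0.5                                      # observed summary
_draws = _rng.uniform(0.0, 1.0, size=5000)        # prior draws
_sims = _draws + _rng.normal(0.0, 0.05, 5000)     # simulator: noisy identity
_eps = 0.05
_accepted = _draws[np.abs(_sims - _x_obs) < _eps]
assert 0 < _accepted.size < _draws.size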
```
# make animation
```
dr = '/Users/lucas/Datasets/imnn-ln-delfi/cosmo_analysis/animation/posts/'
posts_dat_1 = [
np.concatenate(
np.load(dr + 'delfi_1_posterior_iter_%d.npy'%(i)),
axis=0) for i in range(8)]
posts_dat_2 = [
np.concatenate(
np.load(dr + 'delfi_2_posterior_iter_%d.npy'%(i)),
axis=0) for i in range(8)]
display_posts = [
np.load(dr + 'delfi_1_posterior_iter_%d.npy'%(i)) for i in range(8)]
field = 0
itr = 0
_posts = [posts_dat_1, posts_dat_2]
COLOR = 'k'
plt.rcParams['text.color'] = COLOR
plt.rcParams['axes.labelcolor'] = COLOR
plt.rcParams['xtick.color'] = COLOR
plt.rcParams['ytick.color'] = COLOR
params = [r"$\Omega_c$", r"$\sigma_8$"]
corner_colors = [None, None, 'k']
Finv_analytic = (-np.linalg.inv(fisher_analytic))
dataid = 3
c = ChainConsumer()
# c.configure(color_params="$z$")
c.add_chain(_posts[field][itr][::65], parameters=params, name='DELFI + IMNN', color=corner_colors[0])
#c.add_chain(BHM_posts[dataid], parameters=params, name='BHM', color=corner_colors[1])
#c.add_covariance(θ_target, -Finv_analytic, parameters=params, name="Analytic Fisher", color=corner_colors[2])
c.configure(linestyles=["-"], linewidths=[1.0],
shade=[True,], shade_alpha=[0.9],
tick_font_size=8,
legend_kwargs={"loc": "upper left", "fontsize": 8},
legend_color_text=False, legend_location=(0, 0))
fig = c.plotter.plot(figsize="column", truth=delfi['θ_target'], extents={r"$\Omega_c$": (-0.1,0.9),
r"$\sigma_8$": (0.1,1.6)})
fig.text(0.35, 0.6, r'\# simulations needed: %d'%((1)*500), fontsize=11.5)
len(display_posts[0])
# plot two delfi models over time
fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(7.058, 3.41*1.)) #figsize=(25, 13.5)) #
#gridspec_kw={'height_ratios': [1, 1], 'width_ratios':[1,1,1,1]})
post_0 = display_posts[0]
for i, pst in enumerate(post_0):
    cs = ChainConsumer()
    cs.add_chain(pst[::20], parameters=params, name='DELFI + IMNN')
    cs.configure(smooth=1.0, linestyles=["-"], linewidths=[1.0],
                 shade=[True], shade_alpha=[0.7], tick_font_size=8)
    cs.plotter.plot_contour(ax[i], r"$\Omega_c$", r"$\sigma_8$")
    ax[i].axvline(delfi['θ_target'][0], linestyle=':', linewidth=1)
    ax[i].axhline(delfi['θ_target'][1], linestyle=':', linewidth=1)
    ax[i].set_xlabel(r'$\Omega_c$', fontsize=15)
    ax[i].set_ylabel(r'$\sigma_8$', fontsize=15)
    ax[i].set_xlim(-0.1, 0.9)
    ax[i].set_ylim(0.1, 1.6)
plt.subplots_adjust(wspace=0.35, hspace=0.20, bottom=0.21)
plt.text(-0.8, -0.4, r"\# simulations used: %d"%((0+1)*500), fontsize=15)
plt.show()
fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(7.058, 3.41*1.))
plt.subplots_adjust(wspace=0.45, hspace=0.20, bottom=0.21)
# fig.patch.set_alpha(0.)
prunes = [1, 35, 45, 45, 45, 65, 65, 65]
params = [r"$\Omega_c$", r"$\sigma_8$"]
COLOR = 'k'
plt.rcParams['text.color'] = COLOR
plt.rcParams['axes.labelcolor'] = COLOR
plt.rcParams['xtick.color'] = COLOR
plt.rcParams['ytick.color'] = COLOR
# initialization function: plot the background of each frame
def init():
    # fig.patch.set_alpha(0.0)
    # for a in fig.axes:
    #     a.patch.set_alpha(0.0)
    return (fig,)
def animate_two(t):
    post = display_posts[t]
    for i, pst in enumerate(post):
        ax[i].clear()
        cs = ChainConsumer()
        cs.add_chain(pst[::prunes[t]], parameters=params, name='DELFI + IMNN')
        cs.configure(smooth=1.0, linestyles=["-"], linewidths=[1.0],
                     shade=[True], shade_alpha=[0.7], tick_font_size=8)
        cs.plotter.plot_contour(ax[i], r"$\Omega_c$", r"$\sigma_8$")
        ax[i].axvline(delfi['θ_target'][0], linestyle=':', linewidth=1)
        ax[i].axhline(delfi['θ_target'][1], linestyle=':', linewidth=1)
        ax[i].set_xlabel(r'$\Omega_c$', fontsize=15, color=COLOR)
        ax[i].set_ylabel(r'$\sigma_8$', fontsize=15, color=COLOR)
        ax[i].set_xlim(-0.1, 0.9)
        ax[i].set_ylim(0.1, 1.6)
        ax[i].set_title('MAF %d' % (i + 1), fontsize=15, color=COLOR)
    ax[0].text(0.5, -0.3, r"\# simulations used: %d" % ((t + 1) * 500), fontsize=15)
    return (fig,)
animator = animation.FuncAnimation(fig, animate_two, init_func=init,
frames=8, interval=1000, blit=False, repeat_delay=19000)
animator
# Set up formatting for the movie files 'imagemagick'
Writer = animation.writers['ffmpeg']
writer = Writer(fps=15, metadata=dict(artist='Me'), bitrate=1800)
animator.save('/Users/lucas/Documents/IAP/imnn-fields/delfi-whitefont.gif',
              writer='imagemagick',
              fps=2,
              dpi=400,
              codec="png")
# bitrate=-1,
# extra_args={'transparent': True, 'facecolor': 'none'},
# savefig_kwargs={'transparent': True, 'facecolor': 'none'}
ax[0].xaxis
%matplotlib inline
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=8, interval=250, blit=True, repeat_delay=1000)
from matplotlib import animation, rc
from IPython.display import HTML, Image
plt.rcParams['animation.ffmpeg_path'] = '/usr/local/bin/ffmpeg'
rc('animation', html='html5')
# initialization function: plot the background of each frame
def init():
    return (fig,)
# initialize
fig = plt.figure()
field = 0
itr = 7
_posts = [posts_dat_1, posts_dat_2]
COLOR = 'k'
plt.rcParams['text.color'] = COLOR
plt.rcParams['axes.labelcolor'] = COLOR
plt.rcParams['xtick.color'] = COLOR
plt.rcParams['ytick.color'] = COLOR
params = [r"$\Omega_c$", r"$\sigma_8$"]
corner_colors = [None, None, 'k']
Finv_analytic = (-np.linalg.inv(fisher_analytic))
dataid = 3
c = ChainConsumer()
# c.configure(color_params="$z$")
c.add_chain(_posts[field][itr][::70], parameters=params, name='DELFI + IMNN', color=corner_colors[0])
#c.add_chain(BHM_posts[dataid], parameters=params, name='BHM', color=corner_colors[1])
#c.add_covariance(θ_target, -Finv_analytic, parameters=params, name="Analytic Fisher", color=corner_colors[2])
c.configure(linestyles=["-"], linewidths=[1.0],
shade=[True,], shade_alpha=[0.9],
tick_font_size=8,
legend_kwargs={"loc": "upper left", "fontsize": 8},
legend_color_text=False, legend_location=(0, 0))
c.plotter.plot(figsize="column", truth=delfi['θ_target'], extents={r"$\Omega_c$": (-0.1,0.9),
r"$\sigma_8$": (0.1,1.6)})
#plt.text(0.35, 0.6, r'\# simulations needed: %d'%((i+1)*500), fontsize=11.5)
fig
# data
#fig = plt.figure()
def animate(i):
    # fixed: fig.axes yields Axes objects, not (index, Axes) pairs
    for a in fig.axes:
        a.clear()
    field = 0
    itr = i
    _posts = [posts_dat_1, posts_dat_2]
    COLOR = 'k'
    plt.rcParams['text.color'] = COLOR
    plt.rcParams['axes.labelcolor'] = COLOR
    plt.rcParams['xtick.color'] = COLOR
    plt.rcParams['ytick.color'] = COLOR
    params = [r"$\Omega_c$", r"$\sigma_8$"]
    corner_colors = [None, None, 'k']
    c = ChainConsumer()
    c.add_chain(_posts[field][itr][::65], parameters=params, name='DELFI + IMNN',
                color=corner_colors[0])
    c.configure(linestyles=["-"], linewidths=[1.0],
                shade=[True], shade_alpha=[0.9],
                tick_font_size=8,
                legend_kwargs={"loc": "upper left", "fontsize": 8},
                legend_color_text=False, legend_location=(0, 0))
    c.plotter.plot(figsize="column", truth=delfi['θ_target'],
                   extents={r"$\Omega_c$": (-0.1, 0.9), r"$\sigma_8$": (0.1, 1.6)})
    plt.text(0.35, 0.6, r'\# simulations needed: %d' % ((i + 1) * 500), fontsize=11.5)
    return (fig,)
%matplotlib inline
anim = animation.FuncAnimation(fig, animate,
frames=8, interval=250, blit=True, repeat_delay=1000)
anim
# data
xval_c1 = pca3[20].T #np.squeeze(y_preds[1][2]).transpose()
yval_c1 = np.squeeze(cosmo[20]).transpose()
y_c1_pred = np.squeeze(nn_preds[20]).transpose()
# animation function; called sequentially, once per frame
def animate(i):
    pick = i
    # comoving-resolution calculation for this frequency channel ----
    nu = nu_arr[pick]
    # from astropy.cosmology import FlatLambdaCDM
    # CL = FlatLambdaCDM(H0=67, Om0=0.315, Ob0=0.049, Tcmb0=2.725)
    # nu_21 = 1420.4
    # z = (nu_21 / nu) - 1
    # d_A = CL.angular_diameter_distance(z=z)
    # res_rad = hp.pixelfunc.nside2resol(256, arcmin=True) * 0.000290888  # to radians
    # res_mpc = res_rad * d_A
    res_mpc = res_in_mpc[i]
    # ----
    # plt.style.use('dark_background')
    fig.suptitle(r'$\nu = $%03d' % (nu) + r' $\rm MHz$')
    # set data, rescaling the axes with the channel's comoving resolution
    ax1.set(xlim=(0, 64 * res_mpc), ylim=(0, 64 * res_mpc))
    im1.set_data(yval_c1[i])
    ax2.set(xlim=(0, 64 * res_mpc), ylim=(0, 64 * res_mpc))
    im2.set_data(xval_c1[i])
    ax3.set(xlim=(0, 64 * res_mpc), ylim=(0, 64 * res_mpc))
    im3.set_data(y_c1_pred[i])
    return (im1, im2, im3)
```
```
#!/usr/bin/env python
# coding=utf-8
# Copyright 2020 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for masked language modeling (BERT, ALBERT, RoBERTa...) on a text file or a dataset.
Here is the full list of checkpoints on the hub that can be fine-tuned by this script:
https://huggingface.co/models?filter=masked-lm
"""
# You can also adapt this script on your own masked language modeling task. Pointers for this are left as comments.
import logging
import math
import os
import sys
from dataclasses import dataclass, field
from typing import Optional
import datasets
from datasets import load_dataset
from datasets import DatasetDict
import transformers
from transformers import (
CONFIG_MAPPING,
MODEL_FOR_MASKED_LM_MAPPING,
AutoConfig,
AutoModelForMaskedLM,
AutoTokenizer,
DataCollatorForLanguageModeling,
HfArgumentParser,
Trainer,
TrainingArguments,
set_seed,
)
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import check_min_version
from transformers.utils.versions import require_version
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
# check_min_version("4.12.0.dev0")
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")
logger = logging.getLogger(__name__)
MODEL_CONFIG_CLASSES = list(MODEL_FOR_MASKED_LM_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
    """

    model_name_or_path: Optional[str] = field(
        default=None,
        metadata={
            "help": "The model checkpoint for weights initialization. "
            "Don't set if you want to train a model from scratch."
        },
    )
    model_type: Optional[str] = field(
        default=None,
        metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
    )
    config_overrides: Optional[str] = field(
        default=None,
        metadata={
            "help": "Override some existing default config settings when a model is trained from scratch. Example: "
            "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"
        },
    )
    config_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
    )
    tokenizer_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
    )
    cache_dir: Optional[str] = field(
        default=None,
        metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
    )
    use_fast_tokenizer: bool = field(
        default=True,
        metadata={"help": "Whether to use one of the fast tokenizers (backed by the tokenizers library) or not."},
    )
    model_revision: str = field(
        default="main",
        metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
    )
    use_auth_token: bool = field(
        default=False,
        metadata={
            "help": "Will use the token generated when running `transformers-cli login` (necessary to use this script "
            "with private models)."
        },
    )

    def __post_init__(self):
        if self.config_overrides is not None and (self.config_name is not None or self.model_name_or_path is not None):
            raise ValueError(
                "--config_overrides can't be used in combination with --config_name or --model_name_or_path"
            )
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
dataset_dir: Optional[str] = field(
default=None, metadata={"help": "The directory of the dataset to use."}
)
dataset_name: Optional[str] = field(
default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
)
dataset_config_name: Optional[str] = field(
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
)
train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
validation_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
validation_split_percentage: Optional[int] = field(
default=5,
metadata={
"help": "The percentage of the train set used as validation set in case there's no validation split"
},
)
max_seq_length: Optional[int] = field(
default=None,
metadata={
"help": "The maximum total input sequence length after tokenization. Sequences longer "
"than this will be truncated."
},
)
preprocessing_num_workers: Optional[int] = field(
default=None,
metadata={"help": "The number of processes to use for the preprocessing."},
)
mlm_probability: float = field(
default=0.15, metadata={"help": "Ratio of tokens to mask for masked language modeling loss"}
)
line_by_line: bool = field(
default=False,
metadata={"help": "Whether distinct lines of text in the dataset are to be handled as distinct sequences."},
)
pad_to_max_length: bool = field(
default=False,
metadata={
"help": "Whether to pad all samples to `max_seq_length`. "
"If False, will pad the samples dynamically when batching to the maximum length in the batch."
},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
},
)
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
# We only use this script to evaluate distilled models.
assert not training_args.do_train
assert training_args.do_eval
os.environ["WANDB_DISABLED"] = "true"
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()
# Log on each process the small summary:
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
+ f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
# Set the verbosity to info of the Transformers logger (on main process only):
logger.info(f"Training/evaluation parameters {training_args}")
# Detecting last checkpoint.
last_checkpoint = None
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
last_checkpoint = get_last_checkpoint(training_args.output_dir)
if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. "
"Use --overwrite_output_dir to overcome."
)
elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
logger.info(
f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
"the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
)
# Set seed before initializing model.
set_seed(training_args.seed)
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column. You can easily tweak this
# behavior (see below)
#
# In distributed training, the load_dataset function guarantees that only one local process can concurrently
# download the dataset.
if data_args.dataset_dir is not None:
# this is always the case!
logger.info(f"Loading dataset from disk at: {data_args.dataset_dir}")
raw_datasets = DatasetDict.load_from_disk(data_args.dataset_dir)
assert "validation" in raw_datasets.keys() or "test" in raw_datasets.keys()
elif data_args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
raw_datasets = load_dataset(
data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir
)
if "validation" not in raw_datasets.keys():
raw_datasets["validation"] = load_dataset(
data_args.dataset_name,
data_args.dataset_config_name,
split=f"train[:{data_args.validation_split_percentage}%]",
cache_dir=model_args.cache_dir,
)
raw_datasets["train"] = load_dataset(
data_args.dataset_name,
data_args.dataset_config_name,
split=f"train[{data_args.validation_split_percentage}%:]",
cache_dir=model_args.cache_dir,
)
else:
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
extension = data_args.train_file.split(".")[-1]
if data_args.validation_file is not None:
data_files["validation"] = data_args.validation_file
extension = data_args.validation_file.split(".")[-1]
if extension == "txt":
extension = "text"
raw_datasets = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir)
# If no validation data is there, validation_split_percentage will be used to divide the dataset.
if "validation" not in raw_datasets.keys():
raw_datasets["validation"] = load_dataset(
extension,
data_files=data_files,
split=f"train[:{data_args.validation_split_percentage}%]",
cache_dir=model_args.cache_dir,
)
raw_datasets["train"] = load_dataset(
extension,
data_files=data_files,
split=f"train[{data_args.validation_split_percentage}%:]",
cache_dir=model_args.cache_dir,
)
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.html.
# Load pretrained model and tokenizer
#
# Distributed training:
# The .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
config_kwargs = {
"cache_dir": model_args.cache_dir,
"revision": model_args.model_revision,
"use_auth_token": True if model_args.use_auth_token else None,
}
if model_args.config_name:
config = AutoConfig.from_pretrained(model_args.config_name, **config_kwargs)
elif model_args.model_name_or_path:
config = AutoConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs)
else:
config = CONFIG_MAPPING[model_args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if model_args.config_overrides is not None:
logger.info(f"Overriding config: {model_args.config_overrides}")
config.update_from_string(model_args.config_overrides)
tokenizer_kwargs = {
"cache_dir": model_args.cache_dir,
"use_fast": model_args.use_fast_tokenizer,
"revision": model_args.model_revision,
"use_auth_token": True if model_args.use_auth_token else None,
}
if model_args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)
elif model_args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script."
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
if model_args.model_name_or_path:
model = AutoModelForMaskedLM.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
else:
logger.info("Training new model from scratch")
model = AutoModelForMaskedLM.from_config(config)
model.resize_token_embeddings(len(tokenizer))
logger.info(model)
# Preprocessing the datasets.
# First we tokenize all the texts.
if training_args.do_train:
column_names = raw_datasets["train"].column_names
else:
column_names = raw_datasets["validation"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]
if data_args.max_seq_length is None:
max_seq_length = tokenizer.model_max_length
if max_seq_length > 1024:
logger.warning(
f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). "
"Picking 1024 instead. You can change that default value by passing --max_seq_length xxx."
)
max_seq_length = 1024
else:
if data_args.max_seq_length > tokenizer.model_max_length:
logger.warning(
f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the"
f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}."
)
max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)
if data_args.line_by_line:
# When using line_by_line, we just tokenize each nonempty line.
padding = "max_length" if data_args.pad_to_max_length else False
def tokenize_function(examples):
# Remove empty lines
examples[text_column_name] = [
line for line in examples[text_column_name] if len(line) > 0 and not line.isspace()
]
return tokenizer(
examples[text_column_name],
padding=padding,
truncation=True,
max_length=max_seq_length,
# We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it
# receives the `special_tokens_mask`.
return_special_tokens_mask=True,
)
with training_args.main_process_first(desc="dataset map tokenization"):
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=[text_column_name],
load_from_cache_file=not data_args.overwrite_cache,
desc="Running tokenizer on dataset line_by_line",
)
else:
# Otherwise, we tokenize every text, then concatenate them together before splitting them in smaller parts.
# We use `return_special_tokens_mask=True` because DataCollatorForLanguageModeling (see below) is more
# efficient when it receives the `special_tokens_mask`.
def tokenize_function(examples):
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
with training_args.main_process_first(desc="dataset map tokenization"):
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
desc="Running tokenizer on every text in dataset",
)
# Main data processing function that will concatenate all texts from our dataset and generate chunks of
# max_seq_length.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= max_seq_length:
total_length = (total_length // max_seq_length) * max_seq_length
# Split by chunks of max_len.
result = {
k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
for k, t in concatenated_examples.items()
}
return result
# Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a
# remainder for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value
# might be slower to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
with training_args.main_process_first(desc="grouping texts together"):
tokenized_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=data_args.preprocessing_num_workers,
load_from_cache_file=not data_args.overwrite_cache,
desc=f"Grouping texts in chunks of {max_seq_length}",
)
if training_args.do_train:
if "train" not in tokenized_datasets:
raise ValueError("--do_train requires a train dataset")
train_dataset = tokenized_datasets["train"]
if data_args.max_train_samples is not None:
train_dataset = train_dataset.select(range(data_args.max_train_samples))
if training_args.do_eval:
if "validation" not in tokenized_datasets:
raise ValueError("--do_eval requires a validation dataset")
eval_dataset = tokenized_datasets["validation"]
if data_args.max_eval_samples is not None:
eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))
# Data collator
# This one will take care of randomly masking the tokens.
pad_to_multiple_of_8 = data_args.line_by_line and training_args.fp16 and not data_args.pad_to_max_length
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm_probability=data_args.mlm_probability,
pad_to_multiple_of=8 if pad_to_multiple_of_8 else None,
)
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
)
# Training
if training_args.do_train:
checkpoint = None
if training_args.resume_from_checkpoint is not None:
checkpoint = training_args.resume_from_checkpoint
elif last_checkpoint is not None:
checkpoint = last_checkpoint
train_result = trainer.train(resume_from_checkpoint=checkpoint)
trainer.save_model() # Saves the tokenizer too for easy upload
metrics = train_result.metrics
max_train_samples = (
data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
)
metrics["train_samples"] = min(max_train_samples, len(train_dataset))
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
# Evaluation
if training_args.do_eval:
logger.info("*** Evaluate ***")
metrics = trainer.evaluate()
max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))
try:
perplexity = math.exp(metrics["eval_loss"])
except OverflowError:
perplexity = float("inf")
metrics["perplexity"] = perplexity
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "fill-mask"}
if data_args.dataset_name is not None:
kwargs["dataset_tags"] = data_args.dataset_name
if data_args.dataset_config_name is not None:
kwargs["dataset_args"] = data_args.dataset_config_name
kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}"
else:
kwargs["dataset"] = data_args.dataset_name
if training_args.push_to_hub:
trainer.push_to_hub(**kwargs)
else:
trainer.create_model_card(**kwargs)
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
if __name__ == "__main__":
main()
```
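To make the chunking logic concrete, the `group_texts` function used in the script above can be exercised in isolation on toy data. This is only a sketch: the `examples` dict and the `max_seq_length` value below are invented for illustration.

```python
# Standalone sketch of the group_texts chunking: concatenate per-column lists,
# drop the remainder that does not fill a full chunk, and split into
# fixed-length chunks of max_seq_length.
max_seq_length = 4  # illustrative value, not the script's default


def group_texts(examples):
    # Concatenate all texts per column.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the small remainder at the end.
    if total_length >= max_seq_length:
        total_length = (total_length // max_seq_length) * max_seq_length
    # Split into chunks of max_seq_length.
    return {
        k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated.items()
    }


# Toy input mimicking what datasets.map passes with batched=True:
# a dict mapping column names to lists of lists.
examples = {"input_ids": [[1, 2, 3], [4, 5, 6, 7], [8, 9]]}
chunks = group_texts(examples)
print(chunks)  # {'input_ids': [[1, 2, 3, 4], [5, 6, 7, 8]]} -> remainder [9] is dropped
```

Since `datasets.map` with `batched=True` processes batches of 1,000 examples, this remainder-dropping happens once per batch, which is why a larger `batch_size` wastes slightly fewer tokens.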
<link rel="stylesheet" href="../../styles/theme_style.css">
<!--link rel="stylesheet" href="../../styles/header_style.css"-->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<table width="100%">
<tr>
<td id="image_td" width="15%" class="header_image_color_2"><div id="image_img"
class="header_image_2"></div></td>
<td class="header_text"> Signal Acquisition [OpenSignals] </td>
</tr>
</table>
<div id="flex-container">
<div id="diff_level" class="flex-item">
<strong>Difficulty Level:</strong> <span class="fa fa-star checked"></span>
<span class="fa fa-star"></span>
<span class="fa fa-star"></span>
<span class="fa fa-star"></span>
<span class="fa fa-star"></span>
</div>
<div id="tag" class="flex-item-tag">
<span id="tag_list">
<table id="tag_list_table">
<tr>
<td class="shield_left">Tags</td>
<td class="shield_right" id="tags">record☁acquire☁opensignals☁real-time</td>
</tr>
</table>
</span>
<!-- [OR] Visit https://img.shields.io in order to create a tag badge-->
</div>
</div>
<a href="https://www.biosignalsplux.com/en/software" target="_blank"><span class="color1"><strong>OpenSignals <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></span></a> is the PLUX's software dedicated to acquire, store and process physiological signals acquired with electronic devices developed and commercialized by the company (such as <a href="https://www.biosignalsplux.com/en/" target="_blank"><span class="color2"><strong>biosignalsplux <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></span></a> and <a href="https://bitalino.com/en/" target="_blank"><span class="color4"><strong>BITalino <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></span></a>).
Signal acquisition is a fundamental step that needs to be executed before every processing task. Without signals, there is no information to analyze and no knowledge to extract.
As previously mentioned, <span class="color1"><strong>OpenSignals</strong></span> ensures that anyone with a PLUX acquisition system can easily acquire physiological data through a desktop or mobile device.
So, in the current <span class="color4"><strong>Jupyter Notebook</strong></span> we will begin our introductory journey through <span class="color1"><strong>OpenSignals</strong></span>, explaining and demonstrating how signals can be acquired in real time.
<hr>
Before starting an acquisition, it is mandatory that your PLUX acquisition system (in our case <span class="color1"><strong>biosignalsplux</strong></span>) is paired with the computer.<br>A <span class="color4"><strong>Jupyter Notebook</strong></span> entitled <a href="../Connect/pairing_device.ipynb" target="_blank"><strong>"Pairing a Device at Windows 10 [biosignalsplux]" <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></a> is available to guide you through the pairing process between <span class="color1"><strong>biosignalsplux</strong></span> and your computer.
<p class="steps">0 - Execute steps 8 to 10 of <a href="../Connect/pairing_device.ipynb" target="_blank"><strong>"Pairing a Device at Windows 10 [biosignalsplux]" <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></strong></a>, in order to enable your device before starting the acquisition</p>
<p class="steps">1 - Click on the device box. A set of configurable options will appear</p>
<img src="../../images/record/record_data/opensignals_dev_click.gif">
<p class="steps">2 - For the current example we will pair an accelerometer (triaxial sensor). Three channels of our <span class="color1"><strong>biosignalsplux</strong></span> device are necessary for doing the acquisition <br>(Channel 1 >> Axis X, Channel 2 >> Axis Y, Channel 3 >> Axis Z)</p>
<img src="../../images/record/record_data/biosignalsplux_acc.png" width="50%">
<p class="steps">3 - Activate the used channels and select the respective sensor type </p>
<i>When clicking on our device we can see lots of options and components. Let's focus on the left section of the following image</i>
<p class="steps">3.1 - Activate the used channels</p>
To activate the channels that will be used during the acquisition, just click on the "Status Circle" of the respective channel row.
<img src="../../images/record/record_data/signal_acquisition_sel_chn.gif">
<p class="steps">3.2 - Select the respective sensor type for each channels (for our accelerometer example, we need to select option "XYZ")</p>
To select the desired sensor type, press the down arrow to open a list box containing the available options.
<img src="../../images/record/record_data/signal_acquisition_sel_type.gif">
<p class="steps">4 - Specify the desired sampling rate, taking into consideration the signal that you will acquire</p>
For more in-depth information about choosing a sampling rate, a <span class="color4"><strong>Jupyter Notebook</strong></span> is available in our <a href="../Record/sampling_rate_and_aliasing.ipynb" target="_blank">library <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>.
When using the accelerometer sensor, a minimum sampling rate of 100 Hz should be chosen (according to the Nyquist theorem, twice the maximum frequency of the sensor bandwidth | <a href="https://www.biosignalsplux.com/en/acc-accelerometer" target="_blank">0-50 Hz <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>)
<img src="../../images/record/record_data/signal_acquisition_sel_sr.gif">
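The Nyquist arithmetic above can be checked with a short numerical sketch. The 0-50 Hz bandwidth comes from the sensor specification linked above; the undersampled rate used below is an invented value for illustration.

```python
# Nyquist criterion: the sampling rate must be at least twice the highest
# frequency present in the signal, otherwise higher frequencies alias down.
f_max = 50.0                  # accelerometer bandwidth upper limit (Hz)
nyquist_rate = 2 * f_max      # minimum sampling rate to avoid aliasing
print(nyquist_rate)           # 100.0

# Illustration of what goes wrong below the Nyquist rate: a 50 Hz component
# sampled at a (hypothetical) 40 Hz rate folds to the nearest multiple of fs.
fs_bad = 40.0
alias = abs(f_max - fs_bad * round(f_max / fs_bad))
print(alias)                  # 10.0 -> a 50 Hz component would masquerade as 10 Hz
```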
<p class="steps">5 - Select the device resolution</p>
<i>Higher resolutions ensure more precise data acquisition, but there is also the risk of capturing more <a href="http://www.lionprecision.com/tech-library/technotes/article-0010-sensor-resolution.html" target="_blank">noise <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a>!</i>
<img src="../../images/record/record_data/signal_acquisition_sel_res.gif">
<p class="steps">6 - There is only one remaining step left. Just press "Record" button to start acquiring your accelerometer data</p>
<img src="../../images/record/record_data/signal_acquisition_rec.png">
The previous steps ensure that any user of a PLUX acquisition system can start an acquisition, an essential step for beginning the experimental stage of any research.
To conclude, here is a short video of a real-time accelerometer acquisition made with <span class="color1"><strong>OpenSignals</strong></span>!
<video id="video_1" muted loop src="../../images/record/record_data/signal_acquisition_video.mp4" class="video"></video>
```
%%javascript
document.getElementById("video_1").play()
```
<strong><span class="color7">We hope that you have enjoyed this guide. </span><span class="color2">biosignalsnotebooks</span><span class="color4"> is an environment in continuous expansion, so don't stop your journey and learn more with the remaining <a href="../MainFiles/biosignalsnotebooks.ipynb">Notebooks <img src="../../images/icons/link.png" width="10px" height="10px" style="display:inline"></a></span></strong> !
<span class="color6">**Auxiliary Code Segment (should not be replicated by
the user)**</span>
```
from biosignalsnotebooks.__notebook_support__ import css_style_apply
css_style_apply()
%%html
<script>
// AUTORUN ALL CELLS ON NOTEBOOK-LOAD!
require(
['base/js/namespace', 'jquery'],
function(jupyter, $) {
$(jupyter.events).on("kernel_ready.Kernel", function () {
console.log("Auto-running all cells-below...");
jupyter.actions.call('jupyter-notebook:run-all-cells-below');
jupyter.actions.call('jupyter-notebook:save-notebook');
});
}
);
</script>
```
# 📃 Solution for Exercise M7.03
As with the classification metrics exercise, we will evaluate the regression
metrics within a cross-validation framework to get familiar with the syntax.
We will use the Ames house prices dataset.
```
import pandas as pd
import numpy as np
ames_housing = pd.read_csv("../datasets/house_prices.csv")
data = ames_housing.drop(columns="SalePrice")
target = ames_housing["SalePrice"]
data = data.select_dtypes(np.number)
target /= 1000
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">If you want a deeper overview regarding this dataset, you can refer to the
Appendix - Datasets description section at the end of this MOOC.</p>
</div>
The first step will be to create a linear regression model.
```
# solution
from sklearn.linear_model import LinearRegression
model = LinearRegression()
```
Then, use the `cross_val_score` to estimate the generalization performance of
the model. Use a `KFold` cross-validation with 10 folds. Make the use of the
$R^2$ score explicit by assigning the parameter `scoring` (even though it is
the default score).
```
# solution
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model, data, target, cv=10, scoring="r2")
print(f"R2 score: {scores.mean():.3f} +/- {scores.std():.3f}")
```
Then, instead of using the $R^2$ score, use the mean absolute error. You need
to refer to the documentation for the `scoring` parameter.
```
# solution
scores = cross_val_score(model, data, target, cv=10,
scoring="neg_mean_absolute_error")
errors = -scores
print(f"Mean absolute error: "
f"{errors.mean():.3f} k$ +/- {errors.std():.3f}")
```
The `scoring` parameter in scikit-learn expects a score, meaning that higher values indicate a better model. Since smaller errors indicate a better model, errors must be multiplied by -1 before being used as scores. That's why the strings given to the `scoring` parameter start with `neg_` when dealing with metrics which are errors.
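This sign convention can be verified directly by comparing a scorer's output with the plain metric. The sketch below is self-contained on synthetic data (the dataset sizes are arbitrary choices, not part of the exercise):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import get_scorer, mean_absolute_error

# Synthetic regression data so the example does not depend on the Ames dataset.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
model = LinearRegression().fit(X, y)

# The plain metric: an error, where smaller is better.
mae = mean_absolute_error(y, model.predict(X))

# The scorer: the same quantity negated, so that higher is better.
neg_mae_scorer = get_scorer("neg_mean_absolute_error")
score = neg_mae_scorer(model, X, y)

assert np.isclose(score, -mae)  # the scorer is exactly the negated error
```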
Finally, use the `cross_validate` function and compute multiple scores/errors
at once by passing a list of scorers to the `scoring` parameter. You can
compute the $R^2$ score and the mean absolute error for instance.
```
# solution
from sklearn.model_selection import cross_validate
scoring = ["r2", "neg_mean_absolute_error"]
cv_results = cross_validate(model, data, target, scoring=scoring)
import pandas as pd
scores = {"R2": cv_results["test_r2"],
"MAE": -cv_results["test_neg_mean_absolute_error"]}
scores = pd.DataFrame(scores)
scores
```
```
## Widen the notebook display area
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
## Enable inline plotting and render figures crisply (retina)
%matplotlib inline
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = {'png', 'retina'}
## Automatically display cell execution times
%load_ext autotime
## Enable automatic reloading of libraries
%load_ext autoreload
import pandas as pd
import numpy as np
## Widen the DataFrame display limits
def disp_full(x, drows=False, dcols=True):
if drows:
pd.set_option('display.max_rows', x.shape[0])
if dcols:
pd.set_option('display.max_columns', x.shape[1])
display(x)
pd.reset_option('display.max_rows')
pd.reset_option('display.max_columns')
import os, sys, gc
import seaborn as sns
from tqdm import tqdm_notebook as tqdm
from sklearn.metrics import roc_auc_score
import lightgbm as lgb
import shap
shap.initjs()
```
## make datasets
```
feature_set = [[100, 1], [101, 1], [102, 1]]
X_train = pd.concat([pd.read_pickle(f'../data/processed/{_feature_num}/{_version}/train.pickle') for _feature_num, _version in feature_set], axis=1)
y_train = pd.read_pickle('../data/processed/000/train.pickle').set_index('ID_code').loc[X_train.index, 'target']
real_id = pd.read_pickle('../data/processed/001/real_id.pickle')
X_test = pd.concat([pd.read_pickle(f'../data/processed/{_feature_num}/{_version}/test.pickle') for _feature_num, _version in feature_set], axis=1)[X_train.columns]
X_test = X_test.loc[real_id]
X_train.shape
X_test.shape
```
## train and predict
```
val_pred = np.zeros((50000, 200))
param = {
'bagging_freq': 5,
'bagging_fraction': 0.9,
'boost_from_average':'false',
'boost': 'gbdt',
'feature_fraction': 1.0,
'learning_rate': 0.05,
'max_depth': -1,
'metric':'binary_logloss',
'min_data_in_leaf': 10,
'min_sum_hessian_in_leaf': 10.0,
'num_leaves': 4,
'num_threads': 32,
'tree_learner': 'serial',
'objective': 'binary',
'verbosity': 1}
for cnum in range(200):
print(cnum)
trn_data = lgb.Dataset(X_train.filter(regex=f'var_{cnum}$').iloc[:150000], y_train.iloc[:150000], free_raw_data=False)
val_data = lgb.Dataset(X_train.filter(regex=f'var_{cnum}$').iloc[150000:], y_train.iloc[150000:], free_raw_data=False)
clf = lgb.train(param, trn_data, 100000, valid_sets = [trn_data, val_data], verbose_eval=-1, early_stopping_rounds=100)
val_pred[:, cnum] = clf.predict(X_train.filter(regex=f'var_{cnum}$').iloc[150000:])
roc_auc_score(y_train.iloc[150000:], val_pred.prod(axis=1))
params = {
'bagging_freq': 5,
'bagging_fraction': 0.9,
'boost_from_average':'false',
'boost': 'gbdt',
'feature_fraction': 1.0,
'learning_rate': 0.05,
'max_depth': -1,
'metric':'binary_logloss',
'min_data_in_leaf': 10,
'min_sum_hessian_in_leaf': 10.0,
'num_leaves': 4,
'num_threads': 32,
'tree_learner': 'serial',
'objective': 'binary',
'verbosity': 1}
oof_pred_array = np.ones((200000, 200))
test_pred_array = np.ones((100000, 5, 200))
for cnum in range(200):
print(f'cnum:{cnum}')
train_dset = lgb.Dataset(X_train.filter(regex=f'var_{cnum}$'), y_train, free_raw_data=False)
cv_result, bsts = lgb.cv(params, train_set=train_dset, num_boost_round=100000, early_stopping_rounds=100, stratified=True)
best_iteration = bsts.best_iteration
for i, bst in enumerate(bsts.boosters):
# Out-of-fold (OOF) predictions
cv_valid_index = bst.valid_sets[0].used_indices
cv_valid_data = train_dset.data.iloc[cv_valid_index]
oof_pred_array[cv_valid_index, cnum] = bst.predict(cv_valid_data, num_iteration=best_iteration)
# Test set predictions
test_pred_array[:, i, cnum] = bst.predict(X_test.filter(regex=f'var_{cnum}$'), num_iteration=best_iteration)
print('\tauc : {0:.6f}'.format(roc_auc_score(y_train, oof_pred_array[:, cnum])))
print('\tauc : {0:.6f}'.format(roc_auc_score(y_train, oof_pred_array.prod(axis=1))))
cv_index_list = []
for i, bst in enumerate(bsts.boosters):
train_index = bst.train_set.used_indices
valid_index = bst.valid_sets[0].used_indices
cv_index_list.append([train_index, valid_index])
model_number = 904
model_output_save_dir = f'../data/processed/{model_number}'
if not os.path.isdir(model_output_save_dir):
os.makedirs(model_output_save_dir)
pd.DataFrame(oof_pred_array, index=X_train.index, columns=[f'pred_{i}' for i in range(200)])\
.to_pickle(os.path.join(model_output_save_dir, 'oof_preds.pkl.gz'), compression='gzip')
pd.DataFrame(cv_index_list, columns=['train_index', 'valid_index'])\
.to_pickle(os.path.join(model_output_save_dir, 'cv_index.pkl.gz'), compression='gzip')
pd.to_pickle(test_pred_array, os.path.join(model_output_save_dir, 'test_pred_array.pickle'))
roc_auc_score(y_train, oof_pred_array.prod(axis=1))
roc_auc_score(y_train, (9 * oof_pred_array / (1 - oof_pred_array)).prod(axis=1))
for c in range(10):
pd.Series(oof_pred_array[:, c]).hist(bins=np.arange(0, 1, 0.01), alpha=0.5)
ax = pd.Series(test_pred_array[:, 0, c]).hist(bins=np.arange(0, 1, 0.01), alpha=0.5)
ax.set_yscale('log')
plt.show()
thr = 0.505
oof_pred_prod = np.ones((200000))
oof_pred_odds_prod = np.ones((200000))
test_pred_odds_prod = np.ones((100000, 5))
for cnum in tqdm(range(200)):
tmp_auc = roc_auc_score(y_train, oof_pred_array[:, cnum])
if tmp_auc >= thr:
oof_pred_prod *= oof_pred_array[:, cnum]
oof_pred_odds_prod *= 9 * oof_pred_array[:, cnum] / (1 - oof_pred_array[:, cnum])
test_pred_odds_prod *= 9 * test_pred_array[:,:, cnum] / (1 - test_pred_array[:,:, cnum])
print('raw prod auc : {0:.6f}'.format(roc_auc_score(y_train, oof_pred_prod)))
print('odds prod auc : {0:.6f}'.format(roc_auc_score(y_train, oof_pred_odds_prod)))
oof_pred = pd.DataFrame({'pred':oof_pred_odds_prod, 'label':y_train}, index=y_train.index)
test_pred = pd.DataFrame(test_pred_odds_prod, index=X_test.index, columns=[f'fold_{i}' for i in range(5)])
bins=np.arange(0, 1000, 10) #np.arange(-0.1, 1.1, 0.01)
fig, axs = plt.subplots(2, 1, figsize=(20,10), sharex=True)
ax = axs[0]
oof_pred.query('label==0')['pred'].hist(bins = bins, ax=ax, color='b', alpha=0.5)
oof_pred.query('label==1')['pred'].hist(bins = bins, ax=ax, color='r', alpha=0.5)
ax.set_yscale('log')
ax.set_title('oof prediction and label')
ax = axs[1]
test_pred.mean(axis=1).hist(bins = bins, ax=ax, color='b', alpha=0.5)
ax.set_yscale('log')
ax.set_title('test prediction')
```
## submit
```
sub = pd.read_pickle('../data/processed/000/sample_submission.pickle')
sub = sub[['ID_code']].merge((test_pred.rank()/test_pred.shape[0]).mean(axis=1).reset_index(name='target')\
.rename(columns={'index':'ID_code'}), how='left').fillna(0)
sub.head()
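# The submission above rank-averages the five folds: each fold's predictions
# are mapped to rank / n_rows before averaging, so folds with different score
# scales contribute equally. A toy sketch with made-up values:
import pandas as pd  # already imported in this notebook; repeated so this cell stands alone
toy_folds = pd.DataFrame({'fold_0': [0.2, 5.0, 1.0], 'fold_1': [10.0, 30.0, 20.0]})
toy_rank_avg = (toy_folds.rank() / toy_folds.shape[0]).mean(axis=1)
# both folds rank the rows identically, so toy_rank_avg is [1/3, 1.0, 2/3]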
if not os.path.isdir('../data/out'):
os.makedirs('../data/out')
sub.to_csv(f'../data/out/{model_number}_lgb.csv.gz', index=False, compression='gzip')
pd.read_csv('../data/out/904_lgb.csv.gz').set_index('ID_code')['target'].corr(
pd.read_csv('../data/out/901_lgb.csv.gz').set_index('ID_code')['target'], method='spearman')
pd.read_csv('../data/out/904_lgb.csv.gz').set_index('ID_code')['target'].corr(
pd.read_csv('../data/external/lgb_sub_hrd_0404_2.csv').set_index('ID_code')['target'], method='spearman')
!kaggle competitions submit -c santander-customer-transaction-prediction -f ../data/out/904_lgb.csv.gz -m 'cv:0.919214'
```
```
import numpy as np
from astropy.io import fits
import astropy.units as u  # used in load_spectrum for the rv units
from astropy import constants as const  # speed of light for the rv correction
import matplotlib.pyplot as plt
import galah_li_rich_selection
import getpass
import mpl_scatter_density
from matplotlib.colors import LogNorm
import seaborn as sns
import matplotlib as mpl
import matplotlib.cm as cm
import h5py
import os
username = getpass.getuser()
%config InlineBackend.figure_format = 'retina'
galah_idr3, galah_idr3_position = galah_li_rich_selection.load_table(f"/Users/{username}/ownCloud/unique_GALAH_iDR3_1912_with_RC.fits")
(ignore_stars_idx, good_spec_idx,
giant_idx, li_measured_idx, li_rich_idx) = galah_li_rich_selection.create_selections(galah_idr3, galah_idr3_position)
useful_stars_idx = ~ignore_stars_idx & good_spec_idx
useful_giants_idx = useful_stars_idx & giant_idx
useful_li_giants_idx = useful_giants_idx & li_measured_idx
useful_li_rich_giants_idx = useful_li_giants_idx & li_rich_idx
useful_super_li_rich_giants_idx = useful_li_giants_idx & (galah_idr3['a_li'] > 2.7)
# This doesn't include
super_li = 2.7
total_size = 567122
useful_super_li_rich_giants_idx = useful_li_giants_idx & (galah_idr3['a_li'] > super_li)
k2_fields = (galah_idr3['field_id'] > 6540) & (galah_idr3['field_id']< 6831)
tess_fields = (galah_idr3['field_id'] > 7080) & (galah_idr3['field_id']< 7364)
print(f"This retained {100*np.sum(useful_stars_idx)/total_size:0.1f} per cent ({np.sum(useful_stars_idx)}/{total_size}) of the sample.")
#
print(f"{100*np.sum(useful_giants_idx)/np.sum(useful_stars_idx):0.1f} per cent ({np.sum(useful_giants_idx)}/{np.sum(useful_stars_idx)}) of our sample of ``good'' stars were identified as giant stars.")
print(f"Of these {np.sum(useful_giants_idx)} giant stars, only {np.sum(useful_li_giants_idx)} ({100*np.sum(useful_li_giants_idx)/np.sum(useful_giants_idx):0.1f} per cent) had a measured \\ali. ")
print(f"Our red giant data set contains {np.sum(useful_li_rich_giants_idx)} stars with $\\ali> 1.5$")
print(f"Of those, {np.sum(useful_super_li_rich_giants_idx)} stars lie above the primordial value (WMAP reference?) of $\\ali> {super_li}$")
print(f"If we consider just the K2-HERMES fields: {np.sum(k2_fields & useful_giants_idx)}/{np.sum(k2_fields & useful_li_rich_giants_idx)}/{np.sum(k2_fields & useful_super_li_rich_giants_idx)}")
print(f"If we consider just the TESS-HERMES fields: {np.sum(tess_fields & useful_giants_idx)}/{np.sum(tess_fields & useful_li_rich_giants_idx)}/{np.sum(tess_fields & useful_super_li_rich_giants_idx)}")
def load_spectrum(star, camera):
sobject_id = str(star['sobject_id'])
date = str(sobject_id[:6])
com = "com"
if sobject_id[11] == "2":
com += "2"
fits_dir = f"/Users/{username}/ownCloud/GALAH_hdf5/obs/reductions/Iraf_5.3/{date}"
hdf5_path = f"{fits_dir}/{date}_{com}.hdf5"
specfile = f"{fits_dir}/{com}/{sobject_id}{camera}.fits"
if os.path.exists(hdf5_path):
try:
with h5py.File(hdf5_path, 'r') as f:
# print(sobject_id, "H5PY")
h5py_str = f'{sobject_id}/{camera}/normed'
flux = f[h5py_str]['spectrum'][()]
CDELT1 = f[h5py_str].attrs['CDELT1']
CRVAL1 = f[h5py_str].attrs['CRVAL1']
CRPIX1 = f[h5py_str].attrs['CRPIX1']
wavelength = (CDELT1 * (np.arange(len(flux)) + 1 - CRPIX1) + CRVAL1)
return wavelength, flux
except OSError:
print(f"OSError on {hdf5_path}")
pass
except KeyError:
print(f"KeyError on {hdf5_path}")
pass
if os.path.exists(specfile):
with fits.open(specfile) as spec:
print(sobject_id, "FITS")
flux = spec[4].data
CDELT1 = spec[0].header['CDELT1']
CRVAL1 = spec[0].header['CRVAL1']
CRPIX1 = spec[0].header['CRPIX1']
wavelength = (CDELT1 * (np.arange(len(flux)) + 1 - CRPIX1) + CRVAL1)
v_t = star['rv_guess'] * u.km / u.s
v_b = 0. * u.km / u.s #star['v_bary']
rv_offset = 1 / ((v_t) / const.c + 1)
wavelength = wavelength * rv_offset
return wavelength, flux
print(f"{sobject_id} Missing file?")
return ["",""]
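# load_spectrum reconstructs wavelengths from the linear FITS WCS keywords:
# lambda(pix) = CRVAL1 + CDELT1 * (pix + 1 - CRPIX1), with CRPIX1 1-based.
# Quick check with toy header values (not from a real GALAH reduction):
import numpy as np  # already imported above; repeated so this check stands alone
toy_flux = np.zeros(5)
toy_CDELT1, toy_CRVAL1, toy_CRPIX1 = 0.05, 6700.0, 1.0
toy_wl = toy_CDELT1 * (np.arange(len(toy_flux)) + 1 - toy_CRPIX1) + toy_CRVAL1
# toy_wl starts at CRVAL1 (6700.0 A) and increases by CDELT1 (0.05 A per pixel)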
def spec_plotting(ax, star, camera, kwargs):
wavelength, flux = load_spectrum(star, camera)
if isinstance(wavelength, str):
return 1
median_idx = (wavelength > 6707.76-12) & (wavelength < 6707.76+12)
ax.plot(wavelength[median_idx], flux[median_idx], **kwargs)
def teff_log_feh_box(star):
teff_logg_idx = ((np.abs(galah_idr3['teff']-star['teff']) < 50) &
(np.abs(galah_idr3['logg']-star['logg']) < 0.2) &
(np.abs(galah_idr3['fe_h']-star['fe_h']) < 0.03) &
(np.abs(galah_idr3['alpha_fe']-star['alpha_fe']) < 0.05) &
(np.abs(galah_idr3['vbroad']-star['vbroad']) < 5) &
(galah_idr3['snr_c2_iraf'] > 20) &
(galah_idr3['sobject_id'] != star['sobject_id']))
return teff_logg_idx
norm = mpl.colors.Normalize(vmin=-1, vmax=1.5)
cmap = cm.viridis_r
m = cm.ScalarMappable(norm=norm, cmap=cmap)
def li_colour(a_li):
# return [0.5, 0.5, 0.5]
if np.isnan(a_li):
return [0.5, 0.5, 0.5]
else:
return m.to_rgba(a_li)
#Normal metallicity
# star_list = [160519003601146,
# 160517000101393,
# 160522004101048,
# 170531003801318,
# 170517001801119,
# 151110004201355]
#feh_h -0.5
# star_list = [160531004101123,
# 171003002101124,
# 170506005401154,
# 140413002701201,
# 170408003501011,
# 171003003101157]
#fe_h -1.0
# star_list = [170602005701018,
# 160327006101287,
# 161212004601253,
# 160916001801260,
# 171101001201372,
# 170110002101161]
#fe_h -1.5
star_list = [171102001601153,
160522002102389,
170508002601067,
170512001801366,
171027001601375,
161228002501383]
sns.set_context("paper", font_scale=0.6)
fig, axes = plt.subplots(nrows=3,
ncols=2,
figsize=(7.5, 4.5), sharex=True, sharey=True)
for li_star_count, sobject_id in enumerate(star_list):
star_of_interest_idx = (galah_idr3['sobject_id'] == sobject_id) & useful_giants_idx
try:
star = galah_idr3[star_of_interest_idx][0]
except IndexError:
continue
ax = axes.flatten()[li_star_count]
teff_logg_idx = teff_log_feh_box(star) & useful_giants_idx & ~li_rich_idx
print(star['sobject_id'], li_star_count, np.sum(teff_logg_idx), star['teff'])
if np.sum(teff_logg_idx) > 0:
random_ten_indices = np.random.choice(range(np.sum(teff_logg_idx)), size=np.min([np.sum(teff_logg_idx), 10]), replace=False)
teff_logg_box = galah_idr3[teff_logg_idx][random_ten_indices]
for star_count, comp_star in enumerate(teff_logg_box):
spec_plotting(ax, comp_star, 3, dict(lw=0.5, alpha=0.6, color=[0.5, 0.5, 0.5], zorder=1))
spec_plotting(ax, star, 3, dict(lw=1.0, alpha=1.0, color='C3', zorder=100)) #li_colour(star['a_li'])
# for line in [6703.576, 6705.105, 6707.76, 6710.323, 6713.044]: #6707.98,
#
title_str = ""
    teff_str = r"\mathrm{T}_\mathrm{eff}"
    logg_str = r"\log g"
    feh_str = r"\mathrm{[Fe/H]}"
    ali_str = r"\mathrm{A}_\mathrm{Li}"
title_full = [#f"{np.sum(teff_logg_idx)} comparisons\n",
f"source_id = {star['source_id']}\n",
f"${teff_str}={star['teff']:0.0f}$ K • ",
f"${logg_str}={star['logg']:0.2f}$ • ",
f"${feh_str}={star['fe_h']:0.2f}$ • ",
f"${ali_str}={star['a_li']:0.2f}$"]
for extra_str in title_full:
title_str += extra_str
ax.annotate(title_str,
xy=(6697.9,0.07), xycoords='data')
ax.set_xticks(np.arange(6690+3, 6720+4, 5))
ax.set_xlim(6707.76-11, 6707.76+11)
ax.set_ylim(0, 1.05)
[ax.axvspan(6707.76-0.5,6707.76+0.5, alpha=0.1, color='C0', lw=0) for ax in axes.flatten()]
[ax.set_xlabel(r"Wavelength ($\AA$)") for ax in axes[-1,:]]
[ax.set_ylabel("Normalized flux") for ax in axes[:,0]]
fig.tight_layout()
plt.savefig(f"../paper/figures/comparison_spec_plots_-1.5.pdf", dpi=300, bbox_inches='tight')
plt.show()
plt.close()
norm = mpl.colors.Normalize(vmin=4, vmax=40)
cmap = cm.viridis_r
m = cm.ScalarMappable(norm=norm, cmap=cmap)
# star_list = [150602004401158,
# 160519004601118,
# 170507007201193,
# 170905001601009,
# 150607004601371,
# 150108002201367,
# # 160723002001203,
# ]
star_list = [170115002701208,]
sns.set_context("paper", font_scale=1.0)
fig, axes = plt.subplots(nrows=1,
ncols=1,
figsize=(7.5, 4.5), sharex=True, sharey=True)
for li_star_count, sobject_id in enumerate(star_list):
star_of_interest_idx = (galah_idr3['sobject_id'] == sobject_id) & useful_giants_idx
star = galah_idr3[star_of_interest_idx][0]
ax = axes#.flatten()[li_star_count]
teff_logg_idx = teff_log_feh_box(star) & useful_giants_idx & ~li_rich_idx
print(star['sobject_id'], li_star_count, np.sum(teff_logg_idx), star['teff'])
# if np.sum(teff_logg_idx) > 10:
# random_ten_indices = np.random.choice(range(np.sum(teff_logg_idx)), size=10, replace=False)
# else:
# random_ten_indices = range(np.sum(teff_logg_idx))
# teff_logg_box = galah_idr3[teff_logg_idx][random_ten_indices]
# for star_count, comp_star in enumerate(teff_logg_box):
# spec_plotting(ax, comp_star, 3, dict(lw=0.5, alpha=0.6, color=[0.5, 0.5, 0.5], zorder=1))
# if star_count > 10:
# break
spec_plotting(ax, star, 3, dict(lw=1.0, alpha=1.0, color=li_colour(star['vbroad']), zorder=100)) #
# for line in [6703.576, 6705.105, 6707.76, 6710.323, 6713.044]: #6707.98,
#
# title_str = ""
# teff_str = "\mathrm{T}_\mathrm{eff}"
# logg_str = "\log g"
# feh_str = "\mathrm{[Fe/H]}"
# ali_str = "\mathrm{A}_\mathrm{Li}"
# title_full = [#f"{np.sum(teff_logg_idx)} comparisons\n",
# f"source_id = {star['source_id']}\n",
# f"${teff_str}={star['teff']:0.0f}$ K • ",
# f"${logg_str}={star['logg']:0.2f}$ • ",
# f"${feh_str}={star['fe_h']:0.2f}$ • ",
# f"${ali_str}={star['a_li']:0.2f}$"]
# for extra_str in title_full:
# title_str += extra_str
# ax.annotate(title_str,
# xy=(6697.9,0.07), xycoords='data')
ax.set_xticks(np.arange(6690+3, 6720+4, 5))
ax.set_xlim(6707.76-11, 6707.76+11)
# ax.set_ylim(0, 1.05)
# [ax.axvspan(6707.76-0.5,6707.76+0.5, alpha=0.1, color='C0', lw=0) for ax in axes.flatten()]
# [ax.set_xlabel("Wavelength ($\AA$)") for ax in axes[-1,:]]
# [ax.set_ylabel("Normalized flux") for ax in axes[:,0]]
ax.axvspan(6707.76-0.5,6707.76+0.5, alpha=0.1, color='C0', lw=0)
    ax.set_xlabel(r"Wavelength ($\AA$)")
ax.set_ylabel("Normalized flux")
fig.tight_layout()
# plt.savefig(f"../paper/figures/comparison_spec_plots.pdf", dpi=300, bbox_inches='tight')
plt.show()
plt.close()
galah_idr3[galah_idr3['sobject_id'] == sobject_id]
```
<a href="https://colab.research.google.com/github/rifkybujana/KOPSI/blob/main/notebook/Squad.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2021 Rifky Bujana Bisri & Muhammad Fajrin Buyang Daffa
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
!pip install datasets transformers
# !huggingface-cli login
# !pip install hf-lfs
# !git config --global user.email "<email>"
# !git config --global user.name "<username>"
import numpy as np
import pandas as pd
import transformers
import json
import sys
import time
import datetime
import random
import collections
import os
from pathlib import Path
from IPython.display import display, HTML
from datasets import load_dataset, load_metric
from datasets import Dataset
from datasets import DatasetDict
transformers.__version__
impossible_answer = True
model_checkpoint = "indolem/indobert-base-uncased"
batch_size = 16
```
# Runtime Settings
```
!nvidia-smi
```
# Data Preparation
## Load Dataset
```
with open('drive/MyDrive/KOPSI/squad/dev-v2.0.json') as f:
dev = json.load(f)
with open('drive/MyDrive/KOPSI/squad/train-v2.0.json') as f:
train = json.load(f)
def format(content):
hf_data = []
for data in content["data"]:
title = data["title"]
for paragraph in data["paragraphs"]:
context = paragraph["context"]
for qa in paragraph["qas"]:
fill = {
"id": qa["id"],
"title": title,
"context": context,
"question": qa["question"],
"answers": {"answer_start": [], "text": []}
}
if qa["is_impossible"]:
answers = qa["plausible_answers"]
else:
answers = qa["answers"]
for answer in answers:
fill["answers"]["answer_start"].append(answer["answer_start"])
fill["answers"]["text"].append(answer["text"])
hf_data.append(fill)
return hf_data
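# A minimal illustration (toy data, not from the actual SQuAD files) of the
# flat record shape format() builds for each question-answer pair:
toy_qa = {"id": "q1", "question": "q?", "is_impossible": False,
          "answers": [{"answer_start": 0, "text": "abc"}]}
toy_record = {"id": toy_qa["id"], "title": "t", "context": "abc",
              "question": toy_qa["question"],
              "answers": {"answer_start": [a["answer_start"] for a in toy_qa["answers"]],
                          "text": [a["text"] for a in toy_qa["answers"]]}}
# toy_record["answers"] == {"answer_start": [0], "text": ["abc"]}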
%%time
dev = format(dev)
train = format(train)
datasets = DatasetDict({
'train': Dataset.from_pandas(pd.DataFrame(train)),
'validation': Dataset.from_pandas(pd.DataFrame(dev))
})
datasets
datasets['train']['id'][0]
```
## Data Analysis
```
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
    df = pd.DataFrame((np.asarray(dataset)[picks]).tolist())
df['answer_start'] = [i['answer_start'] for i in df['answers']]
df['answer_text'] = [i['text'] for i in df['answers']]
del df['answers']
return df
display(HTML(show_random_elements(dev).to_html()))
train_df = pd.DataFrame(train)
train_df['answer_start'] = [i['answer_start'] for i in train_df['answers']]
train_df['answer_text'] = [i['text'] for i in train_df['answers']]
del train_df['answers']
dev_df = pd.DataFrame(dev)
dev_df['answer_start'] = [i['answer_start'] for i in dev_df['answers']]
dev_df['answer_text'] = [i['text'] for i in dev_df['answers']]
del dev_df['answers']
figsize = (10,6)
train_df['context'].apply(len).plot.hist(title="Context length histogram", bins=20, figsize=figsize, grid=True)
train_df['question'].apply(len).plot.hist(title="Question length histogram", bins=20, figsize=figsize, grid=True)
pd.DataFrame([len(i[0]) for i in train_df['answer_text']]).plot.hist(title="Answer length histogram", bins=20, figsize=figsize, grid=True)
dev_df['context'].apply(len).plot.hist(title="Context length histogram", bins=20, figsize=figsize, grid=True)
dev_df['question'].apply(len).plot.hist(title="Question length histogram", bins=20, figsize=figsize, grid=True)
pd.DataFrame([len(i[0]) for i in dev_df['answer_text']]).plot.hist(title="Answer length histogram", bins=20, figsize=figsize, grid=True)
```
## Text Preprocessing
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
import transformers
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)
tokenizer("What is your name?", "My name is Sylvain.")
max_length = 384 # The maximum length of a feature (question and context)
doc_stride = 128 # The authorized overlap between two parts of the context when splitting is needed.
for i, example in enumerate(datasets["train"]):
if len(tokenizer(example["question"], example["context"])["input_ids"]) > 384:
break
example = datasets["train"][i]
len(tokenizer(example["question"], example["context"])["input_ids"])
len(tokenizer(example["question"], example["context"], max_length=max_length, truncation="only_second")["input_ids"])
tokenized_example = tokenizer(
example["question"],
example["context"],
max_length=max_length,
truncation="only_second",
return_overflowing_tokens=True,
stride=doc_stride
)
[len(x) for x in tokenized_example["input_ids"]]
for x in tokenized_example["input_ids"][:2]:
print(tokenizer.decode(x))
tokenized_example = tokenizer(
example["question"],
example["context"],
max_length=max_length,
truncation="only_second",
return_overflowing_tokens=True,
return_offsets_mapping=True,
stride=doc_stride
)
print(tokenized_example["offset_mapping"][0][:100])
first_token_id = tokenized_example["input_ids"][0][1]
offsets = tokenized_example["offset_mapping"][0][1]
print(tokenizer.convert_ids_to_tokens([first_token_id])[0], example["question"][offsets[0]:offsets[1]])
sequence_ids = tokenized_example.sequence_ids()
print(sequence_ids)
answers = example["answers"]
start_char = answers["answer_start"][0]
end_char = start_char + len(answers["text"][0])
# Start token index of the current span in the text.
token_start_index = 0
while sequence_ids[token_start_index] != 1:
token_start_index += 1
# End token index of the current span in the text.
token_end_index = len(tokenized_example["input_ids"][0]) - 1
while sequence_ids[token_end_index] != 1:
token_end_index -= 1
# Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
offsets = tokenized_example["offset_mapping"][0]
if (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
# Move the token_start_index and token_end_index to the two ends of the answer.
# Note: we could go after the last offset if the answer is the last word (edge case).
while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
token_start_index += 1
start_position = token_start_index - 1
while offsets[token_end_index][1] >= end_char:
token_end_index -= 1
end_position = token_end_index + 1
print(start_position, end_position)
else:
print("The answer is not in this feature.")
print(tokenizer.decode(tokenized_example["input_ids"][0][start_position: end_position+1]))
print(answers["text"][0])
pad_on_right = tokenizer.padding_side == "right"
def prepare_train_features(examples):
    # Some of the questions have lots of whitespace on the left, which is not useful and will make the
    # truncation of the context fail (the tokenized question will take a lot of space). So we remove that
    # left whitespace
    examples["question"] = [q.lstrip() for q in examples["question"]]
    # Tokenize our examples with truncation and padding, but keep the overflows using a stride. This results
    # in one example possibly giving several features when a context is long, each of those features having a
    # context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples["question" if pad_on_right else "context"],
examples["context" if pad_on_right else "question"],
truncation="only_second" if pad_on_right else "only_first",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# The offset mappings will give us a map from token to character position in the original context. This will
# help us compute the start_positions and end_positions.
offset_mapping = tokenized_examples.pop("offset_mapping")
# Let's label those examples!
tokenized_examples["start_positions"] = []
tokenized_examples["end_positions"] = []
for i, offsets in enumerate(offset_mapping):
# We will label impossible answers with the index of the CLS token.
input_ids = tokenized_examples["input_ids"][i]
cls_index = input_ids.index(tokenizer.cls_token_id)
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples.sequence_ids(i)
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
answers = examples["answers"][sample_index]
# If no answers are given, set the cls_index as answer.
if len(answers["answer_start"]) == 0:
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
else:
# Start/end character index of the answer in the text.
start_char = answers["answer_start"][0]
end_char = start_char + len(answers["text"][0])
# Start token index of the current span in the text.
token_start_index = 0
while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
token_start_index += 1
# End token index of the current span in the text.
token_end_index = len(input_ids) - 1
while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
token_end_index -= 1
# Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
else:
# Otherwise move the token_start_index and token_end_index to the two ends of the answer.
# Note: we could go after the last offset if the answer is the last word (edge case).
while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
token_start_index += 1
tokenized_examples["start_positions"].append(token_start_index - 1)
while offsets[token_end_index][1] >= end_char:
token_end_index -= 1
tokenized_examples["end_positions"].append(token_end_index + 1)
return tokenized_examples
tokenized_datasets = datasets.map(prepare_train_features, batched=True, remove_columns=datasets["train"].column_names)
tokenized_datasets
```
## Fine Tuning Model
```
from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
args = TrainingArguments(
f"drive/MyDrive/KOPSI/test-squad",
evaluation_strategy = "epoch",
save_strategy = "epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=3,
weight_decay=0.01
)
from transformers import default_data_collator
data_collator = default_data_collator
trainer = Trainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train("./drive/MyDrive/KOPSI/test-squad/checkpoint-16404")
trainer.save_model("drive/MyDrive/KOPSI/bert-squad-trained")
```
## Evaluation
```
import torch
for batch in trainer.get_eval_dataloader():
break
batch = {k: v.to(trainer.args.device) for k, v in batch.items()}
with torch.no_grad():
output = trainer.model(**batch)
output.keys()
output.start_logits.shape, output.end_logits.shape
output.start_logits.argmax(dim=-1), output.end_logits.argmax(dim=-1)
n_best_size = 20
import numpy as np
start_logits = output.start_logits[0].cpu().numpy()
end_logits = output.end_logits[0].cpu().numpy()
# Gather the indices the best start/end logits:
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
valid_answers = []
for start_index in start_indexes:
for end_index in end_indexes:
if start_index <= end_index: # We need to refine that test to check the answer is inside the context
valid_answers.append(
{
"score": start_logits[start_index] + end_logits[end_index],
"text": "" # We need to find a way to get back the original substring corresponding to the answer in the context
}
)
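# The slice np.argsort(x)[-1 : -n_best_size - 1 : -1] used above returns the
# indices of the n_best_size largest logits in descending order. Toy check:
import numpy as np  # already imported above; repeated so this check stands alone
toy_logits = np.array([0.1, 3.0, 2.0, -1.0])
toy_top2 = np.argsort(toy_logits)[-1 : -3 : -1].tolist()
# toy_top2 == [1, 2]: positions of 3.0 then 2.0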
def prepare_validation_features(examples):
    # Some of the questions have lots of whitespace on the left, which is not useful and will make the
    # truncation of the context fail (the tokenized question will take a lot of space). So we remove that
    # left whitespace
    examples["question"] = [q.lstrip() for q in examples["question"]]
    # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
    # in one example possibly giving several features when a context is long, each of those features having a
    # context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples["question" if pad_on_right else "context"],
examples["context" if pad_on_right else "question"],
truncation="only_second" if pad_on_right else "only_first",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# We keep the example_id that gave us this feature and we will store the offset mappings.
tokenized_examples["example_id"] = []
for i in range(len(tokenized_examples["input_ids"])):
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples.sequence_ids(i)
context_index = 1 if pad_on_right else 0
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
tokenized_examples["example_id"].append(examples["id"][sample_index])
# Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
# position is part of the context or not.
tokenized_examples["offset_mapping"][i] = [
(o if sequence_ids[k] == context_index else None)
for k, o in enumerate(tokenized_examples["offset_mapping"][i])
]
return tokenized_examples
validation_features = datasets["validation"].map(
prepare_validation_features,
batched=True,
remove_columns=datasets["validation"].column_names
)
raw_predictions = trainer.predict(validation_features)
validation_features.set_format(type=validation_features.format["type"], columns=list(validation_features.features.keys()))
max_answer_length = 30
start_logits = output.start_logits[0].cpu().numpy()
end_logits = output.end_logits[0].cpu().numpy()
offset_mapping = validation_features[0]["offset_mapping"]
# The first feature comes from the first example. For the more general case, we will need to match the example_id to
# an example index.
context = datasets["validation"][0]["context"]
# Gather the indices the best start/end logits:
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
valid_answers = []
for start_index in start_indexes:
for end_index in end_indexes:
# Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
# to part of the input_ids that are not in the context.
if (
start_index >= len(offset_mapping)
or end_index >= len(offset_mapping)
or offset_mapping[start_index] is None
or offset_mapping[end_index] is None
):
continue
# Don't consider answers with a length that is either < 0 or > max_answer_length.
if end_index < start_index or end_index - start_index + 1 > max_answer_length:
continue
if start_index <= end_index: # We need to refine that test to check the answer is inside the context
start_char = offset_mapping[start_index][0]
end_char = offset_mapping[end_index][1]
valid_answers.append(
{
"score": start_logits[start_index] + end_logits[end_index],
"text": context[start_char: end_char]
}
)
valid_answers = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[:n_best_size]
valid_answers
valid_answers[5]
datasets["validation"][1231]
import collections
examples = datasets["validation"]
features = validation_features
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
features_per_example[example_id_to_index[feature["example_id"]]].append(i)
from tqdm.auto import tqdm
def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30):
all_start_logits, all_end_logits = raw_predictions
# Build a map example to its corresponding features.
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
features_per_example[example_id_to_index[feature["example_id"]]].append(i)
# The dictionaries we have to fill.
predictions = collections.OrderedDict()
# Logging.
print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")
# Let's loop over all the examples!
for example_index, example in enumerate(tqdm(examples)):
# Those are the indices of the features associated to the current example.
feature_indices = features_per_example[example_index]
min_null_score = None # Only used if squad_v2 is True.
valid_answers = []
context = example["context"]
# Looping through all the features associated to the current example.
for feature_index in feature_indices:
# We grab the predictions of the model for this feature.
start_logits = all_start_logits[feature_index]
end_logits = all_end_logits[feature_index]
            # This is what will allow us to map some of the positions in our logits to spans of text in the original
            # context.
offset_mapping = features[feature_index]["offset_mapping"]
# Update minimum null prediction.
cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id)
feature_null_score = start_logits[cls_index] + end_logits[cls_index]
if min_null_score is None or min_null_score < feature_null_score:
min_null_score = feature_null_score
# Go through all possibilities for the `n_best_size` greater start and end logits.
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
for start_index in start_indexes:
for end_index in end_indexes:
# Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
# to part of the input_ids that are not in the context.
if (
start_index >= len(offset_mapping)
or end_index >= len(offset_mapping)
or offset_mapping[start_index] is None
or offset_mapping[end_index] is None
):
continue
# Don't consider answers with a length that is either < 0 or > max_answer_length.
if end_index < start_index or end_index - start_index + 1 > max_answer_length:
continue
start_char = offset_mapping[start_index][0]
end_char = offset_mapping[end_index][1]
valid_answers.append(
{
"score": start_logits[start_index] + end_logits[end_index],
"text": context[start_char: end_char]
}
)
if len(valid_answers) > 0:
best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0]
else:
# In the very rare edge case where we do not have a single non-null prediction, we create a fake
# prediction to avoid a failure.
best_answer = {"text": "", "score": 0.0}
# Let's pick our final answer: the best one or the null answer (only for squad_v2)
if not impossible_answer:
predictions[example["id"]] = best_answer["text"]
else:
answer = best_answer["text"] if best_answer["score"] > min_null_score else ""
predictions[example["id"]] = answer
return predictions
final_predictions = postprocess_qa_predictions(datasets["validation"], validation_features, raw_predictions.predictions)
def print_answer(id):
text = [i for i in datasets['validation'] if i['id'] == id][0]
print(f"Text: {text['context']}")
print(f"Question: {text['question']}")
print(f"Answer: {final_predictions[id]}")
print_answer('5ad39d53604f3c001a3fe8d1')
metric = load_metric("squad_v2" if impossible_answer else "squad")
if impossible_answer:
    formatted_predictions = [{"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in final_predictions.items()]
else:
    # The plain squad metric does not expect a "no_answer_probability" field.
    formatted_predictions = [{"id": k, "prediction_text": v} for k, v in final_predictions.items()]
references = [{"id": ex["id"], "answers": ex["answers"]} for ex in datasets["validation"]]
metric.compute(predictions=formatted_predictions, references=references)
```
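The n-best candidate selection above relies on a reversed `argsort` slice; a minimal sketch of how `[-1 : -n_best - 1 : -1]` picks the indices of the largest logits first (toy values, not model output):

```python
import numpy as np

# Toy logits over four token positions.
logits = np.array([0.1, 2.0, -1.0, 3.5])

# argsort returns ascending order; the reversed slice walks it backwards,
# yielding the indices of the n_best largest logits, largest first.
n_best = 2
top_indices = np.argsort(logits)[-1 : -n_best - 1 : -1].tolist()
print(top_indices)  # [3, 1]
```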
| github_jupyter |
```
import numpy as np
import pandas as pd
from os import listdir
from rdkit import Chem
from scipy.spatial.distance import cdist
from itertools import product
from rdkit.ML.Descriptors.MoleculeDescriptors import MolecularDescriptorCalculator
```
### Loading useful data
### For ECIF
```
# Possible predefined protein atoms
ECIF_ProteinAtoms = ['C;4;1;3;0;0', 'C;4;2;1;1;1', 'C;4;2;2;0;0', 'C;4;2;2;0;1',
'C;4;3;0;0;0', 'C;4;3;0;1;1', 'C;4;3;1;0;0', 'C;4;3;1;0;1',
'C;5;3;0;0;0', 'C;6;3;0;0;0', 'N;3;1;2;0;0', 'N;3;2;0;1;1',
'N;3;2;1;0;0', 'N;3;2;1;1;1', 'N;3;3;0;0;1', 'N;4;1;2;0;0',
'N;4;1;3;0;0', 'N;4;2;1;0;0', 'O;2;1;0;0;0', 'O;2;1;1;0;0',
'S;2;1;1;0;0', 'S;2;2;0;0;0']
# Possible ligand atoms according to the PDBbind 2016 "refined set"
ECIF_LigandAtoms = ['Br;1;1;0;0;0', 'C;3;3;0;1;1', 'C;4;1;1;0;0', 'C;4;1;2;0;0',
'C;4;1;3;0;0', 'C;4;2;0;0;0', 'C;4;2;1;0;0', 'C;4;2;1;0;1',
'C;4;2;1;1;1', 'C;4;2;2;0;0', 'C;4;2;2;0;1', 'C;4;3;0;0;0',
'C;4;3;0;0;1', 'C;4;3;0;1;1', 'C;4;3;1;0;0', 'C;4;3;1;0;1',
'C;4;4;0;0;0', 'C;4;4;0;0;1', 'C;5;3;0;0;0', 'C;5;3;0;1;1',
'C;6;3;0;0;0', 'Cl;1;1;0;0;0', 'F;1;1;0;0;0', 'I;1;1;0;0;0',
'N;3;1;0;0;0', 'N;3;1;1;0;0', 'N;3;1;2;0;0', 'N;3;2;0;0;0',
'N;3;2;0;0;1', 'N;3;2;0;1;1', 'N;3;2;1;0;0', 'N;3;2;1;0;1',
'N;3;2;1;1;1', 'N;3;3;0;0;0', 'N;3;3;0;0;1', 'N;3;3;0;1;1',
'N;4;1;2;0;0', 'N;4;1;3;0;0', 'N;4;2;1;0;0', 'N;4;2;2;0;0',
'N;4;2;2;0;1', 'N;4;3;0;0;0', 'N;4;3;0;0;1', 'N;4;3;1;0;0',
'N;4;3;1;0;1', 'N;4;4;0;0;0', 'N;4;4;0;0;1', 'N;5;2;0;0;0',
'N;5;3;0;0;0', 'N;5;3;0;1;1', 'O;2;1;0;0;0', 'O;2;1;1;0;0',
'O;2;2;0;0;0', 'O;2;2;0;0;1', 'O;2;2;0;1;1', 'P;5;4;0;0;0',
'P;6;4;0;0;0', 'P;6;4;0;0;1', 'P;7;4;0;0;0', 'S;2;1;0;0;0',
'S;2;1;1;0;0', 'S;2;2;0;0;0', 'S;2;2;0;0;1', 'S;2;2;0;1;1',
'S;3;3;0;0;0', 'S;3;3;0;0;1', 'S;4;3;0;0;0', 'S;6;4;0;0;0',
'S;6;4;0;0;1', 'S;7;4;0;0;0']
PossibleECIF = [i[0]+"-"+i[1] for i in product(ECIF_ProteinAtoms, ECIF_LigandAtoms)]
```
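As a quick illustration of how the pair names in `PossibleECIF` are formed, using small hypothetical subsets of the lists above:

```python
from itertools import product

# Hypothetical subsets of the protein and ligand atom-type lists above.
protein_types = ['C;4;1;3;0;0', 'O;2;1;0;0;0']
ligand_types = ['N;3;1;2;0;0', 'Cl;1;1;0;0;0']

# Each pair name joins one protein type and one ligand type with '-'.
pairs = [p + "-" + l for p, l in product(protein_types, ligand_types)]
print(len(pairs))   # 4
print(pairs[0])     # 'C;4;1;3;0;0-N;3;1;2;0;0'
```

With the full lists above (22 protein and 70 ligand atom types), `PossibleECIF` holds 22 × 70 = 1540 pair names, one per fingerprint dimension.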
### For ELEMENTS
```
ELEMENTS_ProteinAtoms = ["C","N","O", "S"]
ELEMENTS_LigandAtoms = ["Br", "C", "Cl", "F", "I", "N", "O", "P", "S"]
PossibleELEMENTS = [i[0]+"-"+i[1] for i in product(ELEMENTS_ProteinAtoms, ELEMENTS_LigandAtoms)]
```
### For ligand descriptors
```
LigandDescriptors = ['MaxEStateIndex', 'MinEStateIndex', 'MaxAbsEStateIndex', 'MinAbsEStateIndex',
'qed', 'MolWt', 'HeavyAtomMolWt', 'ExactMolWt', 'NumValenceElectrons',
'FpDensityMorgan1', 'FpDensityMorgan2', 'FpDensityMorgan3', 'BalabanJ',
'BertzCT', 'Chi0', 'Chi0n', 'Chi0v', 'Chi1', 'Chi1n', 'Chi1v', 'Chi2n',
'Chi2v', 'Chi3n', 'Chi3v', 'Chi4n', 'Chi4v', 'HallKierAlpha', 'Kappa1',
'Kappa2', 'Kappa3', 'LabuteASA', 'PEOE_VSA14', 'SMR_VSA1', 'SMR_VSA10',
'SMR_VSA2', 'SMR_VSA3', 'SMR_VSA4', 'SMR_VSA5', 'SMR_VSA6', 'SMR_VSA7',
'SMR_VSA9', 'SlogP_VSA1', 'SlogP_VSA10', 'SlogP_VSA11', 'SlogP_VSA12',
'SlogP_VSA2', 'SlogP_VSA3', 'SlogP_VSA4', 'SlogP_VSA5', 'SlogP_VSA6',
'SlogP_VSA7', 'SlogP_VSA8', 'TPSA', 'EState_VSA1', 'EState_VSA10',
'EState_VSA11', 'EState_VSA2', 'EState_VSA3', 'EState_VSA4', 'EState_VSA5',
'EState_VSA6', 'EState_VSA7', 'EState_VSA8', 'EState_VSA9', 'VSA_EState1',
'VSA_EState10', 'VSA_EState2', 'VSA_EState3', 'VSA_EState4', 'VSA_EState5',
'VSA_EState6', 'VSA_EState7', 'VSA_EState8', 'VSA_EState9', 'FractionCSP3',
'HeavyAtomCount', 'NHOHCount', 'NOCount', 'NumAliphaticCarbocycles',
'NumAliphaticHeterocycles', 'NumAliphaticRings', 'NumAromaticCarbocycles',
'NumAromaticHeterocycles', 'NumAromaticRings', 'NumHAcceptors', 'NumHDonors',
'NumHeteroatoms', 'NumRotatableBonds', 'NumSaturatedCarbocycles',
'NumSaturatedHeterocycles', 'NumSaturatedRings', 'RingCount', 'MolLogP',
'MolMR', 'fr_Al_COO', 'fr_Al_OH', 'fr_Al_OH_noTert', 'fr_ArN', 'fr_Ar_N',
'fr_Ar_NH', 'fr_Ar_OH', 'fr_COO', 'fr_COO2', 'fr_C_O', 'fr_C_O_noCOO',
'fr_C_S', 'fr_HOCCN', 'fr_Imine', 'fr_NH0', 'fr_NH1', 'fr_NH2', 'fr_N_O',
'fr_Ndealkylation1', 'fr_Ndealkylation2', 'fr_Nhpyrrole', 'fr_SH', 'fr_aldehyde',
'fr_alkyl_carbamate', 'fr_alkyl_halide', 'fr_allylic_oxid', 'fr_amide',
'fr_amidine', 'fr_aniline', 'fr_aryl_methyl', 'fr_azo', 'fr_barbitur',
'fr_benzene', 'fr_bicyclic', 'fr_dihydropyridine', 'fr_epoxide', 'fr_ester',
'fr_ether', 'fr_furan', 'fr_guanido', 'fr_halogen', 'fr_hdrzine', 'fr_hdrzone',
'fr_imidazole', 'fr_imide', 'fr_isocyan', 'fr_isothiocyan', 'fr_ketone',
'fr_ketone_Topliss', 'fr_lactam', 'fr_lactone', 'fr_methoxy', 'fr_morpholine',
'fr_nitrile', 'fr_nitro', 'fr_nitro_arom', 'fr_nitroso', 'fr_oxazole',
'fr_oxime', 'fr_para_hydroxylation', 'fr_phenol', 'fr_phenol_noOrthoHbond',
'fr_piperdine', 'fr_piperzine', 'fr_priamide', 'fr_pyridine', 'fr_quatN',
'fr_sulfide', 'fr_sulfonamd', 'fr_sulfone', 'fr_term_acetylene', 'fr_tetrazole',
'fr_thiazole', 'fr_thiocyan', 'fr_thiophene', 'fr_urea']
DescCalc = MolecularDescriptorCalculator(LigandDescriptors)
```
### An atom type from ECIF is defined as:
Atom symbol;
Explicit valence;
Attached heavy atoms;
Attached hydrogens;
Aromaticity;
Ring membership
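For example, an atom-type string can be decoded field by field; `parse_atom_type` below is a hypothetical helper for illustration, not part of ECIF:

```python
# Hypothetical helper (not part of ECIF) decoding the six ';'-separated fields.
FIELDS = ["symbol", "explicit_valence", "heavy_neighbors",
          "attached_hydrogens", "is_aromatic", "is_in_ring"]

def parse_atom_type(atom_type):
    return dict(zip(FIELDS, atom_type.split(";")))

print(parse_atom_type("C;4;1;3;0;0"))
# {'symbol': 'C', 'explicit_valence': '4', 'heavy_neighbors': '1',
#  'attached_hydrogens': '3', 'is_aromatic': '0', 'is_in_ring': '0'}
```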
```
def GetAtomType(atom):
# This function takes an atom in a molecule and returns its type as defined for ECIF
AtomType = [atom.GetSymbol(),
str(atom.GetExplicitValence()),
str(len([x.GetSymbol() for x in atom.GetNeighbors() if x.GetSymbol() != "H"])),
str(len([x.GetSymbol() for x in atom.GetNeighbors() if x.GetSymbol() == "H"])),
str(int(atom.GetIsAromatic())),
str(int(atom.IsInRing())),
]
return(";".join(AtomType))
```
### Ligands are loaded from an SDF file in a dataframe format considering the atom type definitions
```
def LoadSDFasDF(SDF):
# This function takes an SDF for a ligand as input and returns it as a pandas DataFrame with its atom types labeled according to ECIF
m = Chem.MolFromMolFile(SDF, sanitize=False)
m.UpdatePropertyCache(strict=False)
ECIF_atoms = []
for atom in m.GetAtoms():
if atom.GetSymbol() != "H": # Include only non-hydrogen atoms
entry = [int(atom.GetIdx())]
entry.append(GetAtomType(atom))
pos = m.GetConformer().GetAtomPosition(atom.GetIdx())
entry.append(float("{0:.4f}".format(pos.x)))
entry.append(float("{0:.4f}".format(pos.y)))
entry.append(float("{0:.4f}".format(pos.z)))
ECIF_atoms.append(entry)
df = pd.DataFrame(ECIF_atoms)
df.columns = ["ATOM_INDEX", "ECIF_ATOM_TYPE","X","Y","Z"]
if len(set(df["ECIF_ATOM_TYPE"]) - set(ECIF_LigandAtoms)) > 0:
print("WARNING: Ligand contains unsupported atom types. Only supported atom-type pairs are counted.")
return(df)
Atom_Keys=pd.read_csv("PDB_Atom_Keys.csv", sep=",")
def LoadPDBasDF(PDB):
# This function takes a PDB for a protein as input and returns it as a pandas DataFrame with its atom types labeled according to ECIF
ECIF_atoms = []
f = open(PDB)
for i in f:
if i[:4] == "ATOM":
# Include only non-hydrogen atoms
if (len(i[12:16].replace(" ","")) < 4 and i[12:16].replace(" ","")[0] != "H") or (len(i[12:16].replace(" ","")) == 4 and i[12:16].replace(" ","")[1] != "H" and i[12:16].replace(" ","")[0] != "H"):
ECIF_atoms.append([int(i[6:11]),
i[17:20]+"-"+i[12:16].replace(" ",""),
float(i[30:38]),
float(i[38:46]),
float(i[46:54])
])
f.close()
df = pd.DataFrame(ECIF_atoms, columns=["ATOM_INDEX","PDB_ATOM","X","Y","Z"])
df = df.merge(Atom_Keys, left_on='PDB_ATOM', right_on='PDB_ATOM')[["ATOM_INDEX", "ECIF_ATOM_TYPE", "X", "Y", "Z"]].sort_values(by="ATOM_INDEX").reset_index(drop=True)
if list(df["ECIF_ATOM_TYPE"].isna()).count(True) > 0:
print("WARNING: Protein contains unsupported atom types. Only supported atom-type pairs are counted.")
return(df)
def GetPLPairs(PDB_protein, SDF_ligand, distance_cutoff=6.0):
# This function returns the protein-ligand atom-type pairs for a given distance cutoff
# Load both structures as pandas DataFrames
Target = LoadPDBasDF(PDB_protein)
Ligand = LoadSDFasDF(SDF_ligand)
# Keep only target atoms inside a cubic box around the ligand, padded by distance_cutoff on each axis
for i in ["X","Y","Z"]:
Target = Target[Target[i] < float(Ligand[i].max())+distance_cutoff]
Target = Target[Target[i] > float(Ligand[i].min())-distance_cutoff]
# Get all possible pairs
Pairs = list(product(Target["ECIF_ATOM_TYPE"], Ligand["ECIF_ATOM_TYPE"]))
Pairs = [x[0]+"-"+x[1] for x in Pairs]
Pairs = pd.DataFrame(Pairs, columns=["ECIF_PAIR"])
Distances = cdist(Target[["X","Y","Z"]], Ligand[["X","Y","Z"]], metric="euclidean")
Distances = Distances.reshape(Distances.shape[0]*Distances.shape[1],1)
Distances = pd.DataFrame(Distances, columns=["DISTANCE"])
Pairs = pd.concat([Pairs,Distances], axis=1)
Pairs = Pairs[Pairs["DISTANCE"] <= distance_cutoff].reset_index(drop=True)
# ELEMENTS pairs can be derived directly from the ECIF pairs
Pairs["ELEMENTS_PAIR"] = [x.split("-")[0].split(";")[0]+"-"+x.split("-")[1].split(";")[0] for x in Pairs["ECIF_PAIR"]]
return Pairs
```
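The distance-cutoff filtering above can be sketched with plain NumPy (toy coordinates; `cdist` computes the same pairwise Euclidean distances):

```python
import numpy as np

# Three hypothetical protein atoms and one ligand atom (coordinates in Å).
target = np.array([[1.0, 0.0, 0.0],
                   [5.0, 0.0, 0.0],
                   [10.0, 0.0, 0.0]])
ligand = np.array([[0.0, 0.0, 0.0]])
distance_cutoff = 6.0

# Pairwise Euclidean distances via broadcasting, then flattened like the
# reshape step in GetPLPairs.
d = np.linalg.norm(target[:, None, :] - ligand[None, :, :], axis=-1)
mask = d.ravel() <= distance_cutoff
print(mask.tolist())  # [True, True, False]
```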
### Calculation of ECIF
```
def GetECIF(PDB_protein, SDF_ligand, distance_cutoff=6.0):
# Main function for the calculation of ECIF
Pairs = GetPLPairs(PDB_protein, SDF_ligand, distance_cutoff=distance_cutoff)
ECIF = [list(Pairs["ECIF_PAIR"]).count(x) for x in PossibleECIF]
return ECIF
```
### Calculation of ELEMENTS
```
def GetELEMENTS(PDB_protein, SDF_ligand, distance_cutoff=6.0):
# Function for the calculation of ELEMENTS
Pairs = GetPLPairs(PDB_protein, SDF_ligand, distance_cutoff=distance_cutoff)
ELEMENTS = [list(Pairs["ELEMENTS_PAIR"]).count(x) for x in PossibleELEMENTS]
return ELEMENTS
```
### Ligand descriptors
```
def GetRDKitDescriptors(SDF):
# Function for the calculation of ligand descriptors
mol = Chem.MolFromMolFile(SDF, sanitize=False)
mol.UpdatePropertyCache(strict=False)
Chem.GetSymmSSSR(mol)
return DescCalc.CalcDescriptors(mol)
```
| github_jupyter |
# EDA
```
path_plot = '/home/petra42/GIT/aida_question_classification/plots/'
```
## Libraries
```
import pandas as pd
import numpy as np
from collections import Counter
#visualisation
import matplotlib.pyplot as plt
import seaborn as sns
# nltk imports for eda_1 - preprocessing dataframe
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
# default setting for lemmatizer and stopwords
nltk.download('wordnet')
lemmatizer = WordNetLemmatizer()
# remove stop words
nltk.download('stopwords')
my_stopwords = set(stopwords.words('english'))
```
## Libraries for part of speech
```
import spacy
import csv
nlp = spacy.load("en_core_web_sm")
#plot spacy
from spacy import displacy
from spacy.attrs import ORTH
from nltk import pos_tag
from collections import Counter
nltk.download('averaged_perceptron_tagger')
nltk.download('punkt')
from nltk import pos_tag
from nltk import FreqDist
from nltk.tokenize import PunktSentenceTokenizer, word_tokenize
from nltk import FreqDist
```
## Functions
```
# from folder data
# import data.get_data.process_question(row)
def process_question(row):
'''join row to text'''
text = " ".join(row.split(" ")[1:])
return text
def get_plot_length_text(df, title, diagram):
'''
Create a histogram showing the distribution of the number of tokens
per line for the data set under examination.
Parameters
----------
df :
series; text to be examined
title :
string; title for the plot
diagram :
string; file name for the saved diagram
Returns
-------
None; shows the distribution plot and saves it to disk
'''
plt.hist(df.apply(lambda text: len(text.split())))
plt.xlabel('number of token')
plt.ylabel('frequency')
plt.title(title)
plt.savefig(path_plot + diagram + '.png')
plt.show()
avg = round(df.apply(lambda text: len(text.split())).mean())
maxi = max(list(df.apply(lambda text: len(text.split()))))
mini = min(list(df.apply(lambda text: len(text.split()))))
print(f'The texts have a mean length of {avg} tokens.')
print(f'The longest text has {maxi} tokens, the shortest has {mini} tokens.')
def corpus_func(df):
'''
Create a text corpus from a pd.Series, df['column'].
Parameters
----------
df : series of strings
Returns
-------
concatenated string with the marker ' XXX ' as separator
'''
return " XXX ".join(text for text in df)
def split_corpus_func(corpus):
'''
Split a text corpus back into a list of texts, using the marker ' XXX '
as separator.
Parameters
----------
corpus : string
Returns
-------
list of texts
'''
return corpus.split(' XXX ')
def preprocess_dataframe(df):
'''
create new columns in the data frame
new columns: 'text' => stopwords removed (english)
'text_clean' => regex-cleaned, lowercase
'text_lemma' => lemmatized
param: df
returns: df with new columns
'''
corpus = corpus_func(df['question'])
text_corpus = stopword_text(corpus)
df['text'] = split_corpus_func(text_corpus)
clean_corpus = clean_text(text_corpus)
df['text_clean'] = split_corpus_func(clean_corpus)
lemma = lem_text(clean_corpus)
df['text_lemma'] = split_corpus_func(lemma)
return df
#import utils.text_manipulation
def clean_text(text):
"""
Regex cleaning of the text. Filters everything except alphanumerical and '.
Return is turned into lower case
Parameters
----------
text : string
text to be cleaned
Returns
-------
string
lower case regex cleaned text
"""
text = text.replace("´", "'")
text = text.replace("'s", " ")
digi_punct = "[^a-zA-Z.1234567890#' ]"
text = re.sub(digi_punct, " ", text)
text = " ".join(text.split())
text = text.lower()
return text
def stopword_text(text):
"""
Remove all words in the text that are in the stopword list
Parameters
----------
text : string
Returns
-------
string
text with all stopwords removed
"""
return " ".join([word for word in text.split() if word not in my_stopwords])
def lem_text(text):
"""
Group the different inflected forms of a word so they can be analysed as
a single item
Parameters
----------
text : string
Returns
-------
string
text with lemmas
"""
lem_sentence = text.split()
for i, word in enumerate(text.split()):
for pos in "n", "v", "a", "r":
lem = lemmatizer.lemmatize(word, pos=pos)
if lem != word:
lem_sentence[i] = lem
break
else:
lem_sentence[i] = word
return " ".join(lem_sentence)
```
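The ` XXX ` marker round-trip used by `corpus_func` and `split_corpus_func` can be checked on a toy example:

```python
# The ' XXX ' marker lets the whole corpus be processed in one pass and
# then split back into one entry per question.
questions = ["How far is the moon ?", "Who wrote Hamlet ?"]
corpus = " XXX ".join(questions)
restored = corpus.split(" XXX ")
print(restored == questions)  # True
```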
# Load data
```
# data
path_class_def = 'https://cogcomp.seas.upenn.edu/Data/QA/QC/definition.html'
path_train_data = 'https://cogcomp.seas.upenn.edu/Data/QA/QC/train_5500.label'
path_test_data = 'https://cogcomp.seas.upenn.edu/Data/QA/QC/TREC_10.label'
#load
train_df = pd.read_table(path_train_data, encoding = "ISO-8859-1", header=None)
train_df.columns = ["raw"]
train_df['category'] = train_df.apply (lambda row: row["raw"].split(":")[0], axis=1)
train_df['subcategory'] = train_df.apply (lambda row: row["raw"].split(" ")[0].split(":")[1], axis=1)
train_df['question'] = train_df.apply (lambda row: process_question(row["raw"]), axis=1)
train_df.head(2)
test_df = pd.read_table(path_test_data, encoding = "ISO-8859-1", header=None)
test_df.columns = ["raw"]
test_df['category'] = test_df.apply(lambda row: row["raw"].split(":")[0], axis=1)
test_df['subcategory'] = test_df.apply(lambda row: row["raw"].split(" ")[0].split(":")[1], axis=1)
test_df['question'] = test_df.apply(lambda row: process_question(row["raw"]), axis=1)
test_df.head(2)
```
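The label parsing above can be illustrated on a single raw line in the TREC format `<category>:<subcategory> <question text>` (a typical line from the training data):

```python
# A typical raw TREC label line.
raw = "DESC:manner How did serfdom develop in and then leave Russia ?"

category = raw.split(":")[0]                      # part before the colon
subcategory = raw.split(" ")[0].split(":")[1]     # part after the colon in the first token
question = " ".join(raw.split(" ")[1:])           # everything after the first token
print(category, subcategory)  # DESC manner
```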
# Corpus
```
# corpus from df_train['question']
corpus = corpus_func(train_df['question'])
print(corpus[:500])
#corpus from df_test['question']
corpus_test = corpus_func(test_df['question'])
print(corpus_test[:500])
text = corpus
#sample
text_sample = process_question(train_df['raw'][10])
```
# Part of speech tags
```
# Split the data into individual tokens.
text_splitted = text.split()
# or:
word_tokens = word_tokenize(text)
print('split:\t',text_splitted[:100],'\ntokenize:',word_tokens[:100])
# Tag each token into a specific part of speech
word_tagged = nltk.pos_tag(word_tokens)
print('tagged:\t',word_tagged[:10])
# Print out the frequency for each part of speech. (i.e. 2 NN's, 100 VBG's, etc.)
# To see the acronym definitions for each part of speech,
# look at the following link: https://pythonprogramming.net/part-of-speech-tagging-nltk-tutorial/
# with counter from collections
word_count = sorted(Counter(word_tagged).items(), key = lambda kv:( kv[1], kv[0]), reverse=True)
#or with nltk FreqDist
fdist = FreqDist(word_tagged)
first10_count = fdist.most_common(10)
print(f'word_count: {word_count}\nlen word_count: {len(word_count)}')
print(f'first10_count: {first10_count}')
doc = nlp(text_sample)
print('token','-', 'lemma_','-', 'lower_','-', 'pos_','-', 'tag_','-', 'dep_','-', 'sentiment','-', 'is_alpha','-', 'is_stop')
for token in doc:
#any token_attributs; more: https://spacy.io/api/token#attributes
print('------------------------------------------------------------------------\n')
print(token.text,'-', token.lemma_, '-', token.lower_,'-', token.pos_,'-', token.tag_,'-',
token.dep_,'-', token.sentiment,'-', token.is_alpha,'-', token.is_stop)
'''
Text: The original word text.
Lemma: The base form of the word.
POS: The simple UPOS part-of-speech tag.
Tag: The detailed part-of-speech tag.
Dep: Syntactic dependency, i.e. the relation between tokens.
Shape: The word shape – capitalization, punctuation, digits.
is alpha: Is the token an alpha character?
is stop: Is the token part of a stop list, i.e. the most common words of the language?'''
# Print out the spacy doc, truncated to the first 300 tokens (too long for display otherwise).
print(doc[:300])
# check doc is tagged?
print(doc.is_tagged)
#token_tags + explain
print([(token.text, token.tag_, spacy.explain(token.tag_)) for token in doc])
#_entities + label
print([(ent.text, ent.label_) for ent in doc.ents])
#token_dep_ + explain
print([(token.text, token.dep_, spacy.explain(token.dep_)) for token in doc])
```
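The `Counter`-based sort above orders `(token, tag)` pairs by frequency, then lexicographically, in descending order; a toy example:

```python
from collections import Counter

# Toy (token, tag) pairs standing in for the tagged corpus.
tagged = [("what", "WP"), ("is", "VBZ"), ("what", "WP")]

# Sort by (count, pair) descending, as in the cell above.
counts = sorted(Counter(tagged).items(), key=lambda kv: (kv[1], kv[0]), reverse=True)
print(counts[0])  # (('what', 'WP'), 2)
```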
spaCy add-on:
doc, token attributes, and noun chunks
```
#any token_attributs; more: https://spacy.io/api/token#attributes
print('token','-', 'lemma_','-', 'lower_','-', 'pos_','-', 'tag_','-', 'dep_','-', 'sentiment','-', 'is_alpha','-', 'is_stop')
print('------------------------------------------------------------------------')
for i, token in zip(range (10), doc):
print(i, token.text,'-', token.lemma_, '-', token.lower_,'-', token.pos_,'-', token.tag_,'-',
token.dep_,'-', token.sentiment,'-', token.is_alpha,'-', token.is_stop)
#Base Noun Phrases - flat phrases whose head is based on a noun
for chunk in doc[0:10].noun_chunks:
print(chunk.text, chunk.root.text, chunk.root.dep_,
chunk.root.head.text)
# For the first 100 words, print out the text, tag, and explanation of the tag.
# If you want to format things better, try using f strings! For example, look at the following:
# for number in range(0, 100):
# print(f"Original Number: {number}, {' ':{10}} Original Number times 2: {number * 10}")
for i, token in zip(range(100), doc):
print(f"{token}\t{' ':{7}}\t{token.tag_}\t\t{' ':{8}}{spacy.explain(token.tag_)}")
# Aligning with {' ':{10}} padding or tabs alone doesn't line the columns up very nicely.
#Count the frequencies of a given attribute with ORTH
print(doc.count_by(ORTH))
print(len(doc.count_by(ORTH)))
#other way
dict_tags={}
for token in doc:
#print(token,token.tag_)
try:
dict_tags[token.tag_].append(token.text)
except:
dict_tags[token.tag_]=[token.text]
print(dict_tags)
# Walk up the dependency tree from each token to the root, collecting the tags along the way.
tag_labels = []
for token in doc:
    while token.head != token:
        tag_labels.append(token.tag_)
        token = token.head
print(tag_labels)
tags_count=[]
for value in dict_tags.values():
tags_count.append(len(value))
print(tags_count)
# Visualize the POS tags of the first sentence in the original text file with displacy.
#sentence
sents = list(doc.sents)
print(len(sents))
sents[0]
print(sents[0])
displacy.render(sents[0], style="dep", jupyter=True, options={'distance': 100})
# Note: displacy produces SVG/HTML, not a matplotlib figure, so plt.savefig() cannot
# capture it. Render to a string and write the SVG to disk instead:
svg = displacy.render(sents[0], style="dep", jupyter=False, options={'distance': 100})
with open(path_plot + 'POS_tag.svg', 'w') as f_svg:
    f_svg.write(svg)
# Visualize the named entities of the first sentence with displacy.
displacy.render(sents[0], style="ent", jupyter=True)
```
copy from:
https://shirishkadam.com/2017/07/03/nlp-question-classification-using-support-vector-machines-spacyscikit-learnpandas/
```
def process_question(question, qclass, en_nlp):
    en_doc = en_nlp(question)
    sent = list(en_doc.sents)[0]
    wh_bi_gram = []
    root_token = ""
    wh_pos = ""
    wh_nbor_pos = ""
    wh_word = ""
    for token in sent:
        # WDT/WP/WP$/WRB are the wh-word POS tags.
        if token.tag_ in ("WDT", "WP", "WP$", "WRB"):
            wh_pos = token.tag_
            wh_word = token.text
            wh_bi_gram.append(token.text)
            wh_bi_gram.append(str(en_doc[token.i + 1]))
            wh_nbor_pos = en_doc[token.i + 1].tag_
        if token.dep_ == "ROOT":
            root_token = token.tag_
    # write_each_record_to_csv comes from the linked post and is not defined in this notebook.
    write_each_record_to_csv(wh_pos, wh_word, wh_bi_gram, wh_nbor_pos, root_token)
process_question(train_df['question'][50], train_df['category'][50], nlp)
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from pathlib import Path
import json
import sys
sys.path.append("../")
import anamic
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tifffile
from scipy import ndimage
import read_roi
from tqdm.auto import trange
from tqdm.auto import tqdm
pixel_size = 107 # nm/pixel
time_per_frame = 2.33 # s
```
## Load image and ROIs.
```
# Open the image and its starting points for fitting
data_dir = Path('/home/hadim/Documents/Code/Postdoc/ij/testdata/anamic')
fname = data_dir / "IRM TEST 2019-06-28-ch1_CROP_XY.tif"
# Open the image
image = tifffile.imread(str(fname))
# Load lines
rois = read_roi.read_roi_zip(fname.with_suffix('.zip'))
roi = list(rois.values())[4]
# Get microtubule tip coordinates
tip_start = np.array([roi['y2'], roi['x2']])
tip_end = np.array([roi['y1'], roi['x1']])
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(image[0], interpolation='none', origin=[0, 0], cmap='viridis')
ax.set_aspect('equal')
ax.scatter(tip_start[1], tip_start[0], color='red', s=200, marker="x", lw=4)
ax.scatter(tip_end[1], tip_end[0], color='red', s=200, marker="x", lw=4)
```
## Define fitting parameters
```
# Define fitting parameters
args = {}
args['get_thick_line_args'] = {}
args['get_thick_line_args']['length_spacing'] = 1 # pixel
args['get_thick_line_args']['line_thickness'] = 5000 / pixel_size # pixel
args['get_thick_line_args']['width_spacing'] = 1 # pixel
args['perpendicular_line_fit_args'] = {}
args['perpendicular_line_fit_args']['length_spacing'] = 0.1 # pixel
args['perpendicular_line_fit_args']['fit_threshold'] = 0.15
args['perpendicular_line_fit_args']['continuous_discard'] = False
args['offset_start'] = 2000 / pixel_size # pixel
args['offset_end'] = 2000 / pixel_size # pixel
args['tip_fit_args'] = {}
args['tip_fit_args']['length_spacing'] = 0.1 # pixel
args['tip_fit_args']['line_thickness'] = 400 / pixel_size # pixel
args['tip_fit_args']['width_spacing'] = 0.1 # pixel
```
## Iterate over all frames and do the fitting.
```
# Use the first frame for the initial fitting.
frame = image[0]
lines = anamic.fitter.get_thick_line(tip_start, tip_end, **args['get_thick_line_args'])
fitted_line = anamic.fitter.perpendicular_line_fit(lines, frame, **args['perpendicular_line_fit_args'])
# Now we fit the best line from those points
a, b = np.polyfit(fitted_line[:, 1], fitted_line[:, 0], deg=1)
new_point1 = np.array([a * fitted_line[0, 1] + b, fitted_line[0, 1]])
new_point2 = np.array([a * fitted_line[-1, 1] + b, fitted_line[-1, 1]])
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(frame, interpolation='none', origin=[0, 0], cmap='viridis')
ax.set_aspect('equal')
ax.scatter(new_point1[1], new_point1[0], color='red', s=200, marker="x", lw=2)
ax.scatter(new_point2[1], new_point2[0], color='red', s=200, marker="x", lw=2)
data = []
for i in trange(len(image[:])):
frame = image[i]
# Calculate the vector of the line and its norm
vec = new_point2 - new_point1
# Get the coordinates of the points we'll use for line fitting.
start_point = anamic.geometry.get_point_from_vector(-vec, new_point2, args['offset_start'])
end_point = anamic.geometry.get_point_from_vector(vec, new_point2, args['offset_end'])
line_fit_tips = np.array([start_point, end_point])
# Fit the tip
tip_line_fit_results = anamic.fitter.tip_line_fit(line_fit_tips[0], line_fit_tips[1], frame, **args['tip_fit_args'])
x_profile, y_profile, fit_result, fit_func = tip_line_fit_results
fit_values = fit_result.values
# Compute x and y tip coordinates.
mu = fit_values['mu']
vec = line_fit_tips[1] - line_fit_tips[0]
y_fitted, x_fitted = anamic.geometry.get_point_from_vector(vec, line_fit_tips[0], mu)
# Update `new_point2` for the next fit.
new_point2 = np.array([y_fitted, x_fitted])
# Save the data
datum = {}
datum['frame'] = i
datum['x'] = x_fitted
datum['y'] = y_fitted
datum['sigma'] = fit_values['sigma'] * pixel_size
data.append(datum)
data = pd.DataFrame(data)
# Compute length and convert to spatial and temporal values.
init_position = data[['x', 'y']].iloc[0]
data['length'] = np.sqrt(np.sum((data[['x', 'y']] - init_position) ** 2, axis=1))
data['length_um'] = data['length'] * (pixel_size * 1e-3)
data['time_s'] = data['frame'] * time_per_frame
data['time_min'] = data['time_s'] / 60
data.to_csv(fname.with_suffix('.csv'), index=False)
```
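The length computation above converts pixel displacements to micrometers; a toy sketch using the same formula (hypothetical tip coordinates, not fit results):

```python
import numpy as np

pixel_size = 107  # nm/pixel, as above

# Hypothetical tip positions (y, x) in pixels for two frames;
# length is measured relative to the first frame.
xy = np.array([[0.0, 0.0], [30.0, 40.0]])
length_px = np.sqrt(((xy - xy[0]) ** 2).sum(axis=1))  # [0., 50.]
length_um = length_px * pixel_size * 1e-3
print([round(v, 2) for v in length_um])  # [0.0, 5.35]
```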
## Load results from CSV
```
data = pd.read_csv(fname.with_suffix('.csv'))
```
## Visualize the results.
```
fig, ax = plt.subplots(figsize=(10, 10))
ax.imshow(image[0], interpolation='none', origin=[0, 0], cmap='gray')
ax.set_aspect('equal')
df = data.iloc[::100]
ax.scatter(df['x'], df['y'], c=df['frame'].values, s=150, marker='x', lw=2, cmap='Reds')
fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(16, 10), sharex=True)
ax1.plot(data['time_min'], data['sigma'], marker='o', ms=1, alpha=0.4)
# Rolling average.
rolling_window_s = 3 * 60 # s
rolling_window = int(rolling_window_s / time_per_frame)
rolling_values = data['sigma'].rolling(rolling_window).mean()
ax1.plot(data['time_min'], rolling_values, marker='o', ms=1, alpha=0.4)
ax1.set_xlabel('Time (min)', fontsize=20)
ax1.set_ylabel('Sigma Value (nm)', fontsize=20)
ax2.plot(data['time_min'], data['length_um'], marker='o', ms=1)
ax2.set_xlabel('Time (min)', fontsize=20)
ax2.set_ylabel('Length (um)', fontsize=20)
```
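The rolling-average window above is derived from the frame interval; with `time_per_frame = 2.33` s, a 3-minute window spans 77 frames, and the first `window - 1` rolling values are NaN until the window fills:

```python
import pandas as pd

time_per_frame = 2.33        # s, as above
rolling_window_s = 3 * 60    # smooth over ~3 minutes
rolling_window = int(rolling_window_s / time_per_frame)
print(rolling_window)  # 77

# Toy series standing in for the sigma values.
s = pd.Series(range(100), dtype=float)
smoothed = s.rolling(rolling_window).mean()
print(int(smoothed.isna().sum()))  # 76
```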
## Make movie with overlay
```
import imageio
with imageio.get_writer(fname.with_suffix('.mp4'), mode='I') as f:
df = data.iloc[:]
for _, row in tqdm(df.iterrows(), total=len(df)):
frame = image[int(row['frame'])]
fig, ax = plt.subplots(figsize=(6.08, 6.08), dpi=100)
ax.imshow(frame, interpolation='none', origin=[0, 0], cmap='gray')
ax.set_aspect('equal')
ax.scatter(row['x'], row['y'], color='none', edgecolors='red', s=500, marker='o', lw=2)
fig.canvas.draw()
buffer_image = np.array(fig.canvas.renderer.buffer_rgba())
f.append_data(buffer_image)
fig.clear()
plt.close('all')
import base64
from IPython import display
video = open(fname.with_suffix('.mp4'), "rb").read()
encoded = base64.b64encode(video)
display.HTML(data=f'<video width=600 controls><source src="data:video/mp4;base64,{encoded.decode("ascii")}" type="video/mp4" /></video>')
```
## Build kymograph
```
offset_start = 4000 / pixel_size # pixel
offset_end = 100 / pixel_size # pixel
vec = data.iloc[-1][['x', 'y']] - data.iloc[0][['x', 'y']]
point1 = data.loc[data['length'].idxmin()][['x', 'y']]
point1 = anamic.geometry.get_point_from_vector(-vec, point1, offset_start)
point1 = point1[::-1]
point2 = data.loc[data['length'].idxmax()][['x', 'y']]
point2 = anamic.geometry.get_point_from_vector(vec, point2, offset_end)
point2 = point2[::-1]
kymograph = []
for i in trange(len(image[:])):
frame = image[i]
_, profile = anamic.fitter.line_profile(frame, point1, point2, line_thickness=0, normalized_intensities=False)
kymograph.append(profile)
kymograph = np.array(kymograph)
fig, ax = plt.subplots(figsize=(10, 10))
ax.imshow(kymograph, interpolation='none', origin=[0, 0], cmap='gray')
ax.set_aspect('equal')
```
| github_jupyter |
```
import numpy as np
import csv, gzip, os, sys, gc
import math
import torch
from torch import nn
import torch.optim as optim
from torch.nn import functional as F
import logging
import datetime
import optparse
import pandas as pd
from sklearn.metrics import log_loss
import ast
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from scipy.ndimage import uniform_filter
from torch.optim.lr_scheduler import StepLR
from apex.parallel import DistributedDataParallel as DDP
from apex.fp16_utils import *
from apex import amp, optimizers
from apex.multi_tensor_apply import multi_tensor_applier
# Parse command-line options
parser = optparse.OptionParser()
parser.add_option('-s', '--seed', action="store", dest="seed", help="model seed", default="1234")
parser.add_option('-o', '--fold', action="store", dest="fold", help="Fold for split", default="0")
parser.add_option('-p', '--nbags', action="store", dest="nbags", help="Number of bags for averaging", default="4")
parser.add_option('-e', '--epochs', action="store", dest="epochs", help="epochs", default="10")
parser.add_option('-b', '--batchsize', action="store", dest="batchsize", help="batch size", default="4")
parser.add_option('-r', '--rootpath', action="store", dest="rootpath", help="root directory", default="")
parser.add_option('-i', '--imgpath', action="store", dest="imgpath", help="image directory", default="data/mount/512X512X6/")
parser.add_option('-w', '--workpath', action="store", dest="workpath", help="Working path", default="data/resnext101v12fold1/")
parser.add_option('-f', '--weightsname', action="store", dest="weightsname", help="Weights file name", default="pytorch_model.bin")
parser.add_option('-l', '--lr', action="store", dest="lr", help="learning rate", default="0.00005")
parser.add_option('-g', '--logmsg', action="store", dest="logmsg", help="log message", default="Recursion-pytorch")
parser.add_option('-c', '--size', action="store", dest="size", help="model size", default="512")
parser.add_option('-a', '--globalepoch', action="store", dest="globalepoch", help="global epoch", default="3")
parser.add_option('-n', '--loadcsv', action="store", dest="loadcsv", help="Convert csv embeddings to numpy", default="F")
parser.add_option('-j', '--lstm_units', action="store", dest="lstm_units", help="Lstm units", default="128")
parser.add_option('-d', '--dropout', action="store", dest="dropout", help="LSTM input spatial dropout", default="0.3")
parser.add_option('-z', '--decay', action="store", dest="decay", help="Weight Decay", default="0.0")
parser.add_option('-m', '--lrgamma', action="store", dest="lrgamma", help="Scheduler Learning Rate Gamma", default="1.0")
parser.add_option('-k', '--ttahflip', action="store", dest="ttahflip", help="Bag with horizontal flip on and off", default="F")
parser.add_option('-q', '--ttatranspose', action="store", dest="ttatranspose", help="Bag with transpose on and off", default="F")
parser.add_option('-x', '--datapath', action="store", dest="datapath", help="Data path", default="data")
options, args = parser.parse_args(['--datapath', 'data/resnext101v12fold1'])
package_dir = options.rootpath
sys.path.append(package_dir)
sys.path.insert(0, 'scripts')
from logs import get_logger
from utils import dumpobj, loadobj, GradualWarmupScheduler
# Print info about environments
logger = get_logger(options.logmsg, 'INFO') # noqa
logger.info('Cuda set up : time {}'.format(datetime.datetime.now().time()))
device=torch.device('cuda')
logger.info('Device : {}'.format(torch.cuda.get_device_name(0)))
logger.info('Cuda available : {}'.format(torch.cuda.is_available()))
n_gpu = torch.cuda.device_count()
logger.info('Cuda n_gpus : {}'.format(n_gpu ))
logger.info('Load params : time {}'.format(datetime.datetime.now().time()))
for (k,v) in options.__dict__.items():
logger.info('{}{}'.format(k.ljust(20), v))
SEED = int(options.seed)
SIZE = int(options.size)
EPOCHS = int(options.epochs)
GLOBALEPOCH=int(options.globalepoch)
n_epochs = EPOCHS
lr=float(options.lr)
lrgamma=float(options.lrgamma)
DECAY=float(options.decay)
batch_size = int(options.batchsize)
ROOT = options.rootpath
path_data = os.path.join(ROOT, options.datapath)
# path_img = os.path.join(ROOT, options.imgpath)
WORK_DIR = os.path.join(ROOT, options.workpath)
path_emb = os.path.join(ROOT, options.workpath)
WEIGHTS_NAME = options.weightsname
fold = int(options.fold)
LOADCSV= options.loadcsv=='T'
LSTM_UNITS=int(options.lstm_units)
nbags=int(options.nbags)
DROPOUT=float(options.dropout)
TTAHFLIP= 'T' if options.ttahflip=='T' else ''
TTATRANSPOSE= 'P' if options.ttatranspose=='T' else ''
n_classes = 6
label_cols = ['epidural', 'intraparenchymal', 'intraventricular', 'subarachnoid', 'subdural', 'any']
def makeSub(ypred, imgs):
imgls = np.array(imgs).repeat(len(label_cols))
icdls = pd.Series(label_cols*ypred.shape[0])
yidx = ['{}_{}'.format(i,j) for i,j in zip(imgls, icdls)]
subdf = pd.DataFrame({'ID' : yidx, 'Label': ypred.flatten()})
return subdf
class SpatialDropout(nn.Dropout2d):
def forward(self, x):
x = x.unsqueeze(2) # (N, T, 1, K)
x = x.permute(0, 3, 2, 1) # (N, K, 1, T)
x = super(SpatialDropout, self).forward(x) # (N, K, 1, T), some features are masked
x = x.permute(0, 3, 2, 1) # (N, T, 1, K)
x = x.squeeze(2) # (N, T, K)
return x
def criterion(data, targets, criterion = torch.nn.BCEWithLogitsLoss()):
''' Define custom loss function for weighted BCE on 'target' column '''
loss_all = criterion(data, targets)
loss_any = criterion(data[:,-1:], targets[:,-1:])
return (loss_all*6 + loss_any*1)/7
class IntracranialDataset(Dataset):
def __init__(self, df, mat, labels=label_cols):
self.data = df
self.mat = mat
self.labels = labels
self.patients = df.SliceID.unique()
self.data = self.data.set_index('SliceID')
def __len__(self):
return len(self.patients)
def __getitem__(self, idx):
patidx = self.patients[idx]
patdf = self.data.loc[patidx].sort_values('seq')
patemb = self.mat[patdf['embidx'].values]
patdeltalag = np.zeros(patemb.shape)
patdeltalead = np.zeros(patemb.shape)
patdeltalag [1:] = patemb[1:]-patemb[:-1]
patdeltalead[:-1] = patemb[:-1]-patemb[1:]
patemb = np.concatenate((patemb, patdeltalag, patdeltalead), -1)
ids = torch.tensor(patdf['embidx'].values)
if self.labels:
labels = torch.tensor(patdf[label_cols].values)
return {'emb': patemb, 'embidx' : ids, 'labels': labels}
else:
return {'emb': patemb, 'embidx' : ids}
def predict(loader):
valls = []
imgls = []
imgdf = loader.dataset.data.reset_index().set_index('embidx')[['Image']].copy()
for step, batch in enumerate(loader):
inputs = batch["emb"]
mask = batch['mask'].to(device, dtype=torch.int)
inputs = inputs.to(device, dtype=torch.float)
logits = model(inputs)
# get the mask for masked labels
maskidx = mask.view(-1)==1
        # reshape logits to (N*T, n_classes) and keep only unmasked positions
logits = logits.view(-1, n_classes)[maskidx]
valls.append(torch.sigmoid(logits).detach().cpu().numpy())
# Get the list of images
embidx = batch["embidx"].detach().cpu().numpy().astype(np.int32)
embidx = embidx.flatten()[embidx.flatten()>-1]
images = imgdf.loc[embidx].Image.tolist()
imgls += images
return np.concatenate(valls, 0), imgls
# Print info about environments
logger.info('Cuda set up : time {}'.format(datetime.datetime.now().time()))
# Get image sequences
trnmdf = pd.read_csv(os.path.join(path_data, 'train_metadata.csv'))
tstmdf = pd.read_csv(os.path.join(path_data, 'test_metadata.csv'))
trnmdf['SliceID'] = trnmdf[['PatientID', 'SeriesInstanceUID', 'StudyInstanceUID']].apply(lambda x: '{}__{}__{}'.format(*x.tolist()), 1)
tstmdf['SliceID'] = tstmdf[['PatientID', 'SeriesInstanceUID', 'StudyInstanceUID']].apply(lambda x: '{}__{}__{}'.format(*x.tolist()), 1)
poscols = ['ImagePos{}'.format(i) for i in range(1, 4)]
trnmdf[poscols] = pd.DataFrame(trnmdf['ImagePositionPatient']\
.apply(lambda x: list(map(float, ast.literal_eval(x)))).tolist())
tstmdf[poscols] = pd.DataFrame(tstmdf['ImagePositionPatient']\
.apply(lambda x: list(map(float, ast.literal_eval(x)))).tolist())
trnmdf = trnmdf.sort_values(['SliceID']+poscols)\
[['PatientID', 'SliceID', 'SOPInstanceUID']+poscols].reset_index(drop=True)
tstmdf = tstmdf.sort_values(['SliceID']+poscols)\
[['PatientID', 'SliceID', 'SOPInstanceUID']+poscols].reset_index(drop=True)
trnmdf['seq'] = (trnmdf.groupby(['SliceID']).cumcount() + 1)
tstmdf['seq'] = (tstmdf.groupby(['SliceID']).cumcount() + 1)
keepcols = ['PatientID', 'SliceID', 'SOPInstanceUID', 'seq']
trnmdf = trnmdf[keepcols]
tstmdf = tstmdf[keepcols]
trnmdf.columns = tstmdf.columns = ['PatientID', 'SliceID', 'Image', 'seq']
SIZE=480
fold=1
GLOBALEPOCH=0
# Load Data Frames
trndf = loadobj(os.path.join(path_emb, 'loader_trn_size{}_fold{}_ep{}'.format(SIZE, fold, GLOBALEPOCH))).dataset.data
valdf = loadobj(os.path.join(path_emb, 'loader_val_size{}_fold{}_ep{}'.format(SIZE, fold, GLOBALEPOCH))).dataset.data
tstdf = loadobj(os.path.join('data/stg2tst', 'loader_tst2_size{}_fold{}_ep{}'.format(SIZE, fold, GLOBALEPOCH))).dataset.data
trndf['embidx'] = range(trndf.shape[0])
valdf['embidx'] = range(valdf.shape[0])
tstdf['embidx'] = range(tstdf.shape[0])
trndf = trndf.merge(trnmdf.drop('PatientID', 1), on = 'Image')
valdf = valdf.merge(trnmdf.drop('PatientID', 1), on = 'Image')
# tstdf = tstdf.merge(trnmdf.drop('PatientID', 1), on = 'Image')
tstdf = tstdf.merge(tstmdf, on = 'Image')
trndf.shape
valdf.shape
tstmdf.shape
tstdf.shape
logger.info('Trn df shape {} {}'.format(*trndf.shape))
logger.info('Val df shape {} {}'.format(*valdf.shape))
logger.info('Tst df shape {} {}'.format(*tstdf.shape))
```
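The `IntracranialDataset` above concatenates each slice embedding with its backward (lag) and forward (lead) differences along the feature axis. A minimal sketch of that construction (the function name is assumed for illustration):

```python
import numpy as np

def add_delta_features(emb):
    """Concatenate embeddings with their lag and lead deltas along features."""
    lag = np.zeros_like(emb)
    lead = np.zeros_like(emb)
    lag[1:] = emb[1:] - emb[:-1]    # difference with the previous slice
    lead[:-1] = emb[:-1] - emb[1:]  # difference with the next slice
    return np.concatenate((emb, lag, lead), axis=-1)

emb = np.arange(6, dtype=float).reshape(3, 2)  # 3 slices, 2 features each
out = add_delta_features(emb)                  # shape (3, 6)
```

This triples the feature dimension that the downstream sequence model consumes.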
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Train, hyperparameter tune, and deploy with PyTorch Lightning
## Introduction:
## Prerequisites:
* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning
* Install the AML SDK
* Create a workspace and download its configuration file (`config.json`)
Reference:
Train PyTorch models at scale with Azure Machine Learning:
https://docs.microsoft.com/ja-jp/azure/machine-learning/how-to-train-pytorch
Training Your First Distributed PyTorch Lightning Model with Azure ML:
https://medium.com/microsoftazure/training-your-first-distributed-pytorch-lightning-model-with-azure-ml-f493d370acb
FINETUNING TORCHVISION MODELS:
https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html
Split data with PyTorch and torchvision:
https://stackoverflow.com/questions/61811946/train-valid-test-split-for-custom-dataset-using-pytorch-and-torchvision
Multi-GPU in PyTorch Lightning:
https://towardsdatascience.com/9-tips-for-training-lightning-fast-neural-networks-in-pytorch-8e63a502f565
```
%matplotlib inline
import numpy as np
import os
import matplotlib.pyplot as plt
import azureml
from azureml.core import Workspace
from azureml.core import Experiment
from azureml.core import ScriptRunConfig
from azureml.core import Dataset
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core import Environment
from azureml.telemetry import set_diagnostics_collection
from azureml.widgets import RunDetails
from azureml.train.hyperdrive import RandomParameterSampling, BayesianParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal
from azureml.train.hyperdrive import choice, loguniform, uniform
# check core SDK version number
print('This code run confirmed - SDK version: 1.19.0')
print("Azure ML SDK Version: ", azureml.core.VERSION)
set_diagnostics_collection(send_diagnostics=True)
```
# Initialize workspace
Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
```
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
exp = Experiment(workspace=ws, name='ImageClassification-PyTorchLightning')
```
# Connect dataset
Connect to the `cat_dogs` dataset that was created via the Azure ML SDK or Portal, so it can be mounted for training.
```
dataset = Dataset.get_by_name(ws, name='cat_dogs')
```
# Set training cluster - AmlCompute
You can create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model.
```
compute_target = ComputeTarget(workspace=ws, name='gpucluster6')
compute_target
```
# Create an environment
Define a conda environment YAML file with your training script dependencies and create an Azure ML environment.
References:
Document:
https://docs.microsoft.com/ja-jp/azure/machine-learning/how-to-use-environments#use-a-prebuilt-docker-image
base_image:
https://github.com/Azure/AzureML-Containers
```
env = Environment.from_conda_specification('my_pl', 'environment.yml')
# specify a GPU base image
env.docker.enabled = True
env.docker.base_image = (
"mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04"
)
env.docker.arguments = ['--shm-size', '2g']
```
# Prepare training script
Now you will need to create your training script. In this tutorial, the training script is already provided for you at `train.py`. In practice, you should be able to take any custom training script as is and run it with Azure ML without having to modify your code.
However, if you would like to use Azure ML's tracking and metrics capabilities, you will have to add a small amount of Azure ML code inside your training script.
In `train.py`, we will log some metrics to our Azure ML run. To do so, we will access the Azure ML Run object within the script:
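A minimal sketch of what that logging might look like inside `train.py` (the stub fallback is only so the snippet runs outside Azure ML, and the metric value is a placeholder):

```python
# Hedged sketch: log a metric to the current Azure ML run. If the
# azureml-core package is unavailable, fall back to a local stub so the
# snippet still runs.
try:
    from azureml.core import Run
    run = Run.get_context()
except ImportError:
    class _StubRun:
        def __init__(self):
            self.metrics = {}
        def log(self, name, value):
            self.metrics.setdefault(name, []).append(value)
    run = _StubRun()

val_accuracy = 0.91  # placeholder: computed by the training loop in practice
run.log('Accuracy', val_accuracy)
```

Metrics logged this way become visible in the run history and can serve as the primary metric for hyperparameter tuning later.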
```
import shutil
script_folder = './script'
os.makedirs(script_folder, exist_ok=True)
# the training logic is in the train.py file.
shutil.copy('./train.py', script_folder)
```
# Configure the Single job
Create a ScriptRunConfig object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on. The following code will configure a single-node PyTorch job.
```
arguments = [
'--data-folder', dataset.as_mount(),
'--batch-size', 50,
'--epoch', 10,
'--learning-rate', 0.001,
'--momentum', 0.9,
'--model-name', 'resnet',
'--optimizer', 'Adagrad',
'--criterion', 'cross_entropy',
'--feature_extract', True
]
config = ScriptRunConfig(
source_directory=script_folder,
script='train.py',
arguments=arguments,
compute_target=compute_target,
max_run_duration_seconds=900, # 15 minutes
environment=env
)
```
## Submit job to run
Submit the run configuration to the Azure ML experiment to kick off the execution.
```
run = exp.submit(config)
```
### Monitor the Run
As the Run is executed, it will go through the following stages:
1. Preparing: A docker image is created matching the Python environment specified by the run configuration, and it will be uploaded to the workspace's Azure Container Registry. This step only happens once for each Python environment -- the container is then cached for subsequent runs. Creating and uploading the image takes about **5 minutes**. While the job is preparing, logs are streamed to the run history and can be viewed to monitor the progress of the image creation.
2. Scaling: If the compute needs to be scaled up (i.e. the AmlCompute cluster requires more nodes to execute the run than currently available), the cluster will attempt to scale up in order to make the required amount of nodes available. Scaling typically takes about **5 minutes**.
3. Running: All scripts in the script folder are uploaded to the compute target, data stores are mounted/copied and the `entry_script` is executed. While the job is running, stdout and the `./logs` folder are streamed to the run history and can be viewed to monitor the progress of the run.
4. Post-Processing: The `./outputs` folder of the run is copied over to the run history
There are multiple ways to check the progress of a running job. We can use a Jupyter notebook widget.
**Note: The widget will automatically update every 10-15 seconds, always showing you the most up-to-date information about the run**
```
RunDetails(run).show()
```
We can also periodically check the status of the run object, and navigate to Azure portal to monitor the run.
```
%%time
run.wait_for_completion(show_output=True)
```
## Download the saved model
In the training script, the PyTorch model is saved into two files, `model.dist` and `model.pt`, in the `outputs/models` folder on the gpucluster AmlCompute node. Azure ML automatically uploads anything written to the `./outputs` folder into the run history file store. Subsequently, we can use the `run` object to download the model files. They are under the `outputs/model` folder in the run history file store, and are downloaded into a local folder named `model`.
```
# create a model folder in the current directory
os.makedirs('./model', exist_ok=True)
for f in run.get_file_names():
if f.startswith('outputs/model'):
output_file_path = os.path.join('./model', f.split('/')[-1])
print('Downloading from {} to {} ...'.format(f, output_file_path))
run.download_file(name=f, output_file_path=output_file_path)
```
# Hyperparameter tuning by HyperDrive
We have trained the model with one set of hyperparameters; now let's see how we can do hyperparameter tuning by launching multiple runs on the cluster. First let's define the parameter space using random sampling.
1st time: Use **Random Sampling** to understand the rough hyperparameter range.
2nd time or later: Use **Bayesian Sampling**, informed by the 1st job's results, to home in on better values.
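As a rough illustration of what random sampling does, here is a pure-Python sketch (not the HyperDrive implementation; `loguniform(lo, hi)` is assumed to mean the natural log of the value is uniform on `[lo, hi]`):

```python
import math
import random

random.seed(0)

def loguniform(lo, hi):
    # sample so that log(value) is uniform on [lo, hi]
    return math.exp(random.uniform(lo, hi))

def sample_config():
    # mimics drawing one configuration from the search space below
    return {
        '--batch-size': random.choice([50, 75, 100]),
        '--learning-rate': loguniform(-5, -1),
    }

cfg = sample_config()  # one randomly sampled hyperparameter configuration
```

Each HyperDrive child run receives one such configuration as script arguments.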
```
# BayesianParameterSampling doesn't support an Early Termination Policy
# https://docs.microsoft.com/ja-jp/azure/machine-learning/service/how-to-tune-hyperparameters
# Use Random Sampling as 1st Phase
ps = RandomParameterSampling(
{
'--batch-size': choice(50, 75, 100),
'--epoch': choice(20, 25, 30, 40, 50, 75, 100),
'--learning-rate': loguniform(-5, -1),
'--momentum': loguniform(-3, -1),
'--model-name': choice('resnet', 'alexnet', 'vgg', 'squeezenet', 'densenet', 'inception'),
'--optimizer': choice('SGD','Adagrad','Adadelta','Adam','AdamW','Adamax', 'ASGD', 'RMSprop', 'Rprop'),
'--criterion': choice('cross_entropy')
}
)
# After Random Sampling finishes, try to find better hyperparameters using Bayesian Sampling
'''
ps = BayesianParameterSampling(
{
'--batch-size': choice(50, 100),
'--epoch': choice(25, 50, 75, 100),
'--learning-rate': loguniform(-4, -1),
'--momentum': loguniform(-5, -3),
'--model-name': choice('resnet', 'alexnet', 'vgg', 'squeezenet', 'densenet', 'inception'),
'--optimizer': choice('SGD','Adagrad','Adadelta','Adam','AdamW','SparseAdam', 'Adamax', 'ASGD', 'RMSprop', 'Rprop'),
'--criterion': choice('cross_entropy', 'binary_cross_entropy', 'binary_cross_entropy_with_logits', 'poisson_nll_loss', 'hinge_embedding_loss', 'kl_div', 'l1_loss', 'mse_loss', 'margin_ranking_loss', 'multilabel_margin_loss', 'multilabel_soft_margin_loss', 'multi_margin_loss','nll_loss', 'smooth_l1_loss', 'soft_margin_loss')
}
)
'''
```
Now we will define an early termination policy. The `BanditPolicy` checks the job every 2 evaluation intervals (after an initial delay of 10). If the primary metric (defined later) falls outside the slack allowance relative to the best run (here `slack_factor=0.15`), Azure ML terminates the job. This saves us from continuing to explore hyperparameters that don't show promise of helping reach our target metric.
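The termination rule can be sketched in plain Python (assumed semantics for a maximized metric, not the HyperDrive implementation):

```python
def should_terminate(metric, best_metric, interval,
                     slack_factor=0.15, evaluation_interval=2,
                     delay_evaluation=10):
    """Return True when a run should be stopped under a bandit policy.

    A run is only evaluated after `delay_evaluation` intervals and then
    every `evaluation_interval` intervals; it is stopped when its metric
    falls below best_metric / (1 + slack_factor).
    """
    if interval < delay_evaluation or interval % evaluation_interval != 0:
        return False
    return metric < best_metric / (1 + slack_factor)

# e.g. with best accuracy 0.90 and slack_factor 0.15, runs below ~0.783
# are terminated once evaluation begins
```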
```
policy = BanditPolicy(slack_factor=0.15, evaluation_interval=2, delay_evaluation=10)
```
Now we are ready to configure a run configuration object, and specify the primary metric `Accuracy` that's recorded in your training runs. If you go back to visit the training script, you will notice that this value is being logged after every epoch (a full batch set). We also want to tell the service that we are looking to maximize this value. We also set the maximum number of runs to 200, and the maximal number of concurrent jobs to 4, which is the same as the number of nodes in our compute cluster.
```
from azureml.core import ScriptRunConfig
arguments = [
'--data-folder', dataset.as_mount(),
'--feature_extract', True
]
config = ScriptRunConfig(
source_directory=script_folder,
script='train.py',
arguments=arguments,
compute_target=compute_target,
max_run_duration_seconds=3000, # 30 minutes
environment=env
)
hdc = HyperDriveConfig(run_config=config,
hyperparameter_sampling=ps,
                       policy=policy, # Comment out this line for Bayesian Sampling
primary_metric_name='Accuracy',
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
max_total_runs=200,
max_concurrent_runs=4,
max_duration_minutes=120)
```
Finally, let's launch the hyperparameter tuning job.
```
hdr = exp.submit(config=hdc)
```
We can use a run history widget to show the progress. Be patient as this might take a while to complete.
```
from azureml.widgets import RunDetails
RunDetails(hdr).show()
hdr.wait_for_completion(show_output=True)
best_run = hdr.get_best_run_by_primary_metric()
print(best_run.get_details()['runDefinition']['arguments'])
```
Now let's list the model files uploaded during the run.
```
print(best_run.get_file_names())
```
We can then register the folder (and all files in it) as a model named `dog_cats_imageclassification_pytourch` under the workspace for deployment.
```
model = best_run.register_model(model_name='dog_cats_imageclassification_pytourch', model_path='outputs/model')
```
# Waymo Open Dataset Tutorial
- Website: https://waymo.com/open
- GitHub: https://github.com/waymo-research/waymo-open-dataset
This tutorial demonstrates how to use the Waymo Open Dataset with two frames of data. Visit the [Waymo Open Dataset Website](https://waymo.com/open) to download the full dataset.
To use, open this notebook in [Colab](https://colab.research.google.com).
Uncheck the box "Reset all runtimes before running" if you run this colab directly from the remote kernel. Alternatively, you can make a copy before trying to run it by following "File > Save copy in Drive ...".
## Install waymo_open_dataset package
```
!rm -rf waymo-od > /dev/null
!git clone https://github.com/waymo-research/waymo-open-dataset.git waymo-od
!cd waymo-od && git branch -a
!cd waymo-od && git checkout remotes/origin/master
!pip3 install --upgrade pip
!pip3 install waymo-open-dataset-tf-2-1-0==1.2.0
import os
import tensorflow.compat.v1 as tf
import math
import numpy as np
import itertools
tf.enable_eager_execution()
from waymo_open_dataset.utils import range_image_utils
from waymo_open_dataset.utils import transform_utils
from waymo_open_dataset.utils import frame_utils
from waymo_open_dataset import dataset_pb2 as open_dataset
```
## Read one frame
Each file in the dataset is a sequence of frames ordered by frame start timestamps. We have extracted two frames from the dataset to demonstrate the dataset format.
```
FILENAME = '/content/waymo-od/tutorial/frames'
dataset = tf.data.TFRecordDataset(FILENAME, compression_type='')
for data in dataset:
frame = open_dataset.Frame()
frame.ParseFromString(bytearray(data.numpy()))
break
(range_images, camera_projections,
_, range_image_top_pose) = frame_utils.parse_range_image_and_camera_projection(
frame)
```
### Examine frame context
Refer to [dataset.proto](https://github.com/waymo-research/waymo-open-dataset/blob/master/waymo_open_dataset/dataset.proto) for the data format. The context contains shared information among all frames in the scene.
```
print(frame.context)
```
## Visualize Camera Images and Camera Labels
```
import matplotlib.pyplot as plt
import matplotlib.patches as patches
def show_camera_image(camera_image, camera_labels, layout, cmap=None):
"""Show a camera image and the given camera labels."""
ax = plt.subplot(*layout)
# Draw the camera labels.
for camera_labels in frame.camera_labels:
# Ignore camera labels that do not correspond to this camera.
if camera_labels.name != camera_image.name:
continue
# Iterate over the individual labels.
for label in camera_labels.labels:
# Draw the object bounding box.
ax.add_patch(patches.Rectangle(
xy=(label.box.center_x - 0.5 * label.box.length,
label.box.center_y - 0.5 * label.box.width),
width=label.box.length,
height=label.box.width,
linewidth=1,
edgecolor='red',
facecolor='none'))
# Show the camera image.
plt.imshow(tf.image.decode_jpeg(camera_image.image), cmap=cmap)
plt.title(open_dataset.CameraName.Name.Name(camera_image.name))
plt.grid(False)
plt.axis('off')
plt.figure(figsize=(25, 20))
for index, image in enumerate(frame.images):
show_camera_image(image, frame.camera_labels, [3, 3, index+1])
```
## Visualize Range Images
```
plt.figure(figsize=(64, 20))
def plot_range_image_helper(data, name, layout, vmin = 0, vmax=1, cmap='gray'):
"""Plots range image.
Args:
data: range image data
name: the image title
layout: plt layout
vmin: minimum value of the passed data
vmax: maximum value of the passed data
cmap: color map
"""
plt.subplot(*layout)
plt.imshow(data, cmap=cmap, vmin=vmin, vmax=vmax)
plt.title(name)
plt.grid(False)
plt.axis('off')
def get_range_image(laser_name, return_index):
"""Returns range image given a laser name and its return index."""
return range_images[laser_name][return_index]
def show_range_image(range_image, layout_index_start = 1):
"""Shows range image.
Args:
range_image: the range image data from a given lidar of type MatrixFloat.
layout_index_start: layout offset
"""
range_image_tensor = tf.convert_to_tensor(range_image.data)
range_image_tensor = tf.reshape(range_image_tensor, range_image.shape.dims)
lidar_image_mask = tf.greater_equal(range_image_tensor, 0)
range_image_tensor = tf.where(lidar_image_mask, range_image_tensor,
tf.ones_like(range_image_tensor) * 1e10)
range_image_range = range_image_tensor[...,0]
range_image_intensity = range_image_tensor[...,1]
range_image_elongation = range_image_tensor[...,2]
plot_range_image_helper(range_image_range.numpy(), 'range',
[8, 1, layout_index_start], vmax=75, cmap='gray')
plot_range_image_helper(range_image_intensity.numpy(), 'intensity',
[8, 1, layout_index_start + 1], vmax=1.5, cmap='gray')
plot_range_image_helper(range_image_elongation.numpy(), 'elongation',
[8, 1, layout_index_start + 2], vmax=1.5, cmap='gray')
frame.lasers.sort(key=lambda laser: laser.name)
show_range_image(get_range_image(open_dataset.LaserName.TOP, 0), 1)
show_range_image(get_range_image(open_dataset.LaserName.TOP, 1), 4)
```
## Point Cloud Conversion and Visualization
```
points, cp_points = frame_utils.convert_range_image_to_point_cloud(
frame,
range_images,
camera_projections,
range_image_top_pose)
points_ri2, cp_points_ri2 = frame_utils.convert_range_image_to_point_cloud(
frame,
range_images,
camera_projections,
range_image_top_pose,
ri_index=1)
# 3d points in vehicle frame.
points_all = np.concatenate(points, axis=0)
points_all_ri2 = np.concatenate(points_ri2, axis=0)
# camera projection corresponding to each point.
cp_points_all = np.concatenate(cp_points, axis=0)
cp_points_all_ri2 = np.concatenate(cp_points_ri2, axis=0)
```
### Examine number of points in each lidar sensor
First return.
```
print(points_all.shape)
print(cp_points_all.shape)
print(points_all[0:2])
for i in range(5):
print(points[i].shape)
print(cp_points[i].shape)
```
Second return.
```
print(points_all_ri2.shape)
print(cp_points_all_ri2.shape)
print(points_all_ri2[0:2])
for i in range(5):
print(points_ri2[i].shape)
print(cp_points_ri2[i].shape)
```
### Show point cloud
3D point clouds are rendered using an internal tool, which is unfortunately not publicly available yet. Here is an example of what they look like.
```
from IPython.display import Image, display
display(Image('/content/waymo-od/tutorial/3d_point_cloud.png'))
```
## Visualize Camera Projection
```
images = sorted(frame.images, key=lambda i:i.name)
cp_points_all_concat = np.concatenate([cp_points_all, points_all], axis=-1)
cp_points_all_concat_tensor = tf.constant(cp_points_all_concat)
# The distance between lidar points and vehicle frame origin.
points_all_tensor = tf.norm(points_all, axis=-1, keepdims=True)
cp_points_all_tensor = tf.constant(cp_points_all, dtype=tf.int32)
mask = tf.equal(cp_points_all_tensor[..., 0], images[0].name)
cp_points_all_tensor = tf.cast(tf.gather_nd(
cp_points_all_tensor, tf.where(mask)), dtype=tf.float32)
points_all_tensor = tf.gather_nd(points_all_tensor, tf.where(mask))
projected_points_all_from_raw_data = tf.concat(
[cp_points_all_tensor[..., 1:3], points_all_tensor], axis=-1).numpy()
def rgba(r):
"""Generates a color based on range.
Args:
r: the range value of a given point.
Returns:
The color for a given range
"""
c = plt.get_cmap('jet')((r % 20.0) / 20.0)
c = list(c)
c[-1] = 0.5 # alpha
return c
def plot_image(camera_image):
"""Plot a cmaera image."""
plt.figure(figsize=(20, 12))
plt.imshow(tf.image.decode_jpeg(camera_image.image))
plt.grid("off")
def plot_points_on_image(projected_points, camera_image, rgba_func,
point_size=5.0):
"""Plots points on a camera image.
Args:
projected_points: [N, 3] numpy array. The inner dims are
[camera_x, camera_y, range].
camera_image: jpeg encoded camera image.
rgba_func: a function that generates a color from a range value.
point_size: the point size.
"""
plot_image(camera_image)
xs = []
ys = []
colors = []
for point in projected_points:
xs.append(point[0]) # width, col
ys.append(point[1]) # height, row
colors.append(rgba_func(point[2]))
plt.scatter(xs, ys, c=colors, s=point_size, edgecolors="none")
plot_points_on_image(projected_points_all_from_raw_data,
images[0], rgba, point_size=5.0)
```
## Install from source code
The remaining part of this colab covers details of installing the repo from source code, which provides a richer API.
### Install dependencies
```
!sudo apt install build-essential
!sudo apt-get install --assume-yes pkg-config zip g++ zlib1g-dev unzip python3 python3-pip
!wget https://github.com/bazelbuild/bazel/releases/download/0.28.0/bazel-0.28.0-installer-linux-x86_64.sh
!sudo bash ./bazel-0.28.0-installer-linux-x86_64.sh
```
### Build and test (this can take 10 mins)
Configure .bazelrc. This works with or without TensorFlow. This colab machine has TensorFlow installed.
```
!cd waymo-od && ./configure.sh && cat .bazelrc && bazel clean
!cd waymo-od && bazel build ... --show_progress_rate_limit=10.0
```
### Metrics computation
The core metrics computation library is written in C++, so it can be extended to other programming languages. It can compute detection metrics (mAP) and tracking metrics (MOTA). See more information about the metrics on the [website](https://waymo.com/open/next/).
We provide command line tools and TensorFlow ops to call the detection metrics library to compute detection metrics. We will provide a similar wrapper for tracking metrics library in the future. You are welcome to contribute your wrappers.
#### Command line detection metrics computation
The command takes a pair of files for prediction and ground truth. Read the comment in waymo_open_dataset/metrics/tools/compute_detection_metrics_main.cc for details of the data format.
```
!cd waymo-od && bazel-bin/waymo_open_dataset/metrics/tools/compute_detection_metrics_main waymo_open_dataset/metrics/tools/fake_predictions.bin waymo_open_dataset/metrics/tools/fake_ground_truths.bin
```
#### TensorFlow custom op
A TensorFlow op is defined at metrics/ops/metrics_ops.cc. We provide a python wrapper of the op at metrics/ops/py_metrics_ops.py, and a tf.metrics-like implementation of the op at metrics/python/detection_metrics.py. This library requires TensorFlow to be installed.
Install TensorFlow and NumPy.
```
!pip3 install numpy tensorflow
```
Reconfigure .bazelrc so that you can compile the TensorFlow ops.
```
!cd waymo-od && ./configure.sh && cat .bazelrc
```
Run the op and tf.metrics wrapper unit tests which can be referenced as example usage of the libraries.
```
!cd waymo-od && bazel test waymo_open_dataset/metrics/ops/... && bazel test waymo_open_dataset/metrics/python/...
```
Run all tests in the repo.
```
!cd waymo-od && bazel test ...
```
### Build local PIP package
```
!cd waymo-od && export PYTHON_VERSION=3 && ./pip_pkg_scripts/build.sh
```
You can install the locally compiled package or access any c++ binary compiled from this.
## DataFrames
Think of a DataFrame as a collection of Series objects that share the same index.
```
import pandas as pd
import numpy as np
df = pd.DataFrame([[10,20,30],[50,60,70], [20,30,40]],columns=['Col1','Col2', 'Col3'])
df
df.columns # to get the column names
df.dtypes
df.index = ["R1","R2","R3"]
df
df1 = pd.DataFrame(np.random.randn(4,5),index=['R1','R2', 'R3', 'R4'],columns=['C1', 'C2', 'C3', 'C4','C5'])
df1
df_dict = pd.DataFrame({"Brand":['Samsung','Realme','Mi','Nokia'],
"Camera":[48,24,16,32], "RAM":[6,4,3,4], "Price" : [17500, 12000, 6999, 22000]})
df_dict
```
### Selection and Indexing
```
df1['C2']
# Pass a list of column names
df1[['C2','C4']]
# Similar to SQL Syntax (NOT RECOMMENDED!)
df1.C1
```
DataFrame Columns are just like Series.
```
type(df1['C2'])
```
**Creating a new column:**
```
df1['C6'] = df1['C1'] + df1['C2']
df1
```
**Removing Columns**
```
df1.drop('C6',axis=1) # axis=1 for across the columns
df1  # drop is not in place (no permanent change) unless specified!
df1.drop('C6',axis=1,inplace=True)
df1
df1.drop('R4',axis=0) #axis = 0 for across the rows
df1
```
**Selecting Rows**
```
df1.loc[['R2','R3']]
df1.iloc[0] #accessing rows based off of position instead of label
df1.iloc[:3]
df1.loc['R2']
df1.loc['R2','C4'] #Selecting subset of rows and columns
df1.iloc[:3,1:]
df1
df1.loc[['R2','R3'],['C1','C5']]
```
### Filtering data based on a condition (Conditional Selection)
```
df1
df1>0.3
df1[df1>0]
df1
df1['C1']>0.3
df1[df1['C1']>0.3] # returns the rows which satisfies the column condition
df1[df1['C1']>0.3]['C3']
df1[df1['C1']>0.3][['C3','C5']]
df1
```
For multiple conditions, use the element-wise logical operators `&`, `|`, etc., with each condition wrapped in parentheses.
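The parentheses matter because `&` and `|` bind more tightly than comparison operators, and Python's `and`/`or` do not work element-wise. A small self-contained example with made-up data:

```python
import pandas as pd

# two numeric columns; we want rows satisfying both conditions
df = pd.DataFrame({'C1': [0.2, 0.5, 0.9], 'C3': [0.8, 0.6, 0.95]})
both = df[(df['C1'] > 0.1) & (df['C3'] > 0.7)]  # rows 0 and 2 qualify
```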
```
df1[(df1['C1']>0.1) & (df1['C3'] > 0.7)]
```
### Quiz 2
```
employees=pd.DataFrame({"Name":['Tom','Nick','John','Peter'],
"Age":[25,26,37,22], "Salary" : [24500, 27000, 42000, 26000]})
employees
```
What will be the output here?
**employees[(employees["Age"]>25) | (employees["Salary"]>25000)]**
a.
| Name | Age | Salary |
|------|------|------|
| Nick | 26 | 27000 |
| John | 37 | 42000 |
| Peter | 22 | 26000 |
b.
| Name | Age | Salary |
|------|------|------|
| Nick | 26 | 27000 |
| John | 37 | 42000 |
c.
| Name | Age | Salary |
|------|------|------|
| John | 37 | 42000 |
| Peter | 22 | 26000 |
d.
| Name | Age | Salary |
|------|------|------|
| Tom | 25 | 24500 |
| Nick | 26 | 27000 |
| John | 37 | 42000 |
| Peter | 22 | 26000 |
### More about Indexing
Let's see how to reset the index or set it to something else.
```
df2 = pd.DataFrame({"Name":['Tom','Nick','John','Peter',"Ram","Shayam","Mohan","Sundar"],
"Age":[15,16,7,12,11,13,15,18], "Height" : [160,164,154,170,165,172,175,180]})
df2
df3 = df2[(df2["Age"]>11) & (df2["Height"]>150)]
df3
# Reset to default 0,1...n index
df3.reset_index()
df3.reset_index(drop=True)
new_index = 'AB CD EF GH IJ KL MN OP'.split()
new_index
df2['New'] = new_index
df2
df2.set_index('New')
df2
df2.set_index('New',inplace=True)
df2
```
### Quiz 3
```
employees
```
Choose the correct statement(s) about the **employees** dataframe after the operation below.
**employees.set_index('Name',inplace=True).reset_index()**
* No change(same like earlier)
* It will have two columns **Age** and **Salary**
* **Name** column is set for index labels
* It will have four columns **index**, **Name**, **Age** and **Salary**
## Export Data Script "Blanket Script"
This script searches for ArcGIS Online feature layers and exports them as geodatabases on ArcGIS Online. It is based on a system-wide backup script and uses a keyword search to archive data.
To start, click Run at the top to run the selected cell. Running a header cell (like this one) simply moves on to the next cell. A number appears next to a cell once it has run successfully. You can also run every cell by clicking "Cell" in the menu bar at the top and then "Run All". You can run a single cell several times while tinkering with the code, as long as you first run the login cell below to sign in to AGOL (and any other cells that create data used by the cell you are tinkering with).
```
from arcgis.gis import GIS
import datetime as dt
import time  # used below to timestamp the exported geodatabases
gis = GIS('home')
```
Starting the script, the cell below gathers the ArcGIS Online layers you want to back up; see the comments for specifics. The script finds any feature layer matching the keyword you set in the first open line of the search cell.
For example, if you use "Boulder Wildfire" as your keyword, the search returns everything with "Boulder" OR "Wildfire" in the title. After you run the script, you will get an output listing the layers to be backed up. Check what the script finds BEFORE scheduling a task to run it automatically. The script only exports feature layers; it will find feature layer views, but it will not export them.
Comments start with "#"
```
# What folder is this data going to? Put the folder name between the "". I suggest not using spaces in the folder names. You do not have to create a whole new folder to run the script.
# Leaving this blank will make the data backup to the root folder in your ArcGIS Online.
agol_folder = ""
# How many items do you want to search for and back up at one time? This is the max number of layers that will be backed up each time the script is run.
# Without this, the default is 10. You can change this number to whatever you want.
num_items = 10
```
This cell looks for items in AGOL. Put your desired keyword between the "" in the first line. Whatever you put here has to also be in the items' title, otherwise the code won't know to look for them.
```
keyword = "<INCIDENT NAME>"
query_string = "type:Feature Service, title:{}, owner:{}".format(keyword, gis.users.me.username)
items = gis.content.search(query=query_string, max_items=num_items, sort_field='modified', sort_order='desc')
print(str(len(items)) + " items will be backed up to " + agol_folder + " Folder. See the list below:")
items
```
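The keyword search above matches each word independently. If you want only the exact phrase, quoting the keyword should narrow the query — a hedged sketch (string construction only; AGOL search generally follows Lucene-style syntax, but whether quoting behaves this way on your portal is an assumption worth verifying, and "Boulder Wildfire" is a hypothetical incident name):

```python
keyword = "Boulder Wildfire"  # hypothetical incident name

# Unquoted: matches items whose title contains "Boulder" OR "Wildfire"
loose_query = "type:Feature Service, title:{}".format(keyword)

# Quoted: should match only titles containing the exact phrase
phrase_query = 'type:Feature Service, title:"{}"'.format(keyword)

# To try it, swap the query into the search call from the cell above:
# items = gis.content.search(query=phrase_query, max_items=num_items)
```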
This section is what does the magic. By default, you do not have to change anything here. However, if you want to change the name of the output file, this would be done in line 10 next to "result". Do not worry if several lines say "An error occurred downloading". This just means that that file is not a feature layer and isn't going to download. If everything gives you this error, check your code and make sure you are grabbing feature layers.
```
def download_as_fgdb(item_list):
    for item in item_list:
        try:
            if 'View Service' in item.typeKeywords:
                print(item.title + " is view, not downloading")
            else:
                print("Downloading " + item.title)
                # Build a timestamp (YYYYMMDD HH:MM) to put at the front of the export name
                version = time.strftime('%Y%m%d %H:%M')
                result = item.export(version + " UTC " + item.title, "File Geodatabase")
                print("Successfully downloaded " + item.title)
                result.move(folder=agol_folder)
        except Exception as e:
            print("An error occurred downloading " + item.title)
            print(e)
    print("The function has completed")
download_as_fgdb(items)
```
# Environment Setup
```
from google.colab import drive
drive.mount('/content/drive')
```
# Put Data in DataFrame (Articles)
```
import pandas as pd
path = "PATH/TO/DATASETS" # Place dataset path here
import glob, os
dfFalse = pd.read_csv(path+"NewsFakeCOVID-19.csv", usecols=['title'])
dfFalse['label']=0
dfFalseJuly = pd.read_csv(path+"NewsFakeCOVID-19-JULY.csv", usecols=['title'])
dfFalseJuly['label']=0
dfTrue = pd.read_csv(path+"NewsRealCOVID-19.csv", usecols=['title'], nrows=len(dfFalse.values))
dfTrue['label']=1
dfTrueJuly = pd.read_csv(path+"NewsRealCOVID-19-JULY.csv", usecols=['title'], nrows=len(dfFalseJuly.values))
dfTrueJuly['label']=1
dfTotal = pd.concat([dfTrue, dfFalse, dfTrueJuly, dfFalseJuly])
X = dfTotal['title'].values
y = dfTotal['label'].values
print("len(X)", len(X))
print("len(y)", len(y))
```
# Next Steps With Data
```
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from keras.preprocessing.text import one_hot
from keras.preprocessing.text import hashing_trick
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.optimizers import Adam
from keras import layers
from keras.layers import Activation
from keras import regularizers
from keras.regularizers import l1, l2, l1_l2
import math
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=450)
X_train = [hashing_trick(elem, 10000, hash_function='md5') for elem in X_train]
X_test = [hashing_trick(elem, 10000, hash_function='md5') for elem in X_test]
X_train = pad_sequences(X_train, padding='post', maxlen=500)
X_test = pad_sequences(X_test, padding='post', maxlen=500)
print("Length of X_train:", len(X_train))
print("Length of X_test:", len(X_test))
print("Length of y_train:", len(y_train))
print("Length of y_test:", len(y_test))
model = Sequential()
model.add(layers.Embedding(10000, 64, input_length=500))
model.add(layers.Bidirectional(layers.LSTM(64, kernel_regularizer=l1_l2(l1=1e-5, l2=1e-4), bias_regularizer=l2(1e-4), recurrent_regularizer=l2(1e-5))))
model.add(layers.Dropout(0.7))
model.add(layers.Dense(1, activation='sigmoid'))
slowAdam = Adam(learning_rate=0.0001)
model.compile(loss='binary_crossentropy',
optimizer=slowAdam,
metrics=['accuracy'])
model.fit(X_train, y_train,
epochs=5,
validation_data=(X_test, y_test),
batch_size=1)
model.summary()
model.save('PATH/TO/MODELS/CoAID_using_HashingTrick.h5') #Place intended path here
modelPrediction = model.predict(X_test)
print(len(modelPrediction))
modelPrediction = [math.floor(0.5+pred) for pred in modelPrediction]
print(classification_report(y_test, modelPrediction))
print(accuracy_score(modelPrediction, y_test))
```
<a href="https://www.bigdatauniversity.com"><img src="https://ibm.box.com/shared/static/cw2c7r3o20w9zn8gkecaeyjhgw3xdgbj.png" width="400" align="center"></a>
<h1><center>K-Nearest Neighbors</center></h1>
In this Lab you will load a customer dataset, fit the data, and use K-Nearest Neighbors to predict a data point. But what is **K-Nearest Neighbors**?
**K-Nearest Neighbors** is an algorithm for supervised learning, where the data is 'trained' with data points labeled with their classification. When a new point is to be predicted, the algorithm takes into account the 'K' nearest points to determine its classification.
### Here's a visualization of the K-Nearest Neighbors algorithm.
<img src="https://ibm.box.com/shared/static/mgkn92xck0z05v7yjq8pqziukxvc2461.png">
In this case, we have data points of Class A and B. We want to predict what the star (test data point) is. If we consider a k value of 3 (3 nearest data points) we will obtain a prediction of Class B. Yet if we consider a k value of 6, we will obtain a prediction of Class A.
In this sense, it is important to consider the value of k. But hopefully from this diagram, you should get a sense of what the K-Nearest Neighbors algorithm is. It considers the 'K' Nearest Neighbors (points) when it predicts the classification of the test point.
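The diagram's k=3 versus k=6 flip is easy to reproduce with a tiny pure-Python sketch (made-up points, not from the lab's dataset):

```python
from collections import Counter
from math import dist  # Python 3.8+

def knn_predict(points, labels, query, k):
    """Majority vote among the k labeled points nearest to the query."""
    nearest = sorted(zip(points, labels), key=lambda pl: dist(pl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two Class B points close to the query, four Class A points a bit farther out
points = [(1, 0), (0, 1), (1.5, 0), (0, 1.5), (-1.5, 0), (0, -1.5)]
labels = ["B", "B", "A", "A", "A", "A"]

print(knn_predict(points, labels, (0, 0), k=3))  # B: the 3 nearest are B, B, A
print(knn_predict(points, labels, (0, 0), k=6))  # A: all 6 vote, and A outnumbers B 4-2
```

The prediction changes with k even though the data does not — exactly the sensitivity the diagram illustrates.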
<h1>Table of contents</h1>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ol>
<li><a href="#about_dataset">About the dataset</a></li>
<li><a href="#visualization_analysis">Data Visualization and Analysis</a></li>
<li><a href="#classification">Classification</a></li>
</ol>
</div>
<br>
<hr>
Let's load the required libraries
```
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import matplotlib.ticker as ticker
from sklearn import preprocessing
%matplotlib inline
```
<div id="about_dataset">
<h2>About the dataset</h2>
</div>
Imagine a telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. If demographic data can be used to predict group membership, the company can customize offers for individual prospective customers. This is a classification problem: given the dataset with predefined labels, we need to build a model to predict the class of a new or unknown case.
The example focuses on using demographic data, such as region, age, and marital status, to predict usage patterns.
The target field, called __custcat__, has four possible values that correspond to the four customer groups, as follows:
1- Basic Service
2- E-Service
3- Plus Service
4- Total Service
Our objective is to build a classifier, to predict the class of unknown cases. We will use a specific type of classification called K nearest neighbour.
Let's download the dataset. To download the data, we will use !wget to download it from IBM Object Storage.
```
!wget -O teleCust1000t.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/teleCust1000t.csv
```
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
### Load Data From CSV File
```
df = pd.read_csv('teleCust1000t.csv')
df.head()
```
<div id="visualization_analysis">
<h2>Data Visualization and Analysis</h2>
</div>
#### Let’s see how many of each class is in our data set
```
df['custcat'].value_counts()
```
#### 281 Plus Service, 266 Basic-service, 236 Total Service, and 217 E-Service customers
You can easily explore your data using visualization techniques:
```
df.hist(column='income', bins=50)
```
### Feature set
Let's define our feature set, X:
```
df.columns
```
To use the scikit-learn library, we have to convert the Pandas data frame to a NumPy array:
```
X = df[['region', 'tenure', 'age', 'marital', 'address', 'income', 'ed', 'employ', 'retire', 'gender', 'reside']].values  #.astype(float)
X[0:5]
```
What are our labels?
```
y = df['custcat'].values
y[0:5]
```
## Normalize Data
Data standardization gives the data zero mean and unit variance. It is good practice, especially for algorithms such as KNN that are based on the distance between cases:
```
X = preprocessing.StandardScaler().fit(X).transform(X.astype(float))
X[0:5]
```
### Train Test Split
Out-of-sample accuracy is the percentage of correct predictions the model makes on data that it has NOT been trained on. Training and testing on the same dataset will most likely yield low out-of-sample accuracy, due to the likelihood of overfitting.
It is important that our models have high out-of-sample accuracy, because the purpose of any model is to make correct predictions on unknown data. So how can we improve out-of-sample accuracy? One way is to use an evaluation approach called Train/Test Split.
Train/Test Split involves splitting the dataset into mutually exclusive training and testing sets. You then train with the training set and test with the testing set.
This provides a more accurate evaluation of out-of-sample accuracy because the testing set is not part of the data used to train the model. It is more realistic for real-world problems.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
```
<div id="classification">
<h2>Classification</h2>
</div>
<h3>K nearest neighbor (KNN)</h3>
#### Import library
Classifier implementing the k-nearest neighbors vote.
```
from sklearn.neighbors import KNeighborsClassifier
```
### Training
Let's start the algorithm with k=4 for now:
```
k = 4
#Train Model and Predict
neigh = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)
neigh
```
### Predicting
We can use the model to predict the test set:
```
yhat = neigh.predict(X_test)
yhat[0:5]
```
### Accuracy evaluation
In multilabel classification, the __accuracy classification score__ computes subset accuracy, and it is closely related to the jaccard_similarity_score function. Essentially, it calculates how closely the predicted labels match the actual labels in the test set.
```
from sklearn import metrics
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat))
```
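For a single-label problem like this one, subset accuracy reduces to the fraction of exact matches, so `accuracy_score` can be sanity-checked against a hand computation (illustrative labels, not values from this dataset):

```python
import numpy as np

y_true = np.array([1, 2, 3, 3, 2])
y_pred = np.array([1, 2, 1, 3, 2])

# Hand-rolled accuracy: fraction of positions where the prediction equals the truth
by_hand = np.mean(y_true == y_pred)
print(by_hand)  # 0.8
```

`metrics.accuracy_score(y_true, y_pred)` returns the same value for these labels.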
## Practice
Can you build the model again, but this time with k=6?
```
# write your code here
k = 6
neigh6 = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)
yhat6 = neigh6.predict(X_test)
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh6.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat6))
```
Double-click __here__ for the solution.
<!-- Your answer is below:
k = 6
neigh6 = KNeighborsClassifier(n_neighbors = k).fit(X_train,y_train)
yhat6 = neigh6.predict(X_test)
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh6.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat6))
-->
#### What about other K?
K in KNN is the number of nearest neighbors to examine, and it must be specified by the user. So how can we choose the right value for K?
The general solution is to reserve part of your data for testing the accuracy of the model. Then choose k=1, use the training part for modeling, and calculate the accuracy of prediction using all samples in your test set. Repeat this process, increasing k, and see which k is best for your model.
We can calculate the accuracy of KNN for different Ks.
```
Ks = 10
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
for n in range(1,Ks):
    #Train Model and Predict
    neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
    yhat = neigh.predict(X_test)
    mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)
    std_acc[n-1] = np.std(yhat==y_test)/np.sqrt(yhat.shape[0])
mean_acc
```
#### Plot model accuracy for Different number of Neighbors
```
plt.plot(range(1,Ks),mean_acc,'g')
plt.fill_between(range(1,Ks),mean_acc - 1 * std_acc,mean_acc + 1 * std_acc, alpha=0.10)
plt.legend(('Accuracy ', '+/- 1xstd'))
plt.ylabel('Accuracy ')
plt.xlabel('Number of Neighbors (K)')
plt.tight_layout()
plt.show()
print( "The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1)
```
<h2>Want to learn more?</h2>
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="http://cocl.us/ML0101EN-SPSSModeler">SPSS Modeler</a>
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://cocl.us/ML0101EN_DSX">Watson Studio</a>
<h3>Thanks for completing this lesson!</h3>
<h4>Author: <a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a></h4>
<p><a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>, PhD is a Data Scientist in IBM with a track record of developing enterprise level applications that substantially increases clients’ ability to turn data into actionable knowledge. He is a researcher in data mining field and expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.</p>
<hr>
<p>Copyright © 2018 <a href="https://cocl.us/DX0108EN_CC">Cognitive Class</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.</p>
# Regression Week 4: Ridge Regression (gradient descent)
In this notebook, you will implement ridge regression via gradient descent. You will:
* Convert an SFrame into a Numpy array
* Write a Numpy function to compute the derivative of the regression weights with respect to a single feature
* Write gradient descent function to compute the regression weights given an initial weight vector, step size, tolerance, and L2 penalty
# Fire up graphlab create
Make sure you have the latest version of GraphLab Create (>= 1.7)
```
import graphlab
```
# Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
```
sales = graphlab.SFrame('kc_house_data.gl/')
```
If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
# Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste `get_numpy_data()` from the second notebook of Week 2.
```
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
    data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
    # add the column 'constant' to the front of the features list so that we can extract it along with the others:
    features = ['constant'] + features # this is how you combine two lists
    # select the columns of data_SFrame given by the features list into the SFrame features_sframe
    # (now including constant):
    features_sframe = data_sframe[features]
    # the following line will convert the features_SFrame into a numpy matrix:
    feature_matrix = features_sframe.to_numpy()
    # assign the column of data_sframe associated with the output to the SArray output_sarray
    output_sarray = data_sframe[output]
    # the following will convert the SArray into a numpy array by first converting it to a list
    output_array = output_sarray.to_numpy()
    return(feature_matrix, output_array)
```
Also, copy and paste the `predict_output()` function to compute the predictions for an entire matrix of features given the matrix and the weights:
```
def predict_output(feature_matrix, weights):
    # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
    # create the predictions vector by using np.dot()
    predictions = np.dot(feature_matrix, weights)
    return(predictions)
```
# Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term.
```
Cost(w)
= SUM[ (prediction - output)^2 ]
+ l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2).
```
Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to `w[i]` can be written as:
```
2*SUM[ error*[feature_i] ].
```
The derivative of the regularization term with respect to `w[i]` is:
```
2*l2_penalty*w[i].
```
Summing both, we get
```
2*SUM[ error*[feature_i] ] + 2*l2_penalty*w[i].
```
That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself, plus `2*l2_penalty*w[i]`.
**We will not regularize the constant.** Thus, in the case of the constant, the derivative is just twice the sum of the errors (without the `2*l2_penalty*w[0]` term).
Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors, plus `2*l2_penalty*w[i]`.
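That identity — twice the sum of an element-wise product equals twice the dot product — is easy to check numerically with throwaway arrays:

```python
import numpy as np

errors = np.array([0.5, -1.0, 2.0])
feature = np.array([3.0, 4.0, -2.0])

elementwise = 2 * np.sum(errors * feature)  # 2 * (1.5 - 4.0 - 4.0) = -13.0
via_dot = 2 * np.dot(errors, feature)       # the same quantity via the dot product

print(elementwise, via_dot)  # -13.0 -13.0
```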
With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points). To decide when to we are dealing with the constant (so we don't regularize it) we added the extra parameter to the call `feature_is_constant` which you should set to `True` when computing the derivative of the constant and `False` otherwise.
```
def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
    # If feature_is_constant is True, derivative is twice the dot product of errors and feature
    if feature_is_constant:
        derivative = 2 * np.dot(errors, feature)
    # Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight
    else:
        derivative = 2 * np.dot(errors, feature) + 2 * l2_penalty * weight
    return derivative
```
To test your feature derivative, run the following:
```
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([1., 10.])
test_predictions = predict_output(example_features, my_weights)
errors = test_predictions - example_output # prediction errors
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False)
print np.sum(errors*example_features[:,1])*2+20.
print ''
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True)
print np.sum(errors)*2.
```
# Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of *increase* and therefore the negative gradient is the direction of *decrease* and we're trying to *minimize* a cost function.
The amount by which we move in the negative gradient *direction* is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a **maximum number of iterations** and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set 100 by default. (Use default parameter values in Python.)
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria.
```
def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100):
    print 'Starting gradient descent with l2_penalty = ' + str(l2_penalty)
    weights = np.array(initial_weights) # make sure it's a numpy array
    iteration = 0 # iteration counter
    print_frequency = 1 # for adjusting frequency of debugging output
    while iteration < max_iterations:
        iteration += 1 # increment iteration counter
        ### === code section for adjusting frequency of debugging output. ===
        if iteration == 10:
            print_frequency = 10
        if iteration == 100:
            print_frequency = 100
        if iteration % print_frequency == 0:
            print('Iteration = ' + str(iteration))
        ### === end code section ===
        # compute the predictions based on feature_matrix and weights using your predict_output() function
        predictions = predict_output(feature_matrix, weights)
        # compute the errors as predictions - output
        errors = predictions - output
        # from time to time, print the value of the cost function
        if iteration % print_frequency == 0:
            print 'Cost function = ', str(np.dot(errors, errors) + l2_penalty * (np.dot(weights, weights) - weights[0]**2))
        for i in xrange(len(weights)): # loop over each weight
            # Recall that feature_matrix[:,i] is the feature column associated with weights[i]
            # compute the derivative for weight[i].
            # (Remember: when i=0, you are computing the derivative of the constant!)
            if i == 0:
                derivative = feature_derivative_ridge(errors, feature_matrix[:, i], weights[i], l2_penalty, True)
            else:
                derivative = feature_derivative_ridge(errors, feature_matrix[:, i], weights[i], l2_penalty, False)
            # subtract the step size times the derivative from the current weight
            weights[i] = weights[i] - step_size * derivative
    print 'Done with gradient descent at iteration ', iteration
    print 'Learned weights = ', str(weights)
    return weights
```
# Visualizing effect of L2 penalty
The L2 penalty gets its name because it causes weights to have smaller L2 norms than they otherwise would. Let's see how large weights get penalized. Let us consider a simple model with 1 feature:
```
simple_features = ['sqft_living']
my_output = 'price'
```
Let us split the dataset into training set and test set. Make sure to use `seed=0`:
```
train_data,test_data = sales.random_split(.8,seed=0)
```
In this part, we will only use `'sqft_living'` to predict `'price'`. Use the `get_numpy_data` function to get a Numpy versions of your data with only this feature, for both the `train_data` and the `test_data`.
```
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
```
Let's set the parameters for our optimization:
```
initial_weights = np.array([0., 0.])
step_size = 1e-12
max_iterations=1000
```
First, let's consider no regularization. Set the `l2_penalty` to `0.0` and run your ridge regression algorithm to learn the weights of your model. Call your weights:
`simple_weights_0_penalty`
we'll use them later.
```
simple_weights_0_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, 0.0, max_iterations)
```
Next, let's consider high regularization. Set the `l2_penalty` to `1e11` and run your ridge regression algorithm to learn the weights of your model. Call your weights:
`simple_weights_high_penalty`
we'll use them later.
```
simple_weights_high_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, 1e11, max_iterations)
```
This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(simple_feature_matrix,output,'k.',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-')
```
Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
```
test_predicted_output = predict_output(simple_test_feature_matrix, initial_weights)
rss = ((test_output - test_predicted_output )**2).sum()
print rss
test_predicted_output = predict_output(simple_test_feature_matrix, simple_weights_0_penalty)
rss = ((test_output - test_predicted_output )**2).sum()
print rss
test_predicted_output = predict_output(simple_test_feature_matrix, simple_weights_high_penalty)
rss = ((test_output - test_predicted_output )**2).sum()
print rss
```
***QUIZ QUESTIONS***
1. What is the value of the coefficient for `sqft_living` that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
2. Comparing the lines you fit with the with no regularization versus high regularization, which one is steeper?
3. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)?
# Running a multiple regression with L2 penalty
Let us now consider a model with 2 features: `['sqft_living', 'sqft_living15']`.
First, create Numpy versions of your training and test data with these two features.
```
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
```
We need to re-initialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
```
initial_weights = np.array([0.0,0.0,0.0])
step_size = 1e-12
max_iterations = 1000
```
First, let's consider no regularization. Set the `l2_penalty` to `0.0` and run your ridge regression algorithm to learn the weights of your model. Call your weights:
`multiple_weights_0_penalty`
```
multiple_weights_0_penalty = ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, 0.0, max_iterations)
```
Next, let's consider high regularization. Set the `l2_penalty` to `1e11` and run your ridge regression algorithm to learn the weights of your model. Call your weights:
`multiple_weights_high_penalty`
```
multiple_weights_high_penalty = ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, 1e11, max_iterations)
```
Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
```
test_predicted_output = predict_output(test_feature_matrix, initial_weights)
rss = ((test_output - test_predicted_output )**2).sum()
print rss
test_predicted_output = predict_output(test_feature_matrix, multiple_weights_0_penalty)
rss = ((test_output - test_predicted_output )**2).sum()
print rss
test_predicted_output = predict_output(test_feature_matrix, multiple_weights_high_penalty)
rss = ((test_output - test_predicted_output )**2).sum()
print rss
```
Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?
***QUIZ QUESTIONS***
1. What is the value of the coefficient for `sqft_living` that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
2. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)?
3. We make predictions for the first house in the test set using two sets of weights (no regularization vs. high regularization). Which weights make a better prediction <u>for that particular house</u>?
## Title: Spatial Database of Planted Trees (SDPT)
### Description
The Spatial Database of Planted Trees (SDPT) was compiled by Global Forest Watch using data obtained from national governments, non-governmental organizations and independent researchers. Data were compiled for 82 countries around the world, with most country maps originating from supervised classification or manual polygon delineation of Landsat, SPOT or RapidEye satellite imagery.
The category of “planted trees” in the SDPT includes forest plantations of native or introduced species, established through deliberate human planting or seeding. Sometimes called “tree farms,” these forests infuse the global economy with a constant stream of lumber for construction, pulp for paper and fuelwood for energy. The data set also includes agricultural tree crops like oil palm plantations, avocado farms, apple orchards and even Christmas tree farms. The SDPT makes it possible to identify planted forests and tree crops as being separate from natural forests and enables changes in these planted areas to be monitored independently from changes in global natural forest cover.
The SDPT contains 173 million hectares of planted forest and 50 million hectares of agricultural trees, or approximately 82% of the world’s total planted forest area in 2015 (FAO 2015). The SDPT was compiled through a procedure that included cleaning and processing each individual data set before creating a harmonized attribute table.
Available as single-country datasets, as well as a compiled global version (excluding countries with no data, notably Canada, Russia and some African countries).
### FLINT
This dataset has been pre-processed/checked and is suitable for use in FLINT. Please adhere to individual dataset licence conditions and citations. Processed data can be accessed here: https://datasets.mojaglobal.workers.dev/
### Format
<b>Extent: </b>Compiled country - global but excluding some countries<br>
<b>Format</b>: polygon geoJSON .json<br>
<b>Coordinate system:</b> EPSG:4326 (WGS84)<br>
<b>Temporal Resolution: </b>multiple<br>
<b>Size: </b>10GB+ (individual country file sizes smaller)
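As a quick sanity check of the expected GeoJSON structure, the sketch below parses a made-up single-feature stand-in (the real country files are far larger; the `final_id` property follows the database's attribute table):

```python
import json

# Minimal stand-in for one of the country files (real files are far larger)
sample = """{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "properties": {"final_id": "example"},
     "geometry": {"type": "Polygon",
                  "coordinates": [[[145.0, -37.0], [145.1, -37.0],
                                   [145.1, -37.1], [145.0, -37.0]]]}}
  ]
}"""

gj = json.loads(sample)
assert gj["type"] == "FeatureCollection"
print(len(gj["features"]), "feature(s) of type",
      gj["features"][0]["geometry"]["type"])  # 1 feature(s) of type Polygon
```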
### Original source
Original Source: https://www.arcgis.com/home/item.html?id=224e00192f6d408fa5147bbfc13b62dd
Zip: http://gfw-files.s3.amazonaws.com/plantations/final/global/plantations_v1_3_dl.gdb.zip Accessed 13/12/2020<br>
Note this is an Esri geodatabase feature class (polygon) and is 8GB. It covers multiple countries (but not all), as vector shapefiles by country. Coordinate system EPSG:4326 (WGS84)<br>
### Licence
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation
Harris, N., E. Goldman and S. Gibbes. “Spatial Database of Planted Trees (SDPT) Version 1.0.” Accessed through Global Forest Watch on 20/12/2020. www.globalforestwatch.org.
### Metadata
https://www.arcgis.com/home/item.html?id=224e00192f6d408fa5147bbfc13b62dd
Metadata states there is data available for 82 countries; however, only 43 are present in the downloaded source.
### Notes
This dataset is a compilation of planted trees data from a variety of countries and sources. As a result, there are definitional and temporal inconsistencies within the database, as well as an absence of a uniform accuracy assessment and incomplete spatial coverage, notably in Canada, Russia and countries in Africa.
File sizes are large due to the detail and resolution of the vectors. Processing time is lengthy and the final vector json files are large. There is significant self-intersection and there are significant gaps, but overlaps are generally ok.
### Processing
Repair geometry, fix topological errors (remove overlaps and small gaps), convert to geojson, EPSG:4326 (WGS84), remove/disable Z values. The South Korea, USA, South Africa and India datasets are very large and detailed, with many attributes and vertices, and are 1GB+ in json format. South Korea was so large (6GB) that it was split into three, simplified using the Douglas-Peucker algorithm with a 5m tolerance and stitched back together. It is not recommended that the large files are combined into a global dataset without simplification in GIS first (someone please let me know if you want this).
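For readers unfamiliar with the simplification step: Douglas-Peucker keeps a line's endpoints, finds the interior vertex farthest from the straight line between them, drops all interior vertices if that distance is within tolerance, and otherwise recurses on the two halves. A pure-Python sketch of the idea (illustrative only; the actual processing uses ArcGIS's `SimplifyPolygon` with `POINT_REMOVE`, and the coordinates and tolerances below are arbitrary):

```python
import math

def perp_dist(pt, a, b):
    # perpendicular distance from pt to the line through a and b
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tol):
    # keep the endpoints; recurse on the farthest vertex if beyond tolerance
    if len(points) < 3:
        return points
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right

# a near-straight line collapses to its endpoints...
print(douglas_peucker([(0, 0), (1, 0.01), (2, 0), (3, 0.02), (4, 0)], 0.1))
# ...while a genuine corner survives
print(douglas_peucker([(0, 0), (2, 1), (4, 0)], 0.5))
```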
```
# Import arcpy and os
import arcpy
import os
# Input local variables
in_folder = r"C:\SpatialDatabasePlantedTrees\plantations_v1_3_dl.gdb"
scr_folder = r"C:\Data\scratch.gdb"
out_folder = r"C:\Data\json\plantedtrees"
# Environments
workspace = in_folder
arcpy.env.workspace = workspace
arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(4326)
arcpy.env.outputZFlag = "Disabled"
arcpy.env.overwriteOutput = True
scr = arcpy.CreateFileGDB_management(r"C:\Data", "scratch")
# List features to process
featureclasses = arcpy.ListFeatureClasses()
print(featureclasses)
# Repair/check topology and make FLINT ready
for fc in featureclasses:
    fcname = os.path.splitext(fc)[0]
    outjson = os.path.join(out_folder, fcname)
    print(fcname + ' processing...')
    geomRepair = arcpy.management.RepairGeometry(fc, "DELETE_NULL", "OGC")[0]
    fLayer = "project_Layer"
    arcpy.management.MakeFeatureLayer(fc, fLayer)
    projectIntersect = os.path.join(scr_folder, "projectIntersect")
    arcpy.analysis.Intersect(fLayer, projectIntersect, "ONLY_FID")
    projectSingle = os.path.join(scr_folder, "projectSingle")
    arcpy.management.MultipartToSinglepart(projectIntersect, projectSingle)
    dissolveSlither = os.path.join(scr_folder, "dissolveSlither")
    arcpy.management.Dissolve(projectSingle, dissolveSlither, None, None, "SINGLE_PART")
    whereclause = "FID_" + fcname + "= -1 OR AREA_GEO <= 5000 Or AREA_GEO IS NULL"
    # Take action if overlaps
    if arcpy.management.GetCount(dissolveSlither)[0] == "0":
        print('no overlaps detected')
        projectUnion = os.path.join(scr_folder, "projectUnion")
        arcpy.analysis.Union(fLayer, projectUnion, "ALL", None, "NO_GAPS")
        arcpy.management.AddGeometryAttributes(projectUnion, "AREA_GEODESIC", None, "SQUARE_METERS")
        uniSelect = os.path.join(scr_folder, "uniSelect")
        arcpy.analysis.Select(projectUnion, uniSelect, whereclause)
        if arcpy.management.GetCount(uniSelect)[0] == "0":
            # Progress report, no errors
            print(fcname, 'No gaps and overlaps. Repairing geometry and conversion to json...')
            # Process: Repair Geometry (non-simple geometry)
            geomRepair = arcpy.management.RepairGeometry(fLayer, "DELETE_NULL", "OGC")[0]
            # Process: Features To JSON
            arcpy.conversion.FeaturesToJSON(fLayer, outjson, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
            print(outjson, '.geojson complete')
        else:
            # Take action if gaps
            print('gaps detected')
            appendGap = arcpy.management.Append(uniSelect, fLayer, "NO_TEST")
            selectGap = arcpy.management.SelectLayerByAttribute(fLayer, "NEW_SELECTION", "final_id = ''")
            fixedlyr = os.path.join(scr_folder, "fixedlyr")
            arcpy.management.Eliminate(selectGap, fixedlyr, "LENGTH")
            # Progress report
            print(fcname, 'No overlaps, gaps detected and repaired. Repairing geometry and conversion to json...')
            # Process: Repair Geometry (non-simple geometry)
            geomRepair = arcpy.management.RepairGeometry(fixedlyr, "DELETE_NULL", "OGC")[0]
            # Process: Features To JSON
            arcpy.conversion.FeaturesToJSON(fixedlyr, outjson, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
    else:
        # Fix overlaps
        projectErase = os.path.join(scr_folder, "projectErase")
        arcpy.analysis.Erase(fLayer, dissolveSlither, projectErase)
        arcpy.management.Append(dissolveSlither, projectErase, "NO_TEST")
        selectSlither = arcpy.management.SelectLayerByAttribute(projectErase, "NEW_SELECTION", "final_id = ''")
        eliminateSlither = os.path.join(scr_folder, "eliminateSlither")
        arcpy.management.Eliminate(selectSlither, eliminateSlither, "LENGTH")
        print('overlaps detected and fixed')
        projectUnion = os.path.join(scr_folder, "projectUnion")
        arcpy.analysis.Union(eliminateSlither, projectUnion, "ALL", None, "NO_GAPS")
        arcpy.management.AddGeometryAttributes(projectUnion, "AREA_GEODESIC", None, "SQUARE_METERS")
        uniSelect = os.path.join(scr_folder, "uniSelect")
        arcpy.analysis.Select(projectUnion, uniSelect, "FID_eliminateSlither = -1 OR AREA_GEO <= 5000 Or AREA_GEO IS NULL")
        if arcpy.management.GetCount(uniSelect)[0] == "0":
            # Progress report, no errors
            print(fcname, 'Overlaps detected and repaired. No gaps detected. Repairing geometry and conversion to json...')
            # Process: Repair Geometry (non-simple geometry)
            geomRepair = arcpy.management.RepairGeometry(eliminateSlither, "DELETE_NULL", "OGC")[0]
            # Process: Features To JSON
            arcpy.conversion.FeaturesToJSON(eliminateSlither, outjson, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
            print(outjson, '.geojson complete')
        else:
            # Take action if gaps
            appendGap = arcpy.management.Append(uniSelect, eliminateSlither, "NO_TEST")
            selectGap = arcpy.management.SelectLayerByAttribute(eliminateSlither, "NEW_SELECTION", "final_id = ''")
            fixedlyr = os.path.join(scr_folder, "fixedlyr")
            arcpy.management.Eliminate(selectGap, fixedlyr, "LENGTH")
            print('gaps detected and repaired')
            # Progress report
            print(fcname, 'Gaps and overlaps repaired. Repairing geometry and conversion to json...')
            # Process: Repair Geometry (non-simple geometry)
            geomRepair = arcpy.management.RepairGeometry(fixedlyr, "DELETE_NULL", "OGC")[0]
            # Process: Features To JSON
            arcpy.conversion.FeaturesToJSON(fixedlyr, outjson, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
arcpy.AddMessage("All done!")
print('done')
# To reduce the file size of the South Korea planted trees data, split it into three files (manually), then follow the steps below for each
# The split must be done manually, making sure all connecting polygons are in the same file at the split zone.
# Local variables
in_file = "insert file here"
out_simple = "interim simple file and location" #inspect this to see if algorithm satisfactory
outfinalkor_plant = "insert outfile and location here"
outjson = "insert output json file"
# Process each of the 3 split files separately (otherwise the process will fail - too many parts)
arcpy.cartography.SimplifyPolygon(in_file, out_simple, "POINT_REMOVE", "5 Meters", "200 SquareMeters", "RESOLVE_ERRORS", "NO_KEEP", None)
# merge back together
arcpy.management.Merge("kor1Simple_plant;kor2Simple_plant;kor3Simple_plant", outfinalkor_plant)
geomRepair = arcpy.management.RepairGeometry(outfinalkor_plant, "DELETE_NULL", "OGC")[0]
arcpy.conversion.FeaturesToJSON(outfinalkor_plant, outjson, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
```
# Bayesian Optimization with Random Forests (SMAC)
## Optimizing a CNN with Scikit-Optimize
In this notebook, we will use **Bayesian Optimization** to select the best **hyperparameters** for a CNN that recognizes digits in images, using the MNIST dataset and the open source Python package [Scikit-Optimize](https://scikit-optimize.github.io/stable/index.html).
We will use Random Forests as the surrogate function to approximate f(x).
The MNIST dataset is available in [Kaggle](https://www.kaggle.com/c/digit-recognizer/data).
## Download dataset
- Navigate to the [MNIST website in Kaggle](https://www.kaggle.com/c/digit-recognizer/data)
- Download the train.csv file
- Unzip and copy the train.csv file to where you see the SAVE_DATASETS-HERE.txt file
- Rename to mnist.csv
**Remember that you need to be logged in to be able to download the dataset**
## Notebook content
- Data Preparation
- Set up a simple CNN
- Set up the hyperparameter search shape
- Set up the objective function
- Perform Bayesian Optimization
- Evaluate Model Performance
```
# For reproducible results.
# See:
# https://keras.io/getting_started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development
import os
os.environ['PYTHONHASHSEED'] = '0'
import numpy as np
import tensorflow as tf
import random as python_random
# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(123)
# The below is necessary for starting core Python generated random numbers
# in a well-defined state.
python_random.seed(123)
# The below set_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
# For further details, see:
# https://www.tensorflow.org/api_docs/python/tf/random/set_seed
tf.random.set_seed(1234)
import itertools
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from keras.utils.np_utils import to_categorical
from keras.models import Sequential, load_model
from keras.layers import Dense, Flatten, Conv2D, MaxPool2D
from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau
from skopt import forest_minimize, gbrt_minimize
from skopt.space import Real, Categorical, Integer
from skopt.plots import plot_convergence
from skopt.plots import plot_objective, plot_evaluations
from skopt.utils import use_named_args
```
# Data Preparation
The dataset contains information about images, each image is a hand-written digit. The aim is to have the computer predict which digit was written by the person, automatically, by "looking" at the image.
Each image is 28 pixels in height and 28 pixels in width (28 x 28), making a total of 784 pixels. Each pixel value is an integer between 0 and 255, indicating the darkness in a gray-scale of that pixel.
The data is stored in a dataframe where each each pixel is a column (so it is flattened and not in the 28 x 28 format).
The data set the has 785 columns. The first column, called "label", is the digit that was drawn by the user. The rest of the columns contain the pixel-values of the associated image.
```
# Load the data
data = pd.read_csv("../mnist.csv")
# first column is the target, the rest of the columns
# are the pixels of the image
# each row is 1 image
data.head()
# split dataset into a train and test set
X_train, X_test, y_train, y_test = train_test_split(
    data.drop(['label'], axis=1),  # the images
    data['label'],                 # the target
    test_size=0.1,
    random_state=0)
X_train.shape, X_test.shape
# number of images for each digit
g = sns.countplot(x=y_train)
plt.xlabel('Digits')
plt.ylabel('Number of images')
```
There are roughly the same amount of images for each of the 10 digits.
## Image re-scaling
We re-scale data for the CNN, between 0 and 1.
```
# Re-scale the data
# 255 is the maximum value a pixel can take
X_train = X_train / 255
X_test = X_test / 255
```
## Reshape
The images were stored in a pandas dataframe as 1-D vectors of 784 values. For a CNN with Keras, we need tensors with the following dimensions: width x height x channel.
Thus, we reshape all data to 28 x 28 x 1, 3-D matrices.
The 3rd dimension corresponds to the channel. RGB images have 3 channels. MNIST images are in gray-scale, thus they have only one channel in the 3rd dimension.
```
# Reshape image in 3 dimensions:
# height: 28px X width: 28px X channel: 1
X_train = X_train.values.reshape(-1,28,28,1)
X_test = X_test.values.reshape(-1,28,28,1)
```
## Target encoding
```
# the target is 1 variable with the 10 different digits
# as values
y_train.unique()
# For Keras, we need to create 10 dummy variables,
# one for each digit
# Encode labels to one hot vectors (ex : digit 2 -> [0,0,1,0,0,0,0,0,0,0])
y_train = to_categorical(y_train, num_classes = 10)
y_test = to_categorical(y_test, num_classes = 10)
# the new target
y_train
```
Let's print some example images.
```
# Some image examples
g = plt.imshow(X_train[0][:,:,0])
# Some image examples
g = plt.imshow(X_train[10][:,:,0])
```
# Define the CNN
We will create a CNN with 2 convolutional blocks, each followed by pooling, and a varying number of fully-connected Dense layers. Each convolutional block can itself have more than 1 conv layer.
```
# function to create the CNN
def create_cnn(
    # the hyperparameters to optimize are passed as arguments
    learning_rate,
    num_conv_layers,
    num_dense_layers,
    num_dense_nodes,
    activation,
):
    """
    Hyper-parameters:
    learning_rate:    Learning-rate for the optimizer.
    num_conv_layers:  Number of conv layers in each convolutional block.
    num_dense_layers: Number of dense layers.
    num_dense_nodes:  Number of nodes in each dense layer.
    activation:       Activation function for all layers.
    """
    # Start construction of a Keras Sequential model.
    model = Sequential()

    # First convolutional block.
    # There are many hyper-parameters in this layer.
    # For this demo, we will optimize the activation function and
    # the number of convolutional layers that it can take.
    # We add the different number of conv layers in the following loop:
    for i in range(num_conv_layers):
        model.add(Conv2D(kernel_size=5, strides=1, filters=16, padding='same',
                         activation=activation))
    model.add(MaxPool2D(pool_size=2, strides=2))

    # Second convolutional block.
    # Same hyperparameters to optimize as the previous block.
    for i in range(num_conv_layers):
        model.add(Conv2D(kernel_size=5, strides=1, filters=36, padding='same',
                         activation=activation))
    model.add(MaxPool2D(pool_size=2, strides=2))

    # Flatten the 4-rank output of the convolutional layers
    # to 2-rank that can be input to a fully-connected Dense layer.
    model.add(Flatten())

    # Add fully-connected Dense layers.
    # The number of layers is a hyper-parameter we want to optimize.
    # We add the different number of layers in the following loop:
    for i in range(num_dense_layers):
        # Add the dense fully-connected layer to the model.
        # This has two hyper-parameters we want to optimize:
        # the number of nodes (neurons) and the activation function.
        model.add(Dense(num_dense_nodes, activation=activation))

    # Last fully-connected dense layer with softmax activation
    # for use in classification.
    model.add(Dense(10, activation='softmax'))

    # Use the Adam method for training the network.
    # We want to find the best learning-rate for the Adam method.
    optimizer = Adam(lr=learning_rate)

    # In Keras we need to compile the model so it can be trained.
    model.compile(optimizer=optimizer,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    return model
```
# Define the Hyperparameter Space
Scikit-optimize provides an utility function to create the range of values to examine for each hyperparameters. More details in [skopt.Space](https://scikit-optimize.github.io/stable/modules/generated/skopt.Space.html)
We want to find the following hyper-parameters:
- The learning rate of the optimizer.
- The number of convolutional layers.
- The number of fully-connected Dense layers.
- The number of nodes (neurons) for each of the dense layers.
- Whether to use 'sigmoid' or 'relu' activation in all the layers.
```
dim_learning_rate = Real(
    low=1e-6, high=1e-2, prior='log-uniform', name='learning_rate',
)
dim_num_conv_layers = Integer(low=1, high=3, name='num_conv_layers')
dim_num_dense_layers = Integer(low=1, high=5, name='num_dense_layers')
dim_num_dense_nodes = Integer(low=5, high=512, name='num_dense_nodes')
dim_activation = Categorical(
    categories=['relu', 'sigmoid'], name='activation',
)

# the hyperparameter space grid
param_grid = [dim_learning_rate,
              dim_num_conv_layers,
              dim_num_dense_layers,
              dim_num_dense_nodes,
              dim_activation]
```
# Define the Objective Function
```
# we will save the model with this name
path_best_model = 'cnn_model.h5'
# starting point for the optimization
best_accuracy = 0
@use_named_args(param_grid)
def objective(
learning_rate,
num_conv_layers,
num_dense_layers,
num_dense_nodes,
activation,
):
"""
Hyper-parameters:
learning_rate: Learning-rate for the optimizer.
convolutional layers: Number of conv layers.
num_dense_layers: Number of dense layers.
num_dense_nodes: Number of nodes in each dense layer.
activation: Activation function for all layers.
"""
# Print the hyper-parameters.
print('learning rate: {0:.1e}'.format(learning_rate))
print('num_conv_layers:', num_conv_layers)
print('num_dense_layers:', num_dense_layers)
print('num_dense_nodes:', num_dense_nodes)
print('activation:', activation)
print()
# Create the neural network with the hyper-parameters.
# We call the function we created previously.
model = create_cnn(learning_rate=learning_rate,
num_conv_layers=num_conv_layers,
num_dense_layers=num_dense_layers,
num_dense_nodes=num_dense_nodes,
activation=activation)
# Set a learning rate annealer
# this reduces the learning rate if learning does not improve
# for a certain number of epochs
learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy',
patience=2,
verbose=1,
factor=0.5,
min_lr=0.00001)
# train the model
# we use 3 epochs to be able to run the notebook in a "reasonable"
# time. If we increase the epochs, we will have better performance
# this could be another parameter to optimize in fact.
history = model.fit(x=X_train,
y=y_train,
epochs=3,
batch_size=128,
validation_split=0.1,
callbacks=learning_rate_reduction)
# Get the classification accuracy on the validation-set
# after the last training-epoch.
accuracy = history.history['val_accuracy'][-1]
# Print the classification accuracy.
print()
print("Accuracy: {0:.2%}".format(accuracy))
print()
# Save the model if it improves on the best-found performance.
# We use the global keyword so we update the variable outside
# of this function.
global best_accuracy
# If the classification accuracy of the saved model is improved ...
if accuracy > best_accuracy:
# Save the new model to harddisk.
# Training CNNs is costly, so we want to avoid having to re-train
# the network with the best found parameters. We save it instead
# as we search for the best hyperparam space.
model.save(path_best_model)
# Update the classification accuracy.
best_accuracy = accuracy
# Delete the Keras model with these hyper-parameters from memory.
del model
# Remember that Scikit-optimize always minimizes the objective
# function, so we need to negate the accuracy (because we want
# the maximum accuracy)
return -accuracy
```
## Test run
```
# Before we run the hyper-parameter optimization,
# let's first check that the everything is working
# by passing some default hyper-parameters.
default_parameters = [1e-5, 1, 1, 16, 'relu']
objective(x=default_parameters)
```
We obtained a mediocre accuracy, but all our code is working. So let's get started with the Optimization now!!
## Bayesian Optimization with Random Forests
- [forest_minimize](https://scikit-optimize.github.io/stable/modules/generated/skopt.forest_minimize.html#skopt.forest_minimize)
- [gbrt_minimize](https://scikit-optimize.github.io/stable/modules/generated/skopt.gbrt_minimize.html#skopt.gbrt_minimize)
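Both minimizers choose the next hyperparameter set by maximizing an acquisition function over the surrogate's predictions; with `acq_func='EI'` this is expected improvement. As a sketch of the underlying formula (the standard closed form under a Gaussian predictive distribution, not skopt's internal code), for a minimization problem with incumbent best value `f_best`:

```python
import math

def expected_improvement(mu, sigma, f_best):
    # EI for a minimization problem: expected amount by which a point with
    # predictive mean mu and standard deviation sigma improves on f_best.
    if sigma == 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))    # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_best - mu) * cdf + sigma * pdf

# a point predicted to lie well below the incumbent is more attractive:
print(expected_improvement(-1.0, 1.0, 0.0) > expected_improvement(1.0, 1.0, 0.0))  # True
```

Maximizing this quantity trades off exploitation (low predicted mean) against exploration (high predictive uncertainty).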
```
# we approximate f(x) using Random Forests, we could
# also approximate it with gradient boosting machines
# using gbrt_minimize instead.
fm_ = forest_minimize(
    objective,              # the objective function to minimize
    param_grid,             # the hyperparameter space
    x0=default_parameters,  # the initial parameters to test
    acq_func='EI',          # the acquisition function
    n_calls=30,             # the number of subsequent evaluations of f(x)
    random_state=0,
)
```
# Analyze results
```
# function value at the minimum.
# note that it is the negative of the accuracy
"Best score=%.4f" % fm_.fun
fm_.x
fm_.space
print("""Best parameters:
=========================
- learning rate=%.6f
- num_conv_layers=%d
- num_dense_layers=%d
- num_dense_nodes=%d
- activation=%s""" % (
    fm_.x[0],
    fm_.x[1],
    fm_.x[2],
    fm_.x[3],
    fm_.x[4],
))
```
## Convergence
```
plot_convergence(fm_)
```
## Partial dependence plots
[plot_objective](https://scikit-optimize.github.io/stable/modules/generated/skopt.plots.plot_objective.html#skopt.plots.plot_objective)
```
dim_names = ['learning_rate', 'num_conv_layers', 'num_dense_layers', 'num_dense_nodes', 'activation']
plot_objective(result=fm_, plot_dims=dim_names)
plt.show()
```
## Evaluation order
[plot_evaluations](https://scikit-optimize.github.io/stable/modules/generated/skopt.plots.plot_evaluations.html)
```
plot_evaluations(result=fm_, plot_dims=dim_names)
plt.show()
```
# Evaluate the model
```
# load best model
model = load_model(path_best_model)
# evaluate on the test set
result = model.evaluate(x=X_test, y=y_test)

# print evaluation metrics
for name, value in zip(model.metrics_names, result):
    print(name, value)
```
## Confusion matrix
```
# Predict class probabilities for the test dataset
y_pred = model.predict(X_test)
# Convert the predicted probabilities to class labels
y_pred_classes = np.argmax(y_pred, axis=1)
# Convert the one-hot encoded test labels back to class labels
y_true = np.argmax(y_test, axis=1)
# compute the confusion matrix
cm = confusion_matrix(y_true, y_pred_classes)
cm
# let's make it more colourful
classes = 10
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Confusion matrix')
plt.colorbar()
tick_marks = np.arange(classes)
plt.xticks(tick_marks, range(classes), rotation=45)
plt.yticks(tick_marks, range(classes))
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
    plt.text(j, i, cm[i, j],
             horizontalalignment="center",
             color="white" if cm[i, j] > 100 else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
```
Here we can see that our CNN performs very well on all digits.
# References
This notebook was based on these resources:
- [TensorFlow Tutorial #19](https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/19_Hyper-Parameters.ipynb)
- [Introduction to CNN Keras - 0.997 (top 6%)](https://www.kaggle.com/yassineghouzam/introduction-to-cnn-keras-0-997-top-6)
- [Keras](https://keras.io/)
# Exploring the Scikit-Optimize minimizer
```
fm_
# the objective values (the negative of the validation accuracy)
fm_.func_vals
# the hyperparameter combinations
fm_.x_iters
# all together in one dataframe, so we can investigate further
tmp = pd.concat([
    pd.DataFrame(fm_.x_iters),
    pd.Series(fm_.func_vals),
], axis=1)
tmp.columns = dim_names + ['accuracy']
tmp.head()
tmp.sort_values(by='accuracy', ascending=True, inplace=True)
tmp.head(10)
```
# Leverage
Make sure to watch the video and slides for this lecture for the full explanation!
$\text{Leverage Ratio} = \frac{\text{Debt} + \text{Capital Base}}{\text{Capital Base}}$
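Plugging made-up numbers into the ratio: an account with a $100,000 capital base that borrows $30,000 has a leverage ratio of 1.3.

```python
# hypothetical account values
debt = 30_000
capital_base = 100_000

leverage_ratio = (debt + capital_base) / capital_base
print(leverage_ratio)  # 1.3
```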
## Leverage from Algorithm
Make sure to watch the video for this! Basically run this and grab your own backtestid as shown in the video. More info:
The get_backtest function provides programmatic access to the results of backtests run on the Quantopian platform. It takes a single parameter, the ID of a backtest for which results are desired.
You can find the ID of a backtest in the URL of its full results page, which will be of the form:
https://www.quantopian.com/algorithms/<algorithm_id>/<backtest_id>.
You are only entitled to view the backtests that either:
* 1) you have created
* 2) you are a collaborator on
```
def initialize(context):
    context.amzn = sid(16841)
    context.ibm = sid(3766)
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open())
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())

def rebalance(context, data):
    order_target_percent(context.amzn, 0.5)
    order_target_percent(context.ibm, -0.5)

def record_vars(context, data):
    record(amzn_close=data.current(context.amzn, 'close'))
    record(ibm_close=data.current(context.ibm, 'close'))
    record(Leverage=context.account.leverage)
    record(Exposure=context.account.net_leverage)
```
## Backtest Info
```
bt = get_backtest('5986b969dbab994fa4264696')
bt.algo_id
bt.recorded_vars
bt.recorded_vars['Leverage'].plot()
bt.recorded_vars['Exposure'].plot()
```
## High Leverage Example
You can actually specify to borrow on margin (NOT RECOMMENDED)
```
def initialize(context):
    context.amzn = sid(16841)
    context.ibm = sid(3766)
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open())
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())

def rebalance(context, data):
    order_target_percent(context.ibm, -2.0)
    order_target_percent(context.amzn, 2.0)

def record_vars(context, data):
    record(amzn_close=data.current(context.amzn, 'close'))
    record(ibm_close=data.current(context.ibm, 'close'))
    record(Leverage=context.account.leverage)
    record(Exposure=context.account.net_leverage)

bt = get_backtest('5986bd68ceda5554428a005b')
bt.recorded_vars['Leverage'].plot()
```
## Set Hard Limit on Leverage
http://www.zipline.io/appendix.html?highlight=leverage#zipline.api.set_max_leverage
```
def initialize(context):
    context.amzn = sid(16841)
    context.ibm = sid(3766)
    set_max_leverage(1.03)
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open())
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())

def rebalance(context, data):
    order_target_percent(context.ibm, -0.5)
    order_target_percent(context.amzn, 0.5)

def record_vars(context, data):
    record(amzn_close=data.current(context.amzn, 'close'))
    record(ibm_close=data.current(context.ibm, 'close'))
    record(Leverage=context.account.leverage)
    record(Exposure=context.account.net_leverage)
```
[[source]](../api/alibi.confidence.model_linearity.rst)
# Measuring the linearity of machine learning models
## Overview
Machine learning models in general include both linear and non-linear operations: neural networks may include several layers consisting of linear algebra operations followed by non-linear activation functions, while models based on decision trees are by nature highly non-linear. The linearity measure function and class provide an operational definition of the amount of non-linearity of a map acting on vector spaces. Roughly speaking, the amount of non-linearity of the map is defined based on how much the output of the map applied to a linear superposition of input vectors differs from the linear superposition of the map's outputs for each individual vector. In the context of supervised learning, this definition is immediately applicable to machine learning models, which are fundamentally maps from an input vector space (the feature space) to an output vector space that may represent probabilities (for classification models) or actual values of quantities of interest (for regression models).
Given an input vector space $V$, an output vector space $W$ and a map $M: V \rightarrow W$,
the amount of non-linearity of the map $M$ in a region $\beta$ of the input space $V$ and relative to some coefficients $\alpha(v)$ is defined as
$$
L_{\beta, \alpha}^{(M)} = \left\| \int_{\beta} \alpha(v) M(v) dv -
M\left(\int_{\beta}\alpha(v)vdv \right) \right\|,
$$
where $v \in V$ and $\|\cdot\|$ denotes the norm of a vector.
If we consider a finite number $N$ of vectors, the amount of non-linearity can be defined as
$$
L_{\beta, \alpha}^{(M)} = \left\| \sum_{i} \alpha_{i} M(v_i) -
M\left(\sum_i \alpha_i v_i \right) \right\|,
$$
where, with an abuse of notation, $\beta$ is no longer a continuous region in
the input space but a collection of input vectors $\{v_i\}$ and $\alpha$ is no longer a function but a collection of real coefficients $\{\alpha_i \}$ with $i \in \{1, ..., N\}$. Note that the second expression may be interpreted as an approximation of the integral quantity defined in the first expression, where the vectors $\{v_i\}$ are sampled uniformly in the region $\beta$.
## Application to machine learning models
In supervised learning, a model can be considered as a function $M$ mapping vectors from the input space (feature vectors) to vectors in the output space. The output space may represent probabilities in the case of a classification model or values of the target quantities in the case of a regression model. The definition of the linearity measure given above can be applied to the case of a regression model (either a single target regression or a multi target regression) in a straightforward way.
In case of a classifier, let us denote by $z$ the logits vector of the model such that the probabilities of the model $M$ are given by $\text{softmax}(z).$ Since the activation function of the last layer is usually highly non-linear, it is convenient to apply the definition of linearity given above to the logits vector $z.$
In the "white box" scenario, in which we have access to the internal architecture of the model, the vector $z$ is accessible and the amount of non-linearity can be calculated immediately. On the other hand, if the only accessible quantities are the output probabilities (the "black box" scenario), we need to invert the last layer's activation function in order to retrieve $z.$ In other words, that means defining a new map $M^\prime = f^{-1} \circ M(v)$ where $f$ is the activation function at the last layer and considering $L_{\beta, \alpha}^{(M^\prime)}$ as a measure of the non-linearity of the model. The activation function of the last layer is usually a sigmoid function for binary classification tasks or a softmax function for multi-class classification.
The inversion of the sigmoid function does not present any particular challenge, and the map $M^\prime$ can be written as
$$
M^\prime = -\log \circ \left(\frac{1-M(v)}{M(v)}\right).
$$
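In code, this inversion is just the logit function; the sketch below (with a clipping constant added as a numerical-safety assumption) recovers $z$ from sigmoid outputs:

```python
import numpy as np

def inverse_sigmoid(p, eps=1e-12):
    """Recover the logit z from a sigmoid probability p: z = -log((1 - p) / p)."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)  # avoid log(0)
    return -np.log((1 - p) / p)

z = np.array([-2.0, 0.0, 3.0])
p = 1.0 / (1.0 + np.exp(-z))               # forward sigmoid
print(np.allclose(inverse_sigmoid(p), z))  # True
```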
On the other hand, the softmax probabilities $p$ are defined in terms of the vector $z$ as $p_j = e^{z_j}/\sum_k e^{z_k},$ where $z_j$ are the components of $z$. The inverse of the softmax function is thus defined up to a constant $C$ which does not depend on $j$ but might depend on the input vector $v.$ The inverse map $M^\prime = \text{softmax}^{-1} \circ M(v)$ is then given by:
$$
M^\prime = \log \circ M(v) + C(v),
$$
where $C(v)$ is an arbitrary constant depending in general on the input vector $v.$
Since in the black box scenario it is not possible to assess the value of $C$, henceforth we will ignore it and define the amount of non-linearity of a machine learning model whose output is a probability distribution as
$$
L_{\beta, \alpha}^{(\log \circ M)} = \left\| \sum_{i}^N \alpha_{i} \log \circ M(v_i) -
\log \circ M\left(\sum_i^N \alpha_i v_i \right)\right\|.
$$
It must be noted that the quantity above may in general be different from the "actual" amount of non-linearity of the model, i.e. the quantity calculated by accessing the activation vectors $z$ directly.
## Implementation
### Sampling
The module implements two different methods for the sampling of vectors in a neighbourhood of the instance of interest $v.$
* The first sampling method ```grid``` consists of defining the region $\beta$ as a discrete lattice of a given size around the instance of interest, with the size defined in terms of the L1 distance in the lattice; the vectors are then sampled from the lattice according to a uniform distribution. The density and the size of the lattice are controlled by the resolution parameter ```res``` and the size parameter ```epsilon```. This method is highly efficient and scalable from a computational point of view.
* The second sampling method ```knn``` consists of sampling from the same probability distribution the instance $v$ was drawn from; this method is implemented by simply selecting the $K$ nearest neighbours to $v$ from a training set, when this is available. The ```knn``` method imposes the constraint that the neighbourhood of $v$ must include only vectors from the training set, and as a consequence it will exclude out-of-distribution instances from the computation of linearity.
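A rough sketch of the ```grid``` idea is shown below; it is a simplification of the actual module (the helper name, the offset range, and the scaling by the feature range are assumptions made for illustration):

```python
import numpy as np

def grid_sample(v, feature_range, epsilon=0.04, res=100, nb_samples=10, rng=None):
    """Illustrative lattice sampling around v (not the exact library algorithm).

    Each feature is perturbed by an integer multiple of a lattice step, with
    offsets drawn uniformly from {-res, ..., res} scaled so that the maximum
    perturbation per feature is epsilon * (feature max - feature min).
    """
    rng = np.random.default_rng(rng)
    v = np.asarray(v, dtype=float)
    span = feature_range[:, 1] - feature_range[:, 0]
    step = epsilon * span / res  # lattice spacing per feature
    offsets = rng.integers(-res, res + 1, size=(nb_samples, v.size))
    return v + offsets * step

v = np.array([0.5, 0.5])
feature_range = np.array([[0.0, 1.0], [0.0, 1.0]])
samples = grid_sample(v, feature_range, rng=0)
print(samples.shape)  # (10, 2)
```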
### Pairwise vs global linearity
The module implements two different methods to associate a value of the linearity measure to $v.$
* The first method consists of measuring the ```global``` linearity in a region around $v.$ This means that we sample $N$ vectors $\{v_i\}$ from a region $\beta$ of the input space around $v$ and apply
\begin{equation}
L_{\beta, \alpha}^{(M)} = \left\| \sum_{i=1}^N \alpha_{i} M(v_i) -
M\left(\sum_{i=1}^N \alpha_i v_i \right) \right\|.
\end{equation}
* The second method consists of measuring the ```pairwise``` linearity between the instance of interest and other vectors close to it, averaging over all such pairs. In other words, we sample $N$ vectors $\{v_i\}$ from $\beta$ as in the global method, but in this case we calculate the amount of non-linearity $L_{(v,v_i),\alpha}$ for every pair of vectors $(v, v_i)$ and average over all the pairs. Given two coefficients $\{\alpha_0, \alpha_1\}$ such that $\alpha_0 + \alpha_1 = 1,$ we can define the pairwise linearity measure relative to the instance of interest $v$ as
\begin{equation}\label{pairwiselin}
L^{(M)} = \frac{1}{N} \sum_{i=1}^N \left\|\alpha_0 M(v) + \alpha_1 M(v_i) - M(\alpha_0 v + \alpha_1 v_i)\right\|.
\end{equation}
The two methods are slightly different from a conceptual point of view: the global linearity measure combines all $N$ vectors sampled in $\beta$ in a single superposition, and can be conceptually regarded as a direct approximation of the integral quantity. Thus, the quantity is strongly linked to the model behavior in the whole region $\beta.$ On the other hand, the pairwise linearity measure is an averaged quantity over pairs of superimposed vectors, with the instance of interest $v$ included in each pair. For that reason, it is conceptually more tied to the instance $v$ itself rather than the region $\beta$ around it.
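The ```pairwise``` aggregation can be sketched as follows, assuming `predict` maps a single input vector to an output vector (the toy quadratic model is an assumption for the example):

```python
import numpy as np

def pairwise_linearity(predict, v, vs, alphas=(0.5, 0.5)):
    """Average ||a0 M(v) + a1 M(v_i) - M(a0 v + a1 v_i)|| over sampled v_i."""
    a0, a1 = alphas
    scores = []
    for vi in vs:
        lhs = a0 * predict(v) + a1 * predict(vi)  # superposition of outputs
        rhs = predict(a0 * v + a1 * vi)           # output of the superposition
        scores.append(np.linalg.norm(lhs - rhs))
    return np.mean(scores)

predict = lambda x: x ** 2                        # toy non-linear "model"
v = np.array([1.0, 2.0])
vs = [v + 0.1, v - 0.1]
print(pairwise_linearity(predict, v, vs))         # > 0 for a non-linear map
```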
## Usage
### LinearityMeasure class
Given a ```model``` class with a ```predict``` method that returns probability distributions in the case of a classifier or numeric values in the case of a regressor, the linearity measure $L$ around an instance of interest $X$ can be calculated using the class ```LinearityMeasure``` as follows:
```python
from alibi.confidence.model_linearity import LinearityMeasure
predict_fn = lambda x: model.predict(x)
lm = LinearityMeasure(method='grid',
epsilon=0.04,
nb_samples=10,
res=100,
alphas=None,
model_type='classifier',
agg='pairwise',
verbose=False)
lm.fit(X_train)
L = lm.score(predict_fn, X)
```
Where `X_train` is the dataset the model was trained on. The ```feature_range``` is inferred from `X_train` in the ```fit``` step.
### linearity_measure function
Given a ```model``` class with a ```predict``` method that returns probability distributions in the case of a classifier or numeric values in the case of a regressor, the linearity measure $L$ around an instance of interest $X$ can also be calculated using the ```linearity_measure``` function as follows:
```python
from alibi.confidence.model_linearity import linearity_measure, _infer_feature_range
predict_fn = lambda x: model.predict(x)
feature_range = _infer_feature_range(X_train)
L = linearity_measure(predict_fn,
X,
feature_range=feature_range,
method='grid',
X_train=None,
epsilon=0.04,
nb_samples=10,
res=100,
alphas=None,
agg='global',
model_type='classifier')
```
Note that in this case the ```feature_range``` must be inferred beforehand and passed explicitly to the function.
## Examples
[Iris dataset](../examples/linearity_measure_iris.nblink)
[Fashion MNIST dataset](../examples/linearity_measure_fashion_mnist.nblink)
```
!pip install pandas
import pandas as pd
import numpy as np
import gc
data_click = pd.read_csv('../train_preliminary/click_log.csv')  # click log
data_user = pd.read_csv('../train_preliminary/user.csv')  # user
data_ad = pd.read_csv('../train_preliminary/ad.csv')  # ad
data_click = data_click.merge(data_ad,on = 'creative_id',how = 'left')
del data_ad
# Feature extraction for the industry (ad industry) column
industry_click = data_click[['user_id', 'industry', 'click_times']].sort_values(by = 'user_id')
industry_click = industry_click[industry_click['industry'] != '\\N']
industry_click = industry_click.groupby(['user_id','industry']).agg({'click_times':sum})
industry_click = industry_click.reset_index()
# def func_log(x):
# return np.log(x+1)
# industry_click['industry'+'_log'] = industry_click['click_times'].transform(func_log)
# Extract the top three most-clicked categories and their click counts
head_x = industry_click.sort_values(['click_times'],ascending=False).groupby(['user_id']).head(3)
del industry_click
head_x = head_x.sort_values('user_id')
def fun1(x):
x = list(x.values.reshape([-1]))[:6]
x = x[:6]+[0]*(6-len(x))
return pd.DataFrame([x])
tops = head_x.groupby('user_id')[['industry', 'click_times']].apply(fun1)
del head_x
columns = []
for i in range(6):
columns.append('industry'+str(i))
tops.columns = columns
tops = tops.reset_index()
tops = tops.drop(['level_1'],axis = 1)
tops.to_csv('industry_feat.csv',index=False)
del tops
click_feat = pd.read_csv('clicks_feat.csv')
category_feat = pd.read_csv('category_feat_addlog.csv')
industry_feat = pd.read_csv('industry_feat.csv')
# Merge the features together
features = click_feat.merge(category_feat,on='user_id',how='left').merge(industry_feat,on='user_id',how='left')
# Add the label
user = pd.read_csv('../../data/train_preliminary/user.csv')
data = features.merge(user,on='user_id',how='left')
# del data['user_id']
# Add the train_cat_max_id_feat feature
def func_cat (x):
x = x[['category1','category2', 'category3', 'category4', 'category5', 'category6','category7', 'category8', 'category9', 'category10', 'category11','category12', 'category13', 'category14', 'category15', 'category16','category17', 'category18']]
d = {}
d['cat_max_id'] = [np.argmax(x.values)+1]
return pd.DataFrame(d)
cat_max_id_feat = data.groupby('user_id').apply(func_cat)
cat_max_id_feat.reset_index(level=0, inplace=True)
cat_max_id_feat.index = range(len(cat_max_id_feat))
cat_max_id_feat.to_csv('train_cat_max_id_feat.csv',index=False)
data = data.merge(cat_max_id_feat,on='user_id',how='left')
data.to_csv('train_feat_user.csv',index=False)
```
# 12 - Introduction to Deep Learning
by [Alejandro Correa Bahnsen](albahnsen.com/)
version 0.1, May 2016
## Part of the class [Machine Learning for Security Informatics](https://github.com/albahnsen/ML_SecurityInformatics)
This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US)
Based on the slides and presentation by [Alec Radford](https://www.youtube.com/watch?v=S75EdAcXHKk) [github](https://github.com/Newmu/Theano-Tutorials/)
For this class you must install Theano:
```pip install theano```
# Motivation
How do we program a computer to recognize a picture of a
handwritten digit as a 0-9?

### What if we have 60,000 of these images and their label?
```
import numpy as np
from load import mnist
X_train, X_test, y_train2, y_test2 = mnist(onehot=True)
y_train = np.argmax(y_train2, axis=1)
y_test = np.argmax(y_test2, axis=1)
X_train[1].reshape((28, 28)).round(2)[:, 4:9].tolist()
from pylab import imshow, show, cm
import matplotlib.pylab as plt
%matplotlib inline
def view_image(image, label="", predicted='', size=4):
"""View a single image."""
plt.figure(figsize = (size, size))
plt.imshow(image.reshape((28, 28)), cmap=cm.gray, )
plt.tick_params(axis='x',which='both', bottom='off',top='off', labelbottom='off')
plt.tick_params(axis='y',which='both', left='off',top='off', labelleft='off')
show()
if predicted == '':
print("Label: %s" % label)
else:
print('Label: ', str(label), 'Predicted: ', str(predicted))
view_image(X_train[1], y_train[1])
view_image(X_train[40000], y_train[40000])
```
# Naive model
For each image, find the “most similar” image and guess
that as the label
```
def similarity(image, images):
similarities = []
image = image.reshape((28, 28))
images = images.reshape((-1, 28, 28))
for i in range(images.shape[0]):
        distance = np.sqrt(np.sum((image - images[i]) ** 2))
sim = 1 / distance
similarities.append(sim)
return similarities
np.random.seed(52)
small_train = np.random.choice(X_train.shape[0], 100)
view_image(X_test[0])
similarities = similarity(X_test[0], X_train[small_train])
view_image(X_train[small_train[np.argmax(similarities)]])
```
Let's try another example
```
view_image(X_test[200])
similarities = similarity(X_test[200], X_train[small_train])
view_image(X_train[small_train[np.argmax(similarities)]])
```
# Logistic Regression
Logistic regression is a probabilistic, linear classifier. It is parametrized
by a weight matrix $W$ and a bias vector $b$. Classification is
done by projecting data points onto a set of hyperplanes, the distance to
which is used to determine a class membership probability.
Mathematically, this can be written as:
$$
P(Y=i\vert x, W,b) = softmax_i(W x + b)
$$
$$
P(Y=i|x, W,b) = \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}
$$
The output of the model or prediction is then done by taking the argmax of
the vector whose i'th element is $P(Y=i|x)$.
$$
y_{pred} = argmax_i P(Y=i|x,W,b)
$$

```
import theano
from theano import tensor as T
import numpy as np
import datetime as dt
theano.config.floatX = 'float32'
```
Theano is a Python library that lets you define, optimize, and evaluate mathematical expressions, especially ones with multi-dimensional arrays (numpy.ndarray). Using Theano it is possible to attain speeds rivaling hand-crafted C implementations for problems involving large amounts of data. It can also surpass C on a CPU by many orders of magnitude by taking advantage of recent GPUs.
Theano combines aspects of a computer algebra system (CAS) with aspects of an optimizing compiler. It can also generate customized C code for many mathematical operations. This combination of CAS with optimizing compilation is particularly useful for tasks in which complicated mathematical expressions are evaluated repeatedly and evaluation speed is critical. For situations where many different expressions are each evaluated once, Theano can minimize the amount of compilation/analysis overhead, but still provide symbolic features such as automatic differentiation.
```
def floatX(X):
# return np.asarray(X, dtype='float32')
return np.asarray(X, dtype=theano.config.floatX)
def init_weights(shape):
return theano.shared(floatX(np.random.randn(*shape) * 0.01))
def model(X, w):
return T.nnet.softmax(T.dot(X, w))
X = T.fmatrix()
Y = T.fmatrix()
w = init_weights((784, 10))
w.get_value()
```
initialize model
```
py_x = model(X, w)
y_pred = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
gradient = T.grad(cost=cost, wrt=w)
update = [[w, w - gradient * 0.05]]
train = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)
```
One iteration
```
for start, end in zip(range(0, X_train.shape[0], 128), range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors = [(np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test)))]
errors
```
Now for 100 epochs
```
t0 = dt.datetime.now()
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
```
### Checking the results
```
y_pred = predict(X_test)
np.random.seed(2)
small_test = np.random.choice(X_test.shape[0], 10)
for i in small_test:
view_image(X_test[i], label=y_test[i], predicted=y_pred[i], size=1)
```
# Simple Neural Net
Add a hidden layer with a sigmoid activation function

```
def sgd(cost, params, lr=0.05):
grads = T.grad(cost=cost, wrt=params)
updates = []
for p, g in zip(params, grads):
updates.append([p, p - g * lr])
return updates
def model(X, w_h, w_o):
h = T.nnet.sigmoid(T.dot(X, w_h))
pyx = T.nnet.softmax(T.dot(h, w_o))
return pyx
w_h = init_weights((784, 625))
w_o = init_weights((625, 10))
py_x = model(X, w_h, w_o)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
params = [w_h, w_o]
updates = sgd(cost, params)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
```
# Complex Neural Net
Two hidden layers with dropout

```
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
srng = RandomStreams()
def rectify(X):
return T.maximum(X, 0.)
```
### Understanding rectifier units

```
def RMSprop(cost, params, lr=0.001, rho=0.9, epsilon=1e-6):
grads = T.grad(cost=cost, wrt=params)
updates = []
for p, g in zip(params, grads):
acc = theano.shared(p.get_value() * 0.)
acc_new = rho * acc + (1 - rho) * g ** 2
gradient_scaling = T.sqrt(acc_new + epsilon)
g = g / gradient_scaling
updates.append((acc, acc_new))
updates.append((p, p - lr * g))
return updates
```
### RMSprop
RMSprop is an unpublished, adaptive learning rate method proposed by Geoff Hinton in
[Lecture 6e of his Coursera Class](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf)
RMSprop and Adadelta were developed independently around the same time, stemming from the need to resolve Adagrad's radically diminishing learning rates. RMSprop is in fact identical to the first update vector of Adadelta:
$$ E[g^2]_t = 0.9 E[g^2]_{t-1} + 0.1 g^2_t. $$
$$\theta_{t+1} = \theta_{t} - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}} g_{t}.$$
RMSprop as well divides the learning rate by an exponentially decaying average of squared gradients. Hinton suggests $\gamma$ to be set to 0.9, while a good default value for the learning rate $\eta$ is 0.001.
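The two update equations above translate directly into plain NumPy; this is a standalone sketch of the rule (independent of the Theano `RMSprop` function defined earlier), applied to a toy one-dimensional quadratic:

```python
import numpy as np

def rmsprop_step(theta, grad, acc, lr=0.001, rho=0.9, eps=1e-6):
    """One RMSprop update: acc = rho*acc + (1-rho)*g^2; theta -= lr*g/sqrt(acc+eps)."""
    acc = rho * acc + (1 - rho) * grad ** 2
    theta = theta - lr * grad / np.sqrt(acc + eps)
    return theta, acc

# Minimize f(x) = x^2 (gradient 2x), starting from x = 5.
theta, acc = 5.0, 0.0
for _ in range(2000):
    theta, acc = rmsprop_step(theta, 2 * theta, acc, lr=0.01)
print(theta)  # near the minimum at 0
```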
```
def dropout(X, p=0.):
if p > 0:
retain_prob = 1 - p
X *= srng.binomial(X.shape, p=retain_prob, dtype=theano.config.floatX)
X /= retain_prob
return X
def model(X, w_h, w_h2, w_o, p_drop_input, p_drop_hidden):
X = dropout(X, p_drop_input)
h = rectify(T.dot(X, w_h))
h = dropout(h, p_drop_hidden)
h2 = rectify(T.dot(h, w_h2))
h2 = dropout(h2, p_drop_hidden)
py_x = softmax(T.dot(h2, w_o))
return h, h2, py_x
def softmax(X):
e_x = T.exp(X - X.max(axis=1).dimshuffle(0, 'x'))
return e_x / e_x.sum(axis=1).dimshuffle(0, 'x')
w_h = init_weights((784, 625))
w_h2 = init_weights((625, 625))
w_o = init_weights((625, 10))
noise_h, noise_h2, noise_py_x = model(X, w_h, w_h2, w_o, 0.2, 0.5)
h, h2, py_x = model(X, w_h, w_h2, w_o, 0., 0.)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(noise_py_x, Y))
params = [w_h, w_h2, w_o]
updates = RMSprop(cost, params, lr=0.001)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train)),
np.mean(y_test != predict(X_test))))
print(i, errors[-1])
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
```
# Convolutional Neural Network
In machine learning, a convolutional neural network (CNN, or ConvNet) is a type of feed-forward artificial neural network in which the connectivity pattern between its neurons is inspired by the organization of the animal visual cortex, whose individual neurons are arranged in such a way that they respond to overlapping regions tiling the visual field. Convolutional networks were inspired by biological processes and are variations of multilayer perceptrons designed to use minimal amounts of preprocessing. (Wikipedia)

### Motivation
Convolutional Neural Networks (CNN) are biologically-inspired variants of MLPs.
From Hubel and Wiesel's early work on the cat's visual cortex, we
know the visual cortex contains a complex arrangement of cells. These cells are
sensitive to small sub-regions of the visual field, called a *receptive
field*. The sub-regions are tiled to cover the entire visual field. These
cells act as local filters over the input space and are well-suited to exploit
the strong spatially local correlation present in natural images.
Additionally, two basic cell types have been identified: Simple cells respond
maximally to specific edge-like patterns within their receptive field. Complex
cells have larger receptive fields and are locally invariant to the exact
position of the pattern.
The animal visual cortex being the most powerful visual processing system in
existence, it seems natural to emulate its behavior. Hence, many
neurally-inspired models can be found in the literature.
### Sparse Connectivity
CNNs exploit spatially-local correlation by enforcing a local connectivity
pattern between neurons of adjacent layers. In other words, the inputs of
hidden units in layer **m** are from a subset of units in layer **m-1**, units
that have spatially contiguous receptive fields. We can illustrate this
graphically as follows:

Imagine that layer **m-1** is the input retina. In the above figure, units in
layer **m** have receptive fields of width 3 in the input retina and are thus
only connected to 3 adjacent neurons in the retina layer. Units in layer
**m+1** have a similar connectivity with the layer below. We say that their
receptive field with respect to the layer below is also 3, but their receptive
field with respect to the input is larger (5). Each unit is unresponsive to
variations outside of its receptive field with respect to the retina. The
architecture thus ensures that the learnt "filters" produce the strongest
response to a spatially local input pattern.
However, as shown above, stacking many such layers leads to (non-linear)
"filters" that become increasingly "global" (i.e. responsive to a larger region
of pixel space). For example, the unit in hidden layer **m+1** can encode a
non-linear feature of width 5 (in terms of pixel space).
### Shared Weights
In addition, in CNNs, each filter $h_i$ is replicated across the entire
visual field. These replicated units share the same parameterization (weight
vector and bias) and form a *feature map*.

In the above figure, we show 3 hidden units belonging to the same feature map.
Weights of the same color are shared---constrained to be identical. Gradient
descent can still be used to learn such shared parameters, with only a small
change to the original algorithm. The gradient of a shared weight is simply the
sum of the gradients of the parameters being shared.
Replicating units in this way allows for features to be detected *regardless
of their position in the visual field.* Additionally, weight sharing increases
learning efficiency by greatly reducing the number of free parameters being
learnt. The constraints on the model enable CNNs to achieve better
generalization on vision problems.
### Details and Notation
A feature map is obtained by repeated application of a function across
sub-regions of the entire image, in other words, by *convolution* of the
input image with a linear filter, adding a bias term and then applying a
non-linear function. If we denote the k-th feature map at a given layer as
$h^k$, whose filters are determined by the weights $W^k$ and bias
$b_k$, then the feature map $h^k$ is obtained as follows (for
$tanh$ non-linearities):
$$
h^k_{ij} = \tanh ( (W^k * x)_{ij} + b_k ).
$$
Note
* Recall the following definition of convolution for a 1D signal.
$$ o[n] = f[n]*g[n] = \sum_{u=-\infty}^{\infty} f[u]\, g[n-u] = \sum_{u=-\infty}^{\infty} f[n-u]\, g[u].
$$
* This can be extended to 2D as follows:
$$o[m,n] = f[m,n]*g[m,n] = \sum_{u=-\infty}^{\infty} \sum_{v=-\infty}^{\infty} f[u,v]\, g[m-u,n-v].
$$
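These definitions can be checked numerically: `np.convolve` (in its default 'full' mode) computes exactly the 1D sum above for finite signals. The check below is an addition for illustration, not part of the original tutorial:

```python
import numpy as np

def conv1d(f, g):
    """Direct evaluation of o[n] = sum_u f[u] * g[n - u] for finite signals."""
    n_out = len(f) + len(g) - 1
    o = np.zeros(n_out)
    for n in range(n_out):
        for u in range(len(f)):
            if 0 <= n - u < len(g):
                o[n] += f[u] * g[n - u]
    return o

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
print(np.allclose(conv1d(f, g), np.convolve(f, g)))  # True
```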
To form a richer representation of the data, each hidden layer is composed of
*multiple* feature maps, $\{h^{(k)}, k=0..K\}$. The weights $W$ of
a hidden layer can be represented in a 4D tensor containing elements for every
combination of destination feature map, source feature map, source vertical
position, and source horizontal position. The biases $b$ can be
represented as a vector containing one element for every destination feature
map. We illustrate this graphically as follows:
**Figure 1**: example of a convolutional layer

The figure shows two layers of a CNN. **Layer m-1** contains four feature maps.
**Hidden layer m** contains two feature maps ($h^0$ and $h^1$).
Pixels (neuron outputs) in $h^0$ and $h^1$ (outlined as blue and
red squares) are computed from pixels of layer (m-1) which fall within their
2x2 receptive field in the layer below (shown as colored rectangles). Notice
how the receptive field spans all four input feature maps. The weights
$W^0$ and $W^1$ of $h^0$ and $h^1$ are thus 3D weight
tensors. The leading dimension indexes the input feature maps, while the other
two refer to the pixel coordinates.
Putting it all together, $W^{kl}_{ij}$ denotes the weight connecting
each pixel of the k-th feature map at layer m, with the pixel at coordinates
(i,j) of the l-th feature map of layer (m-1).
### The Convolution Operator
ConvOp is the main workhorse for implementing a convolutional layer in Theano.
ConvOp is used by ``theano.tensor.signal.conv2d``, which takes two symbolic inputs:
* a 4D tensor corresponding to a mini-batch of input images. The shape of the
tensor is as follows: [mini-batch size, number of input feature maps, image
height, image width].
* a 4D tensor corresponding to the weight matrix $W$. The shape of the
tensor is: [number of feature maps at layer m, number of feature maps at
layer m-1, filter height, filter width]
### MaxPooling
Another important concept of CNNs is *max-pooling,* which is a form of
non-linear down-sampling. Max-pooling partitions the input image into
a set of non-overlapping rectangles and, for each such sub-region, outputs the
maximum value.
Max-pooling is useful in vision for two reasons:
* By eliminating non-maximal values, it reduces computation for upper layers.
* It provides a form of translation invariance. Imagine
cascading a max-pooling layer with a convolutional layer. There are 8
directions in which one can translate the input image by a single pixel.
If max-pooling is done over a 2x2 region, 3 out of these 8 possible
configurations will produce exactly the same output at the convolutional
layer. For max-pooling over a 3x3 window, this jumps to 5/8.
Since it provides additional robustness to position, max-pooling is a
"smart" way of reducing the dimensionality of intermediate representations.
Max-pooling is done in Theano by way of
``theano.tensor.signal.downsample.max_pool_2d``. This function takes as input
an N dimensional tensor (where N >= 2) and a downscaling factor and performs
max-pooling over the 2 trailing dimensions of the tensor.
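For intuition, a non-overlapping 2x2 max-pool over a single 2D array can be written in a few lines of NumPy (an illustrative sketch only; the Theano function above handles batches and the general N-dimensional case):

```python
import numpy as np

def max_pool_2x2(x):
    """Non-overlapping 2x2 max-pooling on a 2D array with even dimensions."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 2, 0, 1],
              [4, 3, 1, 0],
              [0, 0, 5, 6],
              [1, 2, 7, 8]], dtype=float)
print(max_pool_2x2(x))
# [[4. 1.]
#  [2. 8.]]
```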
### The Full Model: ConvNet
Sparse, convolutional layers and max-pooling are at the heart of the LeNet
family of models. While the exact details of the model will vary greatly,
the figure below shows a graphical depiction of a LeNet model.

The lower layers are composed of alternating convolution and max-pooling
layers. The upper layers, however, are fully-connected and correspond to a
traditional MLP (hidden layer + logistic regression). The input to the
first fully-connected layer is the set of all feature maps at the layer
below.
From an implementation point of view, this means lower-layers operate on 4D
tensors. These are then flattened to a 2D matrix of rasterized feature maps,
to be compatible with our previous MLP implementation.
```
# from theano.tensor.nnet.conv import conv2d
from theano.tensor.nnet import conv2d
from theano.tensor.signal.downsample import max_pool_2d
```
Modify dropout function
```
def model(X, w, w2, w3, w4, w_o, p_drop_conv, p_drop_hidden):
l1a = rectify(conv2d(X, w, border_mode='full'))
l1 = max_pool_2d(l1a, (2, 2))
l1 = dropout(l1, p_drop_conv)
l2a = rectify(conv2d(l1, w2))
l2 = max_pool_2d(l2a, (2, 2))
l2 = dropout(l2, p_drop_conv)
l3a = rectify(conv2d(l2, w3))
l3b = max_pool_2d(l3a, (2, 2))
# convert from 4tensor to normal matrix
l3 = T.flatten(l3b, outdim=2)
l3 = dropout(l3, p_drop_conv)
l4 = rectify(T.dot(l3, w4))
l4 = dropout(l4, p_drop_hidden)
pyx = softmax(T.dot(l4, w_o))
return l1, l2, l3, l4, pyx
```
reshape into conv 4tensor (b, c, 0, 1) format
```
X_train2 = X_train.reshape(-1, 1, 28, 28)
X_test2 = X_test.reshape(-1, 1, 28, 28)
# now 4tensor for conv instead of matrix
X = T.ftensor4()
Y = T.fmatrix()
w = init_weights((32, 1, 3, 3))
w2 = init_weights((64, 32, 3, 3))
w3 = init_weights((128, 64, 3, 3))
w4 = init_weights((128 * 3 * 3, 625))
w_o = init_weights((625, 10))
noise_l1, noise_l2, noise_l3, noise_l4, noise_py_x = model(X, w, w2, w3, w4, w_o, 0.2, 0.5)
l1, l2, l3, l4, py_x = model(X, w, w2, w3, w4, w_o, 0., 0.)
y_x = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(noise_py_x, Y))
params = [w, w2, w3, w4, w_o]
updates = RMSprop(cost, params, lr=0.001)
train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_x, allow_input_downcast=True)
t0 = dt.datetime.now()
errors = []
for i in range(100):
t1 = dt.datetime.now()
for start, end in zip(range(0, X_train.shape[0], 128),
range(128, X_train.shape[0], 128)):
cost = train(X_train2[start:end], y_train2[start:end])
errors.append((np.mean(y_train != predict(X_train2)),
np.mean(y_test != predict(X_test2))))
print(i, errors[-1])
print('Current iter time: ', (dt.datetime.now()-t1).seconds / 60.)
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
print('Total time: ', (dt.datetime.now()-t0).seconds / 60.)
res = np.array(errors)
plt.plot(np.arange(res.shape[0]), res[:, 0], label='train error')
plt.plot(np.arange(res.shape[0]), res[:, 1], label='test error')
plt.legend()
```
# Even more complex networks
## GoogLeNet

[examples](http://www.csc.kth.se/~roelof/deepdream/bvlc_googlenet.html)
```
#Applying transfer learning for horses and humans
#Importing the libraries
import os
import zipfile
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
# Download the inception v3 weights
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
-O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
from tensorflow.keras.applications.inception_v3 import InceptionV3
local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
pre_trained_model = InceptionV3(
input_shape = (300,300,3),
include_top = False,
weights = None
)
pre_trained_model.load_weights(local_weights_file)
for layer in pre_trained_model.layers:
layer.trainable = False
pre_trained_model.summary()
last_layer = pre_trained_model.get_layer('mixed7')
print("last layer output shape : ", last_layer.output_shape)
last_output = last_layer.output
# Define a Callback class that stops training once accuracy reaches 98%
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('acc') > 0.98:
            print("\nReached 98% accuracy so cancelling training!")
            self.model.stop_training = True
callbacks = myCallback()
#Building our model
from tensorflow.keras.optimizers import RMSprop
x = layers.Flatten()(last_output)
x = layers.Dense(1024,activation = 'relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(1,activation='sigmoid')(x)
model = Model(pre_trained_model.input,x)
model.compile(optimizer = RMSprop(0.0001),loss='binary_crossentropy',metrics=['acc'])
model.summary()
#Preparing the dataset
# Get the Horse or Human dataset
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip -O /tmp/horse-or-human.zip
# Get the Horse or Human Validation dataset
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip -O /tmp/validation-horse-or-human.zip
from tensorflow.keras.preprocessing.image import ImageDataGenerator
local_zip = '/tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/training')
zip_ref.close()
local_zip = '/tmp/validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation')
zip_ref.close()
train_dir = '/tmp/training'
validation_dir = '/tmp/validation'
train_horses_dir = os.path.join(train_dir,'horses')
train_humans_dir = os.path.join(train_dir,'humans')
validation_horses_dir = os.path.join(validation_dir,'horses')
validation_humans_dir = os.path.join(validation_dir,'humans')
train_horses_fnames = os.listdir(train_horses_dir)
train_humans_fnames = os.listdir(train_humans_dir)
validation_horses_fnames = os.listdir(validation_horses_dir)
validation_humans_fnames = os.listdir(validation_humans_dir)
print(len(train_horses_fnames))
print(len(train_humans_fnames))
print(len(validation_horses_fnames))
print(len(validation_humans_fnames))
#pre-processing the data
train_datagen = ImageDataGenerator(
rescale = 1/255,
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True,
fill_mode = 'nearest'
)
train_generator = train_datagen.flow_from_directory(
train_dir,
batch_size = 20,
class_mode = 'binary',
target_size = (300,300)
)
test_datagen = ImageDataGenerator(rescale = 1/255)
validation_generator = test_datagen.flow_from_directory(
validation_dir,
batch_size = 20,
class_mode = 'binary',
target_size = (300,300)
)
history = model.fit(
    train_generator,
    epochs = 20,
    validation_data = validation_generator,
    callbacks = [callbacks]
)
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.show()
model.save("mymodel.h5")
```
| github_jupyter |
# Laboratory 09 - Digital and IR Imaging
## MAE 3120, Spring 2020
## Grading Rubric
Procedures, Results, Plots, Tables - 60%
Discussion Questions - 30%
Neatness - 10%
## Introduction and Background
Due to this special semester and the abundance of snow days, this lab was created to introduce you to some advanced instrumentation that we did not have time to cover otherwise. Here we will give a brief introduction to high-speed imaging, synchronization, and thermal imaging.
<img src="img/EMSpectrumcolor.jpg" align="center">
The high-speed camera we will be using in this lab responds to the same spectrum as the human eye, between 400 and 700 nm (actually a bit broader for the camera). The infrared camera is responsive between 7.5 and 13 µm.
### Principles of high-speed imaging
High-speed cameras have allowed tremendous discoveries into complex physical processes. Digital cameras come in two main varieties: CCD and CMOS. The active area over which the signal is recorded is called a pixel. Due to their different architectures, CMOS cameras are able to reach higher frame rates (frames per second, fps) than CCDs. They can now go as fast as 20,000 fps at 1 Megapixel! Here you will use a state-of-the-art scientific camera (IDT NX3-S3) with a CMOS sensor that can record images at full frame (1280 × 1024 pixels) at 2,500 fps. In digital cameras, each pixel is connected to an ADC, with bit depth ranging between 8 and 24 bits. The IDT camera has a 10 bit sensor, i.e. the intensity of each pixel is discretized into $2^{10} = 1024$ values.
The recorded pixel intensity is proportional to the exposure time (the time for which a pixel is active, or integrating), the lens numerical aperture or f-stop, and the sensitivity of the sensor. See your textbook (appendix B) for a good overview of numerical aperture and sensitivity. Here we will focus on exposure time. To record dynamic systems, motion blur should be minimized and hence the exposure time minimized, while still using the sensor dynamic range (number of bits) in an optimal manner. With this camera, the exposure time can be decreased down to 1 µs. To freeze the motion of fast-moving objects, it is sometimes beneficial to bring in an external light source, such as a strobe light or a laser. In this lab, you will experiment with pulsed LEDs to do so.
The frame rate (how many images per second are acquired) is optimized for the problem at hand. Since cameras have a limited amount of memory onboard, the experimentalist has to compromise between recording time and temporal resolution.
### Principles of instrument synchronization with TTL pulses
Transistor-transistor logic (TTL) is a class of digital signals used in digital circuits built from bipolar junction transistors (BJTs). TTL is widespread in integrated circuits and used in many applications, such as computers, instruments, cell phones, etc. Due to its wide use, TTL logic levels are also employed in circuits with no TTL integrated circuits. In fact, the logic levels are used to create digital words (i.e. bytes) and to synchronize instruments. The binary logic levels are presented below:
|   | Logic Level | Voltage   |                 |
|:-:|:-----------:|:---------:|:---------------:|
| 1 | 0           | 0 - 0.8 V |                 |
| 2 | 1           | 2 V - Vcc | Vcc = 5 V ± 10% |
Hence any voltage between 0 and 0.8 V corresponds to a bit of 0, and any voltage between 2 and 5 V to a bit of 1. From this, it can be seen that digital words will look like square waves of varying duty cycle. The same protocol is also used for synchronizing and triggering instruments. However, to push the resolution of the clock down to sub-ns, precise timing is based on either the rising or the falling edge of TTL pulses.
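As a toy sketch (not part of the lab procedure; the function name and the treatment of the 0.8-2 V gap as undefined are assumptions of this example), the thresholds above can be expressed in code:

```python
def ttl_level(voltage):
    """Classify a voltage (in volts) against the TTL thresholds above."""
    if 0.0 <= voltage <= 0.8:
        return 0                  # logic low
    if 2.0 <= voltage <= 5.5:     # up to Vcc = 5 V + 10%
        return 1                  # logic high
    return None                   # undefined region or out of range

print(ttl_level(0.4), ttl_level(3.3), ttl_level(1.5))  # 0 1 None
```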
Both cameras and LEDs can be synchronized and triggered through special timing units, called time delay generators or timing boards.
### Principles of IR imaging
Infra-red imagers measure how the radiative properties of an object change with temperature. Hence, radiation from an object is used to infer the temperature of the object. The advantages of radiation pyrometry are that it is a non-contact measurement, the sensor has a very fast response time, and very high temperatures can be detected.
The fundamental equation for radiation from a body is the Stefan-Boltzmann equation:
$$E = \epsilon \sigma T^4$$
Where $E$ is the emissive power radiated per unit area (W/m<sup>2</sup>), $\epsilon$ is the emissivity (defined as a fraction of radiation emitted from a body, it varies between $0$ and $1$ and is $1$ for a blackbody), $\sigma$ is the Stefan-Boltzmann constant, $\sigma = 5.669 \times 10^{-8} \frac{\textit{W}}{\textit{m}^2\textit{K}^4}$, and $T$ is the surface absolute thermodynamic temperature in K. Here are tables of emissivity for common surfaces.
| Surface | Aluminum (polished) | Aluminum (anodized) | Glass | Water (deep) | Asphalt | Human Skin |
|:------------------------:|:-------------------:|:-------------------:|:-----------:|:------------:|:-----------:|:----------:|
| *Emissivity, $\epsilon$* | 0.03 | 0.84 | 0.62 - 0.95 | 0.95 | 0.85 - 0.93 | 0.97 |
The IR camera you will use in the lab (FLIR A15) is capable of recording at 60 fps at 160 × 128 pixels. It can detect temperatures between -40°C and 160°C with a sensitivity of 0.05 K. The ADC is 14 bit.
The camera is calibrated for an emissivity of 0.95; hence, when you perform temperature measurements with it, you have to correct for emissivity or your results will be wrong. The indicated temperature, $T_{ind}$, is related to the voltage of each pixel with an assumed emissivity, $\epsilon_{assumed} = 0.95$. Given the actual emissivity of a body, $\epsilon_{actual}$, the body temperature, $T_H$, is corrected for by:
$$T_H = \left(\frac{\epsilon_{assumed}}{\epsilon_{actual}}\right)^\frac{1}{4} T_{ind}$$
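As a quick numerical sketch of this correction (the values below are illustrative, not lab data), remember that the temperatures in the formula are absolute (in K):

```python
# Illustrative emissivity correction; example values, not lab data.
eps_assumed = 0.95    # emissivity assumed by the camera calibration
eps_actual = 0.97     # e.g. human skin, from the table above
t_ind = 300.0         # indicated temperature, in kelvin

t_h = (eps_assumed / eps_actual) ** 0.25 * t_ind
print(round(t_h, 1))  # 298.4 -- slightly lower than indicated
```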
## Equipment
- Instrumentation carts with oscilloscope<br><p></p>
- USB flash drive to download data from the camera computers<br><p></p>
- IDT NX3-S3 camera with computer (brought by TA)<br><p></p>
- IDT Constellation 120 LED strobe lights (brought by TA)<br><p></p>
- FLIR A15 camera (brought by TA)<br><p></p>
- Tripods (brought by TA)
## Procedure
For this lab you will be rotating between stations where you will get to manipulate the equipment and discover its behavior. For some stations you will have to work with another group; use your time between runs to start answering the discussion questions. You will also not follow the order of the procedure chronologically.
### Part I - Principles of high-speed imaging
With another group (the TA will call you), you will now experiment with the fundamentals of high-speed imaging.
Here one student will be operating the catapult while the others will record images of the sequence.
**Preliminary question**: Motion blur occurs when the object has moved across the sensor by at least one pixel during the exposure. Assume that the object is a 2 cm diameter sphere, the diameter is imaged over 20 pixels, and it moves at 1 m/s. What should be the maximum exposure time so that there is no motion blur?
1. The camera software is “Motion Studio x64”. On the right panel you can select the frame rate, exposure time, and more advanced options, such as the gain of the sensor, which we will not use here.<br><p></p>
2. To help you focus the camera lens, set the system in live mode by pressing the “play” arrow. To be transferred over the Ethernet cable, the camera acquires data at a lower frame rate, but the exposure time is correct, so you can optimize your settings here.<br><p></p>
3. Focus the camera lens on the hand of the person holding the ball, and optimize for brightness.<br><p></p>
4. Once you have a good set of settings, note them and acquire data by pressing the “record” button. The camera is now continuously recording and is waiting for your trigger signal (the “check mark” button next to play) to start acquiring the data.<br><p></p>
5. Press record and trigger, and look at the sequence under the playback tab. If you are not happy with the results, either the ball has moved too much between frames (increase the frame rate) or the ball is blurry (decrease the exposure time).<br><p></p>
6. Restart at *Step 4*, noting the parameters each time so you do not perform the same test twice, until you have good values that you will report in your lab report. When you restart in live mode, you erase the images saved in the internal memory and the software will ask you to confirm you meant to do so. Comment on the brightness of the system.<br><p></p>
7. To save the data, press the Save symbol on the upper left. To save on memory and transfer time, only select the images of interest in the sequence. Save the data straight to your thumb drive.
### Part II - Instrument Synchronization
With the same groups, you will now synchronize the camera to a strobe light to obtain brighter images at very short exposure times. This can be accomplished in a very crude way with a powerful flood light; however, in some applications the heat generated by the lights will be detrimental to the system, and it is preferred to use pulsed lights.
1. Connect a BNC cable from the “sync out” output of the camera (square wave symbol with an arrow coming out) and monitor the signal on the oscilloscope while the camera is recording. What kind of signal do you observe? What is its amplitude?<br><p></p>
2. Note the frame rate and the time the square wave is in the high position. To which parameters do they correspond on the camera control panel?<br><p></p>
3. Now connect the “sync out” of the camera to the “sync in” of the LEDs, making sure they are set to “pulsed” mode. Hence, every time you record an image and send a TTL signal through the camera “sync out”, the LEDs will turn on.<br><p></p>
4. With the LEDs pointing at your scene, what is the minimum exposure time you can now accomplish while maintaining a good dynamic range?
### Part III - Principles of IR imaging
With another group (the TA will call you), you will now experiment with the fundamentals of thermal imaging. The camera can be operated by clicking on the “PvSimpleUISample” link on the desktop.
1. Test the effect of emissivity with the IR camera. For this, point the camera at the intersection of the whiteboard and the wall; i.e. you should have both in your field of view. Take a snapshot with the computer. What do you observe? Explain why.<br><p></p>
2. Some materials reflect thermal radiation (similar to a mirror reflecting visible light). Move the camera around the lab until you find a material that reflects thermal radiation well. What is the material? Are you surprised? Explain why such materials are used in greenhouses.
# Discussion Questions
1. What is the frame rate typically used in movies?<br><p></p>
2. Give the resolution in pixels (horizontal × vertical) of 1080p and 4K TV. At the movie frame rate you found above, to which transfer rate does this correspond? Assume there is no compression and each pixel has 24 bit depth (8 bits/color). Express your values in bit/s. Compare this to the USB 2, USB 3, and Ethernet protocols.<br><p></p>
3. With the high-speed camera, if a pixel is at 1024, then it is “saturated”. Which term did we use when talking about digital signals in general? What should be the maximum intensity value you recommend using on each pixel?<br><p></p>
4. You are using a thermal imager to record the temperature of a human body. By mistake the emissivity has been set to 0.15. What temperature would you read for a healthy person?<br><p></p>
5. You saw that reflections on IR imagers can lead to spurious measurements. Explain how this could compromise your measurements if you are inspecting electrical systems for abnormal heat sources or hot spots, which would be an indication of faults. How could you detect whether a hot spot is a reflection or not?<br><p></p>
6. Describe a calibration procedure for the thermal imager (to relate brightness to temperature).<br><p></p>
| github_jupyter |
# License
***
Copyright (C) 2017 J. Patrick Hall, jphall@gwu.edu
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
***
# **Basic** Gradient Descent for Multiple Linear Regression
```
# imports
import pandas as pd # import pandas for easy data manipulation using data frames
import numpy as np # import numpy for numeric calculations on matrices
import time # for timers
# import h2o to check calculations
import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
```
#### Assign global constants
```
# data-related constants
IN_FILE_PATH = '../data/loan_clean.csv'
Y = 'STD_IMP_REP_loan_amnt'
DROPS = ['id', 'GRP_REP_home_ownership', 'GRP_addr_state', 'GRP_home_ownership',
'GRP_purpose', 'GRP_verification_status', '_WARN_']
# model-related constants
LEARN_RATE = 0.005 # how much each gradient descent step impacts parameters
CONV = 1e-10 # desired precision in parameters
MAX_ITERS = 10000 # maximum number of gradient descent steps to allow
```
### Import clean data and convert to numpy matrices
```
# import data using Pandas
raw = pd.read_csv(IN_FILE_PATH)
# select target column
y = raw[Y].to_numpy()  # .as_matrix() was removed in newer pandas
print(y)
# create input matrix
# add an additional column of 1's for intercept
# by overlaying inputs onto matrix of 1's
numeric = raw.drop(DROPS + [Y], axis=1).to_numpy()
N, p = numeric.shape
X = np.ones(shape=(N, p + 1))
X[:,1:] = numeric
print(X)
```
### Basic Gradient Descent Routines
#### Define squared loss function
* For linear regression, we minimize the squared distance between the regression plane and points in the conditional distribution of **y** given **X**.
* It is convenient to use a scaled mean squared error (MSE) formula:
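Written out, with $n$ rows and predictions $\hat{y} = X\beta$, the scaled MSE being minimized is:

$$L(\beta) = \frac{1}{2n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$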
```
def squared_loss(n, x, y, betas):
    """ Squared loss function for multiple linear regression.

    :param n: Number of rows in x.
    :param x: Matrix of numeric inputs.
    :param y: Vector of known target values.
    :param betas: Vector of current model parameters.
    :return: Scalar MSE value.
    """
    yhat = x.dot(betas)
    return ((1 / (2 * n)) * (y - yhat)**2).sum()
```
#### Define gradient of loss function
* The derivative of the loss function w.r.t. the model parameters is used to update the model parameters at each gradient descent step.
* The gradient of our MSE loss function is trivial:
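In symbols, the gradient with respect to parameter $\beta_j$ (the value returned by `grad` times the column $x_j$, then summed) is:

$$\frac{\partial L}{\partial \beta_j} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)x_{ij}$$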
```
def grad(n, y, yhat):
    """ Analytical gradient of scaled MSE loss function.

    :param n: Number of rows in X.
    :param y: Vector of known target values.
    :param yhat: Vector of predicted target values.
    :return: Vector of gradient values.
    """
    return ((1 / n)*(yhat - y))
```
#### Define function for executing gradient descent minimization
For each gradient descent step:
* Predictions are made using the current model parameters.
* The gradient is calculated for each model parameter.
* The gradient is used in combination with the learning rate to update each parameter.
```
def grad_descent(X, y, learn_rate, max_iters, sgd_mini_batch_n=0):
    """ Routine for executing simple gradient descent with stochastic gradient descent option.

    :param X: Matrix of numeric data.
    :param y: Vector of known target values.
    :param learn_rate: Learning rate.
    :param max_iters: Maximum number of gradient descent steps to perform.
    :param sgd_mini_batch_n: Minibatch size for sgd optimization.
                             If > 0 minibatch stochastic gradient descent is performed.
    """

    tic = time.time()                # start timer
    n_betas = X.shape[1]             # number of model parameters including bias
    betas = np.zeros(shape=n_betas)  # parameters start with value of 0
    n = y.shape[0]                   # number of rows in X

    # Pandas dataframe for iteration history
    iteration_frame = pd.DataFrame(columns=['Iteration', 'Loss'])
    print('Iteration history:')

    # loop for gradient descent steps
    for i in range(max_iters):

        # stochastic gradient descent
        if sgd_mini_batch_n > 0:
            samp_idx = np.random.randint(n, size=sgd_mini_batch_n)
            X_samp = X[samp_idx, :]
            y_samp = y[samp_idx]
            n_samp = X_samp.shape[0]
            yhat_samp = X_samp.dot(betas)  # model predictions for iteration

            # loop for column-wise parameter updates:
            # select column, calculate column-wise gradient,
            # update corresponding parameter based on negative gradient
            for j in range(n_betas):
                xj_samp = X_samp[:, j]
                xj_grad_samp = grad(n_samp, y_samp, yhat_samp) * xj_samp
                betas[j] = betas[j] - learn_rate * xj_grad_samp.sum()

            # calculate loss
            iter_loss = squared_loss(n_samp, X_samp, y_samp, betas)

        # standard gradient descent
        else:
            yhat = X.dot(betas)  # model predictions for iteration

            # loop for column-wise parameter updates
            for j in range(n_betas):
                xj = X[:, j]
                xj_grad = grad(n, y, yhat) * xj
                betas[j] = betas[j] - learn_rate * xj_grad.sum()

            # calculate loss
            iter_loss = squared_loss(n, X, y, betas)

        # update loss history (DataFrame.append was removed in pandas 2; use concat)
        iteration_frame = pd.concat([iteration_frame,
                                     pd.DataFrame([{'Iteration': i, 'Loss': iter_loss}])],
                                    ignore_index=True)

        # progress indicator
        if i % 1000 == 0:
            print('iter=%d loss=%.6f' % (i, iter_loss))

        # convergence check
        if i > 0:
            if np.abs(iteration_frame.iat[i-1, 1] - iteration_frame.iat[i, 1]) < CONV:
                break

    # output
    %matplotlib inline
    iteration_frame.plot.line(title='Iteration Plot', x='Iteration', y='Loss')
    print()
    print('Model parameters at iteration ' + str(i) + ':')
    print(betas)
    print()
    print('Model trained in %.2f s.' % (time.time()-tic))
```
#### Execute gradient descent
```
grad_descent(X, y, LEARN_RATE, MAX_ITERS)
```
#### Execute stochastic gradient descent
```
grad_descent(X, y, LEARN_RATE, MAX_ITERS, sgd_mini_batch_n=1000)
```
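As an optional sanity check (synthetic data; not part of the original exercise), the same scaled-MSE gradient, vectorized across all columns at once, recovers known parameters on a noiseless problem:

```python
import numpy as np

# Synthetic sanity check: recover known parameters with the same
# scaled-MSE gradient, vectorized across all columns at once.
rng = np.random.default_rng(0)
n_demo = 200
X_demo = np.ones((n_demo, 2))          # column of 1's for the intercept
X_demo[:, 1] = rng.normal(size=n_demo)
true_betas = np.array([2.0, -3.0])
y_demo = X_demo.dot(true_betas)        # noiseless targets

betas_demo = np.zeros(2)
for _ in range(2000):
    yhat = X_demo.dot(betas_demo)
    # gradient of (1/(2n)) * sum((y - yhat)**2) w.r.t. betas
    grad_vec = X_demo.T.dot(yhat - y_demo) / n_demo
    betas_demo -= 0.1 * grad_vec

print(betas_demo)  # close to [2, -3]
```

The matrix product `X_demo.T.dot(yhat - y_demo) / n_demo` performs the same column-wise update as the `for j in range(n_betas)` loop above, just in one step.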
### Use h2o to check model parameters
```
# start h2o
h2o.init()
DROPS = ['id', 'GRP_REP_home_ownership', 'GRP_addr_state', 'GRP_home_ownership',
'GRP_purpose', 'GRP_verification_status', '_WARN_']
# numeric columns
train = h2o.import_file(IN_FILE_PATH)
train = train.drop(DROPS)
X = train.col_names
X.remove(Y)  # the response column should not be among the predictors
# initialize non-penalized GLM model
loan_glm = H2OGeneralizedLinearEstimator(family='gaussian', # uses squared error
solver='IRLSM', # necessary for non-penalized GLM
standardize=False, # data is already standardized
compute_p_values=True, # necessary for non-penalized GLM
lambda_=0) # necessary for non-penalized GLM
# train
loan_glm.train(X, Y, training_frame=train)
# print trained model info
print()
print('Model parameters:')
for name, val in loan_glm.coef().items():
print(name, val)
print()
# shutdown h2o
h2o.cluster().shutdown()
```
| github_jupyter |
# Analyze a large dataset with Google BigQuery
**Learning Objectives**
1. Access an ecommerce dataset
1. Look at the dataset metadata
1. Remove duplicate entries
1. Write and execute queries
## Introduction
BigQuery is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without having any infrastructure to manage or needing a database administrator. BigQuery uses SQL and can take advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
We have a publicly available ecommerce dataset that has millions of Google Analytics records for the Google Merchandise Store loaded into a table in BigQuery. In this lab, you use a copy of that dataset. Sample scenarios are provided, from which you look at the data and ways to remove duplicate information. The lab then steps you through further analysis of the data.
BigQuery can be accessed by its own browser-based interface, Google Data Studio, and many third party tools. In this lab you will use BigQuery directly in notebook cells using the IPython magic command `%%bigquery`.
The steps you will follow in the lab are analogous to what you would do to prepare data for use in advanced ML operations. You will follow the notebook to experiment with the BigQuery queries provided to analyze the data.
### Set up the notebook environment
__VERY IMPORTANT__: In the cell below you must replace the text `<YOUR PROJECT>` with your GCP project id.
```
import os
import pandas as pd
PROJECT = "<YOUR PROJECT>" #TODO Replace with your project id
os.environ["PROJECT"] = PROJECT
pd.options.display.max_columns = 50
```
## Explore eCommerce data and identify duplicate records
Scenario: You were provided with Google Analytics logs for an eCommerce website in a BigQuery dataset. The data analyst team created a new BigQuery table of all the raw eCommerce visitor session data. This data tracks user interactions, location, device types, time on page, and details of any transaction. Your ultimate plan is to use this data in an ML capacity to create a model that delivers highly accurate predictions of user behavior to support tailored marketing campaigns.
First, a few notes on BigQuery within a python notebook context. Any cell that starts with `%%bigquery` (the BigQuery Magic) will be interpreted as a SQL query that is executed on BigQuery, and the result is printed to our notebook.
BigQuery supports [two flavors](https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql#comparison_of_legacy_and_standard_sql) of SQL syntax: legacy SQL and standard SQL. The preferred is standard SQL because it complies with the official SQL:2011 standard. To instruct BigQuery to interpret our syntax as such we start the query with `#standardSQL`.
Our first query is accessing the BigQuery Information Schema which stores all object-related metadata. In this case we want to see metadata details for the "all_sessions_raw" table.
Tip: To run the current cell you can click the cell and hit **shift enter**
```
%%bigquery --project $PROJECT
#standardSQL
SELECT *
EXCEPT
(table_catalog, table_schema, is_generated, generation_expression, is_stored,
is_updatable, is_hidden, is_system_defined, is_partitioning_column, clustering_ordinal_position)
FROM `data-to-insights.ecommerce.INFORMATION_SCHEMA.COLUMNS`
WHERE table_name="all_sessions_raw"
```
Next examine how many rows are in the table.
```
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*)
FROM `data-to-insights.ecommerce.all_sessions_raw`
```
Now take a quick look at a few rows of data in the table.
```
%%bigquery --project $PROJECT
#standardSQL
SELECT *
FROM `data-to-insights.ecommerce.all_sessions_raw`
LIMIT 7
```
### Identify duplicate rows
Seeing a sample of the data may give you greater intuition for what is included in the dataset. But since the table is quite large, a preview is not likely to render meaningful results. As you scan and scroll through the sample rows, you see there is no single field that uniquely identifies a row, so you need more advanced logic to identify duplicate rows.
The query below uses the SQL GROUP BY function on every field and counts (COUNT) where there are rows that have the same values across every field.
If every field is unique, the COUNT will return 1 as there are no other groupings of rows with the exact same value for all fields.
If there is a row with the same values for all fields, they will be grouped together and the COUNT will be greater than 1. The last part of the query is an aggregation filter using HAVING to only show the results that have a COUNT of duplicates greater than 1.
Run the following query to find duplicate records across all columns.
```
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*) AS num_duplicate_rows,
*
FROM `data-to-insights.ecommerce.all_sessions_raw`
GROUP BY fullvisitorid,
channelgrouping,
time,
country,
city,
totaltransactionrevenue,
transactions,
timeonsite,
pageviews,
sessionqualitydim,
date,
visitid,
type,
productrefundamount,
productquantity,
productprice,
productrevenue,
productsku,
v2productname,
v2productcategory,
productvariant,
currencycode,
itemquantity,
itemrevenue,
transactionrevenue,
transactionid,
pagetitle,
searchkeyword,
pagepathlevel1,
ecommerceaction_type,
ecommerceaction_step,
ecommerceaction_option
HAVING num_duplicate_rows > 1;
```
As you can see there are quite a few "duplicate" records (615) when analyzed with these parameters.
In your own datasets, even if you have a unique key, it is still beneficial to confirm the uniqueness of the rows with COUNT, GROUP BY, and HAVING before you begin your analysis.
## Analyze the new all_sessions table
In this section you use a deduplicated table called all_sessions.
Scenario: Your data analyst team has provided you with a relevant query, and your schema experts have identified the key fields that must be unique for each record per your schema.
Run the query to confirm that no duplicates exist, this time against the "all_sessions" table:
```
%%bigquery --project $PROJECT
#standardSQL
SELECT fullvisitorid, # the unique visitor ID
visitid, # a visitor can have multiple visits
date, # session date stored as string YYYYMMDD
time, # time of the individual site hit (can be 0 or more)
v2productname, # not unique since a product can have variants like Color
productsku, # unique for each product
type, # visit and/or event trigger
ecommerceaction_type, # maps to 'add to cart', 'completed checkout'
ecommerceaction_step,
ecommerceaction_option,
transactionrevenue, # revenue of the order
transactionid, # unique identifier for revenue bearing transaction
count(*) AS row_count
FROM `data-to-insights.ecommerce.all_sessions`
GROUP BY 1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12
HAVING row_count > 1 # find duplicates
```
The query returns zero records indicating no duplicates exist.
## Write basic SQL against the eCommerce data
In this section, you query for insights on the ecommerce dataset.
A good first path of analysis is to find the total number of unique visitors.
The query below determines the total views by counting product_views and the number of unique visitors by counting fullVisitorID.
```
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*) AS product_views,
count(DISTINCT fullvisitorid) AS unique_visitors
FROM `data-to-insights.ecommerce.all_sessions`;
```
The next query shows total unique visitors(fullVisitorID) by the referring site (channelGrouping):
```
%%bigquery --project $PROJECT
#standardSQL
SELECT count(DISTINCT fullvisitorid) AS unique_visitors,
channelgrouping
FROM `data-to-insights.ecommerce.all_sessions`
GROUP BY 2
ORDER BY 2 DESC;
```
To find deeper insights in the data, the next query lists the five products with the most views (product_views) from unique visitors. The query counts number of times a product (v2ProductName) was viewed (product_views), puts the list in descending order, and lists the top 5 entries:
```
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*) AS product_views,
( v2productname ) AS ProductName
FROM `data-to-insights.ecommerce.all_sessions`
WHERE type = 'PAGE'
GROUP BY v2productname
ORDER BY product_views DESC
LIMIT 5;
```
Now expand your previous query to include the total number of distinct products ordered and the total number of total units ordered (productQuantity):
```
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*) AS product_views,
count(productquantity) AS orders,
sum(productquantity) AS quantity_product_ordered,
v2productname
FROM `data-to-insights.ecommerce.all_sessions`
WHERE type = 'PAGE'
GROUP BY v2productname
ORDER BY product_views DESC
LIMIT 5;
```
Lastly, expand the query to include the average amount of product per order (total number of units ordered/total number of orders, or `SUM(productQuantity)/COUNT(productQuantity)`).
```
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*) AS product_views,
count(productquantity) AS orders,
sum(productquantity) AS quantity_product_ordered,
sum(productquantity) / Count(productquantity) AS avg_per_order,
v2productname AS productName
FROM `data-to-insights.ecommerce.all_sessions`
WHERE type = 'PAGE'
GROUP BY v2productname
ORDER BY product_views DESC
LIMIT 5;
```
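As an illustrative aside (toy data, not the real table), the same arithmetic can be reproduced in pandas; like SQL's `COUNT(column)`, pandas' `count()` skips NULL/NaN values, which is why `orders` can be smaller than `product_views`:

```python
import pandas as pd

# Toy data mirroring the query: two page views led to orders, two did not.
df = pd.DataFrame({
    "v2productname": ["Bottle"] * 4,
    "productquantity": [2, None, 4, None],
})
product_views = len(df)                    # COUNT(*)
orders = df["productquantity"].count()     # COUNT(productquantity): non-null only
quantity = df["productquantity"].sum()     # SUM(productquantity)
avg_per_order = quantity / orders
print(product_views, orders, quantity, avg_per_order)  # 4 2 6.0 3.0
```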
You can see that among these top 5 products by product views, the 22 oz YouTube Bottle Infuser had the highest avg_per_order, with 9.38 units per order.
You have completed this lab exercise. In this situation the "all_sessions" table was provided to you with deduplicated records. In the course of your own future analysis you may have to create this yourself using BigQuery and the `create table DATASET.TABLE2 as select * from DATASET.TABLE1` syntax.
Copyright 2019 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| github_jupyter |
<IMG SRC="https://avatars2.githubusercontent.com/u/31697400?s=400&u=a5a6fc31ec93c07853dd53835936fd90c44f7483&v=4" WIDTH=125 ALIGN="right">
# GIS
This notebook shows how to export model data so it can be viewed in GIS (QGIS or ArcMAP).
### Contents<a name="TOC"></a>
1. [Model types](#modeltypes)
2. [Export vector data](#vectordata)
3. [Export raster data](#rasterdata)
4. [Add symbology (QGIS)](#symbology)
```
import nlmod
import os
import flopy
import numpy as np
import geopandas as gpd
import xarray as xr
import matplotlib.pyplot as plt
from shapely.geometry import Polygon
import logging
from IPython.display import FileLink, FileLinks
print(f'nlmod version: {nlmod.__version__}')
# show logging information when calling functions
logging.basicConfig(level=logging.INFO)
```
### [1. Model types](#TOC)<a name="modeltypes"></a>
#### structured grid
```
model_ws = 'model1'
model_name = 'IJmuiden'
model_ds_struc = xr.load_dataset(os.path.join(model_ws, 'cache', 'full_model_ds.nc'))
model_ds_struc
# create gisdir
gisdir_struc = os.path.join(model_ws, 'gis')
if not os.path.exists(gisdir_struc):
os.mkdir(gisdir_struc)
```
#### vertex grid
```
model_ws = 'model3'
model_name = 'IJm_planeten'
model_ds_vert = xr.load_dataset(os.path.join(model_ws, 'cache', 'full_model_ds.nc'))
# get modelgrid
sim = flopy.mf6.MFSimulation.load(sim_name='mfsim.nam', sim_ws=model_ds_vert.model_ws,
load_only=['DISV'])
gwf = sim.get_model(model_ds_vert.model_name)
# get vertices
model_ds_vert['vertices'] = nlmod.mdims.get_vertices(model_ds_vert, modelgrid=gwf.modelgrid)
# create gisdir
gisdir_vert = os.path.join(model_ws, 'gis')
if not os.path.exists(gisdir_vert):
os.mkdir(gisdir_vert)
model_ds_vert
```
### [2. Export vector data](#TOC)<a name="vectordata"></a>
#### structured
```
# write model data to a geopackage
fname_geopackage = nlmod.visualise.gis.model_dataset_to_vector_file(model_ds_struc, gisdir=gisdir_struc)
# get download link
FileLink(fname_geopackage, result_html_prefix='click here to download -> ')
# write model data to multiple shapefiles
fnames = nlmod.visualise.gis.model_dataset_to_vector_file(model_ds_struc, driver='ESRI Shapefile', gisdir=gisdir_struc)
# get download link
FileLinks(gisdir_struc, included_suffixes='.shp')
```
#### vertex grid
```
model_ds_vert
# write model data to a geopackage
fname_geopackage = nlmod.visualise.gis.model_dataset_to_vector_file(model_ds_vert, gisdir=gisdir_vert)
# write model data to multiple shapefiles
nlmod.visualise.gis.model_dataset_to_vector_file(model_ds_vert, driver='ESRI Shapefile', gisdir=gisdir_vert)
```
### [3. Export raster data](#TOC)<a name="rasterdata"></a>
TBA
### [4. Add symbology (QGIS)](#TOC)<a name="symbology"></a>
It is always nice to have automatic symbology for your vector data. Some thoughts:
- QGIS can save the symbology of a single shapefile in a .qml file.
- In QGIS you can add a .qml file to a geopackage, thus saving the symbology to a single file.
- You can create a .qml file in QGIS from existing symbology.
- A .qml file is an XML file, so it is theoretically possible to modify a .qml file with Python based on the properties of the data.
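As a sketch of that last point (not part of nlmod), a .qml file can be edited with the standard library's XML tools. The function below assumes the older QGIS style format, where symbol properties are stored as `<prop k="..." v="..."/>` elements; file names and the color value are hypothetical:

```python
import xml.etree.ElementTree as ET

def set_fill_color(qml_in, qml_out, new_color):
    """Set the fill color of every symbol layer in a QGIS .qml style file.

    Assumes the pre-QGIS-3.16 style format with <prop k="color" v="r,g,b,a"/>
    elements; newer versions use <Option> elements instead.
    """
    tree = ET.parse(qml_in)
    for prop in tree.getroot().iter("prop"):
        if prop.get("k") == "color":
            prop.set("v", new_color)  # e.g. "255,0,0,255"
    tree.write(qml_out)
```

This only changes attribute values, so the rest of the style (outlines, labels, classification) survives the round trip unchanged.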
Some limitations of the current GIS functions:
- when exporting shapefiles, attribute names cannot be longer than 10
  characters. Currently fiona's automatic name shortening is applied, which
  is not optimal.
- when exporting data variables with a time dimension, only the mean values
  over time are exported to the shapefile, to avoid extremely large files.
# Chapter 11. Working with Unlabeled Data: Cluster Analysis
**You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) via the links below.**
<table class="tfo-notebook-buttons" align="left">
 <td>
  <a target="_blank" href="https://nbviewer.jupyter.org/github/rickiepark/python-machine-learning-book-2nd-edition/blob/master/code/ch11/ch11.ipynb"><img src="https://jupyter.org/assets/main-logo.svg" width="28" />View in Jupyter Notebook Viewer</a>
 </td>
 <td>
  <a target="_blank" href="https://colab.research.google.com/github/rickiepark/python-machine-learning-book-2nd-edition/blob/master/code/ch11/ch11.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
 </td>
</table>
`watermark` is a utility for printing the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment and run the following cell.
```
#!pip install watermark
%load_ext watermark
%watermark -u -d -v -p numpy,pandas,matplotlib,scipy,sklearn
```
# Grouping similar objects using the k-means algorithm
## K-means clustering using scikit-learn
```
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=150,
n_features=2,
centers=3,
cluster_std=0.5,
shuffle=True,
random_state=0)
import matplotlib.pyplot as plt
plt.scatter(X[:, 0], X[:, 1],
c='white', marker='o', edgecolor='black', s=50)
plt.grid()
plt.tight_layout()
plt.show()
from sklearn.cluster import KMeans
km = KMeans(n_clusters=3,
init='random',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
plt.scatter(X[y_km == 0, 0],
X[y_km == 0, 1],
s=50, c='lightgreen',
marker='s', edgecolor='black',
label='cluster 1')
plt.scatter(X[y_km == 1, 0],
X[y_km == 1, 1],
s=50, c='orange',
marker='o', edgecolor='black',
label='cluster 2')
plt.scatter(X[y_km == 2, 0],
X[y_km == 2, 1],
s=50, c='lightblue',
marker='v', edgecolor='black',
label='cluster 3')
plt.scatter(km.cluster_centers_[:, 0],
km.cluster_centers_[:, 1],
s=250, marker='*',
c='red', edgecolor='black',
label='centroids')
plt.legend(scatterpoints=1)
plt.grid()
plt.tight_layout()
plt.show()
```
## Using the elbow method to find the optimal number of clusters
```
print('Distortion: %.2f' % km.inertia_)
distortions = []
for i in range(1, 11):
km = KMeans(n_clusters=i,
init='k-means++',
n_init=10,
max_iter=300,
random_state=0)
km.fit(X)
distortions.append(km.inertia_)
plt.plot(range(1, 11), distortions, marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Distortion')
plt.tight_layout()
plt.show()
```
## Quantifying the quality of clustering via silhouette plots
```
import numpy as np
from matplotlib import cm
from sklearn.metrics import silhouette_samples
km = KMeans(n_clusters=3,
init='k-means++',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
cluster_labels = np.unique(y_km)
n_clusters = cluster_labels.shape[0]
silhouette_vals = silhouette_samples(X, y_km, metric='euclidean')
y_ax_lower, y_ax_upper = 0, 0
yticks = []
for i, c in enumerate(cluster_labels):
c_silhouette_vals = silhouette_vals[y_km == c]
c_silhouette_vals.sort()
y_ax_upper += len(c_silhouette_vals)
color = cm.jet(float(i) / n_clusters)
plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0,
edgecolor='none', color=color)
yticks.append((y_ax_lower + y_ax_upper) / 2.)
y_ax_lower += len(c_silhouette_vals)
silhouette_avg = np.mean(silhouette_vals)
plt.axvline(silhouette_avg, color="red", linestyle="--")
plt.yticks(yticks, cluster_labels + 1)
plt.ylabel('Cluster')
plt.xlabel('Silhouette coefficient')
plt.tight_layout()
plt.show()
```
Bad clustering:
```
km = KMeans(n_clusters=2,
init='k-means++',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
plt.scatter(X[y_km == 0, 0],
X[y_km == 0, 1],
s=50,
c='lightgreen',
edgecolor='black',
marker='s',
label='cluster 1')
plt.scatter(X[y_km == 1, 0],
X[y_km == 1, 1],
s=50,
c='orange',
edgecolor='black',
marker='o',
label='cluster 2')
plt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1],
s=250, marker='*', c='red', label='centroids')
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
cluster_labels = np.unique(y_km)
n_clusters = cluster_labels.shape[0]
silhouette_vals = silhouette_samples(X, y_km, metric='euclidean')
y_ax_lower, y_ax_upper = 0, 0
yticks = []
for i, c in enumerate(cluster_labels):
c_silhouette_vals = silhouette_vals[y_km == c]
c_silhouette_vals.sort()
y_ax_upper += len(c_silhouette_vals)
color = cm.jet(float(i) / n_clusters)
plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0,
edgecolor='none', color=color)
yticks.append((y_ax_lower + y_ax_upper) / 2.)
y_ax_lower += len(c_silhouette_vals)
silhouette_avg = np.mean(silhouette_vals)
plt.axvline(silhouette_avg, color="red", linestyle="--")
plt.yticks(yticks, cluster_labels + 1)
plt.ylabel('Cluster')
plt.xlabel('Silhouette coefficient')
plt.tight_layout()
plt.show()
```
# Organizing clusters as a hierarchical tree
## Grouping clusters in a bottom-up fashion
```
import pandas as pd
import numpy as np
np.random.seed(123)
variables = ['X', 'Y', 'Z']
labels = ['ID_0', 'ID_1', 'ID_2', 'ID_3', 'ID_4']
X = np.random.random_sample([5, 3])*10
df = pd.DataFrame(X, columns=variables, index=labels)
df
```
## Performing hierarchical clustering on a distance matrix
```
from scipy.spatial.distance import pdist, squareform
row_dist = pd.DataFrame(squareform(pdist(df, metric='euclidean')),
columns=labels,
index=labels)
row_dist
```
According to its documentation, the `linkage` function accepts a condensed distance matrix (the upper triangular part) computed by `pdist` as input. Alternatively, you can pass the initial data array to `linkage` and use `metric='euclidean'` as a parameter. However, the squareform distance matrix created earlier with the `squareform` function must not be used, because it is not what the `linkage` function expects.
```
# 1. incorrect approach: squareform distance matrix
from scipy.cluster.hierarchy import linkage
row_clusters = linkage(row_dist, method='complete', metric='euclidean')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2',
'distance', 'no. of items in clust.'],
index=['cluster %d' % (i + 1)
for i in range(row_clusters.shape[0])])
# 2. correct approach: condensed distance matrix
row_clusters = linkage(pdist(df, metric='euclidean'), method='complete')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2',
'distance', 'no. of items in clust.'],
index=['cluster %d' % (i + 1)
for i in range(row_clusters.shape[0])])
# 3. correct approach: input sample matrix
row_clusters = linkage(df.values, method='complete', metric='euclidean')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2',
'distance', 'no. of items in clust.'],
index=['cluster %d' % (i + 1)
for i in range(row_clusters.shape[0])])
from scipy.cluster.hierarchy import dendrogram
# to draw a black dendrogram, uncomment (part 1/2):
# from scipy.cluster.hierarchy import set_link_color_palette
# set_link_color_palette(['black'])
row_dendr = dendrogram(row_clusters,
labels=labels,
# to draw a black dendrogram, uncomment (part 2/2):
# color_threshold=np.inf
)
plt.tight_layout()
plt.ylabel('Euclidean distance')
plt.show()
```
## Attaching dendrograms to a heat map
```
fig = plt.figure(figsize=(8, 8), facecolor='white')
axd = fig.add_axes([0.09, 0.1, 0.2, 0.6])
# note: for matplotlib < v1.5.1, use orientation='right'
row_dendr = dendrogram(row_clusters, orientation='left')
# reorder the data with respect to the clustering
df_rowclust = df.iloc[row_dendr['leaves'][::-1]]
axd.set_xticks([])
axd.set_yticks([])
# remove the axes spines of the dendrogram
for i in axd.spines.values():
i.set_visible(False)
# plot the heatmap
axm = fig.add_axes([0.23, 0.1, 0.6, 0.6])  # x-position, y-position, width, height
cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
axm.set_xticklabels([''] + list(df_rowclust.columns))
axm.set_yticklabels([''] + list(df_rowclust.index))
plt.show()
```
## Applying agglomerative clustering via scikit-learn
```
from sklearn.cluster import AgglomerativeClustering
ac = AgglomerativeClustering(n_clusters=3,
affinity='euclidean',
linkage='complete')
labels = ac.fit_predict(X)
print('Cluster labels: %s' % labels)
ac = AgglomerativeClustering(n_clusters=2,
affinity='euclidean',
linkage='complete')
labels = ac.fit_predict(X)
print('Cluster labels: %s' % labels)
```
# Locating regions of high density via DBSCAN
```
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
plt.scatter(X[:, 0], X[:, 1])
plt.tight_layout()
plt.show()
```
K-means and agglomerative clustering:
```
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
km = KMeans(n_clusters=2, random_state=0)
y_km = km.fit_predict(X)
ax1.scatter(X[y_km == 0, 0], X[y_km == 0, 1],
edgecolor='black',
c='lightblue', marker='o', s=40, label='cluster 1')
ax1.scatter(X[y_km == 1, 0], X[y_km == 1, 1],
edgecolor='black',
c='red', marker='s', s=40, label='cluster 2')
ax1.set_title('K-means clustering')
ac = AgglomerativeClustering(n_clusters=2,
affinity='euclidean',
linkage='complete')
y_ac = ac.fit_predict(X)
ax2.scatter(X[y_ac == 0, 0], X[y_ac == 0, 1], c='lightblue',
edgecolor='black',
marker='o', s=40, label='cluster 1')
ax2.scatter(X[y_ac == 1, 0], X[y_ac == 1, 1], c='red',
edgecolor='black',
marker='s', s=40, label='cluster 2')
ax2.set_title('Agglomerative clustering')
plt.legend()
plt.tight_layout()
plt.show()
```
DBSCAN:
```
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=0.2, min_samples=5, metric='euclidean')
y_db = db.fit_predict(X)
plt.scatter(X[y_db == 0, 0], X[y_db == 0, 1],
c='lightblue', marker='o', s=40,
edgecolor='black',
label='cluster 1')
plt.scatter(X[y_db == 1, 0], X[y_db == 1, 1],
c='red', marker='s', s=40,
edgecolor='black',
label='cluster 2')
plt.legend()
plt.tight_layout()
plt.show()
```
# Library
```
import tensorflow as tf
import numpy as np
import os
import glob
import pandas as pd
import PIL
import gc
from PIL import Image
print(f'Numpy version : {np.__version__}')
print(f'Pandas version : {pd.__version__}')
print(f'Tensorflow version : {tf.__version__}')
print(f'Pillow version : {PIL.__version__}')
```
# Dataset
```
!ls /kaggle/input
df_train = pd.read_parquet('/kaggle/input/csv-with-cleaned-ocr-text/train.parquet', engine='pyarrow').sort_values("filename").reset_index(drop=True)
df_test = pd.read_parquet('/kaggle/input/csv-with-cleaned-ocr-text/test.parquet', engine='pyarrow')
df_test
```
# Create TFRecord
```
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _list_float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=value))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _list_int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
RESIZE_WIDTH = 512
RESIZE_HEIGHT = 512
TFRECORD_MAX_SIZE = 80 * 1024 * 1024 # 80 MB
TOTAL_IMAGES = len(df_train.index)
# TOTAL_IMAGES = len(df_test.index)
# part 1 : 0:TOTAL_IMAGES // 2 (train)
# part 2 : TOTAL_IMAGES // 2:TOTAL_IMAGES (train) [CURRENT]
# part 3 : 0:TOTAL_IMAGES (test)
START_INDEX = TOTAL_IMAGES // 2
END_INDEX = TOTAL_IMAGES
BATCH_IMAGE = 1024
def create_tfrecord(index, df):
index = str(index).zfill(3)
curr_file = f"train-{index}.tfrecords"
writer = tf.io.TFRecordWriter(curr_file)
for index, row in df.iterrows():
category_str = str(row['category']).zfill(2)
image = f'/kaggle/input/shopee-product-detection-student/train/train/train/{category_str}/{row["filename"]}'
        # read the raw JPEG bytes; a context manager ensures the file is closed
        with open(image, 'rb') as img:
            img_read = img.read()
image_decoded = tf.image.decode_jpeg(img_read, channels=3)
resized_img = tf.image.resize_with_pad(image_decoded,target_width=RESIZE_WIDTH,target_height=RESIZE_HEIGHT,method=tf.image.ResizeMethod.BILINEAR)
resized_img = tf.cast(resized_img,tf.uint8)
resized_img = tf.io.encode_jpeg(resized_img)
feature = {
'filename': _bytes_feature(tf.compat.as_bytes(row['filename'])),
'label': _int64_feature(row['category']),
'words': _list_float_feature(row['words']),
'image': _bytes_feature(resized_img),
'height' : _int64_feature(RESIZE_HEIGHT),
'width' : _int64_feature(RESIZE_WIDTH)
}
example = tf.train.Example(features=tf.train.Features(feature=feature))
writer.write(example.SerializeToString())
writer.close()
for i in range(START_INDEX, END_INDEX, BATCH_IMAGE):
print(f'Create TFRecords #{i // BATCH_IMAGE + 1}')
    if i + BATCH_IMAGE < END_INDEX:
        # .loc slicing is end-inclusive, so stop one row early to avoid
        # writing the boundary row into two consecutive batches
        create_tfrecord(i // BATCH_IMAGE + 1, df_train.loc[i:i + BATCH_IMAGE - 1])
    else:
        create_tfrecord(i // BATCH_IMAGE + 1, df_train.loc[i:END_INDEX])
gc.collect()
!ls -lah
```
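As a sanity check (not part of the original notebook), the records written above can be read back with a feature description that mirrors the write side. The file name below is hypothetical; note that `words` has variable length and therefore needs a `VarLenFeature` rather than a `FixedLenFeature`:

```python
import tensorflow as tf

# Feature description mirroring the features written by create_tfrecord above
feature_description = {
    'filename': tf.io.FixedLenFeature([], tf.string),
    'label': tf.io.FixedLenFeature([], tf.int64),
    'words': tf.io.VarLenFeature(tf.float32),  # variable-length float list
    'image': tf.io.FixedLenFeature([], tf.string),
    'height': tf.io.FixedLenFeature([], tf.int64),
    'width': tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    parsed = tf.io.parse_single_example(serialized, feature_description)
    image = tf.io.decode_jpeg(parsed['image'], channels=3)
    return image, parsed['label']

# Hypothetical usage, assuming train-001.tfrecords was written above:
# dataset = tf.data.TFRecordDataset('train-001.tfrecords').map(parse_example)
```

Keeping the write-side and read-side feature names and types in sync is the usual failure point with TFRecords, so a round-trip test on a single example is worthwhile before launching a full training run.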
# Numerical Hydrodynamics Assignment
The numerical assignment consists of a short group report on a marine geometry. Your group name indicates which geometry to use, but if your group would prefer to work on a bespoke geometry, come talk to me.
Individuals in the group will have their mark reduced for failing to complete the blackboard quizzes discussed in class and/or failing to contribute to the group work.
### Report format
The report will be submitted in the form of a single Jupyter notebook, like this one. This means it will be a [reproducible engineering document](http://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261); all of your results will be generated and displayed inside the document itself.
The report should contain short *Introduction*, *Method and Validation*, *Results and Discussion* sections. All results must be generated in the notebook and be publication quality; axis labeled, plot legends included, etc.
**Please keep the method section focused and to the point**. In particular, do not explain how the functions in `VortexPanel` and `BoundaryLayer` work - explain how *your* code works. How did your specific problem require you to modify the methods introduced in the module.
**Please include validation examples** for the geometry generation and post-processing steps. Again, the `VortexPanel` and `BoundaryLayer` routines are already validated - don't do it again. How do you know _your_ routines are working?
It is difficult to measure the "number of pages" in these notebooks, but aim for something like a 4 page report with 700-1000 words.
### Group management
Look for ways that members of the groups can work independently of each other to speed up the process. For example, since you should validate your post-processing code with geometries used in class, and validate your geometry code with simple post-processing metrics we used in class - _these two tasks are independent_ ! Each member who writes a function should prove it is working with a validation example. Your report is then just a sequence of these examples, followed by your final application example.
### Report dependencies
You should use as many pre-developed functions as possible from the `numpy`, `pyplot`, `scipy`, `VortexPanel` and `BoundaryLayer` libraries. Never rewrite functions that have already been developed (and tested). Feel free to look at the code in `VortexPanel.py` or `BoundaryLayer.py` to get ideas on how to write your own functions, but don't make changes inside those files since it would break compatibility with my files. Analytic examples are best for validation, but if you do want to import results from experiments or another numerical method, you can either copy the data directly into the notebook (preferred for small arrays) or import from a data file or website (preferred for large data sets).
**Upload the ipynb file to blackboard 6-May-2021.**
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Convert Your Existing Code to TensorFlow 2.0
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migration_guide">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migration_guide.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migration_guide.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migration_guide.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Important: This doc is for users of low-level TensorFlow APIs. If you are using the high-level APIs (`tf.keras`) there may be little or no action you need to take to make your code fully TensorFlow 2.0 compatible. Check your [optimizer's default learning rate](#keras_optimizer_lr).
It is still possible to run 1.X code, unmodified ([except for contrib](https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md)), in TensorFlow 2.0:
```
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```
However, this does not let you take advantage of many of the improvements made in TensorFlow 2.0. This guide will help you upgrade your code, making it simpler, more performant, and easier to maintain.
## Automatic conversion script
The first step, before attempting to implement the changes described in this doc, is to try running the [upgrade script](./upgrade.md).
This will do an initial pass at upgrading your code to TensorFlow 2.0. But it can't make your code idiomatic to 2.0. Your code may still make use of `tf.compat.v1` endpoints to access placeholders, sessions, collections, and other 1.x-style functionality.
## Top-level behavioral changes
If your code works in TensorFlow 2.0 using `tf.compat.v1.disable_v2_behavior()`, there are still global behavioral changes you may need to address. The major changes are:
* *Eager execution, `v1.enable_eager_execution()`* : Any code that implicitly uses a `tf.Graph` will fail. Be sure to wrap this code in a `with tf.Graph().as_default()` context.
* *Resource variables, `v1.enable_resource_variables()`*: Some code may depend on non-deterministic behaviors enabled by TF reference variables.
Resource variables are locked while being written to, and so provide more intuitive consistency guarantees.
* This may change behavior in edge cases.
* This may create extra copies and can have higher memory usage.
* This can be disabled by passing `use_resource=False` to the `tf.Variable` constructor.
* *Tensor shapes, `v1.enable_v2_tensorshape()`*: TF 2.0 simplifies the behavior of tensor shapes. Instead of `t.shape[0].value` you can say `t.shape[0]`. These changes should be small, and it makes sense to fix them right away. See [TensorShape](#tensorshape) for examples.
* *Control flow, `v1.enable_control_flow_v2()`*: The TF 2.0 control flow implementation has been simplified, and so produces different graph representations. Please [file bugs](https://github.com/tensorflow/tensorflow/issues) for any issues.
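For instance, the tensor-shape change above can be checked directly (a minimal sketch):

```python
import tensorflow as tf

t = tf.zeros([4, 3])
# TF 1.x: t.shape[0].value; TF 2.0: indexing a known dimension gives the
# integer size directly
print(t.shape[0])         # 4
print(t.shape.as_list())  # [4, 3]
```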
## Make the code 2.0-native
This guide will walk through several examples of converting TensorFlow 1.x code to TensorFlow 2.0. These changes will let your code take advantage of performance optimizations and simplified API calls.
In each case, the pattern is:
### 1. Replace `v1.Session.run` calls
Every `v1.Session.run` call should be replaced by a Python function.
* The `feed_dict` and `v1.placeholder`s become function arguments.
* The `fetches` become the function's return value.
* During conversion eager execution allows easy debugging with standard Python tools like `pdb`.
After that add a `tf.function` decorator to make it run efficiently in graph. See the [Autograph Guide](autograph.ipynb) for more on how this works.
Note that:
* Unlike `v1.Session.run` a `tf.function` has a fixed return signature, and always returns all outputs. If this causes performance problems, create two separate functions.
* There is no need for a `tf.control_dependencies` or similar operations: A `tf.function` behaves as if it were run in the order written. `tf.Variable` assignments and `tf.assert`s, for example, are executed automatically.
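A minimal sketch of this step, using a hypothetical computation: the former placeholder becomes a function argument and the former fetch becomes the return value.

```python
import tensorflow as tf

# TF 1.x equivalent (for contrast):
#   x = tf.placeholder(tf.float32); y = 2.0 * x + 1.0
#   out = sess.run(y, feed_dict={x: data})
@tf.function
def scale_and_shift(x):
    # the former placeholder is now a plain argument,
    # the former fetch is the return value
    return 2.0 * x + 1.0

out = scale_and_shift(tf.constant([1.0, 2.0]))
print(out.numpy())  # [3. 5.]
```

Remove the `tf.function` decorator while debugging and the same function runs eagerly, line by line.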
### 2. Use Python objects to track variables and losses
All name-based variable tracking is strongly discouraged in TF 2.0. Use Python objects to track variables.
Use `tf.Variable` instead of `v1.get_variable`.
Every `v1.variable_scope` should be converted to a Python object. Typically this will be one of:
* `tf.keras.layers.Layer`
* `tf.keras.Model`
* `tf.Module`
If you need to aggregate lists of variables (like `tf.Graph.get_collection(tf.GraphKeys.VARIABLES)`), use the `.variables` and `.trainable_variables` attributes of the `Layer` and `Model` objects.
These `Layer` and `Model` classes implement several other properties that remove the need for global collections. Their `.losses` property can be a replacement for using the `tf.GraphKeys.LOSSES` collection.
See the [keras guides](keras.ipynb) for details.
Warning: Many `tf.compat.v1` symbols use the global collections implicitly.
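As a small sketch, a built `Layer` already exposes the attributes that replace the collections:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(3)
layer.build(input_shape=(None, 4))

# Replaces tf.Graph.get_collection(tf.GraphKeys.VARIABLES):
print(len(layer.trainable_variables))  # kernel + bias -> 2

# Replaces the tf.GraphKeys.LOSSES collection:
print(layer.losses)  # no regularizers attached -> []
```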
### 3. Upgrade your training loops
Use the highest level API that works for your use case. Prefer `tf.keras.Model.fit` over building your own training loops.
These high level functions manage a lot of the low-level details that might be easy to miss if you write your own training loop. For example, they automatically collect the regularization losses, and set the `training=True` argument when calling the model.
### 4. Upgrade your data input pipelines
Use `tf.data` datasets for data input. These objects are efficient, expressive, and integrate well with tensorflow.
They can be passed directly to the `tf.keras.Model.fit` method.
```
model.fit(dataset, epochs=5)
```
They can be iterated over directly using standard Python:
```
for example_batch, label_batch in dataset:
break
```
### 5. Migrate off `compat.v1` symbols
The `tf.compat.v1` module contains the complete TensorFlow 1.x API, with its original semantics.
The [TF2 upgrade script](upgrade.ipynb) will convert symbols to their 2.0 equivalents if such a conversion is safe, i.e., if it can determine that the behavior of the 2.0 version is exactly equivalent (for instance, it will rename `v1.arg_max` to `tf.argmax`, since those are the same function).
After the upgrade script is done with a piece of code, it is likely there are many mentions of `compat.v1`. It is worth going through the code and converting these manually to the 2.0 equivalent (it should be mentioned in the log if there is one).
## Converting models
### Setup
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import tensorflow_datasets as tfds
```
### Low-level variables & operator execution
Examples of low-level API use include:
* using variable scopes to control reuse
* creating variables with `v1.get_variable`.
* accessing collections explicitly
* accessing collections implicitly with methods like :
* `v1.global_variables`
* `v1.losses.get_regularization_loss`
* using `v1.placeholder` to set up graph inputs
* executing graphs with `Session.run`
* initializing variables manually
#### Before converting
Here is what these patterns may look like in code using TensorFlow 1.x.
```python
in_a = tf.placeholder(dtype=tf.float32, shape=(2))
in_b = tf.placeholder(dtype=tf.float32, shape=(2))
def forward(x):
with tf.variable_scope("matmul", reuse=tf.AUTO_REUSE):
W = tf.get_variable("W", initializer=tf.ones(shape=(2,2)),
regularizer=tf.contrib.layers.l2_regularizer(0.04))
b = tf.get_variable("b", initializer=tf.zeros(shape=(2)))
return W * x + b
out_a = forward(in_a)
out_b = forward(in_b)
reg_loss = tf.losses.get_regularization_loss(scope="matmul")
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
outs = sess.run([out_a, out_b, reg_loss],
feed_dict={in_a: [1, 0], in_b: [0, 1]})
```
#### After converting
In the converted code:
* The variables are local Python objects.
* The `forward` function still defines the calculation.
* The `Session.run` call is replaced with a call to `forward`.
* The optional `tf.function` decorator can be added for performance.
* The regularizations are calculated manually, without referring to any global collection.
* **No sessions or placeholders.**
```
W = tf.Variable(tf.ones(shape=(2,2)), name="W")
b = tf.Variable(tf.zeros(shape=(2)), name="b")
@tf.function
def forward(x):
return W * x + b
out_a = forward([1,0])
print(out_a)
out_b = forward([0,1])
regularizer = tf.keras.regularizers.l2(0.04)
reg_loss = regularizer(W)
```
### Models based on `tf.layers`
The `v1.layers` module contains layer functions that rely on `v1.variable_scope` to define and reuse variables.
#### Before converting
```python
def model(x, training, scope='model'):
with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
x = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu,
kernel_regularizer=tf.contrib.layers.l2_regularizer(0.04))
x = tf.layers.max_pooling2d(x, (2, 2), 1)
x = tf.layers.flatten(x)
x = tf.layers.dropout(x, 0.1, training=training)
x = tf.layers.dense(x, 64, activation=tf.nn.relu)
x = tf.layers.batch_normalization(x, training=training)
x = tf.layers.dense(x, 10, activation=tf.nn.softmax)
return x
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
```
#### After converting
* The simple stack of layers fits neatly into `tf.keras.Sequential`. (For more complex models see [custom layers and models](keras/custom_layers_and_models.ipynb), and [the functional API](keras/functional.ipynb).)
* The model tracks the variables, and regularization losses.
* The conversion was one-to-one because there is a direct mapping from `v1.layers` to `tf.keras.layers`.
Most arguments stayed the same. But notice the differences:
* The `training` argument is passed to each layer by the model when it runs.
* The first argument to the original `model` function (the input `x`) is gone. This is because object layers separate building the model from calling the model.
Also note that:
* If you were using regularizers or initializers from `tf.contrib`, these have more argument changes than others.
* The code no longer writes to collections, so functions like `v1.losses.get_regularization_loss` will no longer return these values, potentially breaking your training loops.
```
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(0.04),
input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10, activation='softmax')
])
train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))
train_out = model(train_data, training=True)
print(train_out)
test_out = model(test_data, training=False)
print(test_out)
# Here are all the trainable variables.
len(model.trainable_variables)
# Here is the regularization loss.
model.losses
```
### Mixed variables & `v1.layers`
Existing code often mixes lower-level TF 1.x variables and operations with higher-level `v1.layers`.
#### Before converting
```python
def model(x, training, scope='model'):
with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
W = tf.get_variable(
"W", dtype=tf.float32,
initializer=tf.ones(shape=x.shape),
regularizer=tf.contrib.layers.l2_regularizer(0.04),
trainable=True)
if training:
x = x + W
else:
x = x + W * 0.5
x = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu)
x = tf.layers.max_pooling2d(x, (2, 2), 1)
x = tf.layers.flatten(x)
return x
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
```
#### After converting
To convert this code, follow the pattern of mapping layers to layers as in the previous example.
A `v1.variable_scope` is effectively a layer of its own. So rewrite it as a `tf.keras.layers.Layer`. See [the guide](keras/custom_layers_and_models.ipynb) for details.
The general pattern is:
* Collect layer parameters in `__init__`.
* Build the variables in `build`.
* Execute the calculations in `call`, and return the result.
```
# Create a custom layer for part of the model
class CustomLayer(tf.keras.layers.Layer):
  def __init__(self, *args, **kwargs):
    super(CustomLayer, self).__init__(*args, **kwargs)

  def build(self, input_shape):
    self.w = self.add_weight(
        shape=input_shape[1:],
        dtype=tf.float32,
        initializer=tf.keras.initializers.ones(),
        regularizer=tf.keras.regularizers.l2(0.02),
        trainable=True)

  # Call method will sometimes get used in graph mode,
  # training will get turned into a tensor
  @tf.function
  def call(self, inputs, training=None):
    if training:
      return inputs + self.w
    else:
      return inputs + self.w * 0.5
custom_layer = CustomLayer()
print(custom_layer([1]).numpy())
print(custom_layer([1], training=True).numpy())
train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))
# Build the model including the custom layer
model = tf.keras.Sequential([
    CustomLayer(input_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
])
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
```
Some things to note:
* Subclassed Keras models & layers need to run in both v1 graphs (no automatic control dependencies) and in eager mode
* Wrap the `call()` in a `tf.function()` to get autograph and automatic control dependencies
* Don't forget to accept a `training` argument to `call`.
* Sometimes it is a `tf.Tensor`
* Sometimes it is a Python boolean.
* Create model variables in constructor or `Model.build` using `self.add_weight()`.
* In `Model.build` you have access to the input shape, so can create weights with matching shape.
* Using `tf.keras.layers.Layer.add_weight` allows Keras to track variables and regularization losses.
* Don't keep `tf.Tensors` in your objects.
* They might get created either in a `tf.function` or in the eager context, and these tensors behave differently.
* Use `tf.Variable`s for state; they are always usable from both contexts.
* `tf.Tensors` are only for intermediate values.
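As an illustration of the last two points, here is a minimal sketch (a hypothetical layer, not from the original guide): state kept as a `tf.Variable` works from both eager code and inside a `tf.function`, whereas a stored `tf.Tensor` would not.

```python
import tensorflow as tf

class CallCounter(tf.keras.layers.Layer):
  def __init__(self, **kwargs):
    super(CallCounter, self).__init__(**kwargs)
    # State lives in a tf.Variable, so it is usable from both
    # eager execution and graph (tf.function) contexts.
    self.count = tf.Variable(0, trainable=False)

  def call(self, inputs):
    self.count.assign_add(1)
    return inputs

layer = CallCounter()
layer(tf.ones((1,)))
layer(tf.ones((1,)))
print(layer.count.numpy())  # 2
```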
### A note on Slim & contrib.layers
A large amount of older TensorFlow 1.x code uses the [Slim](https://ai.googleblog.com/2016/08/tf-slim-high-level-library-to-define.html) library, which was packaged with TensorFlow 1.x as `tf.contrib.layers`. As a `contrib` module, this is no longer available in TensorFlow 2.0, even in `tf.compat.v1`. Converting code using Slim to TF 2.0 is more involved than converting repositories that use `v1.layers`. In fact, it may make sense to convert your Slim code to `v1.layers` first, then convert to Keras.
* Remove `arg_scopes`, all args need to be explicit
* If you use them, split `normalizer_fn` and `activation_fn` into their own layers
* Separable conv layers map to one or more different Keras layers (depthwise, pointwise, and separable Keras layers)
* Slim and `v1.layers` have different arg names & default values
* Some args have different scales
* If you use Slim pre-trained models, try out `tf.keras.applications` or [TF Hub](https://tensorflow.org/hub)
Some `tf.contrib` layers might not have been moved to core TensorFlow but have instead been moved to the [TF add-ons package](https://github.com/tensorflow/addons).
## Training
There are many ways to feed data to a `tf.keras` model. They will accept Python generators and Numpy arrays as input.
The recommended way to feed data to a model is to use the `tf.data` package, which contains a collection of high performance classes for manipulating data.
If you are still using `tf.queue`, these are now only supported as data-structures, not as input pipelines.
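As a point of comparison, a queue-based input pipeline can usually be replaced with a few lines of `tf.data`. The following is a minimal sketch with made-up in-memory data:

```python
import tensorflow as tf

# Hypothetical in-memory data standing in for whatever fed the queue.
features = tf.random.uniform((100, 4))
labels = tf.random.uniform((100,), maxval=10, dtype=tf.int32)

# Build a shuffled, batched pipeline from tensor slices.
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(buffer_size=100)
           .batch(32))

for x, y in dataset.take(1):
  print(x.shape, y.shape)
```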
### Using Datasets
The [TensorFlow Datasets](https://tensorflow.org/datasets) package (`tfds`) contains utilities for loading predefined datasets as `tf.data.Dataset` objects.
For this example, load the MNIST dataset using `tfds`:
```
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
```
Then prepare the data for training:
* Re-scale each image.
* Shuffle the order of the examples.
* Collect batches of images and labels.
```
BUFFER_SIZE = 10 # Use a much larger value for real code.
BATCH_SIZE = 64
NUM_EPOCHS = 5
def scale(image, label):
  image = tf.cast(image, tf.float32)
  image /= 255
  return image, label
```
To keep the example short, trim the dataset to only return 5 batches:
```
train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE).take(5)
test_data = mnist_test.map(scale).batch(BATCH_SIZE).take(5)
STEPS_PER_EPOCH = 5
train_data = train_data.take(STEPS_PER_EPOCH)
test_data = test_data.take(STEPS_PER_EPOCH)
image_batch, label_batch = next(iter(train_data))
```
### Use Keras training loops
If you don't need low level control of your training process, using Keras's built-in `fit`, `evaluate`, and `predict` methods is recommended. These methods provide a uniform interface to train the model regardless of the implementation (sequential, functional, or sub-classed).
The advantages of these methods include:
* They accept Numpy arrays, Python generators, and `tf.data.Datasets`.
* They apply regularization and activation losses automatically.
* They support `tf.distribute` [for multi-device training](distribute_strategy.ipynb).
* They support arbitrary callables as losses and metrics.
* They support callbacks like `tf.keras.callbacks.TensorBoard`, and custom callbacks.
* They are performant, automatically using TensorFlow graphs.
Here is an example of training a model using a `Dataset`. (For details on how this works see [tutorials](../tutorials).)
```
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10, activation='softmax')
])
# Model is the full model w/o custom layers
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_data, epochs=NUM_EPOCHS)
loss, acc = model.evaluate(test_data)
print("Loss {}, Accuracy {}".format(loss, acc))
```
### Write your own loop
If the Keras model's training step works for you, but you need more control outside that step, consider using the `tf.keras.Model.train_on_batch` method, in your own data-iteration loop.
Remember: Many things can be implemented as a `tf.keras.callbacks.Callback`.
This method has many of the advantages of the methods mentioned in the previous section, but gives the user control of the outer loop.
You can also use `tf.keras.Model.test_on_batch` or `tf.keras.Model.evaluate` to check performance during training.
Note: `train_on_batch` and `test_on_batch` by default return the loss and metrics for the single batch. If you pass `reset_metrics=False`, they return accumulated metrics and you must remember to appropriately reset the metric accumulators. Also remember that some metrics like `AUC` require `reset_metrics=False` to be calculated correctly.
To continue training the above model:
```
# Model is the full model w/o custom layers
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

metrics_names = model.metrics_names

for epoch in range(NUM_EPOCHS):
  # Reset the metric accumulators
  model.reset_metrics()

  for image_batch, label_batch in train_data:
    result = model.train_on_batch(image_batch, label_batch)
    print("train: ",
          "{}: {:.3f}".format(metrics_names[0], result[0]),
          "{}: {:.3f}".format(metrics_names[1], result[1]))

  for image_batch, label_batch in test_data:
    result = model.test_on_batch(image_batch, label_batch,
                                 # return accumulated metrics
                                 reset_metrics=False)

  print("\neval: ",
        "{}: {:.3f}".format(metrics_names[0], result[0]),
        "{}: {:.3f}".format(metrics_names[1], result[1]))
```
<a name="custom_loop"></a>
### Customize the training step
If you need more flexibility and control, you can have it by implementing your own training loop. There are three steps:
1. Iterate over a Python generator or `tf.data.Dataset` to get batches of examples.
2. Use `tf.GradientTape` to collect gradients.
3. Use one of the `tf.keras.optimizers` to apply weight updates to the model's variables.
Remember:
* Always include a `training` argument on the `call` method of subclassed layers and models.
* Make sure to call the model with the `training` argument set correctly.
* Depending on usage, model variables may not exist until the model is run on a batch of data.
* You need to manually handle things like regularization losses for the model.
Note the simplifications relative to v1:
* There is no need to run variable initializers. Variables are initialized on creation.
* There is no need to add manual control dependencies. Even in `tf.function` operations act as in eager mode.
```
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.02),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10, activation='softmax')
])
optimizer = tf.keras.optimizers.Adam(0.001)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
@tf.function
def train_step(inputs, labels):
  with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    regularization_loss = tf.math.add_n(model.losses)
    pred_loss = loss_fn(labels, predictions)
    total_loss = pred_loss + regularization_loss

  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

for epoch in range(NUM_EPOCHS):
  for inputs, labels in train_data:
    train_step(inputs, labels)
  print("Finished epoch", epoch)
```
### New-style metrics and losses
In TensorFlow 2.0, metrics and losses are objects. These work both eagerly and in `tf.function`s.
A loss object is callable, and expects `(y_true, y_pred)` as arguments:
```
cce = tf.losses.CategoricalCrossentropy(from_logits=True)
cce([[1, 0]], [[-1.0,3.0]]).numpy()
```
A metric object has the following methods:
* `Metric.update_state()` — add new observations
* `Metric.result()` — get the current result of the metric, given the observed values
* `Metric.reset_states()` — clear all observations.
The object itself is callable. Calling updates the state with new observations, as with `update_state`, and returns the new result of the metric.
You don't have to manually initialize a metric's variables, and because TensorFlow 2.0 has automatic control dependencies, you don't need to worry about those either.
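As a quick standalone sketch of these methods (not from the original guide):

```python
import tensorflow as tf

m = tf.keras.metrics.Mean()

# Add observations one at a time.
m.update_state(1.0)
m.update_state(3.0)
print(m.result().numpy())  # mean of [1, 3] -> 2.0

# The object is also callable: this adds an observation
# and returns the updated result.
print(m(5.0).numpy())  # mean of [1, 3, 5] -> 3.0
```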
The code below uses a metric to keep track of the mean loss observed within a custom training loop.
```
# Create the metrics
loss_metric = tf.keras.metrics.Mean(name='train_loss')
accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
@tf.function
def train_step(inputs, labels):
  with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    regularization_loss = tf.math.add_n(model.losses)
    pred_loss = loss_fn(labels, predictions)
    total_loss = pred_loss + regularization_loss

  gradients = tape.gradient(total_loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

  # Update the metrics
  loss_metric.update_state(total_loss)
  accuracy_metric.update_state(labels, predictions)

for epoch in range(NUM_EPOCHS):
  # Reset the metrics
  loss_metric.reset_states()
  accuracy_metric.reset_states()

  for inputs, labels in train_data:
    train_step(inputs, labels)

  # Get the metric results
  mean_loss = loss_metric.result()
  mean_accuracy = accuracy_metric.result()

  print('Epoch: ', epoch)
  print('  loss:     {:.3f}'.format(mean_loss))
  print('  accuracy: {:.3f}'.format(mean_accuracy))
```
### Keras optimizers
The optimizers in `v1.train`, like `v1.train.AdamOptimizer` and `v1.train.GradientDescentOptimizer`, have equivalents in `tf.keras.optimizers`.
#### Convert `v1.train` to `keras.optimizers`
Here are things to keep in mind when converting your optimizers:
* Upgrading your optimizers [may make old checkpoints incompatible](#checkpoints).
* All epsilons now default to `1e-7` instead of `1e-8` (which is negligible in most use cases).
* `v1.train.GradientDescentOptimizer` can be directly replaced by `tf.keras.optimizers.SGD`.
* `v1.train.MomentumOptimizer` can be directly replaced by the `SGD` optimizer using the momentum argument: `tf.keras.optimizers.SGD(..., momentum=...)`.
* `v1.train.AdamOptimizer` can be converted to use `tf.keras.optimizers.Adam`. The `beta1` and `beta2` arguments have been renamed to `beta_1` and `beta_2`.
* `v1.train.RMSPropOptimizer` can be converted to `tf.keras.optimizers.RMSprop`. The `decay` argument has been renamed to `rho`.
* `v1.train.AdadeltaOptimizer` can be converted directly to `tf.keras.optimizers.Adadelta`.
* `v1.train.AdagradOptimizer` can be converted directly to `tf.keras.optimizers.Adagrad`.
* `v1.train.FtrlOptimizer` can be converted directly to `tf.keras.optimizers.Ftrl`. The `accum_name` and `linear_name` arguments have been removed.
* `tf.contrib.AdamaxOptimizer` and `tf.contrib.NadamOptimizer` can be converted directly to `tf.keras.optimizers.Adamax` and `tf.keras.optimizers.Nadam`. The `beta1` and `beta2` arguments have been renamed to `beta_1` and `beta_2`.
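For example, the `AdamOptimizer` and `MomentumOptimizer` conversions look like this (a sketch; the hyperparameter values are illustrative):

```python
import tensorflow as tf

# TF 1.x:
#   optimizer = v1.train.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999)
#   optimizer = v1.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)

# TF 2.0 equivalents (note the beta1/beta2 -> beta_1/beta_2 renames):
adam = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
sgd = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
```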
#### New defaults for some `tf.keras.optimizers`
<a id="keras_optimizer_lr"></a>
Warning: If you see a change in convergence behavior for your models, check the default learning rates.
There are no changes for `optimizers.SGD`, `optimizers.Adam`, or `optimizers.RMSprop`.
The following default learning rates have changed:
* `optimizers.Adagrad` from 0.01 to 0.001
* `optimizers.Adadelta` from 1.0 to 0.001
* `optimizers.Adamax` from 0.002 to 0.001
* `optimizers.Nadam` from 0.002 to 0.001
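If you need to reproduce TF 1.x convergence behavior, pass the old default explicitly (the values are taken from the list above):

```python
import tensorflow as tf

# Restore the TF 1.x default learning rates explicitly:
adagrad = tf.keras.optimizers.Adagrad(learning_rate=0.01)   # 2.0 default: 0.001
adadelta = tf.keras.optimizers.Adadelta(learning_rate=1.0)  # 2.0 default: 0.001
adamax = tf.keras.optimizers.Adamax(learning_rate=0.002)    # 2.0 default: 0.001
nadam = tf.keras.optimizers.Nadam(learning_rate=0.002)      # 2.0 default: 0.001
```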
### TensorBoard
TensorFlow 2.0 includes significant changes to the `tf.summary` API used to write summary data for visualization in TensorBoard. For a general introduction to the new `tf.summary`, there are [several tutorials available](https://www.tensorflow.org/tensorboard/r2/get_started) that use the TF 2.0 API, including a [TensorBoard TF 2.0 Migration Guide](https://www.tensorflow.org/tensorboard/r2/migrate).
## Saving & Loading
<a id="checkpoints"></a>
### Checkpoint compatibility
TensorFlow 2.0 uses [object-based checkpoints](checkpoints.ipynb).
Old-style name-based checkpoints can still be loaded, if you're careful.
The code conversion process may result in variable name changes, but there are workarounds.
The simplest approach is to line up the names of the new model with the names in the checkpoint:
* Variables still all have a `name` argument you can set.
* Keras models also take a `name` argument, which they set as the prefix for their variables.
* The `v1.name_scope` function can be used to set variable name prefixes. This is very different from `tf.variable_scope`. It only affects names, and doesn't track variables & reuse.
If that does not work for your use-case, try the `v1.train.init_from_checkpoint` function. It takes an `assignment_map` argument, which specifies the mapping from old names to new names.
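The `assignment_map` is just a dictionary from old checkpoint variable names to the new variables or name prefixes. A minimal sketch (the checkpoint path and variable names are hypothetical):

```python
import tensorflow as tf

# Map old (TF 1.x) checkpoint variable names to the rebuilt variable names.
assignment_map = {
    'old_scope/kernel': 'dense/kernel',
    'old_scope/bias': 'dense/bias',
}

# After the new variables are built, load the old values into them
# (the path here is a placeholder):
# tf.compat.v1.train.init_from_checkpoint('/path/to/old.ckpt', assignment_map)
```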
Note: Unlike object based checkpoints, which can [defer loading](checkpoints.ipynb#loading_mechanics), name-based checkpoints require that all variables be built when the function is called. Some models defer building variables until you call `build` or run the model on a batch of data.
The [TensorFlow Estimator repository](https://github.com/tensorflow/estimator/blob/master/tensorflow_estimator/python/estimator/tools/checkpoint_converter.py) includes a [conversion tool](#checkpoint_converter) to upgrade the checkpoints for premade estimators from TensorFlow 1.X to 2.0. It may serve as an example of how to build a tool for a similar use-case.
### Saved models compatibility
There are no significant compatibility concerns for saved models.
* TensorFlow 1.x saved_models work in TensorFlow 2.0.
* TensorFlow 2.0 saved_models even work in TensorFlow 1.x, if all the ops are supported.
## Estimators
### Training with Estimators
Estimators are supported in TensorFlow 2.0.
When you use estimators, you can use `input_fn()`, `tf.estimator.TrainSpec`, and `tf.estimator.EvalSpec` from TensorFlow 1.x.
Here is an example using `input_fn` with train and evaluate specs.
#### Creating the input_fn and train/eval specs
```
# Define the estimator's input_fn
def input_fn():
  datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
  mnist_train, mnist_test = datasets['train'], datasets['test']

  BUFFER_SIZE = 10000
  BATCH_SIZE = 64

  def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label[..., tf.newaxis]

  train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
  return train_data.repeat()
# Define train & eval specs
train_spec = tf.estimator.TrainSpec(input_fn=input_fn,
                                    max_steps=STEPS_PER_EPOCH * NUM_EPOCHS)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn,
                                  steps=STEPS_PER_EPOCH)
```
### Using a Keras model definition
There are some differences in how to construct your estimators in TensorFlow 2.0.
We recommend that you define your model using Keras, then use the `tf.keras.estimator.model_to_estimator` utility to turn your model into an estimator. The code below shows how to use this utility when creating and training an estimator.
```
def make_model():
  return tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu',
                             kernel_regularizer=tf.keras.regularizers.l2(0.02),
                             input_shape=(28, 28, 1)),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dropout(0.1),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.BatchNormalization(),
      tf.keras.layers.Dense(10, activation='softmax')
  ])
model = make_model()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```
### Using a custom `model_fn`
If you have an existing custom estimator `model_fn` that you need to maintain, you can convert your `model_fn` to use a Keras model.
However, for compatibility reasons, a custom `model_fn` will still run in 1.x-style graph mode. This means there is no eager execution and no automatic control dependencies.
<a name="minimal_changes"></a>
#### Custom model_fn with minimal changes
To make your custom `model_fn` work in TF 2.0, if you prefer minimal changes to the existing code, `tf.compat.v1` symbols such as `optimizers` and `metrics` can be used.
Using a Keras models in a custom `model_fn` is similar to using it in a custom training loop:
* Set the `training` phase appropriately, based on the `mode` argument.
* Explicitly pass the model's `trainable_variables` to the optimizer.
But there are important differences, relative to a [custom loop](#custom_loop):
* Instead of using `Model.losses`, extract the losses using `Model.get_losses_for`.
* Extract the model's updates using `Model.get_updates_for`.
Note: "Updates" are changes that need to be applied to a model after each batch. For example, the moving averages of the mean and variance in a `layers.BatchNormalization` layer.
The following code creates an estimator from a custom `model_fn`, illustrating all of these concerns.
```
def my_model_fn(features, labels, mode):
  model = make_model()

  optimizer = tf.compat.v1.train.AdamOptimizer()
  loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

  training = (mode == tf.estimator.ModeKeys.TRAIN)
  predictions = model(features, training=training)

  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

  reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
  total_loss = loss_fn(labels, predictions) + tf.math.add_n(reg_losses)

  accuracy = tf.compat.v1.metrics.accuracy(labels=labels,
                                           predictions=tf.math.argmax(predictions, axis=1),
                                           name='acc_op')

  update_ops = model.get_updates_for(None) + model.get_updates_for(features)
  minimize_op = optimizer.minimize(
      total_loss,
      var_list=model.trainable_variables,
      global_step=tf.compat.v1.train.get_or_create_global_step())
  train_op = tf.group(minimize_op, update_ops)

  return tf.estimator.EstimatorSpec(
      mode=mode,
      predictions=predictions,
      loss=total_loss,
      train_op=train_op, eval_metric_ops={'accuracy': accuracy})

# Create the Estimator & Train
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```
#### Custom `model_fn` with TF 2.0 symbols
If you want to get rid of all TF 1.x symbols and upgrade your custom `model_fn` to native TF 2.0, you need to update the optimizer and metrics to `tf.keras.optimizers` and `tf.keras.metrics`.
In the custom `model_fn`, besides the above [changes](#minimal_changes), more upgrades need to be made:
* Use [`tf.keras.optimizers`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers) instead of `v1.train.Optimizer`.
* Explicitly pass the model's `trainable_variables` to the `tf.keras.optimizers`.
* To compute the `train_op/minimize_op`,
  * Use `Optimizer.get_updates()` if the loss is a scalar loss `Tensor` (not a callable). The first element in the returned list is the desired `train_op/minimize_op`.
* If the loss is a callable (such as a function), use `Optimizer.minimize()` to get the `train_op/minimize_op`.
* Use [`tf.keras.metrics`](https://www.tensorflow.org/api_docs/python/tf/keras/metrics) instead of `tf.compat.v1.metrics` for evaluation.
For the above example of `my_model_fn`, the migrated code with 2.0 symbols is shown as:
```
def my_model_fn(features, labels, mode):
  model = make_model()

  training = (mode == tf.estimator.ModeKeys.TRAIN)
  loss_obj = tf.keras.losses.SparseCategoricalCrossentropy()
  predictions = model(features, training=training)

  # Get both the unconditional losses (the None part)
  # and the input-conditional losses (the features part).
  reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
  total_loss = loss_obj(labels, predictions) + tf.math.add_n(reg_losses)

  # Upgrade to tf.keras.metrics.
  accuracy_obj = tf.keras.metrics.Accuracy(name='acc_obj')
  accuracy = accuracy_obj.update_state(
      y_true=labels, y_pred=tf.math.argmax(predictions, axis=1))

  train_op = None
  if training:
    # Upgrade to tf.keras.optimizers.
    optimizer = tf.keras.optimizers.Adam()
    # Manually assign the tf.compat.v1 global-step variable to optimizer.iterations
    # so that tf.compat.v1.train.global_step increases correctly.
    # This assignment is a must for any `tf.train.SessionRunHook` specified in
    # the estimator, as SessionRunHooks rely on the global step.
    optimizer.iterations = tf.compat.v1.train.get_or_create_global_step()
    # Get both the unconditional updates (the None part)
    # and the input-conditional updates (the features part).
    update_ops = model.get_updates_for(None) + model.get_updates_for(features)
    # Compute the minimize_op.
    minimize_op = optimizer.get_updates(
        total_loss,
        model.trainable_variables)[0]
    train_op = tf.group(minimize_op, *update_ops)

  return tf.estimator.EstimatorSpec(
      mode=mode,
      predictions=predictions,
      loss=total_loss,
      train_op=train_op,
      eval_metric_ops={'Accuracy': accuracy_obj})

# Create the Estimator & Train.
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```
### Premade Estimators
[Premade Estimators](https://www.tensorflow.org/guide/premade_estimators) in the family of `tf.estimator.DNN*`, `tf.estimator.Linear*` and `tf.estimator.DNNLinearCombined*` are still supported in the TensorFlow 2.0 API, however, some arguments have changed:
1. `input_layer_partitioner`: Removed in 2.0.
2. `loss_reduction`: Updated to `tf.keras.losses.Reduction` instead of `tf.compat.v1.losses.Reduction`. Its default value is also changed to `tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE` from `tf.compat.v1.losses.Reduction.SUM`.
3. `optimizer`, `dnn_optimizer` and `linear_optimizer`: these args have been updated to accept a `tf.keras.optimizers` instance instead of a `tf.compat.v1.train.Optimizer`.
To migrate the above changes:
1. No migration is needed for `input_layer_partitioner` since [`Distribution Strategy`](https://www.tensorflow.org/guide/distribute_strategy) will handle it automatically in TF 2.0.
2. For `loss_reduction`, check [`tf.keras.losses.Reduction`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses/Reduction) for the supported options.
3. For `optimizer` args, if you do not pass in an `optimizer`, `dnn_optimizer` or `linear_optimizer` arg, or if you specify the `optimizer` arg as a `string` in your code, you don't need to change anything. `tf.keras.optimizers` is used by default. Otherwise, you need to update it from `tf.compat.v1.train.Optimizer` to its corresponding [`tf.keras.optimizers`](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers)
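For example, a hard-coded `v1.train` optimizer on a premade estimator is swapped for its Keras equivalent. The following is a sketch; the feature columns and estimator construction are assumed to come from your existing code:

```python
import tensorflow as tf

# TF 1.x:
#   estimator = tf.estimator.DNNClassifier(
#       feature_columns=columns, hidden_units=[64],
#       optimizer=tf.compat.v1.train.AdagradOptimizer(learning_rate=0.05))

# TF 2.0: pass a tf.keras optimizer (or just the string 'Adagrad') instead.
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
# estimator = tf.estimator.DNNClassifier(
#     feature_columns=columns, hidden_units=[64], optimizer=optimizer)
```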
#### Checkpoint Converter
<a id="checkpoint_converter"></a>
The migration to `keras.optimizers` will break checkpoints saved using TF 1.x, as `tf.keras.optimizers` generates a different set of variables to be saved in checkpoints. To make old checkpoints reusable after your migration to TF 2.0, try the [checkpoint converter tool](https://github.com/tensorflow/estimator/blob/master/tensorflow_estimator/python/estimator/tools/checkpoint_converter.py).
```
! curl -O https://raw.githubusercontent.com/tensorflow/estimator/master/tensorflow_estimator/python/estimator/tools/checkpoint_converter.py
```
The tool has builtin help:
```
! python checkpoint_converter.py -h
```
<a id="tensorshape"></a>
## TensorShape
This class was simplified to hold `int`s, instead of `tf.compat.v1.Dimension` objects. So there is no need to call `.value()` to get an `int`.
Individual `tf.compat.v1.Dimension` objects are still accessible from `tf.TensorShape.dims`.
The following demonstrates the differences between TensorFlow 1.x and TensorFlow 2.0.
```
# Create a shape and choose an index
i = 0
shape = tf.TensorShape([16, None, 256])
shape
```
If you had this in TF 1.x:
```python
value = shape[i].value
```
Then do this in TF 2.0:
```
value = shape[i]
value
```
If you had this in TF 1.x:
```python
for dim in shape:
value = dim.value
print(value)
```
Then do this in TF 2.0:
```
for value in shape:
print(value)
```
If you had this in TF 1.x (Or used any other dimension method):
```python
dim = shape[i]
dim.assert_is_compatible_with(other_dim)
```
Then do this in TF 2.0:
```
other_dim = 16
Dimension = tf.compat.v1.Dimension
if shape.rank is None:
  dim = Dimension(None)
else:
  dim = shape.dims[i]
dim.is_compatible_with(other_dim) # or any other dimension method

shape = tf.TensorShape(None)

if shape:
  dim = shape.dims[i]
  dim.is_compatible_with(other_dim) # or any other dimension method
```
The boolean value of a `tf.TensorShape` is `True` if the rank is known, `False` otherwise.
```
print(bool(tf.TensorShape([]))) # Scalar
print(bool(tf.TensorShape([0]))) # 0-length vector
print(bool(tf.TensorShape([1]))) # 1-length vector
print(bool(tf.TensorShape([None]))) # Unknown-length vector
print(bool(tf.TensorShape([1, 10, 100]))) # 3D tensor
print(bool(tf.TensorShape([None, None, None]))) # 3D tensor with no known dimensions
print()
print(bool(tf.TensorShape(None))) # A tensor with unknown rank.
```
## Other Changes
* Remove `tf.colocate_with`: TensorFlow's device placement algorithms have improved significantly. This should no longer be necessary. If removing it causes a performance degradation, [please file a bug](https://github.com/tensorflow/tensorflow/issues).
## Conclusions
The overall process is:
1. Run the upgrade script.
2. Remove contrib symbols.
3. Switch your models to an object oriented style (Keras).
4. Use `tf.keras` or `tf.estimator` training and evaluation loops where you can.
5. Otherwise, use custom loops, but be sure to avoid sessions & collections.
It takes a little work to convert code to idiomatic TensorFlow 2.0, but every change results in:
* Fewer lines of code.
* Increased clarity and simplicity.
* Easier debugging.
# Song Gathering
**JC Nacpil 2021/09/06**
In this notebook, we will build a database of Kpop songs with audio features using the Spotify Web API and Spotipy package. The output files will be used for `KpopSongRecommender`.
## Set-up
### Importing libraries
```
# Library for accessing Spotify API
import spotipy
import spotipy.util as util
from spotipy.oauth2 import SpotifyClientCredentials
from spotipy.oauth2 import SpotifyOAuth
# Scientific and vector computation for python
import numpy as np
# Data manipulation and analysis
import pandas as pd
# Library for this notebook providing utility functions
from utils import repeatAPICall
# Progress bar
from tqdm import tqdm
# Cosine similarity calculation
from sklearn.metrics.pairwise import cosine_similarity
# Deep copy of python data structures
from copy import deepcopy
# Plotting library
import matplotlib.pyplot as plt
```
### Setting up Spotify API
The following are the Spotify API credentials `CLIENT_ID` and `CLIENT_SECRET` for our application. This allows us to access data from Spotify through the <a href='https://developer.spotify.com/documentation/web-api/'>Web API</a>. It is recommended to register your own application and manage these credentials at <a href = "https://developer.spotify.com/dashboard/">My Dashboard</a>.
```
CLIENT_ID = "dc7ef763416f49aca20c740e46bd1f79"
CLIENT_SECRET = "056f146106544a828574e8e903286fb7"
token = SpotifyClientCredentials(client_id=CLIENT_ID, client_secret=CLIENT_SECRET)
cache_token = token.get_access_token(as_dict=False)
sp = spotipy.Spotify(cache_token)
```
### Utility functions
For this notebook, we will use `repeatAPICall`, which is a function that repeatedly makes API calls until a successful request is reached.
```
def repeatAPICall(func, args, max_retry=5):
    """
    Repeatedly calls a spotipy func until a successful API request is made.

    Parameters:
        func : func
            Spotipy client function for making API calls
        args : dict
            Arguments to pass to func; Key: parameter, Value: parameter value.
            Check the Spotipy API of the specified func for details
        max_retry : int
            Maximum iterations before prompting the user to retry or skip

    Returns:
        result : dict
            Result of a successful API call, or None
        success : bool
            True if the API call is successful, False otherwise
    """
    success = False
    res = None
    i = 0
    while i < max_retry:
        try:
            res = func(**args)
            success = True
            return res, success
        except Exception:
            print("Error in API call; retrying")
            i += 1
        if i >= max_retry:
            reset_loop = input("Max retry limit reached. Retry {} more time/s?".format(max_retry)).upper()
            reset_loop = True if reset_loop == 'Y' else False
            # Reset the loop index if the user chooses to retry
            i = 0 if reset_loop else max_retry
    return res, success
```
## Step 1: Getting playlists of a given category
In this step, we will gather playlists that are categorized as k-pop. We can use this as a starting point to gather an initial list of kpop artists.
This cell gets the list of playlist categories (with ID) available in Spotify. Let's set the country code to PH so we can get PH-specific results.
```
all_categories = sp.categories(limit = 50, country = 'PH')
categories = all_categories['categories']['items']
for cat in categories:
    print("Category: {} | ID : {}".format(cat['name'], cat['id']))
```
This indicates that the K-pop category has `id = kpop`!
Note for the future implementation: OPM has `id = opm`
This next cell gathers the playlists for the `kpop` category and saves it to a DataFrame.
```
kpop_playlists_result = sp.category_playlists('kpop', country='PH', limit = 50)
kpop_playlists = kpop_playlists_result['playlists']['items']
while kpop_playlists_result['playlists']['next']:
    kpop_playlists_result = sp.next(kpop_playlists_result['playlists'])
    kpop_playlists.extend(kpop_playlists_result['playlists']['items'])
playlists = [playlist['name'] for playlist in kpop_playlists]
playlist_uris = [playlist['uri'] for playlist in kpop_playlists]
playlist_df = pd.DataFrame(zip(playlists,playlist_uris),columns = ["playlist", "playlist_uri"]).drop_duplicates().reset_index(drop=True)
# Save the playlist list to .csv file
filename = 'Data/playlists.csv'
playlist_df.to_csv(filename, index=False)
```
## Step 2: Collecting artists from playlists
In this step, we will use the list of playlists and gather all the artists that appear in them. Since we're using k-pop playlists, we assume that most of the artists we get from this step are k-pop.
**Note:** Usually there will be some non-kpop artists appearing in this list, such as Dua Lipa or Ed Sheeran. These are usually artists that appear on k-pop collabs (ex. Dua Lipa and BLACKPINK - Kiss and Make Up)
```
# Load the existing playlist data
playlist_dir = 'Data/playlists.csv'
playlist_df = pd.read_csv(playlist_dir)
# Loop through playlists to build a list of artists
# Get the list of unique identifiers for each playlist
playlists = playlist_df.playlist.values
playlist_uris = playlist_df.playlist_uri.values
# Create dataframe to store artist data
artist_cols = ['artist','artist_uri']
artist_df = pd.DataFrame(columns = artist_cols)
for playlist,uri in zip(playlists, playlist_uris):
print("Current playlist: {}".format(playlist))
playlist_result, success = repeatAPICall(sp.playlist_tracks,{'playlist_id':uri})
if not success:
print("Error in playlist {}".format(playlist))
continue
# Remove value in playlist_result['items'] when track is listed as None object
playlist_result['items'] = [track for track in playlist_result['items'] if track['track'] is not None]
# Skip the playlist if there are any errors
try:
artist_uris = [track['track']['artists'][0]['uri'] for track in playlist_result['items']]
artists = [track['track']['artists'][0]['name'] for track in playlist_result['items']]
except (KeyError, IndexError, TypeError):
print("Error in playlist {}".format(playlist))
continue
temp_df = pd.DataFrame(zip(artists,artist_uris),columns = artist_cols)
artist_df = pd.concat([artist_df.reset_index(drop=True), temp_df.reset_index(drop=True)]).drop_duplicates()
# Reset the index of our resulting dataframe
artist_df = artist_df.drop_duplicates().reset_index(drop=True)
artist_df
```
At this point, we now have ~900 artists in our database! We save our current output as `artists.csv`. In the succeeding cells, we will extend the list by getting related acts for every artist in our current list.
```
# Save the existing artist data
artists_dir = 'Data/artists.csv'
artist_df.to_csv(artists_dir, index = False)
```
## Step 3: Extending artist data by gathering related artists
In this step, we will extend our current list of artists by adding related artists. This will run for a set number of iterations, so after getting the initial batch of related artists, we can then gather even more artists from that new batch.
There are two important steps to improve runtime and avoid repeating processes. First, we label each artist with `temp_processed` (bool), which indicates whether we have already processed that artist's related artists. We set this initially to `False` and update it to `True` when an iteration has finished.
Second, we keep only artists of the genres we are interested in. `sp.artist_related_artists()` returns 20 related artists for a given artist, which can blow up our list exponentially and add artists that we don't want. For example, `BLACKPINK` and `BTS` are related to `Dua Lipa` and `Halsey`, respectively, as they feature together on collabs. However, if we keep these two results and get additional related artists based on them, we are likely to get more pop artists (~20) unrelated to the genre we are looking for. `genre_filter` is a list of substrings that we match against an artist's own genre list to decide whether to keep that artist.
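As a quick illustration of this substring matching (the genre lists here are made up):

```python
# Toy illustration of the genre_filter matching logic used below
# (the genre lists are hypothetical examples, not real API output)
genre_filter = ['k-pop', 'k-rap']

def keep_artist(genres):
    # An artist is kept if any filter substring appears in its joined genre list
    return any(g in ''.join(genres) for g in genre_filter)

print(keep_artist(['k-pop girl group', 'pop']))  # True
print(keep_artist(['dance pop', 'uk pop']))      # False
```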
```
# Load the existing artist data to extend
# For testing: randomly sample rows
artists_dir = 'Data/artists.csv'
artist_extended_df = pd.read_csv(artists_dir)
artist_extended_df
```
In the next cell, the code will go through each artist and get a list of related artists.
```
# Add a 'temp_processed' column to artist_df indicating whether it's already been processed by this loop
artist_extended_df['temp_processed'] = False
# Filter for genres
# We use 'k-pop' and 'k-rap' as the genre substrings
# We can also add 'korean' to match korean artists that are not considered k-pop (ex. OSTs)
genre_filter = ['k-pop', 'k-rap']
# genre_filter = ['k-','korean']
# Keep track of iteration progress (see sense check section below)
artist_count = [len(artist_extended_df.artist_uri.values)]
iter_count = [0]
removed_count = [0]
# Set maximum iterations
max_iter = 15
for i in range(max_iter):
print("Current iter: {}".format(i+1))
rel_artists = []
rel_artist_uris = []
# Create temporary df to store artists to be processed
temp_df = artist_extended_df.copy()
temp_df = temp_df[temp_df.temp_processed == False]
# If temp df is empty, end the loop
if temp_df.empty:
print("No more artists to be processed! Breaking loop.")
break
artists = temp_df.artist.values
artist_uris = temp_df.artist_uri.values
print("Iter: {} | Artists count: {} | To be processed : {}".format(i+1, len(artist_extended_df.artist_uri.values), len(artist_uris) ))
# Track number of artists removed per iteration
total_removed = 0
# Loop through artists
for artist,uri in zip(artists,artist_uris):
# Get related artists for the current artist
rel_artists_result, success = repeatAPICall(sp.artist_related_artists,{'artist_id':uri})
if not success:
print("Skipping to next artist.")
continue
# Remove artists whose genres do not contain the substrings in genre_filter
old_count = len(rel_artists_result['artists'])
rel_artists_result['artists'] = [rel_artist for rel_artist in rel_artists_result['artists'] if any(genre in ''.join(rel_artist['genres']) for genre in genre_filter)]
new_count = len(rel_artists_result['artists'])
# Track number of removed artists
removed = old_count - new_count
total_removed += removed
rel_artists.extend([artist['name'] for artist in rel_artists_result['artists']])
rel_artist_uris.extend([artist['uri'] for artist in rel_artists_result['artists']])
# Create dataframe of related artists that were gathered
rel_artist_df = pd.DataFrame(zip(rel_artists,rel_artist_uris),columns = ["artist", "artist_uri"]).drop_duplicates()
rel_artist_df['temp_processed'] = False
# At this step, all the entries in artist_df have been processed and labelled accordingly
artist_extended_df['temp_processed'] = True
# Combine artist_extended_df and rel_artist_df
# Drop duplicates and keep the first value
# This ensures that we keep the first occurrence of duplicate artists
# between artist_extended_df and rel_artist_df (with different temp_processed values)
artist_extended_df = pd.concat([artist_extended_df.reset_index(drop=True), rel_artist_df.reset_index(drop=True)]).drop_duplicates(subset = ['artist', 'artist_uri'], keep = 'first')
# Add metrics to array
iter_count.append(i+1)
artist_count.append(len(artist_extended_df.artist_uri.values))
removed_count.append(total_removed)
print("Done! Final count: {}".format(artist_count[-1]))
```
Here's our final list of artists!
```
artist_extended_df
```
### Sense Check
This cell plots the total number of artists gathered (blue) and related artists removed (orange) as a function of the number of iterations. We see that the blue line generally plateaus, indicating that we reached a reasonable upper limit of possible artists gathered. We also see that the number of artists removed is large for `i = 1` (around 8,000). This means that the first iteration removes a large number of non-kpop artists. Without removing these artists per iteration, the loop would not converge to a finite list of artists.
```
plt.plot(iter_count,artist_count, label = "Artists in list")
plt.plot(iter_count[1:], removed_count[1:], label = "Removed in processing (not in genre filter)")
plt.xlabel("Number of iterations")
plt.ylabel("Artist count")
plt.legend()
plt.tight_layout()
# Save the existing artist data
# We drop the temp_processed column before writing to csv
artists_extended_dir = 'Data/artists_extended.csv'
artist_extended_df.drop('temp_processed', axis = 1).to_csv(artists_extended_dir, index = False)
```
## Step 4: Loading top tracks per artist
From our list of k-pop artists, we then get their top 10 tracks. This gives us a reasonable number of songs for our Kpop Song Recommender!
```
# Load the existing artist data
artists_dir = 'Data/artists.csv'
artists_dir = 'Data/artists_extended.csv' # Comment this line out to use the base artists dataset instead
artist_df = pd.read_csv(artists_dir)
artists = artist_df.artist.values
artist_uris = artist_df.artist_uri.values
# Loop through artists to build a list of tracks from their top 10 songs
# Create dataframe to store track data
artist_cols = ['artist', 'artist_uri']
track_cols = ['track','track_uri','popularity']
track_df = pd.DataFrame(columns = artist_cols + track_cols)
for artist,uri in tqdm(zip(artists, artist_uris), total = len(artist_uris)):
# print("Current artist: {}".format(artist))
top10_result, success = repeatAPICall(sp.artist_top_tracks,{'artist_id':uri,'country':'PH'})
if not success:
print("Skipping to next artist.")
continue
# Remove value in top10_result['tracks'] when a track is listed as None
#top10_result['tracks'] = [track for track in top10_result['tracks'] if track is not None]
# Skip the artist if there are any errors
try:
track_uris = [track['uri'] for track in top10_result['tracks']]
tracks = [track['name'] for track in top10_result['tracks']]
popularity = [track['popularity'] for track in top10_result['tracks']]
except (KeyError, TypeError):
print("Error in artist {}".format(artist))
continue
temp_df = pd.DataFrame(zip(tracks,track_uris, popularity),columns = track_cols)
# Set the artist and artist columns
temp_df['artist'] = artist
temp_df['artist_uri'] = uri
track_df = pd.concat([track_df.reset_index(drop=True), temp_df.reset_index(drop=True)]).drop_duplicates()
```
Note: if a track has multiple artists from our list, it will show up once per artist. In the cell below, we see that the number of rows is greater than the number of unique `track_uri` values.
We will keep these duplicates for now, but keep this in mind when post-processing the data.
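When that post-processing happens, one option (sketched here on toy data) is to keep only the first artist listed per track:

```python
import pandas as pd

# Toy example: the same track_uri appears under two collaborating artists
toy_df = pd.DataFrame({
    'artist': ['BLACKPINK', 'Dua Lipa'],
    'track': ['Kiss and Make Up', 'Kiss and Make Up'],
    'track_uri': ['uri:1', 'uri:1'],
})
# Keep the first occurrence of each track_uri, dropping the duplicate row
dedup_df = toy_df.drop_duplicates(subset=['track_uri'], keep='first')
print(len(toy_df), len(dedup_df))  # 2 1
```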
```
print("Number of track_df rows: {}\nNumber of unique track_uri: {}".format(len(track_df), track_df['track_uri'].nunique()))
# Save all tracks to file
tracks_dir = 'Data/tracks_top10.csv'
track_df.to_csv(tracks_dir, index = False)
```
## Step 5: Getting audio features per track
In this last step, we will generate the audio features for each track in our database using Spotify's Audio Features functionality. These include a song's danceability, tempo, energy, key, time_signature, liveness, etc. In the main notebook, these will be used as a basis to recommend kpop songs that are similar to a user's top tracks in terms of these features.
```
# Load the track data
tracks_dir = 'Data/tracks_top10.csv'
track_df = pd.read_csv(tracks_dir)
tracks = track_df.track.values
track_uris = track_df.track_uri.values
track_df
```
In this next cell, we will go through each track and generate its audio features (saved as a dataframe). Each track is identified by its unique `track_uri`.
`sp.audio_features()` takes a list of track IDs (maximum of 100). We will loop through the list of track URIs in batches of 100 to minimize the number of API requests.
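The slicing pattern used for batching can be seen in isolation (pure Python, no API needed):

```python
# Pure-Python illustration of the batching used below:
# step through the list in strides of batch_size and slice out each chunk
track_uris = ['uri:{}'.format(i) for i in range(7)]
batch_size = 3
batches = [track_uris[i:i + batch_size] for i in range(0, len(track_uris), batch_size)]
print([len(b) for b in batches])  # [3, 3, 1]
```

The final batch is simply shorter when the list length is not a multiple of `batch_size`; slicing past the end of a list is safe in Python.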
```
batch_size = 100
# This list of columns is taken directly from the keys of a feature dictionary
features_cols = ['danceability',
'energy',
'key',
'loudness',
'mode',
'speechiness',
'acousticness',
'instrumentalness',
'liveness',
'valence',
'tempo',
'type',
'id',
'uri',
'track_href',
'analysis_url',
'duration_ms',
'time_signature']
features_df = pd.DataFrame(columns = features_cols)
for i in tqdm(range(0, len(track_uris), batch_size)):
# Select the current batch
track_uris_batch = track_uris[i:i+batch_size]
features_result, success = repeatAPICall(sp.audio_features,{'tracks':track_uris_batch})
if not success:
print("Skipping to next batch.")
continue
# Deepcopy the list of dictionaries before modifying it
# so the original API response is left untouched
features_dicts = deepcopy(features_result)
# Drop None in features_dict
# This will mean that some of our songs will not have features
if any(d is None for d in features_dicts):
print("Batch: {} to {} | Some songs do not have features; dropping from list.".format(i+1, i+batch_size))
print("Count: {}".format(len(features_dicts)))
features_dicts = [d for d in features_dicts if d is not None]
print("New count: {}".format(len(features_dicts)))
temp_df = pd.DataFrame.from_records(features_dicts)
features_df = pd.concat([features_df.reset_index(drop=True), temp_df.reset_index(drop=True)])
temp_df_count = len(temp_df.index)
if temp_df_count != batch_size:
print("Batch: {} to {} | Dataframe rows count: {}".format(i+1, i+batch_size, temp_df_count))
# Reset index and rename 'uri' to 'track_uri'
# Drop duplicates based on track_uri
features_df = features_df.rename(columns={'uri':'track_uri'}).drop_duplicates(subset=['track_uri'])
features_df
```
Finally, we left join the features to the `track_df` using `track_uri`
```
# Merge features to track_df by track_uri
# Note: some rows will not have features. We keep them for now to retain the track info
track_features_df = track_df.merge(features_df, on='track_uri', how='left').reset_index(drop = True)
track_features_df
# Save tracks with features to file
# Save all tracks to file
tracks_features_dir = 'Data/tracks_top10_features.csv'
track_features_df.to_csv(tracks_features_dir, index = False)
```
## Done!
After running this notebook, you should now have the following updated files in your Data Folder:
1. playlists.csv
2. artists.csv
3. artists_extended.csv
4. tracks_top10.csv
5. tracks_top10_features.csv
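A quick sanity check (a minimal sketch; adjust the directory if yours differs) to confirm the output files exist:

```python
import os

# The five files this notebook is expected to have written to the Data folder
expected = ['playlists.csv', 'artists.csv', 'artists_extended.csv',
            'tracks_top10.csv', 'tracks_top10_features.csv']
for name in expected:
    path = os.path.join('Data', name)
    print('{:<28} exists: {}'.format(name, os.path.exists(path)))
```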
The following code cells will try to load all albums by an artist. This will be more computationally expensive than the previous segment, where we only got an artist's top 10 tracks.
```
# Load the existing artist data
artists_dir = 'Data/artists.csv'
artists_dir = 'Data/artists_extended.csv' # Comment this line out to use the base artists dataset instead
artist_df = pd.read_csv(artists_dir)
artists = artist_df.artist.values
artist_uris = artist_df.artist_uri.values
# Loop through artists to build a list of albums
# Create dataframe to store album data
artist_cols = ['artist', 'artist_uri']
album_cols = ['album','album_uri','release_date','release_date_precision','total_tracks']
album_df = pd.DataFrame(columns = artist_cols + album_cols)
for artist,uri in tqdm(zip(artists, artist_uris), total = len(artist_uris)):
func_params = {
'artist_id':uri,
'country':'PH',
'album_type':['album','single','compilation']
}
albums_result, success = repeatAPICall(sp.artist_albums, func_params)
if not success:
print("Skipping to next artist.")
continue
albums_list = albums_result['items']
while albums_result['next']:
# print("Going to next 50 albums.")
albums_result, success = repeatAPICall(sp.next,{'result':albums_result})
if not success:
print("Could not fetch the next page; keeping albums gathered so far.")
break
albums_list.extend(albums_result['items'])
# Skip the artist if there are any errors
try:
album_uris = [album['uri'] for album in albums_list]
albums = [album['name'] for album in albums_list]
release_dates = [album['release_date'] for album in albums_list]
release_date_precisions = [album['release_date_precision'] for album in albums_list]
totals = [album['total_tracks'] for album in albums_list]
except (KeyError, TypeError):
print("Error in artist {}".format(artist))
continue
temp_df = pd.DataFrame(zip(albums,album_uris, release_dates, release_date_precisions, totals),columns = album_cols)
# Set the artist and artist columns
temp_df['artist'] = artist
temp_df['artist_uri'] = uri
album_df = pd.concat([album_df.reset_index(drop=True), temp_df.reset_index(drop=True)]).drop_duplicates()
album_df = album_df.reset_index(drop = True)
album_df
album_df.sample(50)
```
```
import numpy as np
import json
from keras.models import Model
from keras.layers import Input
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.normalization import BatchNormalization
from keras import backend as K
def format_decimal(arr, places=8):
return [round(x * 10**places) / 10**places for x in arr]
```
### pipeline 6
```
data_in_shape = (24, 24, 2)
conv_0 = Conv2D(5, 3, 3, activation='relu', border_mode='valid', subsample=(2, 2), dim_ordering='tf', bias=True)
bn_0 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
conv_1 = Conv2D(4, 3, 3, activation='relu', border_mode='same', subsample=(1, 1), dim_ordering='tf', bias=True)
bn_1 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
conv_2 = Conv2D(3, 3, 3, activation='relu', border_mode='same', subsample=(1, 1), dim_ordering='tf', bias=True)
bn_2 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
pool_0 = MaxPooling2D(pool_size=(2, 2), strides=None, border_mode='valid', dim_ordering='tf')
conv_3 = Conv2D(4, 3, 3, activation='linear', border_mode='valid', subsample=(1, 1), dim_ordering='tf', bias=True)
bn_3 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
conv_4 = Conv2D(2, 3, 3, activation='relu', border_mode='same', subsample=(1, 1), dim_ordering='tf', bias=True)
bn_4 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
pool_1 = MaxPooling2D(pool_size=(2, 2), strides=None, border_mode='valid', dim_ordering='tf')
input_layer = Input(shape=data_in_shape)
x = conv_0(input_layer)
x = bn_0(x)
x = conv_1(x)
x = bn_1(x)
x = conv_2(x)
x = bn_2(x)
x = pool_0(x)
x = conv_3(x)
x = bn_3(x)
x = conv_4(x)
x = bn_4(x)
output_layer = pool_1(x)
model = Model(input=input_layer, output=output_layer)
np.random.seed(7000)
data_in = 2 * np.random.random(data_in_shape) - 1
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(7000 + i)
if i % 6 == 5:
# std should be positive
weights.append(0.5 * np.random.random(w.shape))
else:
weights.append(np.random.random(w.shape) - 0.5)
model.set_weights(weights)
result = model.predict(np.array([data_in]))
print({
'input': {'data': format_decimal(data_in.ravel().tolist()), 'shape': list(data_in_shape)},
'weights': [{'data': format_decimal(weights[i].ravel().tolist()), 'shape': list(weights[i].shape)} for i in range(len(weights))],
'expected': {'data': format_decimal(result[0].ravel().tolist()), 'shape': list(result[0].shape)}
})
```
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import time
np.set_printoptions(precision=4, linewidth=200)
%load_ext autoreload
%autoreload 2
%matplotlib inline
print(tf.__version__)
from utils.reader import ptb_raw_data
from utils.batcher import ptb_batcher
from utils.conditional_scope import cond_name_scope, cond_variable_scope
from utils.unrolled_rnn import make_rnn_variables
from utils.unrolled_rnn import make_rnn_outputs
from utils.unrolled_rnn import make_summary_nodes
from utils.unrolled_rnn import make_placeholders
from utils.unrolled_rnn import make_train_op
from utils.batcher import generate_epoch
X_train, X_val, X_test, vocab_size = ptb_raw_data('bigdata/simple-examples/data')
EMBEDDING_SIZE=64
HIDDEN_SIZE=256
BATCH_SIZE=32
NUM_STEPS=16
NUM_EPOCHS_INIT_LR=3
NUM_EPOCHS_TOTAL=8
INITIAL_LR=5e0
LR_DECAY_RATE=0.75
MAX_NORM=0.5
tf.reset_default_graph()
placeholders = make_placeholders(
batch_size=BATCH_SIZE,
num_steps=NUM_STEPS
)
rnn_vars = make_rnn_variables(
vocab_size=vocab_size,
embedding_size=EMBEDDING_SIZE,
hidden_size=HIDDEN_SIZE,
initializer_scale=0.2
)
rnn_outputs = make_rnn_outputs(
input_sequence=placeholders['inputs'],
vocab_size=vocab_size,
hidden_size=HIDDEN_SIZE,
batch_size=BATCH_SIZE,
num_steps=NUM_STEPS,
rnn_variables=rnn_vars
)
summary_nodes = make_summary_nodes(
targets=placeholders['targets'],
logits=rnn_outputs['logits'],
)
training_nodes = make_train_op(
summary_nodes['loss'],
placeholders['learning_rate'],
placeholders['max_norm'],
)
training_outputs = {**summary_nodes, **training_nodes}
with tf.Session() as sess:
# Bookkeeping
run_id = time.time()
writer = tf.summary.FileWriter('logs/{0}'.format(run_id), sess.graph)
coord = tf.train.Coordinator()
sess.run(tf.global_variables_initializer())
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
learning_rate = INITIAL_LR
max_norm = MAX_NORM
for i in range(NUM_EPOCHS_TOTAL):
if i >= NUM_EPOCHS_INIT_LR:
learning_rate *= LR_DECAY_RATE
for batch_idx, (inputs, targets) in enumerate(generate_epoch(X_train, BATCH_SIZE, NUM_STEPS)):
outputs = sess.run(
training_outputs,
feed_dict={
placeholders['inputs']: inputs,
placeholders['targets']: targets,
placeholders['learning_rate']: learning_rate,
placeholders['max_norm']: max_norm,
}
)
if (batch_idx % 64 == 63):
print('step: {0} loss: {1} gradient norm: {2} correct words: {3}'.format(
batch_idx+1,
outputs['loss'],
outputs['gradient_global_norm'],
outputs['num_correct_predictions'],
))
total_loss, total_batches = 0, 0
for inputs, targets in generate_epoch(X_val, BATCH_SIZE, NUM_STEPS):
outputs = sess.run(
summary_nodes,
feed_dict={
placeholders['inputs']: inputs,
placeholders['targets']: targets
},
)
total_loss += outputs['loss']
total_batches += 1
print('validation perplexity:', np.exp(total_loss / total_batches))
total_loss, total_batches = 0, 0
for inputs, targets in generate_epoch(X_test, BATCH_SIZE, NUM_STEPS):
outputs = sess.run(
summary_nodes,
feed_dict={
placeholders['inputs']: inputs,
placeholders['targets']: targets
},
)
total_loss += outputs['loss']
total_batches += 1
print('test perplexity:', np.exp(total_loss / total_batches))
# Bookkeeping
writer.close()
coord.request_stop()
coord.join(threads)
```
<table width="100%"> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="35%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by Abuzer Yakaryilmaz (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>) and Utku Birkan
<br>
updated by Özlem Salehi | September 20, 2019
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
<h2> Initializing a qubit with an arbitrary state </h2>
Qiskit circuits have a built-in method called `initialize`, which allows starting from a specified state instead of having all qubits start as 0.
Note that the state should be a valid quantum state, and the length of the vector should be $2^n$ for $n$ qubits. If not, an exception is raised.
Let's create a quantum circuit with two qubits and initialize it by setting equal probabilities to each outcome.
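Concretely, a state over $n$ qubits must supply $2^n$ amplitudes whose squared magnitudes sum to 1. A quick check of the state used below (plain Python, no Qiskit needed):

```python
import math

# The equal-superposition state used in the next cell: four equal amplitudes
n = 2
init_state = [1/2, 1/2, 1/2, 1/2]

# A valid n-qubit state has 2**n amplitudes...
assert len(init_state) == 2 ** n
# ...whose squared magnitudes sum to 1 (the normalization condition)
norm = sum(abs(a) ** 2 for a in init_state)
print(math.isclose(norm, 1.0))  # True
```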
```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from math import pi
# we define a quantum circuit with two qubits and two bits
qreg1 = QuantumRegister(2) # quantum register with two qubits
creg1 = ClassicalRegister(2) # classical register with two bits
mycircuit1 = QuantumCircuit(qreg1,creg1) # quantum circuit with quantum and classical registers
# initialization
init_state=[1/2,1/2,1/2,1/2]
mycircuit1.initialize(init_state,qreg1)
# measure the qubits
mycircuit1.measure(qreg1,creg1)
# draw the circuit
mycircuit1.draw()
# execute the program 1000 times
job = execute(mycircuit1,Aer.get_backend('qasm_simulator'),shots=1000)
# print the results
counts = job.result().get_counts(mycircuit1)
print(counts) # counts is a dictionary
```
<h3>Task 1</h3>
Create a quantum circuit with a single qubit. Use the function you have written for creating random quantum states in the notebook <a href="../B28_Quantum_State.ipynb">Quantum States</a> and initialize your qubit to a random state. Use the statevector simulator to check if you are successful.
```
%load randqstate.py
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
#
# Your code here
#
```
<a href="B32_Initializing_a_Qubit_Solutions.ipynb#task1">click for our solution</a>
### Task 2
Create a quantum circuit with a single qubit. Choose a random angle $\theta$ and use the function you have written for creating random quantum states in the notebook <a href="../B30_Visualization_of_a_Qubit.ipynb">Visualization of a Qubit</a>. Use statevector simulator to check if you are successful.
```
%load randqstate2.py
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from math import pi
#
# Your code here
#
```
<a href="B32_Initializing_a_Qubit_Solutions.ipynb#task2">click for our solution</a>
```
import xarray as xr
import pandas as pd
import numpy as np
from os.path import join, basename
%matplotlib inline
import matplotlib as mpl
# mpl.rcParams.keys()
import matplotlib.pyplot as plt
import seaborn as sns
rc = {'savefig.bbox': 'tight', 'savefig.format': 'png', 'savefig.dpi':300}
context = 'paper'# 'talk'
sns.set(context=context, style='whitegrid', font_scale=0.75 if context == 'talk' else 1., rc=rc)
sns.set(context=context, style='whitegrid', font_scale=1.3 if context == 'paper' else 1., rc=rc)
from string import ascii_uppercase as letters
# import gis plotting packages
from gistools import pandas2geopandas
import gistools.plot_tools as gplt
import cartopy.crs as ccrs
from plot_tools import *
ddir = r'/scratch/compound_hotspots/data/4-postprocessed'
grdc_dir = r'/scratch/grdc'
fig_dir = r'/scratch/compound_hotspots/reports/figures'
fn_grdc = join(grdc_dir, r'grdc_discharge_1980-2014_v20180912.csv')
# naming
models_rename = {
"anu": "W3RA (ANU)",
"nerc": "Jules (NERC)",
"cnrs": "Orchid. (CNRS)",
"ecmwf": "HTESS. (ECMWF)",
"jrc": "LISFL. (JRC)",
# "univk": "W.Gap3 (UNIVK)",
# "univu": "PCR-WB (UNIVU)",
"mean": "ensemble mean"
}
model_seq = [v for k, v in models_rename.items()]
```
## validation
### select grdc data
```
obs_name = 'grdc'
fn_pm = join(ddir, r'cmf_v362_e2o_validation_grdc_pm_am.nc')
pm_am = xr.open_dataset(fn_pm)
pm_am_med = pm_am.mean('ensemble').expand_dims('ensemble')
pm_am_med['ensemble'] = xr.Variable('ensemble', ['mean'])
pm_am = xr.concat([pm_am, pm_am_med], 'ensemble')
fn_pm = join(ddir, r'cmf_v362_e2o_validation_grdc_pm.nc')
pm = xr.open_dataset(fn_pm)
pm_med = pm.mean('ensemble').expand_dims('ensemble')
pm_med['ensemble'] = xr.Variable('ensemble', ['mean'])
pm = xr.concat([pm, pm_med], 'ensemble')
# load meta data
df_meta = pd.read_csv(fn_grdc, index_col=0).reindex(pm['grdc_id'])
# select natural most downstream stations
postfix='nat'
df_meta = df_meta[np.logical_and.reduce((
df_meta['nathum_human'] == 0,
df_meta['ds_stat_no'] >= 0
))]
pm = pm.sel(grdc_id=df_meta.index)
pm_am = pm_am.sel(grdc_id=df_meta.index)
pm_am = pm_am.where(pm_am['doy_uniform_p']<0.05, drop=True)
print(pm.grdc_id.size, pm_am['grdc_id'].size)
model='mean'
obs_name='grdc'
# max_count1, max_count2 = 70, 35
max_count1, max_count2 = 260, 120
pm_sel = pm.sel(ensemble=model)
pm_am_sel = pm_am.sel(ensemble=model)
n1, n2 = pm_sel.grdc_id.size, pm_am_sel.grdc_id.size
n1, n2
```
## analysis
```
snap_df = pd.read_csv(join(grdc_dir, r'20170124_GRDC_Stations_snap_2dist1e+04_1upa5.0e-01.csv'), index_col=0)
snap_df = snap_df.reindex(df_meta.index)
pm['kge'].to_series().unstack(0).rename(columns=models_rename)[model_seq].describe().loc[['25%', '50%', '75%'], :]
pm['kge_bias'].to_series().unstack(0).rename(columns=models_rename)[model_seq].describe().loc[['25%', '50%', '75%'], :]
(1-pm['kge_bias']).to_series().apply(np.abs).unstack(0).rename(columns=models_rename)[model_seq].describe().loc[['25%', '50%', '75%'], :]
pm['kge_pearson_coef'].to_series().unstack(0).rename(columns=models_rename)[model_seq].describe().loc[['25%', '50%', '75%'], :]
pm['lag'].to_series().unstack(0).rename(columns=models_rename)[model_seq].describe().loc[['25%', '50%', '75%'], :]
pm_am['am_bias'].to_series().unstack(0).rename(columns=models_rename)[model_seq].describe().loc[['25%', '50%', '75%'], :]
pm_am['am_rank_corr'].to_series().unstack(0).rename(columns=models_rename)[model_seq].describe().loc[['25%', '50%', '75%'], :]
pm_am['am_doy_diff'].to_series().unstack(0).apply(np.abs).rename(columns=models_rename)[model_seq].describe().loc[['25%', '50%', '75%'], :]
```
### Figure validation 1 - multi model ensemble boxplots
```
box_kwargs=dict(whis=[5,95], boxprops=dict(linewidth=1.), medianprops=dict(linewidth=1.5),
showfliers=False, flierprops=dict(markersize=2))
fig, ((ax1, ax3, ax4), (ax11, ax12, ax13)) = plt.subplots(2,3, figsize=(15, 10), sharey=True,
gridspec_kw=dict(wspace=0.15, hspace=0.3))
data = pm['kge_bias'].to_series().unstack(0).rename(columns=models_rename)[model_seq]
sns.boxplot(data=data, ax=ax1, orient="h", **box_kwargs)
ax1.set_xlim(-0.1, 3.1)
ax1.set_xlabel('bias [-]')
ax1.set_title(f'{letters[0]}. Bias')
ax1.set_ylabel('models - daily', fontsize=14)
data = pm['kge_pearson_coef'].to_series().unstack(0).rename(columns=models_rename)[model_seq]
sns.boxplot(data=data, ax=ax3, orient="h", **box_kwargs)
ax3.set_xlim(-0.05, 1.0)
ax3.set_xlabel('pearson rho [-]')
ax3.set_ylabel('')
ax3.set_title(f'{letters[1]}. Correlation')
data = pm['lag'].to_series().unstack(0).rename(columns=models_rename)[model_seq]
sns.boxplot(data=data, ax=ax4, orient="h", **box_kwargs)
ax4.set_xlim(-10, 10)
ax4.set_xlabel('lag [days]')
ax4.set_ylabel('')
ax4.set_title(f'{letters[2]}. Time lag (cross correlation)')
data = pm_am['am_bias'].to_series().unstack(0).rename(columns=models_rename)[model_seq]
sns.boxplot(data=data, ax=ax11, orient="h", **box_kwargs)
ax11.set_xlim(-0.1, 3.1)
ax11.set_xlabel('bias [-]')
ax11.set_title(f'{letters[3]}. AM bias')
ax11.set_ylabel('models - annual maxima', fontsize=14)
data = pm_am['am_rank_corr'].to_series().unstack(0).rename(columns=models_rename)[model_seq]
sns.boxplot(data=data, ax=ax12, orient="h", **box_kwargs)
ax12.set_xlim(-0.1, 1.0)
ax12.set_xlabel('spearman rho [-]')
ax12.set_ylabel('')
ax12.set_title(f'{letters[4]}. AM rank correlation')
data = pm_am['am_doy_diff'].to_series().unstack().T.rename(columns=models_rename)[model_seq]
sns.boxplot(data=data, ax=ax13, orient="h", **box_kwargs)
ax13.set_xlim(-60, 60)
ax13.set_xlabel('lag [days]')
ax13.set_ylabel('')
ax13.set_title(f'{letters[5]}. AM Time lag (mean flood day)')
fn = join(fig_dir, '{}_{}_validation_{}').format(context, obs_name, postfix)
plt.savefig(fn)
```
## figure 2 - map
```
import cartopy.crs as ccrs
import cartopy.feature as cfeature
cl = cfeature.COLORS['land_alt1']
crs = ccrs.PlateCarree()
cmap = plt.cm.viridis_r
vmin, vmax, n = 0, 1, 11
cticks = np.linspace(vmin, vmax, n)
#
column = 'kge'
if obs_name == 'grdc':
model='mean'
var = pm[column].sel(ensemble=model).to_series().sort_values()
else:
var = pm_sel[column].to_series().sort_values()
gdf = pandas2geopandas(df_meta)#.to_crs(crs.proj4_init)
gdf = gdf.reindex(var.index)
gdf[column] = var
fig = plt.figure(figsize=(15, 10))
axg = fig.add_subplot(projection=crs)
basemap(axg, bbox=(-180, -60, 180, 90), gridlines=False, outline=False,)
plot_choropleth(
fig, axg, gdf, column=column,
cmap=cmap, cticks=cticks, vmin=vmin, vmax=vmax, discrete=False,
cbar_kwargs=dict(label=f'{models_rename[model]} {column.upper()} [-]', location='right'),
cbar_pos = dict(pad=0.02, fraction=0.01, shrink=0.6),
plot_kwargs=dict(markersize=30, edgecolor=(0.5, 0.5, 0.5, 0.5), linewidth=0.5, zorder=2,
# label='selected {} gauges (n = {:d})'.format(obs_name, len(gdf))
)
)
# gdf.plot(ax=ax, zorder=3, markersize=10, color='green',
# label='selected {} gauges (n = {:d})'.format(obs_name, len(dfg)), )
# ax.legend(loc='lower center')
xlim, ylim = axg.get_xlim(), axg.get_ylim()
print(xlim, ylim)
axg.set_xlim([-max(xlim), max(xlim)])
axg.set_ylim([-max(ylim), max(ylim)])
fn = join(fig_dir, f'{context}_{obs_name}_validation_{column}_{model}_{postfix}')
print(basename(fn))
plt.savefig(fn)
```
# SHAP
[SHAP](https://github.com/slundberg/shap)'s goal is to explain machine learning output using a game theoretic approach. A primary use of SHAP is to understand, visually and quantitatively, how variables and their values influence predictions. The API of SHAP is built around `explainers`. These explainers are appropriate only for certain types or classes of algorithms. For example, you should use the `TreeExplainer` for tree-based models. Below, we take a look at three of these explainers.
Note that SHAP is part of the movement to promote explainable artificial intelligence (AI). There are other APIs available that do similar things to SHAP.
- [LIME](https://github.com/marcotcr/lime)
- [Alibi](https://docs.seldon.io/projects/alibi/en/stable/index.html)
- [ELI5](https://eli5.readthedocs.io/)
A great book on explainable AI or interpretable machine learning is [available online](https://christophm.github.io/interpretable-ml-book/).
## Linear explainer
The `LinearExplainer` is used to understand the outputs of linear predictors (e.g. linear regression). We will generate some data and use the `LinearRegression` model to learn the parameters from the data.
```
%matplotlib inline
import numpy as np
import pandas as pd
from patsy import dmatrices
from numpy.random import normal
import matplotlib.pyplot as plt
np.random.seed(37)
n = 100
x_0 = normal(10, 1, n)
x_1 = normal(5, 2.5, n)
x_2 = normal(20, 1, n)
y = 3.2 + (2.7 * x_0) - (4.8 * x_1) + (1.3 * x_2) + normal(0, 1, n)
df = pd.DataFrame(np.hstack([
x_0.reshape(-1, 1),
x_1.reshape(-1, 1),
x_2.reshape(-1, 1),
y.reshape(-1, 1)]), columns=['x0', 'x1', 'x2', 'y'])
y, X = dmatrices('y ~ x0 + x1 + x2 - 1', df, return_type='dataframe')
print(f'X shape = {X.shape}, y shape {y.shape}')
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X, y)
```
Before you can use SHAP's interactive plots, you must initialize its JavaScript visualization library.
```
import shap
shap.initjs()
```
Here, we create the `LinearExplainer`. We have to pass in the dataset `X`.
```
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)
```
A force plot can be used to explain each individual data point's prediction. Below, we look at the force plots of the first, second and third observations (indexed 0, 1, 2).
- First observation prediction explanation: the values of x1 and x2 are pushing the prediction value downward.
- Second observation prediction explanation: the x0 value is pushing the prediction value higher, while x1 and x2 are pushing the value lower.
- Third observation prediction explanation: the x0 and x1 values are pushing the prediction value lower and the x2 value is slightly nudging the value lower.
```
shap.force_plot(explainer.expected_value, shap_values[0,:], X.iloc[0,:])
shap.force_plot(explainer.expected_value, shap_values[1,:], X.iloc[1,:])
shap.force_plot(explainer.expected_value, shap_values[2,:], X.iloc[2,:])
```
The force plot can also be used to visualize the explanations over all observations at once.
```
shap.force_plot(explainer.expected_value, shap_values, X)
```
The summary plot is a way to understand variable importance.
```
shap.summary_plot(shap_values, X)
```
Just for comparison, the variable importances visualized above coincide with the coefficients of the linear regression model.
```
s = pd.Series(model.coef_[0], index=X.columns)
s
```
## Tree explainer
The `TreeExplainer` is appropriate for algorithms using trees. Here, we generate data for a classification problem and use `RandomForestClassifier` as the model that we want to explain.
```
from scipy.stats import binom
def make_classification(n=100):
X = np.hstack([
np.array([1 for _ in range(n)]).reshape(n, 1),
normal(0.0, 1.0, n).reshape(n, 1),
normal(0.0, 1.0, n).reshape(n, 1)
])
z = np.dot(X, np.array([1.0, 2.0, 3.0])) + normal(0.0, 1.0, n)
p = 1.0 / (1.0 + np.exp(-z))
y = binom.rvs(1, p)
df = pd.DataFrame(np.hstack([X, y.reshape(-1, 1)]), columns=['intercept', 'x0', 'x1', 'y'])
return df
df = make_classification()
y, X = dmatrices('y ~ x0 + x1 - 1', df, return_type='dataframe')
print(f'X shape = {X.shape}, y shape {y.shape}')
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=100, random_state=37)
model.fit(X, y.values.reshape(1, -1)[0])
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap_interaction_values = explainer.shap_interaction_values(X)
```
Here are the force plots for three observations.
```
shap.force_plot(explainer.expected_value[1], shap_values[1][0,:], X.iloc[0,:])
shap.force_plot(explainer.expected_value[1], shap_values[1][1,:], X.iloc[1,:])
shap.force_plot(explainer.expected_value[1], shap_values[1][95,:], X.iloc[95,:])
```
Here is the force plot for all observations.
```
shap.force_plot(explainer.expected_value[1], shap_values[1], X)
```
Below is the summary plot.
```
shap.summary_plot(shap_values[1], X)
```
Below are [dependence plots](https://christophm.github.io/interpretable-ml-book/pdp.html).
```
shap.dependence_plot('x0', shap_values[1], X)
shap.dependence_plot('x1', shap_values[1], X)
shap.dependence_plot(('x0', 'x0'), shap_interaction_values[1], X)
shap.dependence_plot(('x0', 'x1'), shap_interaction_values[1], X)
shap.dependence_plot(('x1', 'x1'), shap_interaction_values[1], X)
```
Lastly, the summary plot.
```
shap.summary_plot(shap_interaction_values[1], X)
```
## Kernel explainer
The `KernelExplainer` is the general purpose explainer. Here, we use it to explain the `LogisticRegression` model. Notice the `link` parameter, which can be `identity` or `logit`. This argument specifies the model link to connect the feature importance values to the model output.
```
from sklearn.linear_model import LogisticRegression
df = make_classification(n=10000)
X = df[['x0', 'x1']]
y = df.y
model = LogisticRegression(fit_intercept=True, solver='saga', random_state=37)
model.fit(X, y.values.reshape(1, -1)[0])
df = make_classification()
X = df[['x0', 'x1']]
y = df.y
```
Observe that we pass the probabilistic prediction function to the `KernelExplainer`.
```
explainer = shap.KernelExplainer(model.predict_proba, link='logit', data=X)
shap_values = explainer.shap_values(X)
```
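With the `logit` link, the explanation is additive in log-odds rather than probability space: a base log-odds value plus per-feature contributions, mapped back to a probability by the sigmoid. A minimal numpy sketch of the link itself, independent of SHAP (`base_value` and `contributions` here are illustrative numbers, not SHAP API objects):

```python
import numpy as np

def logit(p):
    # map a probability to log-odds
    return np.log(p / (1.0 - p))

def sigmoid(z):
    # inverse of the logit: map log-odds back to a probability
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical additive decomposition in log-odds space
base_value = logit(0.5)              # 0.0 for a balanced baseline
contributions = np.array([1.2, -0.4])
p = sigmoid(base_value + contributions.sum())
print(p)
```

This is why the force plots below pass `link='logit'`: contributions are summed in log-odds space and only converted to a probability at the end.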
Again, example force plots on a few observations.
```
shap.force_plot(explainer.expected_value[1], shap_values[1][0,:], X.iloc[0,:], link='logit')
shap.force_plot(explainer.expected_value[1], shap_values[1][1,:], X.iloc[1,:], link='logit')
shap.force_plot(explainer.expected_value[1], shap_values[1][99,:], X.iloc[99,:], link='logit')
```
The force plot over all observations.
```
shap.force_plot(explainer.expected_value[1], shap_values[1], X, link='logit')
```
Lastly, the summary plot.
```
shap.summary_plot(shap_values[1], X)
```
```
import numpy as np
import random
from math import *
import time
import copy
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR, MultiStepLR
torch.cuda.set_device(1)
torch.set_default_tensor_type('torch.DoubleTensor')
# activation function
def activation(x):
return x * torch.sigmoid(x)
# build ResNet with three blocks
class Net(torch.nn.Module):
def __init__(self,input_width,layer_width):
super(Net,self).__init__()
self.layer_in = torch.nn.Linear(input_width, layer_width)
self.layer1 = torch.nn.Linear(layer_width, layer_width)
self.layer2 = torch.nn.Linear(layer_width, layer_width)
self.layer3 = torch.nn.Linear(layer_width, layer_width)
self.layer4 = torch.nn.Linear(layer_width, layer_width)
self.layer5 = torch.nn.Linear(layer_width, layer_width)
self.layer6 = torch.nn.Linear(layer_width, layer_width)
self.layer_out = torch.nn.Linear(layer_width, 1)
def forward(self,x):
y = self.layer_in(x)
y = y + activation(self.layer2(activation(self.layer1(y)))) # residual block 1
y = y + activation(self.layer4(activation(self.layer3(y)))) # residual block 2
y = y + activation(self.layer6(activation(self.layer5(y)))) # residual block 3
output = self.layer_out(y)
return output
dimension = 3
input_width,layer_width = dimension, 8
net = Net(input_width,layer_width).cuda() # network for u on gpu
# definition of the exact solution
def u_ex(x):
x_norm = torch.norm(x, dim = 1) # compute norm of x; dim=1 -> norm over each row
u_x = torch.sin(pi/2*(1-x_norm)).reshape([x.size()[0],1])
return u_x
# definition of f(x)
def f(x):
x_norm = torch.norm(x, dim = 1)
f_temp = 1/4*pi**2*torch.sin(pi/2*(1-x_norm)) + 1/2*pi*(dimension-1)/x_norm*torch.cos(pi/2*(1-x_norm))
return f_temp.reshape([x.size()[0],1])
# generate points by random
def generate_sample(data_size):
i = 0
x = torch.tensor([])
while x.size()[0] < data_size:
temp_x = 2*torch.rand(data_size, dimension) - 1
index = torch.where(torch.norm(temp_x, dim = 1) < 1)
temp_x = temp_x[index]
x = torch.cat((x,temp_x),0)
x = x[:data_size, :] # keep exactly data_size sampled points
return x
def model(x):
x_norm = torch.norm(x, p = 2, dim = 1)
return (1-x_norm**2).reshape([x.size()[0], 1]) * net(x)
# loss function to DGM by auto differential
def loss_function(x):
# x = generate_sample(data_size).cuda()
# x.requires_grad = True
u_hat = model(x)
grad_u_hat = torch.autograd.grad(outputs = u_hat, inputs = x, grad_outputs = torch.ones(u_hat.shape).cuda(), create_graph = True)
laplace_u = torch.zeros([len(grad_u_hat[0]), 1]).cuda()
for index in range(dimension):
p_temp = grad_u_hat[0][:, index].reshape([len(grad_u_hat[0]), 1])
temp = torch.autograd.grad(outputs = p_temp, inputs = x, grad_outputs = torch.ones(p_temp.shape).cuda(), create_graph = True, allow_unused = True)[0]
laplace_u = temp[:, index].reshape([len(grad_u_hat[0]), 1]) + laplace_u
part_2 = torch.sum((-laplace_u - f(x))**2) / len(x)
return part_2
data_size = 1000
x = generate_sample(data_size).cuda()
x.requires_grad = True
def relative_l2_error():
data_size_temp = 500
x = generate_sample(data_size_temp).cuda()
predict = model(x)
exact = u_ex(x)
value = torch.sqrt(torch.sum((predict - exact)**2))/torch.sqrt(torch.sum((exact)**2))
return value
optimizer = optim.Adam(net.parameters())
epoch = 5000
data_size = 1000
loss_record = np.zeros(epoch)
error_record = np.zeros(epoch)
time_start = time.time()
for i in range(epoch):
optimizer.zero_grad()
x = generate_sample(data_size).cuda()
x.requires_grad = True
loss = loss_function(x)
loss_record[i] = float(loss)
error = relative_l2_error()
error_record[i] = float(error)
np.save("DGM_loss_3d.npy", loss_record)
np.save("DGM_error_3d.npy", error_record)
if i % 50 == 0:
print("current epoch is: ", i)
print("current loss is: ", loss.detach())
print("current error is: ", error.detach())
if i == epoch - 1:
# save model parameters
torch.save(net.state_dict(), 'net_params_DGM.pkl')
loss.backward()
optimizer.step()
torch.cuda.empty_cache() # clear memory
time_end = time.time()
print('total time is: ', time_end-time_start, 'seconds')
```
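As a sanity check on the radial formulas above (independent of the training code), a central-difference approximation of the Laplacian confirms that the exact solution satisfies -Δu = f. A numpy sketch restating the same expressions:

```python
import numpy as np

pi = np.pi
d = 3  # spatial dimension, matching `dimension` above

def u_exact(x):
    r = np.linalg.norm(x)
    return np.sin(pi / 2 * (1 - r))

def f_exact(x):
    r = np.linalg.norm(x)
    return (pi**2 / 4) * np.sin(pi / 2 * (1 - r)) \
         + (pi / 2) * (d - 1) / r * np.cos(pi / 2 * (1 - r))

def laplacian_fd(func, x, h=1e-3):
    # central second differences, summed over coordinates
    lap = 0.0
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        lap += (func(x + e) - 2.0 * func(x) + func(x - e)) / h**2
    return lap

x0 = np.array([0.3, 0.2, 0.1])  # an interior point of the unit ball
residual = -laplacian_fd(u_exact, x0) - f_exact(x0)
print(abs(residual))  # small: -laplace(u_exact) matches f_exact
```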
# Paired programming activity
For this activity, work with your partner to write code to answer these questions. If you're stuck, put up a pink sticky. Feel free to do the questions out of order if you want.
For each question, you can try different values for the input to see if your code is right!
**Question 1:** Given a string, count and report how many times each letter appears. Example string: "computing is too important to be left to men" (credit: Karen Sparck Jones).
*Hint:* If you loop over a string, it goes through each character.
*Challenge:* How short can you make your code and have it still work?
```
# define string (remember you can change this if you want to test out different strings!)
string = "computing is too important to be left to men"
# write code here
# advanced answer
from collections import Counter
print(Counter(''.join(filter(str.isalpha, string))))
# alternate answer
letter_counts = dict()
for letter in string:
if letter not in letter_counts:
letter_counts[letter] = 1
else:
letter_counts[letter] += 1
letter_counts.pop(' ')
print(letter_counts)
```
**Question 2:** Given two numbers, find the largest of them. Try at least 2 different sets of numbers: one where x is bigger and one where y is bigger.
*Hint:* Use if-then-else. (python has a max function built in, but write your own)
```
# find the largest of two numbers (also try one where x is bigger)
# define numbers
x = 1
y = 2
# write code here
if x > y:
print(x)
else:
print(y)
```
**Question 3:** Given three numbers, find the largest of them. Try at least 3 sets of numbers: where x is biggest, where y is biggest, and where z is biggest.
```
# find the largest of three numbers
# define numbers
x = 1
y = 2
z = 3
# write code here
if x > y and x > z:
print(x)
elif y > z:
print(y)
else:
print(z)
```
**Question 4:** Given a list or a string, find the length of it. (Again, python has the `len` function, but write your own code to do this.) Test your code at least twice: using a string, and using a list.
```
# find the length of a string or a list
# define string and list
string = 'mystring'
mylist = [0,1,2,3,4,5]
# write code here
c = 0
for i in string:
c += 1
print(c)
```
**Question 5:** Given a character (i.e. a string of length 1), print `True` if it is a vowel, and `False` otherwise. Test your code out with something that is a vowel and something that is not a vowel. Remember that you can have capital and lowercase vowels!
```
# find whether a character is a vowel
# define character
x = 'A'
# write code here
vowels = ['A','a','E','e','I','i','O','o','U','u']
if x in vowels:
print(True)
else:
print(False)
```
**Question 6a:** Write code that sums all the numbers in a list of numbers. For example, the sum of `[1, 2, 3, 4]` should be 10. (Again, these exist in python but write your own code to do this.)
```
# find the sum of a list of numbers
# define list of numbers
numbers = [1,2,3,4]
# write code here
s = 0
for i in numbers:
s += i
print(s)
```
**Question 6b:** Write code that multiplies all the numbers in a list of numbers. For example, the product of `[1,2,3,4]` should be 24. (Again, these exist in python but write your own code to do this.)
```
# find the product of a list of numbers
# define list of numbers
numbers = [1,2,3,4]
# write code here
p = 1
for i in numbers:
p *= i
print(p)
```
**Question 7:** Find the reversal of a string. So if you give “this is too much” it will print out “hcum oot si siht”.
```
# find the reversal of a string
# define string
string = 'this is too much'
# write code here
rev = ''
for i in range(len(string)):
rev += string[len(string)-i-1]
print(rev)
```
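For comparison, once you have written the loop version yourself: Python's extended slice syntax performs the same reversal in a single expression.

```python
string = 'this is too much'
print(string[::-1])  # hcum oot si siht
```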
**Question 8:** Write code that recognizes palindromes. A palindrome is a word or phrase that is the same forward and backward when you remove any spaces. If the string is a palindrome, print `True`. If it's not a palindrome, print `False`. You can test your code with “a man a plan a canal panama” or “radar”.
```
# write code that prints True if the string is a palindrome
# define string
x = 'a man a plan a canal panama'
# write code here
rev = ''
for i in range(len(x)):
rev += x[len(x)-i-1]
rev = rev.replace(' ','')
x = x.replace(' ','')
if x == rev:
print(True)
else:
print(False)
```
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/2_transfer_learning_roadmap/5_exploring_model_families/4_resnet/1.3)%20Intro%20to%20resnet50-v1%20network%20-%20mxnet%20backend.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Goals
### Train a classifier using resnet50 on natural-images dataset
### Understand what lies inside resnet50 network
# What is resnet
## Readings on resnet
1) Points from https://towardsdatascience.com/an-overview-of-resnet-and-its-variants-5281e2f56035
- The core idea of ResNet is introducing a so-called “identity shortcut connection” that skips one or more layers
- The deeper model should not produce a training error higher than its shallower counterparts.
- solves the problem of vanishing gradients as network depth increases - https://medium.com/@anishsingh20/the-vanishing-gradient-problem-48ae7f501257
2) Points from https://medium.com/@14prakash/understanding-and-implementing-architectures-of-resnet-and-resnext-for-state-of-the-art-image-cf51669e1624
- Won 1st place in the ILSVRC 2015 classification competition with top-5 error rate of 3.57% (An ensemble model)
- Efficiently trained networks with 100 layers and 1000 layers also.
- Replacing the VGG-16 layers in Faster R-CNN with ResNet-101 gave a relative improvement of 28%
3) Read more here
- https://arxiv.org/abs/1512.03385
- https://d2l.ai/chapter_convolutional-modern/resnet.html
- https://cv-tricks.com/keras/understand-implement-resnets/
- https://mc.ai/resnet-architecture-explained/
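The “identity shortcut connection” from the readings above can be sketched in a few lines of plain numpy: the block output is the input plus a learned transformation, so the addition gives gradients a direct path back through the block (the names and weight shapes here are illustrative, not from any framework):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # learned branch: two linear maps with a nonlinearity in between
    f_x = relu(x @ w1) @ w2
    # identity shortcut: add the untouched input back in, then activate
    return relu(x + f_x)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))              # batch of 4, feature width 8
w1 = rng.normal(scale=0.1, size=(8, 8))
w2 = rng.normal(scale=0.1, size=(8, 8))
out = residual_block(x, w1, w2)
print(out.shape)  # same shape as the input, so blocks can be stacked
```

Because input and output shapes match, dozens of such blocks can be chained, which is what lets ResNets grow to 50, 101, or more layers.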
# Table of Contents
## [Install](#0)
## [Load experiment with resnet base architecture](#1)
## [Visualize resnet](#2)
## [Train the classifier](#3)
## [Run inference on trained classifier](#4)
<a id='0'></a>
# Install Monk
## Using pip (Recommended)
- colab (gpu)
- All backends: `pip install -U monk-colab`
- kaggle (gpu)
- All backends: `pip install -U monk-kaggle`
- cuda 10.2
- All backends: `pip install -U monk-cuda102`
- Gluon backend: `pip install -U monk-gluon-cuda102`
- Pytorch backend: `pip install -U monk-pytorch-cuda102`
- Keras backend: `pip install -U monk-keras-cuda102`
- cuda 10.1
- All backends: `pip install -U monk-cuda101`
- Gluon backend: `pip install -U monk-gluon-cuda101`
- Pytorch backend: `pip install -U monk-pytorch-cuda101`
- Keras backend: `pip install -U monk-keras-cuda101`
- cuda 10.0
- All backends: `pip install -U monk-cuda100`
- Gluon backend: `pip install -U monk-gluon-cuda100`
- Pytorch backend: `pip install -U monk-pytorch-cuda100`
- Keras backend: `pip install -U monk-keras-cuda100`
- cuda 9.2
- All backends: `pip install -U monk-cuda92`
- Gluon backend: `pip install -U monk-gluon-cuda92`
- Pytorch backend: `pip install -U monk-pytorch-cuda92`
- Keras backend: `pip install -U monk-keras-cuda92`
- cuda 9.0
- All backends: `pip install -U monk-cuda90`
- Gluon backend: `pip install -U monk-gluon-cuda90`
- Pytorch backend: `pip install -U monk-pytorch-cuda90`
- Keras backend: `pip install -U monk-keras-cuda90`
- cpu
- All backends: `pip install -U monk-cpu`
- Gluon backend: `pip install -U monk-gluon-cpu`
- Pytorch backend: `pip install -U monk-pytorch-cpu`
- Keras backend: `pip install -U monk-keras-cpu`
## Install Monk Manually (Not recommended)
### Step 1: Clone the library
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
### Step 2: Install requirements
- Linux
- Cuda 9.0
- `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt`
- Cuda 9.2
- `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt`
- Cuda 10.0
- `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt`
- Cuda 10.1
- `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt`
- Cuda 10.2
- `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt`
- CPU (Non gpu system)
- `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt`
- Windows
- Cuda 9.0 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt`
- Cuda 9.2 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt`
- Cuda 10.0 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt`
- Cuda 10.1 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt`
- Cuda 10.2 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt`
- CPU (Non gpu system)
- `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt`
- Mac
- CPU (Non gpu system)
- `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt`
- Misc
- Colab (GPU)
- `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt`
- Kaggle (GPU)
- `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt`
### Step 3: Add to system path (Required for every terminal or kernel run)
- `import sys`
- `sys.path.append("monk_v1/");`
## Dataset - Natural Images Classification
- https://www.kaggle.com/prasunroy/natural-images
```
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1sbQ_KaEDd7kRrTvna-4odLqxM2G0QT0Z' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1sbQ_KaEDd7kRrTvna-4odLqxM2G0QT0Z" -O natural-images.zip && rm -rf /tmp/cookies.txt
! unzip -qq natural-images.zip
```
# Imports
```
#Using mxnet-gluon backend
# When installed using pip
from monk.gluon_prototype import prototype
# When installed manually (Uncomment the following)
#import os
#import sys
#sys.path.append("monk_v1/");
#sys.path.append("monk_v1/monk/");
#from monk.gluon_prototype import prototype
```
<a id='1'></a>
# Load experiment with resnet base architecture
## Creating and managing experiments
- Provide project name
- Provide experiment name
- For a specific data create a single project
- Inside each project multiple experiments can be created
- Every experiment can have different hyper-parameters attached to it
```
gtf = prototype(verbose=1);
gtf.Prototype("Project", "resnet-intro");
```
### This creates files and directories as per the following structure
workspace
|
|--------Project
|
|
|-----resnet-intro
|
|-----experiment-state.json
|
|-----output
|
|------logs (All training logs and graphs saved here)
|
|------models (all trained models saved here)
## Set dataset and select the model
## Quick mode training
- Using Default Function
- dataset_path
- model_name
- freeze_base_network
- num_epochs
## Sample Dataset folder structure
natural-images
|
|-----train
|------aeroplane
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
|------car
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
|------.... (and so on)
|
|
|-----val
|------aeroplane
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
|------car
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
|------.... (and so on)
```
gtf.Default(dataset_path="natural-images/train",
model_name="resnet50_v1",
freeze_base_network=False,
num_epochs=5);
```
## From the summary above
- Model Params
Model name: resnet50_v1
Num of potentially trainable layers: 107
Num of actual trainable layers: 107
<a id='2'></a>
# Visualize resnet
```
gtf.Visualize_With_Netron(data_shape=(3, 224, 224), port=8082);
```
## resnet block - 1
- Creating networks and blocks from scratch using monk will be dealt with in a different roadmap series
```
from IPython.display import Image
Image(filename='imgs/resnet50_v1_bottleneck_block1_mxnet.png')
```
## Properties
- This block has 2 branches
- The first branch is the identity branch: it passes the input straight through as its output (the residual connection)
- The second branch has these layers
- conv_1x1 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu -> conv_1x1 -> batchnorm
- The branches are added elementwise, so both branches need to have the same output size
- The final layer of this block is relu
### Unlike resnet18-v1's block 1, this block has a bottleneck
- The bottleneck
- The number of features in the first and middle convolutions is input_features/4
- The final convolution has features = input_features
- E.g. (feat-in -> feat-out):
- conv_1x1 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu -> conv_1x1 -> batchnorm
- 256 -> 64 -> 64 -> 256
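One way to see why the bottleneck matters is to count weights. With 256 input/output features squeezed through 64 channels, the 1x1 -> 3x3 -> 1x1 stack is far cheaper than a single full-width 3x3 convolution. A quick arithmetic sketch (ignoring biases and batchnorm parameters):

```python
feat = 256
mid = feat // 4  # bottleneck width: input_features / 4

# conv_1x1 (256->64) + conv_3x3 (64->64) + conv_1x1 (64->256)
bottleneck_params = feat * mid * 1 * 1 + mid * mid * 3 * 3 + mid * feat * 1 * 1

# a single full-width conv_3x3 (256->256) for comparison
plain_params = feat * feat * 3 * 3

print(bottleneck_params, plain_params)  # 69632 589824
```

So the bottleneck stack uses roughly an eighth of the weights of one full-width 3x3 convolution, which is what makes 50+ layer networks affordable.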
## resnet block - 2
- Creating networks and blocks from scratch using monk will be dealt with in a different roadmap series
```
from IPython.display import Image
Image(filename='imgs/resnet50_v1_bottleneck_block2_mxnet.png')
```
## Properties
- This block has 2 branches
- First branch has these layers
- conv_1x1 -> batchnorm
- Second branch has these layers
- conv_1x1 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu -> conv_1x1 -> batchnorm
- The branches are added elementwise, so both branches need to have the same output size
- The final layer of this block is relu
### Unlike resnet18-v1's block 1, this block has a bottleneck
- The bottleneck
- The number of features in the first and middle convolutions is input_features/4
- The final convolution has features = input_features
- E.g. (feat-in -> feat-out):
- conv_1x1 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu -> conv_1x1 -> batchnorm
- 256 -> 64 -> 64 -> 256
## resnet fully connected chain
```
from IPython.display import Image
Image(filename='imgs/resnet50_v1_block_fc_mxnet.png')
```
## resnet Network
- Creating networks and blocks from scratch using monk will be dealt with in a different roadmap series
```
from IPython.display import Image
Image(filename='imgs/resnet50_v1_mxnet.png')
```
## Properties
- This network
- has 12 type-1 blocks
- has 4 type-2 blocks
- post these blocks the type-3 (fc) block exists
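A quick check of where the “50” in the name comes from, under the usual accounting (the block counts above: 16 bottleneck blocks of 3 convolutions each, plus the initial convolution and the final fully connected layer):

```python
type1_blocks = 12    # identity-shortcut bottleneck blocks
type2_blocks = 4     # downsampling bottleneck blocks
convs_per_block = 3  # conv_1x1 -> conv_3x3 -> conv_1x1
stem_and_fc = 2      # initial 7x7 convolution + final fc layer

total_layers = (type1_blocks + type2_blocks) * convs_per_block + stem_and_fc
print(total_layers)  # 50
```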
<a id='3'></a>
# Train the classifier
```
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
```
<a id='4'></a>
# Run inference on trained classifier
```
gtf = prototype(verbose=1);
gtf.Prototype("Project", "resnet-intro", eval_infer=True);
output = gtf.Infer(img_name = "natural-images/test/test1.jpg");
from IPython.display import Image
Image(filename='natural-images/test/test1.jpg')
output = gtf.Infer(img_name = "natural-images/test/test2.jpg");
from IPython.display import Image
Image(filename='natural-images/test/test2.jpg')
output = gtf.Infer(img_name = "natural-images/test/test3.jpg");
from IPython.display import Image
Image(filename='natural-images/test/test3.jpg')
```
# Goals Completed
### Train a classifier using resnet50 on natural-images dataset
### Understand what lies inside resnet50 network
# Minimum inter-class distances for different norms and different datasets
```
import os
os.chdir("../")
import sys
import json
import math
import numpy as np
import pickle
from PIL import Image
from sklearn import metrics
from sklearn.metrics import pairwise_distances as dist
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(context='paper')
import provable_robustness_max_linear_regions.data as dt
from utils import NumpyEncoder, normalize_per_feature_0_1, har, tinyimagenet
```
## Plot settings:
```
SMALL_SIZE = 14
MEDIUM_SIZE = 18
BIGGER_SIZE = 26
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=MEDIUM_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rc('text', usetex=True)
# dictionary that maps color string to 'good looking' seaborn colors that are easily distinguishable
colors = {
"orange": sns.xkcd_rgb["yellowish orange"],
"red": sns.xkcd_rgb["pale red"],
"green": sns.xkcd_rgb["medium green"],
"blue": sns.xkcd_rgb["denim blue"],
"yellow": sns.xkcd_rgb["amber"],
"purple": sns.xkcd_rgb["dusty purple"],
"cyan": sns.xkcd_rgb["cyan"]
}
```
## Calculate distances:
Estimated runtime (if no file with data is present): 3 days
Note: The dataset HAR is not included in this repository because of storage issues. You can download the dataset from https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones. After downloading, create a folder 'har' in the root folder of the repository and extract the dataset into the folder.
Note: The dataset TINY-IMAGENET-200 is not included in this repository because of storage issues. You can download the dataset from https://tiny-imagenet.herokuapp.com/. After downloading, create a folder 'tiny-imagenet-200' in the root folder of the repository and extract the dataset into the folder.
Without downloading the two datasets, the following code will not be executable.
```
# Assumed imports; dt.get_dataset, har, tinyimagenet, dist and NumpyEncoder
# are project-local helpers (dist behaves like scipy.spatial.distance.cdist).
import os
import json
import numpy as np

def load_from_json(file_name):
    if not os.path.exists("res/" + file_name + ".json"):
        return None
    else:
        with open("res/" + file_name + ".json", 'r') as fp:
            return json.load(fp)

def save_to_json(dictionary, file_name):
    if not os.path.exists("res"):
        os.makedirs("res")
    with open("res/" + file_name + ".json", 'w') as fp:
        json.dump(dictionary, fp, cls=NumpyEncoder)

dataset_to_n_points = {"mnist": 10000, "fmnist": 10000, "cifar10": 10000, "gts": 10000, "tinyimagenet": 98179, "har": 2947}

minimum_distances = dict()
for dataset in ["mnist", "fmnist", "gts", "har", "tinyimagenet", "cifar10"]:
    minimum_distances[dataset] = load_from_json("min_distances_dataset={}_n_points={}".format(dataset, dataset_to_n_points[dataset]))
    if not minimum_distances[dataset]:
        if dataset in ["mnist", "fmnist"]:
            _, x_test, _, y_test = dt.get_dataset(dataset)
            sample_inputs = x_test[:dataset_to_n_points[dataset]]
            sample_labels = y_test[:dataset_to_n_points[dataset]]
            sample_inputs = sample_inputs.reshape(sample_inputs.shape[0], 784)
        elif dataset in ["gts", "cifar10"]:
            _, x_test, _, y_test = dt.get_dataset(dataset)
            sample_inputs = x_test[:dataset_to_n_points[dataset]]
            sample_labels = y_test[:dataset_to_n_points[dataset]]
            sample_inputs = sample_inputs.reshape(sample_inputs.shape[0], 3072)
        elif dataset == "har":
            _, _, x_test, y_test, _ = har()
            sample_inputs = x_test[:dataset_to_n_points[dataset]]
            sample_labels = y_test[:dataset_to_n_points[dataset]]
        elif dataset == "tinyimagenet":
            x_train, y_train = tinyimagenet()
            sample_inputs = x_train[:dataset_to_n_points[dataset]]
            sample_labels = y_train[:dataset_to_n_points[dataset]]
            sample_inputs = sample_inputs.reshape(sample_inputs.shape[0], 12288)
        minimum_distances[dataset] = {"inner": {"inf": [], "2": [], "1": []}, "outer": {"inf": [], "2": [], "1": []}}
        scipy_norm_to_key = {"chebyshev": "inf", "l2": "2", "l1": "1"}
        for norm in ['chebyshev', 'l2', 'l1']:
            pairwise_distances = dist(sample_inputs, sample_inputs, norm)
            np.fill_diagonal(pairwise_distances, np.inf)
            for i, sample_input in enumerate(sample_inputs):
                row = pairwise_distances[i]
                label = sample_labels[i].argmax()
                # same-class distances; other classes masked with inf
                inner_class_row = [x if sample_labels[j].argmax() == label else np.inf for j, x in enumerate(row)]
                minimum_distances[dataset]["inner"][scipy_norm_to_key[norm]].append(np.min(inner_class_row))
            minimum_distances[dataset]["inner"][scipy_norm_to_key[norm]] = np.sort(minimum_distances[dataset]["inner"][scipy_norm_to_key[norm]])
            for i, sample_input in enumerate(sample_inputs):
                row = pairwise_distances[i]
                label = sample_labels[i].argmax()
                # other-class distances; the same class masked with inf
                outer_class_row = [x if sample_labels[j].argmax() != label else np.inf for j, x in enumerate(row)]
                minimum_distances[dataset]["outer"][scipy_norm_to_key[norm]].append(np.min(outer_class_row))
            minimum_distances[dataset]["outer"][scipy_norm_to_key[norm]] = np.sort(minimum_distances[dataset]["outer"][scipy_norm_to_key[norm]])
        save_to_json(minimum_distances[dataset], "min_distances_dataset={}_n_points={}".format(dataset, dataset_to_n_points[dataset]))
```
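The loop above boils down to: for each point, find the smallest pairwise distance to a point of the same label ("inner") and to a point of a different label ("outer"). A self-contained toy version, assuming the project's `dist` helper behaves like `scipy.spatial.distance.cdist`, replaces the per-row list comprehensions with a vectorized mask:

```python
import numpy as np
from scipy.spatial.distance import cdist

points = np.array([[0.0], [1.0], [10.0], [11.0]])
labels = np.array([0, 0, 1, 1])

d = cdist(points, points, "chebyshev")
np.fill_diagonal(d, np.inf)  # a point is not its own neighbor

same = labels[:, None] == labels[None, :]
inner = np.where(same, d, np.inf).min(axis=1)   # nearest same-class point
outer = np.where(~same, d, np.inf).min(axis=1)  # nearest other-class point
# inner -> [1, 1, 1, 1], outer -> [10, 9, 9, 10]
```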
## Plot:
```
# name under which to save the plot
save_name = "fig_distances_all_datasets_all_norms"

# number of model types and parameter combinations
n_cols = 3
n_rows = 1

fig, ax = plt.subplots(n_rows, n_cols, figsize=(6 * n_cols, 5 * n_rows), sharey=True, dpi=400)

norm_to_latex = {"inf": r"\ell_\infty", "2": r"\ell_2", "1": r"\ell_1"}
color_map = {"mnist": colors["blue"], "cifar10": colors["cyan"], "fmnist": colors["yellow"], "gts": colors["green"], "tinyimagenet": colors["red"], "har": colors["purple"]}
label_map = {"mnist": "MNIST", "cifar10": "CIFAR10", "fmnist": "FMNIST", "gts": "GTS", "tinyimagenet": "TINY-IMAGENET", "har": "HAR"}

for dataset in ["mnist", "fmnist", "gts", "tinyimagenet", "har", "cifar10"]:
    for i, norm in enumerate(["inf", "2", "1"]):
        # filter near-duplicate points that carry different labels (distance ~ 0)
        minimum_distances[dataset]["outer"][norm] = [value for value in minimum_distances[dataset]["outer"][norm] if value >= 0.0001]
        ax[i].plot(minimum_distances[dataset]["outer"][norm], np.linspace(0.0, 1.0, len(minimum_distances[dataset]["outer"][norm])), c=color_map[dataset], label=label_map[dataset])
        ax[i].set_xlabel("distance in ${}$".format(norm_to_latex[norm]))
        ax[i].legend()

ax[0].set_ylabel("percentage of points")
ax[0].set_title(r"$\ell_\infty$ distances")
ax[1].set_title(r"$\ell_2$ distances")
ax[2].set_title(r"$\ell_1$ distances")
ax[0].set_xlim(left=0.0)
ax[1].set_xlim(left=0.0)
ax[2].set_xlim(left=0.0)

fig.tight_layout()
fig.savefig('res/{}.pdf'.format(save_name))
```
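The plot cell above relies on a standard trick: plotting sorted values against an evenly spaced grid on [0, 1] yields the empirical CDF of the distances. A minimal illustration:

```python
import numpy as np

values = np.array([3.0, 1.0, 2.0])
x = np.sort(values)                      # distances on the x-axis
y = np.linspace(0.0, 1.0, len(values))  # fraction of points on the y-axis
# (x, y) pairs trace the empirical CDF: y[i] is the fraction of
# values less than or equal to x[i]
```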
<table class="ee-notebook-buttons" align="left">
<td><a target="_parent" href="https://github.com/giswqs/geemap/tree/master/tutorials/Image/02_image_visualization.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_parent" href="https://nbviewer.jupyter.org/github/giswqs/geemap/blob/master/tutorials/Image/02_image_visualization.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_parent" href="https://colab.research.google.com/github/giswqs/geemap/blob/master/tutorials/Image/02_image_visualization.ipynb"><img width=26px src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
# Image Visualization
This notebook was adapted from the [Earth Engine JavaScript API Documentation](https://developers.google.com/earth-engine/image_visualization).
The [Get Started](https://developers.google.com/earth-engine/getstarted#adding-data-to-the-map) page illustrates how to visualize an image using `Map.addLayer()`. If you add a layer to the map without any additional parameters, by default the Code Editor assigns the first three bands to red, green and blue, respectively. The default stretch is based on the type of data in the band (e.g. floats are stretched in `[0,1]`, 16-bit data are stretched to the full range of possible values), which may or may not be suitable. To achieve desirable visualization effects, you can provide visualization parameters to `Map.addLayer()`.

## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except ImportError:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee

try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Add layers to map
```
Map = emap.Map()
# Load an image.
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318')
# Center the map and display the image.
Map.setCenter(-122.1899, 37.5010, 10) # San Francisco Bay
Map.addLayer(image, {}, 'default color composite')
Map.addLayerControl()
Map
```
## RGB composites
The following illustrates the use of parameters to style a Landsat 8 image as a false-color composite:
```
# Create a default map
Map = emap.Map()
# Load an image.
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318')
# Define the visualization parameters.
vizParams = {
'bands': ['B5', 'B4', 'B3'],
'min': 0,
'max': 0.5,
'gamma': [0.95, 1.1, 1]
}
# Center the map and display the image.
Map.setCenter(-122.1899, 37.5010, 10) # San Francisco Bay
Map.addLayer(image, vizParams, 'false color composite')
# Display the map
Map.addLayerControl()
Map
```
## Color palettes
To display a single band of an image in color, set the `palette` parameter with a color ramp represented by a list of CSS-style color strings. (See this [reference](http://en.wikipedia.org/wiki/Web_colors) for more information.) The following example illustrates how to use colors from cyan (`00FFFF`) to blue (`0000FF`) to render a [Normalized Difference Water Index (NDWI)](http://www.tandfonline.com/doi/abs/10.1080/01431169608948714) image.
In this example, note that the `min` and `max` parameters indicate the range of pixel values to which the palette should be applied. Intermediate values are linearly stretched. Also note that the fourth argument to `Map.addLayer()` (called `shown` in geemap) controls the layer's initial visibility; when it is `False`, the layer is added with its visibility off, and it can always be turned on again using the Layer Manager in the upper right corner of the map. The result should look something like below.
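The linear stretch between `min` and `max` can be sketched in plain NumPy. This is a conceptual sketch of the rule described above, not the Earth Engine implementation: values are clipped to `[min, max]` and rescaled to `[0, 1]` before being mapped onto the palette.

```python
import numpy as np

def linear_stretch(values, vmin, vmax):
    """Clip to [vmin, vmax] and rescale linearly to [0, 1]."""
    v = np.clip(values, vmin, vmax)
    return (v - vmin) / (vmax - vmin)

# with min=0.5, max=1.0: 0.4 clips to 0, 0.75 lands halfway, 1.2 clips to 1
t = linear_stretch(np.array([0.4, 0.5, 0.75, 1.2]), 0.5, 1.0)
```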
```
# Create a default map
Map = emap.Map()
#Load an image.
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318')
#Create an NDWI image, define visualization parameters and display.
ndwi = image.normalizedDifference(['B3', 'B5'])
ndwiViz = {'min': 0.5, 'max': 1, 'palette': ['00FFFF', '0000FF']}
Map.setCenter(-122.1899, 37.5010, 10) # San Francisco Bay
Map.addLayer(ndwi, ndwiViz, 'NDWI', True)
# Display the map
Map.addLayerControl()
Map
```
## Masking
You can use `image.updateMask()` to set the opacity of individual pixels based on where pixels in a mask image are non-zero. Pixels equal to zero in the mask are excluded from computations and the opacity is set to 0 for display. The following example uses an NDWI threshold (see the [Relational Operations section](https://developers.google.com/earth-engine/image_relational.html) for information on thresholds) to update the mask on the NDWI layer created previously:
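In array terms, updating a mask amounts to keeping pixels where the mask is non-zero and dropping the rest. The following is a conceptual NumPy analogy of the rule above, not the Earth Engine internals:

```python
import numpy as np

ndwi = np.array([0.1, 0.45, 0.6, 0.8])
mask = ndwi >= 0.4                       # analogous to ndwi.gte(0.4)
visible = np.where(mask, ndwi, np.nan)   # masked pixels drop out of display
# only the three values >= 0.4 survive; the first becomes NaN
```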
```
# Create a default map
Map = emap.Map()
#Load an image.
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318')
#Create an NDWI image, define visualization parameters and display.
ndwi = image.normalizedDifference(['B3', 'B5'])
ndwiViz = {'min': 0.5, 'max': 1, 'palette': ['00FFFF', '0000FF']}
Map.setCenter(-122.1899, 37.5010, 10) # San Francisco Bay
Map.addLayer(ndwi, ndwiViz, 'NDWI', False)
# Mask the non-watery parts of the image, where NDWI < 0.4.
ndwiMasked = ndwi.updateMask(ndwi.gte(0.4))
Map.addLayer(ndwiMasked, ndwiViz, 'NDWI masked')
# Display the map
Map.addLayerControl()
Map
```
## Visualization images
Use the `image.visualize()` method to convert an image into an 8-bit RGB image for display or export. For example, to convert the false-color composite and NDWI to 3-band display images, use:
```
# Create a default map
Map = emap.Map()
#Load an image.
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318')
#Create an NDWI image, define visualization parameters and display.
ndwi = image.normalizedDifference(['B3', 'B5'])
ndwiViz = {'min': 0.5, 'max': 1, 'palette': ['00FFFF', '0000FF']}
Map.setCenter(-122.1899, 37.5010, 10) # San Francisco Bay
Map.addLayer(ndwi, ndwiViz, 'NDWI', False)
# Mask the non-watery parts of the image, where NDWI < 0.4.
ndwiMasked = ndwi.updateMask(ndwi.gte(0.4))
Map.addLayer(ndwiMasked, ndwiViz, 'NDWI masked', False)
# Create visualization layers.
imageRGB = image.visualize(**{'bands': ['B5', 'B4', 'B3'], 'max': 0.5})
ndwiRGB = ndwiMasked.visualize(**{
'min': 0.5,
'max': 1,
'palette': ['00FFFF', '0000FF']
})
Map.addLayer(imageRGB, {}, 'imageRGB')
Map.addLayer(ndwiRGB, {}, 'ndwiRGB')
# Display the map
Map.addLayerControl()
Map
```
## Mosaicking
You can use masking and `imageCollection.mosaic()` (see the [Mosaicking section](https://developers.google.com/earth-engine/ic_composite_mosaic.html) for information on mosaicking) to achieve various cartographic effects. The `mosaic()` method renders layers in the output image according to their order in the input collection. The following example uses `mosaic()` to combine the masked NDWI and the false color composite and obtain a new visualization.
In this example, observe that a list of the two visualization images is provided to the ImageCollection constructor. The order of the list determines the order in which the images are rendered on the map. The result should look something like below.
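The stacking rule — later images in the collection are drawn on top wherever they are unmasked — can be sketched with two arrays. This is a conceptual NumPy analogy, not the Earth Engine implementation:

```python
import numpy as np

base = np.array([1.0, 1.0, 1.0, 1.0])        # first image: fully unmasked
top = np.array([np.nan, 2.0, 2.0, np.nan])   # second image: NaN where masked
mosaic = np.where(np.isnan(top), base, top)  # top wins wherever it has data
# -> base shows through only where top is masked
```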
```
# Create a default map
Map = emap.Map()
#Load an image.
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318')
#Create an NDWI image, define visualization parameters and display.
ndwi = image.normalizedDifference(['B3', 'B5'])
ndwiViz = {'min': 0.5, 'max': 1, 'palette': ['00FFFF', '0000FF']}
Map.setCenter(-122.1899, 37.5010, 10) # San Francisco Bay
Map.addLayer(ndwi, ndwiViz, 'NDWI', False)
# Mask the non-watery parts of the image, where NDWI < 0.4.
ndwiMasked = ndwi.updateMask(ndwi.gte(0.4))
Map.addLayer(ndwiMasked, ndwiViz, 'NDWI masked', False)
# Create visualization layers.
imageRGB = image.visualize(**{'bands': ['B5', 'B4', 'B3'], 'max': 0.5})
ndwiRGB = ndwiMasked.visualize(**{
'min': 0.5,
'max': 1,
'palette': ['00FFFF', '0000FF']
})
Map.addLayer(imageRGB, {}, 'imageRGB', False)
Map.addLayer(ndwiRGB, {}, 'ndwiRGB', False)
# Mosaic the visualization layers and display (or export).
mosaic = ee.ImageCollection([imageRGB, ndwiRGB]).mosaic()
Map.addLayer(mosaic, {}, 'mosaic')
# Display the map
Map.addLayerControl()
Map
```
## Clipping
The `image.clip()` method is useful for achieving cartographic effects. The following example clips the mosaic shown above to an arbitrary buffer zone around the city of San Francisco.
Note that the coordinates are provided to the `Geometry` constructor and the buffer length is specified as 20,000 meters. Learn more about geometries on the [Geometries page](https://developers.google.com/earth-engine/geometries). The result, shown with the map in the background, should look something like below.
```
# Create a default map
Map = emap.Map()
#Load an image.
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318')
#Create an NDWI image, define visualization parameters and display.
ndwi = image.normalizedDifference(['B3', 'B5'])
ndwiViz = {'min': 0.5, 'max': 1, 'palette': ['00FFFF', '0000FF']}
Map.setCenter(-122.4344, 37.7599, 10) # San Francisco Bay
Map.addLayer(ndwi, ndwiViz, 'NDWI', False)
# Mask the non-watery parts of the image, where NDWI < 0.4.
ndwiMasked = ndwi.updateMask(ndwi.gte(0.4))
Map.addLayer(ndwiMasked, ndwiViz, 'NDWI masked', False)
# Create visualization layers.
imageRGB = image.visualize(**{'bands': ['B5', 'B4', 'B3'], 'max': 0.5})
ndwiRGB = ndwiMasked.visualize(**{
'min': 0.5,
'max': 1,
'palette': ['00FFFF', '0000FF']
})
Map.addLayer(imageRGB, {}, 'imageRGB', False)
Map.addLayer(ndwiRGB, {}, 'ndwiRGB', False)
# Mosaic the visualization layers and display (or export).
mosaic = ee.ImageCollection([imageRGB, ndwiRGB]).mosaic()
Map.addLayer(mosaic, {}, 'mosaic', False)
# Create a circle by drawing a 20000 meter buffer around a point.
roi = ee.Geometry.Point([-122.4481, 37.7599]).buffer(20000)
clipped = mosaic.clip(roi)
# Display a clipped version of the mosaic.
Map.addLayer(clipped, {}, 'Clipped image')
# Display the map
Map.addLayerControl()
Map
```
## Rendering categorical maps
Palettes are also useful for rendering discrete valued maps, for example a land cover map. In the case of multiple classes, use the palette to supply a different color for each class. (The `image.remap()` method may be useful in this context, to convert arbitrary labels to consecutive integers). The following example uses a palette to render land cover categories:
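The `image.remap()` step mentioned above — turning arbitrary class labels into consecutive integers so a palette can index them — looks like this in plain Python (a sketch of the idea, not the Earth Engine API):

```python
labels = [11, 42, 11, 7]                 # arbitrary class codes
# build a lookup table mapping each distinct code to 0, 1, 2, ...
lut = {v: i for i, v in enumerate(sorted(set(labels)))}
remapped = [lut[v] for v in labels]      # consecutive palette indices
```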
```
# Create a default map
Map = emap.Map()
#Load 2012 MODIS land cover and select the IGBP classification.
cover = ee.Image('MODIS/051/MCD12Q1/2012_01_01') \
.select('Land_Cover_Type_1')
#Define a palette for the 18 distinct land cover classes.
igbpPalette = [
'aec3d4', #water
'152106', '225129', '369b47', '30eb5b', '387242', #forest
'6a2325', 'c3aa69', 'b76031', 'd9903d', '91af40', #shrub, grass
'111149', #wetlands
'cdb33b', #croplands
'cc0013', #urban
'33280d', #crop mosaic
'd7cdcc', #snow and ice
'f7e084', #barren
'6f6f6f' #tundra
]
#Specify the min and max labels and the color palette matching the labels.
Map.setCenter(-99.229, 40.413, 5)
Map.addLayer(cover, {'min': 0, 'max': 17, 'palette': igbpPalette}, 'IGBP classification')
Map.addLayerControl()
Map
```
## Thumbnail images
Use the `ee.Image.getThumbURL()` method to generate a PNG or JPEG thumbnail image for an `ee.Image` object. Printing the outcome of an expression ending with a call to `getThumbURL()` results in a URL being printed to the console. Visiting the URL sets Earth Engine servers to work on generating the requested thumbnail on-the-fly. The image is displayed in the browser when processing completes. It can be downloaded by selecting appropriate options from the image’s right-click context menu.
The `getThumbURL()` method shares parameters with `Map.addLayer()`, described in the [visualization parameters table](https://developers.google.com/earth-engine/image_visualization#mapVisParamTable) above. Additionally, it takes optional `dimension`, `region`, and `crs` arguments that control the spatial extent, size, and display projection of the thumbnail.

A single-band image will default to grayscale unless a `palette` argument is supplied. A multi-band image will default to RGB visualization of the first three bands, unless a `bands` argument is supplied. If only two bands are provided, the first band will map to red, the second to blue, and the green channel will be zero filled.
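The default channel-assignment rule just described can be sketched as a small helper (a conceptual sketch, not the Earth Engine implementation; `to_rgb` is our own name):

```python
import numpy as np

def to_rgb(bands):
    """Assemble display channels from a list of 1-3 band arrays."""
    if len(bands) >= 3:
        return bands[0], bands[1], bands[2]   # first three bands -> R, G, B
    if len(bands) == 2:
        zero = np.zeros_like(bands[0])
        return bands[0], zero, bands[1]       # red, zero-filled green, blue
    return bands[0], bands[0], bands[0]       # single band -> grayscale
```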
The following are a series of examples demonstrating various combinations of `getThumbURL()` parameter arguments. Visit the URLs printed to the console when you run this script to view the thumbnails.
```
# Fetch a digital elevation model.
image = ee.Image('CGIAR/SRTM90_V4')
# Request a default thumbnail of the DEM with defined linear stretch.
# Set masked pixels (ocean) to 1000 so they map as gray.
thumbnail1 = image.unmask(1000).getThumbURL({
'min': 0,
'max': 3000,
'dimensions': 500
})
print('Default extent and size:', thumbnail1)
# Specify region by GeoJSON, define palette, set size of the larger aspect dimension.
thumbnail2 = image.getThumbURL({
'min': 0,
'max': 3000,
'palette': ['00A600','63C600','E6E600','E9BD3A','ECB176','EFC2B3','F2F2F2'],
'dimensions': 500,
'region': ee.Geometry.Rectangle([-84.6, -55.9, -32.9, 15.7]),
})
print('GeoJSON region, palette, and max dimension:', thumbnail2)
```
# Exploring OpenEEW data
## Import openeew package
```
from openeew.data.aws import AwsDataClient
from openeew.data.df import get_df_from_records
```
## Import other packages
```
import folium
from datetime import datetime
import plotnine as pn
import pandas as pd
from geopy.distance import distance
# Allow nested asyncio event loop
# See https://github.com/erdewit/nest_asyncio
import nest_asyncio
nest_asyncio.apply()
```
## Get past earthquake date and location
```
# Check SSN website:
# http://www2.ssn.unam.mx:8080/sismos-fuertes/
eq = {
'latitude': 16.218,
'longitude': -98.0135,
'date_utc': '2018-02-16 23:39:39'
}
```
## View epicenter on map
```
m = folium.Map(
location=[eq['latitude'], eq['longitude']],
zoom_start=7
)
folium.Circle(
radius=10000,
location=[eq['latitude'], eq['longitude']],
color='crimson',
fill='crimson',
).add_to(m)
m
```
## Initialize OpenEEW data client
```
data_client = AwsDataClient('mx')
```
## Get devices as of earthquake date
```
devices = data_client.get_devices_as_of_date(eq['date_utc'])
devices
for d in devices:
    folium.Marker(
        [d['latitude'], d['longitude']],
        popup=folium.Popup(
            d['device_id'],
            sticky=True
        )
    ).add_to(m)
m
```
## Get records for date range
```
# For generality we could calculate
# these dates based on the eq date
start_date_utc = '2018-02-16 23:39:00'
end_date_utc = '2018-02-16 23:43:00'
records_df = get_df_from_records(
data_client.get_filtered_records(
start_date_utc,
end_date_utc
)
)
# Get UTC date from Unix time sample_t for plotting
records_df['sample_dt'] = \
records_df['sample_t'].apply(lambda x: datetime.utcfromtimestamp(x))
# Select required columns
records_df = records_df[
[
'device_id',
'x',
'y',
'z',
'sample_dt'
]
]
records_df.head()
```
## Plot records for single device
```
def plot_seismograms(device_id):
    # Get earthquake date as datetime.datetime object
    eq_dt = AwsDataClient._get_dt_from_str(eq['date_utc'])
    plots = []
    for axis in ['x', 'y', 'z']:
        plots.append(
            pn.ggplot(
                records_df[records_df['device_id'] == device_id],
                pn.aes('sample_dt', axis)
            ) +
            pn.geom_line(color='blue') +
            pn.scales.scale_x_datetime(
                date_breaks='1 minute',
                date_labels='%H:%M:%S'
            ) +
            pn.geoms.geom_vline(
                xintercept=eq_dt,
                color='crimson'
            ) +
            pn.labels.ggtitle('device {}, axis {}'.format(device_id, axis))
        )
    for p in plots:
        print(p)

plot_seismograms('006')
plot_seismograms('000')
```
## Compare max accelerations
```
# For each device, get max acceleration of horizontal axes
# Store these values as pandas Series
pgas = pd.Series(name='pga', dtype=float)
pgas.index.name = 'device_id'
for device_id in records_df.device_id.unique():
    # Get horizontal axes from device metadata
    horizontal_axes = [
        d['horizontal_axes'] for d in devices
        if d['device_id'] == device_id
    ][0]
    # Get max accel as sqrt of sum of squares of horizontal axes
    pgas[device_id] = \
        (records_df[records_df['device_id'] == device_id][horizontal_axes] ** 2) \
        .sum(axis=1) \
        .pow(0.5) \
        .max()
pgas = pgas.sort_values(ascending=False)
pgas
```
## Compare relationship between distance and max acceleration
```
# Use a pandas DataFrame for convenience
devices_df = pd.DataFrame(devices)
devices_df = devices_df[
[
'device_id',
'latitude',
'longitude'
]
]
# Use the geopy.distance.distance function
# to get distance from devices to epicenter
devices_df['dist_from_eq'] = devices_df.apply(
    lambda r: round(
        distance(
            (r['latitude'], r['longitude']),
            (eq['latitude'], eq['longitude'])
        ).km, 3),
    axis=1
)
devices_df = devices_df.merge(pgas, left_on='device_id', right_index=True)
devices_df.sort_values('dist_from_eq')
# Plot using linear scale
pn.ggplot(
devices_df,
pn.aes('dist_from_eq', 'pga')
) + \
pn.geom_point(color='blue') + \
pn.labels.ggtitle('PGA vs distance from epicenter')
```
```
import os
os.environ["CUDA_VISIBLE_DEVICES"]="1"
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"]="true"
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
from tqdm import tqdm
tf.compat.v1.enable_v2_behavior()
from tf_agents.agents.ppo import ppo_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics, py_metrics
from tf_agents.policies import random_tf_policy, epsilon_greedy_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.networks import actor_distribution_network, value_network
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
from tf_agents.specs import array_spec
from tf_agents.environments import utils, wrappers
from tf_agents.trajectories import time_step as ts
from tf_agents.drivers import dynamic_episode_driver
from tf_agents.drivers import py_driver
from vectorincrement import *
```
# Running RL with tf.agents
```
num_iterations = 500 # @param {type:"integer"}
collect_episodes_per_iteration = 2 # @param {type:"integer"}
replay_buffer_capacity = 1000 # @param {type:"integer"}
batch_size = 64
fc_layer_params = ()
learning_rate = 1e-3 # @param {type:"number"}
num_epochs = 25  # PPO epochs per training step; undefined in the original notebook, value assumed
log_interval = 25 # @param {type:"integer"}
num_eval_episodes = 10 # @param {type:"integer"}
eval_interval = 10 # @param {type:"integer"}
v_n = 2
v_k = 2
v_seed = 10
do_transform = True
time_limit = 20
def get_env():
    """Return a copy of the environment."""
    env = VectorIncrementEnvironmentTFAgents(v_n=v_n, v_k=v_k, v_seed=v_seed,
                                             do_transform=do_transform)
    env = wrappers.TimeLimit(env, time_limit)
    env = tf_py_environment.TFPyEnvironment(env)
    return env
train_env = get_env()
eval_env = get_env()
def create_networks(observation_spec, action_spec):
    actor_net = actor_distribution_network.ActorDistributionNetwork(
        observation_spec,
        action_spec,
        fc_layer_params=(100,),
        activation_fn=tf.nn.elu)
    value_net = value_network.ValueNetwork(
        observation_spec,
        fc_layer_params=(100,),
        activation_fn=tf.nn.elu)
    return actor_net, value_net
actor_net, value_net = create_networks(train_env.observation_spec(), train_env.action_spec())
global_step = tf.compat.v1.train.get_or_create_global_step()
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate, epsilon=1e-5)
tf_agent = ppo_agent.PPOAgent(
train_env.time_step_spec(),
train_env.action_spec(),
optimizer,
actor_net,
value_net,
num_epochs=num_epochs,
train_step_counter=global_step,
discount_factor=0.995,
gradient_clipping=0.5,
entropy_regularization=1e-2,
importance_ratio_clipping=0.2,
use_gae=True,
use_td_lambda_return=True
)
tf_agent.initialize()
```
```
import time
import numpy as np
import scipy as sc
import bsplines as bsp
import HyCho_FEM as fem
import HyCho_PIC
#import HyCho_PIC as pic
import utilitis_opt as utils_opt
import utilitis_pic_Rel
#import utilitis_pic_Rel as utils_pic_fast
import matplotlib.pyplot as plt
from scipy.sparse.linalg import splu
from scipy.sparse import block_diag
from scipy.sparse.linalg import inv
#====================================================================================
# calling epyccel (compile the relativistic PIC kernels)
#====================================================================================
from pyccel.epyccel import epyccel

utils_pic_fast = epyccel(utilitis_pic_Rel)
pic = epyccel(HyCho_PIC)
print('pyccelization of pic functions done!')
#====================================================================================
#===== Is this run a restart? (restart = 0: no, restart = 1: yes) ===================
restart = 0
max_time = 30*60 # maximum runtime of the program in seconds (30 minutes)
time_restart_files = 40*60 # interval between restart files in seconds (40 minutes)
name_particles = 'restart_files/particles1.npy'
name_fields = 'restart_files/fields1.npy'
name_time_step = 'restart_files/time_step1.npy'
name_control = 'restart_files/control_variate1.npy'
#====================================================================================
#=========================== time integration =======================================
time_integr = 1 # do time integration? (1 : yes, 0: no)
title = 'results/run_L=327.7_Nel=2000_T=300_dt=0.04_Np=4e6_nuh=6e-2_xi=0_False.txt' # name of file to save data
#====================================================================================
#===== physical parameters ==========================================================
wpe = 5. # cold electron plasma frequency
nuh = 6e-2 # ratio of cold/hot electron densities (nh/nc)
nh = nuh*wpe**2 # hot electron density
wpar = 0.2 # parallel thermal velocity of energetic particles
wperp = 0.53 # perpendicular thermal velocity of energetic particles
xi = 0. # inhomogeneity factor of background magnetic field
rel = 1 # relativistic effects? (1: yes, 0: no)
bc_d = 1 # damping of E and j at boundaries? (1: yes, 0: no)
bc_f = 1 # field line dependence of initial distribution function? (1: yes, 0: no)
#===================================================================================
#===== numerical parameters =========================================================
bc = False # boundary conditions (True: periodic, False: homogeneous Dirichlet)
k = 2. # wavenumber of initial wave field perturbations
Lz = 327.7 # length of z-domain
Nel = 1800 # number of elements z-direction
T = 1. # simulation time
dt = 0.04 # time step
p = 3 # degree of B-spline basis functions in V0
Np = int(5e6)           # number of markers
control = 1 # control variate for noise reduction? (1: yes, 0: no)
Ld = 0.046*Lz # length of damping region at each end
#====================================================================================
#===== evaluation points for the magnetic field======================================
eva_points_Bx = np.array([100., 200., 300.])
#====================================================================================
#===== initial conditions ===========================================================
amp = 1e-4 # amplitude of initial wave field perturbations
Ex0 = lambda z : 0*z # initial Ex
Ey0 = lambda z : 0*z # initial Ey
'''
def Bx0(z):
    values = np.zeros(z.shape, dtype=float)
    modesz = np.linspace(0, Nel, Nel + 1) - Nel/2
    modesz = np.delete(modesz, int(Nel/2))
    for i in range(Nel):
        values += amp*np.sin(2*np.pi*modesz[i]*z/Lz)
    return values
'''
Bx0 = lambda z : amp*np.sin(104*2*np.pi*z/Lz) # initial Bx
By0 = lambda z : 0*z # initial By
jx0 = lambda z : 0*z # initial jcx
jy0 = lambda z : 0*z # initial jcy
#====================================================================================
#===== discretization of spatial domain =============================================
dz = Lz/Nel # element size
el_b = np.linspace(0., Lz, Nel + 1) # element boundaries
Nbase0 = Nel + p - bc*p # total number of basis functions in V0
Nbase0_dof = Nbase0 - 2 + 2*bc # number of degrees of freedom in V0
Nbase1 = Nbase0 - 1 + bc # total number of basis functions in V1
Nbase1_dof = Nbase1 # number of degrees of freedom in V1
#====================================================================================
#===== some diagnostic values =======================================================
nh = nuh*wpe**2 # hot electron density
Eh_eq = Lz*nh/2*(wpar**2 + 2*wperp**2) # equilibrium energetic electron energy
energies = np.empty(4, dtype=float) # energies: E, B, Cold, Hot
Bx = np.empty(len(eva_points_Bx), dtype=float)
#====================================================================================
#===== background field in z-direction ==============================================
B_background_z = lambda z : 1. + xi*(z - Lz/2)**2
#====================================================================================
#===== initial energetic electron distribution function =============================
def fh0(z, vx, vy, vz):
    xiB = 1. - 1/B_background_z(z)
    xiz = 1. + (wperp**2/wpar**2 - 1.)*xiB*bc_f
    return nh/((2*np.pi)**(3/2)*wpar*wperp**2)*np.exp(-vz**2/(2*wpar**2) - xiz*(vx**2 + vy**2)/(2*wperp**2))
#====================================================================================
#===== Maxwellian for control variate ===============================================
maxwell = lambda vx, vy, vz : nh/((2*np.pi)**(3/2)*wpar*wperp**2)*np.exp(-vz**2/(2*wpar**2) - (vx**2 + vy**2)/(2*wperp**2))
#====================================================================================
#===== sampling distribution for initial markers ====================================
g_sampling = lambda vx, vy, vz : 1/((2*np.pi)**(3/2)*wpar*wperp**2)*np.exp(-vz**2/(2*wpar**2) - (vx**2 + vy**2)/(2*wperp**2))*1/Lz
#====================================================================================
#===== masking function to damp wave fields near boundaries =========================
def damp(z):
    if z <= Ld:
        return np.sin(np.pi*z/(2*Ld))
    elif z >= Lz - Ld:
        return np.sin(np.pi*(Lz - z)/(2*Ld))
    else:
        return 1
#====================================================================================
#===== spline knot vector, global mass matrices (in V0 and V1) and gradient matrix ==
T0 = bsp.make_knots(el_b, p, bc)
T1 = T0[1:-1]
M0 = fem.mass_V0(T0, p, bc)
M1 = fem.mass_V1(T0, p, bc)
MB = fem.mass_V0_B(T0, p, bc, B_background_z)
G = sc.sparse.csc_matrix(fem.GRAD(T0, p, bc))
if bc == False:
    M0 = M0[1:-1, 1:-1]
    MB = MB[1:-1, 1:-1]
    G = G[:, 1:-1]
D = sc.sparse.csr_matrix(bsp.collocation_matrix(T1, p - 1, eva_points_Bx, bc, normalize=True))
print('matrix assembly done!')
#====================================================================================
#=================== coefficients for pp-forms ======================================
if p == 3:
    pp_0 = np.asfortranarray([[1/6, -1/(2*dz), 1/(2*dz**2), -1/(6*dz**3)], [2/3, 0., -1/dz**2, 1/(2*dz**3)], [1/6, 1/(2*dz), 1/(2*dz**2), -1/(2*dz**3)], [0., 0., 0., 1/(6*dz**3)]])
    pp_1 = np.asfortranarray([[1/2, -1/dz, 1/(2*dz**2)], [1/2, 1/dz, -1/dz**2], [0., 0., 1/(2*dz**2)]])
elif p == 2:
    pp_0 = np.asfortranarray([[1/2, -1/dz, 1/(2*dz**2)], [1/2, 1/dz, -1/dz**2], [0., 0., 1/(2*dz**2)]])
    pp_1 = np.asfortranarray([[1., -1/dz], [0., 1/dz]])
else:
    raise NotImplementedError('only cubic (p=3) and quadratic (p=2) splines are implemented')
#====================================================================================
#===== reserve memory for unknowns ==================================================
ex = np.empty(Nbase0, dtype=float)
ey = np.empty(Nbase0, dtype=float)
bx = np.empty(Nbase1, dtype=float)
by = np.empty(Nbase1, dtype=float)
yx = np.empty(Nbase0, dtype=float)
yy = np.empty(Nbase0, dtype=float)
uj = np.empty(4*Nbase0_dof + 2*Nbase1_dof, dtype=float)
z_old = np.empty(Np, dtype=float)
spans0 = np.empty(Np, dtype=int)
jh_x = np.empty(Nbase0, dtype=float)
jh_y = np.empty(Nbase0, dtype=float)
Fh = np.zeros(4*Nbase0_dof + 2*Nbase1_dof, dtype=float)
#====================================================================================
#===== initial coefficients with commuting projectors ===============================
proj = fem.projectors_1d(T0, p, bc)
ex[:] = proj.PI_0(Ex0)
ey[:] = proj.PI_0(Ey0)
bx[:] = proj.PI_1(Bx0)
by[:] = proj.PI_1(By0)
yx[:] = proj.PI_0(jx0)
yy[:] = proj.PI_0(jy0)
if bc == False:
uj[:] = np.concatenate((ex[1:-1], ey[1:-1], bx, by, yx[1:-1], yy[1:-1]))
else:
uj[:] = np.concatenate((ex, ey, bx, by, yx, yy))
print('projection of initial fields done!')
#====================================================================================
#===== construct block matrices for field update ====================================
I = sc.sparse.identity(Nbase1_dof)
A1 = sc.sparse.bmat([[M0, None, None, None, None, None],[None, M0, None, None, None, None], [None, None, I, None, None, None], [None, None, None, I, None, None], [None, None, None, None, M0, None], [None, None, None, None, None, M0]], format='csc')
A2 = sc.sparse.bmat([[None, None, None, G.T.dot(M1), -M0, None],[None, None, -G.T.dot(M1), None, None, -M0], [None, G, None, None, None, None], [-G, None, None, None, None, None], [wpe**2*M0, None, None, None, None, -MB], [None, wpe**2*M0, None, None, MB, None]], format='csc')
LHS = A1 - dt/2*A2
RHS = A1 + dt/2*A2
LU = sc.sparse.linalg.splu(LHS)
print('LU factorization done!')
if bc_d == 1:
if bc == False:
greville = bsp.greville(T0, p, bc)[1:-1]
colloq = sc.sparse.csc_matrix(bsp.collocation_matrix(T0, p, greville, bc)[:, 1:-1])
else:
greville = bsp.greville(T0, p, bc)
colloq = sc.sparse.csc_matrix(bsp.collocation_matrix(T0, p, greville, bc))
g_greville = np.zeros(Nbase0_dof, dtype=float)
for i in range(Nbase0_dof):
g_greville[i] = damp(greville[i])
G_greville = sc.sparse.diags(g_greville, 0)
DAMP = inv(colloq).dot(G_greville.dot(colloq))
else:
DAMP = sc.sparse.identity(Nbase0_dof)
DAMP_block = sc.sparse.block_diag((DAMP, DAMP, sc.sparse.identity(Nbase1_dof), sc.sparse.identity(Nbase1_dof), DAMP, DAMP), format='csr')
print('damping assembly done!')
#====================================================================================
#===== Is this run a restart? (restart = 0: no, restart = 1: yes) ===================
restart = 0
max_time = 30*60 # maximum runtime in seconds (here 30 minutes)
time_restart_files = 40*60 # interval after which the current configuration is saved, in seconds
name_particles = 'restart_files/particles1.npy'
name_fields = 'restart_files/fields1.npy'
name_time_step = 'restart_files/time_step1.npy'
name_control = 'restart_files/control_variate1.npy'
#====================================================================================
#===== saving data? (save = 1: yes, save = 0: no). If yes, name directory ===========
save = 1
title = 'results/short_old.txt'
saving_step = 1 # save data only every saving_step-th time step
time_integr = 1 # do time integration? (1 : yes, 0: no)
#====================================================================================
#===== physical parameters ==========================================================
eps0 = 1.0 # vacuum permittivity
mu0 = 1.0 # vacuum permeability
c = 1.0 # speed of light
qe = -1.0 # electron charge
me = 1.0 # electron mass
B0z = 1.0 # minimum of background magnetic field in z-direction
wce = qe*B0z/me # electron cyclotron frequency
wpe = 5*np.abs(wce) # cold electron plasma frequency
nuh = 6e-2 # ratio of hot to cold electron densities (nh/nc)
nh = nuh*wpe**2 # hot electron density
wpar = 0.2*c # parallel thermal velocity of energetic particles
wperp = 0.53*c # perpendicular thermal velocity of energetic particles
xi = 0. # inhomogeneity factor of background magnetic field
bcs_d = 1 # damping of wave fields at boundaries? (1: yes, 0: no)
bcs_g = 1 # field line dependence of initial distribution function? (1: yes, 0: no)
#====================================================================================
#===== initial conditions ===========================================================
k = 2. # wavenumber of initial wave field perturbations
amp = 1e-4 # amplitude of initial wave field perturbations
eps = 0. # amplitude of spatial perturbation of initial distribution function
Ex0 = lambda z : 0*z # initial Ex
Ey0 = lambda z : 0*z # initial Ey
Bx0 = lambda z : amp*np.sin(104*2*np.pi*z/Lz) # initial Bx
By0 = lambda z : 0*z # initial By
jx0 = lambda z : 0*z # initial jcx
jy0 = lambda z : 0*z # initial jcy
#====================================================================================
#===== numerical parameters =========================================================
Lz = 327.7 # length of z-domain
Nel = 1800 # number of elements z-direction
T = 1. # simulation time
dt = 0.04 # time step
p = 3 # degree of B-spline basis functions in V0
Np = int(5e6) # number of markers
control = 1 # control variate for noise reduction? (1: yes, 0: no)
Ld = 0.046*Lz # length of damping region at each end
#====================================================================================
#===== evaluation points for the magnetic field======================================
#eva_points_Bx = np.linspace(40., 280., 7)
eva_points_Bx = np.array([100., 200., 300.])
#====================================================================================
#====== create parameter list =======================================================
pa = np.zeros(1*(Nel + p - 1) + 5)
pa[0] = eps0
pa[1] = mu0
pa[2] = c
pa[3] = qe
pa[4] = me
pa[5] = B0z
pa[6] = wce
pa[7] = wpe
pa[8] = nuh
pa[9] = nh
pa[10] = wpar
pa[11] = wperp
pa[12] = k
pa[13] = amp
pa[14] = eps
pa[15] = Lz
pa[16] = Nel
pa[17] = T
pa[18] = dt
pa[19] = p
pa[20] = Np
pa[21] = control
pa[22] = saving_step
pa[23] = xi
pa[24] = Ld
pa[29] = bcs_d
pa[30] = bcs_g
#====================================================================================
#===== discretization of spatial domain =============================================
dz = Lz/Nel # element size
el_b = np.linspace(0, Lz, Nel + 1) # element boundaries
Nbase0 = Nel + p # total number of basis functions in V0
Nbase0_0 = Nbase0 - 2 # number of degrees of freedom in V0
Nbase1 = Nbase0 - 1 # total number of basis functions in V1
Nbase1_0 = Nbase1 # number of degrees of freedom in V1
#====================================================================================
#===== some diagnostic values =======================================================
Eh_eq = Lz*nh*me/2*(wpar**2 + 2*wperp**2) # equilibrium energetic electron energy
en_E = np.array([]) # electric field energy
en_B = np.array([]) # magnetic field energy
en_C = np.array([]) # cold plasma energy
en_H = np.array([]) # energetic electron energy
#====================================================================================
#===== background field in z-direction ==============================================
B_background_z = lambda z : B0z*(1 + xi*(z - Lz/2)**2)
#====================================================================================
#===== initial energetic electron distribution function =============================
def fh0(z, vx, vy, vz):
xiB = 1 - B0z/B_background_z(z)
xiz = 1 + (wperp**2/wpar**2 - 1)*xiB*bcs_g
return (1 + eps*np.cos(k*z))*nh/((2*np.pi)**(3/2)*wpar*wperp**2)*np.exp(-vz**2/(2*wpar**2) - xiz*(vx**2 + vy**2)/(2*wperp**2))
#====================================================================================
#===== Maxwellian for control variate ===============================================
maxwell = lambda vx, vy, vz : nh/((2*np.pi)**(3/2)*wpar*wperp**2)*np.exp(-vz**2/(2*wpar**2) - (vx**2 + vy**2)/(2*wperp**2))
#====================================================================================
#===== sampling distribution for initial markers ====================================
g_sampling = lambda vx, vy, vz : 1/((2*np.pi)**(3/2)*wpar*wperp**2)*np.exp(-vz**2/(2*wpar**2) - (vx**2 + vy**2)/(2*wperp**2))*1/Lz
#====================================================================================
#===== masking function to damp wave fields near boundaries =========================
def damp(z):
if z <= Ld:
return np.sin(np.pi*z/(2*Ld))
elif z >= Lz - Ld:
return np.sin(np.pi*(Lz - z)/(2*Ld))
else:
return 1.0
#====================================================================================
#===== spline knot vector, global mass matrices (in V0 and V1) and gradient matrix ==
Tz = bsp.make_knots(el_b, p, False)
tz = Tz[1:-1]
M0, C0 = utils_opt.matrixAssembly_V0(p, Nbase0, Tz, False)
M1 = utils_opt.matrixAssembly_V1(p, Nbase0, Tz, False)
Mb = utils_opt.matrixAssembly_backgroundField(p, Nbase0, Tz, False, B_background_z)
G = utils_opt.GRAD_1d(p, Nbase0, False)
D = bsp.collocation_matrix(tz, p - 1, eva_points_Bx, False, normalize=True)
print('matrix assembly done!')
#====================================================================================
#===== reserve memory for unknowns ==================================================
ex = np.empty(Nbase0)
ey = np.empty(Nbase0)
bx = np.empty(Nbase1)
by = np.empty(Nbase1)
yx = np.empty(Nbase0)
yy = np.empty(Nbase0)
uj = np.empty(4*Nbase0_0 + 2*Nbase1_0)
z_old = np.empty(Np)
#====================================================================================
#===== initial coefficients with commuting projectors ===============================
proj = utils_opt.projectors_1d(p, Nbase0, Tz, False)
ex[:] = proj.PI_0(Ex0)
ey[:] = proj.PI_0(Ey0)
bx[:] = proj.PI_1(Bx0)
by[:] = proj.PI_1(By0)
yx[:] = proj.PI_0(jx0)
yy[:] = proj.PI_0(jy0)
uj[:] = np.concatenate((ex[1:-1], ey[1:-1], bx, by, yx[1:-1], yy[1:-1]))
print('projection of initial fields done!')
#====================================================================================
#===== construct block matrices for field update ====================================
ZERO_00 = np.zeros((Nbase0_0, Nbase0_0))
ZERO_01 = np.zeros((Nbase0_0, Nbase1_0))
ZERO_11 = np.zeros((Nbase1_0, Nbase1_0))
A1 = np.diag(np.ones(4*Nbase0_0 + 2*Nbase1_0))
A1[0:Nbase0_0, 0:Nbase0_0] = M0
A1[Nbase0_0:2*Nbase0_0, Nbase0_0:2*Nbase0_0] = M0
A1[2*Nbase0_0 + 2*Nbase1_0:3*Nbase0_0 + 2*Nbase1_0, 2*Nbase0_0 + 2*Nbase1_0:3*Nbase0_0 + 2*Nbase1_0] = M0
A1[3*Nbase0_0 + 2*Nbase1_0:4*Nbase0_0 + 2*Nbase1_0, 3*Nbase0_0 + 2*Nbase1_0:4*Nbase0_0 + 2*Nbase1_0] = M0
A2 = np.block([[ZERO_00, ZERO_00, ZERO_01, c**2*np.dot(G.T, M1), -mu0*c**2*M0, ZERO_00], [ZERO_00, ZERO_00, -c**2*np.dot(G.T, M1), ZERO_01, ZERO_00, -mu0*c**2*M0], [ZERO_01.T, G, ZERO_11, ZERO_11, ZERO_01.T, ZERO_01.T], [-G, ZERO_01.T, ZERO_11, ZERO_11, ZERO_01.T, ZERO_01.T], [eps0*wpe**2*M0, ZERO_00, ZERO_01, ZERO_01, ZERO_00, qe/me*Mb], [ZERO_00, eps0*wpe**2*M0, ZERO_01, ZERO_01, -qe/me*Mb, ZERO_00]])
LHS = sc.sparse.csc_matrix(A1 - 1/2*dt*A2)
RHS = sc.sparse.csc_matrix(A1 + 1/2*dt*A2)
LU = sc.sparse.linalg.splu(LHS)
print('LU factorization done!')
if bcs_d == 1:
grev = bsp.greville(Tz, p, False)
coll = bsp.collocation_matrix(Tz, p, grev, False)[1:-1, 1:-1]
gi = np.zeros(Nbase0)
for i in range(Nbase0):
gi[i] = damp(grev[i])
Gi = np.diag(gi[1:-1])
DAMP = np.dot(np.dot(np.linalg.inv(coll), Gi), coll)
else:
DAMP = np.identity(Nbase0_0)
DAMP_block = sc.linalg.block_diag(DAMP, DAMP, np.identity(Nbase1_0), np.identity(Nbase1_0), DAMP, DAMP)
print('damping assembly done!')
#====================================================================================
np.allclose(RHS_old.toarray(), RHS.toarray())
np.allclose(LHS_old.toarray(), LHS.toarray())
np.allclose(DAMP_block_old, DAMP_block.toarray())
#===== create particles (z, vx, vy, vz, wk) and sample according to sampling distribution
particles = np.zeros((Np, 5), order='F', dtype=float)
particles[:, 0] = np.random.rand(Np)*Lz
particles[:, 1] = np.random.randn(Np)*wperp
particles[:, 2] = np.random.randn(Np)*wperp
particles[:, 3] = np.random.randn(Np)*wpar
particles[:, 4] = np.random.rand(Np)
np.save('particles', particles)
#====================================================================================
particles = np.load('particles.npy')
z_old = np.empty(Np)
jh = np.zeros(2*Nbase0)
Fh = np.zeros(4*Nbase0_0 + 2*Nbase1_0)
g0 = g_sampling(particles[:, 1], particles[:, 2], particles[:, 3])
w0 = fh0(particles[:, 0], particles[:, 1], particles[:, 2], particles[:, 3])/g_sampling(particles[:, 1], particles[:, 2], particles[:, 3])
z_old[:] = particles[:, 0]
utils_pic_fast.borisGemRel_bc_2(particles, -dt/2, qe, me, Tz, tz, p, ex, ey, bx, by, B0z, xi, Lz, c)
particles[:, 0] = z_old
particles[:, 4] = w0 - control*maxwell(particles[:, 1], particles[:, 2], particles[:, 3])/g0
me/(2*Np) * particles[:, 4].dot(particles[:, 1]**2 + particles[:, 2]**2 + particles[:, 3]**2) + control*Eh_eq
particles[:10, 1]
pic.current(particles[:, 0], particles[:, 1:], T0, p, np.floor(particles[:, 0]/dz).astype(int) + p, jh_x, jh_y, Nbase0, rel)
jh = np.empty(2*Nbase0, dtype=float)
utils_pic_fast.hotCurrentRel_bc_2(particles[:, 0], particles[:, 1:], T0, p, -1., jh, 1.)
np.allclose(jh[0::2], jh_x)
np.allclose(jh[1::2], jh_y)
particles_hycho = np.copy(particles)
particles_old = np.copy(particles)
bx
ex = np.random.rand(Nbase0)
ey = np.random.rand(Nbase0)
bx = np.random.rand(Nbase1)
by = np.random.rand(Nbase1)
pic.pusher_reflecting(particles_hycho, -dt/2, T0, T1, p, np.floor(particles_hycho[:, 0]/dz).astype(int) + p, Lz, dz, ex, ey, bx, by, pp_0, pp_1, xi, rel)
utils_pic_fast.borisGemRel_bc_2(particles_old, -dt/2, qe, me, Tz, tz, p, ex, ey, bx, by, B0z, xi, Lz, c)
np.allclose(particles_hycho, particles_old, atol=1e-14, rtol=1e-8)
particles_hycho[:10, 3]
particles_old[:10, 3]
particles_old[:, 4] = w0 - control*maxwell(particles_old[:, 1], particles_old[:, 2], particles_old[:, 3])/g0
particles_old[:, 4]
me/(2*Np) * particles_old[:, 4].dot(particles_old[:, 1]**2 + particles_old[:, 2]**2 + particles_old[:, 3]**2) + control*Eh_eq
np.save('particles_old', particles)
```
# **Grammar Error Correction using BERT**
***Use of BERT Masked Language Model (MLM) for Grammar Error Correction (GEC), without the use of annotated data***
Sunil Chomal | sunilchomal@gmail.com
```
%%html
<img src='/nbextensions/google.colab/GEC.png' />
```
**High level workflow**
- Tokenize the sentence using spaCy
- Check for spelling errors using Hunspell
- For all prepositions, determiners & helping verbs, create a set of probable sentences
- Create a set of sentences with each word "masked", deleted, or with an additional determiner, preposition, or helping verb added
- Use the BERT Masked Language Model to determine possible suggestions for each mask
- Use the GED model to select appropriate corrections
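The per-token masking step can be sketched in isolation. This is a minimal illustration working only on the token list; the notebook's `create_mask_set` below implements the same idea within the full pipeline:

```python
# Illustrative sketch of the masking step: for each token position,
# generate one sentence with the word replaced by [MASK] (candidate
# substitution) and one with [MASK] inserted before it (candidate
# insertion of a determiner/preposition/helping verb).
def mask_variants(sentence):
    tokens = sentence.split()
    variants = []
    for i in range(len(tokens)):
        replaced = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        inserted = tokens[:i] + ["[MASK]"] + tokens[i:]
        variants.append("[CLS] " + " ".join(replaced) + " [SEP]")
        variants.append("[CLS] " + " ".join(inserted) + " [SEP]")
    return variants

for v in mask_variants("The cat sat at mat ."):
    print(v)
```

Each variant is then scored by the MLM, and only predictions that survive the GED threshold are retained.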
```
# install pytorch_pretrained_bert, the predecessor of pytorch-transformers
!pip install -U pytorch_pretrained_bert
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
# Check to confirm that GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count()
torch.cuda.get_device_name(0)
# OPTIONAL: if you want to have more information on what's happening, activate the logger as follows
import logging
logging.basicConfig(level=logging.INFO)
# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
from keras.preprocessing.sequence import pad_sequences
import numpy as np
def check_GE(sents):
"""Check whether the input sentences have grammatical errors
:param sents: list of sentences
:return: error, probabilities
:rtype: (boolean, (float, float))
"""
# Create sentence and label lists
# We need to add special tokens at the beginning and end of each sentence
# for BERT to work properly
sentences = ["[CLS] " + sentence + " [SEP]" for sentence in sents]
labels =[0]
tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
# Padding Sentences
# Set the maximum sequence length. The longest sequence in our training set
# is 47, but we'll leave room on the end anyway.
# In the original paper, the authors used a length of 512.
MAX_LEN = 128
predictions = []
true_labels = []
# Convert tokens to vocabulary indices and pad the sequences
input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
input_ids = pad_sequences(input_ids, maxlen=MAX_LEN,
dtype="long", truncating="post", padding="post")
# Attention masks
# Create attention masks
attention_masks = []
# Create a mask of 1s for each token followed by 0s for padding
for seq in input_ids:
seq_mask = [float(i > 0) for i in seq]
attention_masks.append(seq_mask)
prediction_inputs = torch.tensor(input_ids)
prediction_masks = torch.tensor(attention_masks)
prediction_labels = torch.tensor(labels)
with torch.no_grad():
# Forward pass, calculate logit predictions
logits = modelGED(prediction_inputs, token_type_ids=None,
attention_mask=prediction_masks)
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
# label_ids = b_labels.to("cpu").numpy()
# Store predictions and true labels
predictions.append(logits)
# true_labels.append(label_ids)
# print(predictions)
flat_predictions = [item for sublist in predictions for item in sublist]
# print(flat_predictions)
prob_vals = flat_predictions
flat_predictions = np.argmax(flat_predictions, axis=1).flatten()
# flat_true_labels = [item for sublist in true_labels for item in sublist]
# print(flat_predictions)
return flat_predictions, prob_vals
# load previously trained BERT Grammar Error Detection model
# from self google drive
# from google.colab import drive
# drive.mount('/content/drive')
# !cp './drive/My Drive/Colab Notebooks/S89A/bert-based-uncased-GED.pth' .
#
# CREDIT: https://stackoverflow.com/a/39225039
#
import requests
def download_file_from_google_drive(id, destination):
print("Trying to fetch {}".format(destination))
def get_confirm_token(response):
for key, value in response.cookies.items():
if key.startswith('download_warning'):
return value
return None
def save_response_content(response, destination):
CHUNK_SIZE = 32768
with open(destination, "wb") as f:
for chunk in progress_bar(response.iter_content(CHUNK_SIZE)):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
URL = "https://docs.google.com/uc?export=download"
session = requests.Session()
response = session.get(URL, params = { 'id' : id }, stream = True)
token = get_confirm_token(response)
if token:
params = { 'id' : id, 'confirm' : token }
response = session.get(URL, params = params, stream = True)
save_response_content(response, destination)
def progress_bar(some_iter):
try:
from tqdm import tqdm
return tqdm(some_iter)
except ModuleNotFoundError:
return some_iter
# load previously trained BERT Grammar Error Detection model
# download from public google drive link
download_file_from_google_drive("1al7v87aRxebSUCXrN2Sdd0jGUS0zZ3vn", "./bert-based-uncased-GED.pth")
# https://pytorch.org/tutorials/beginner/saving_loading_models.html
from pytorch_pretrained_bert import BertForSequenceClassification
modelGED = BertForSequenceClassification.from_pretrained("bert-base-uncased",
num_labels=2)
# restore model
modelGED.load_state_dict(torch.load('bert-based-uncased-GED.pth'))
modelGED.eval()
# Load pre-trained model (weights) for Masked Language Model (MLM)
model = BertForMaskedLM.from_pretrained('bert-large-uncased')
model.eval()
# Load pre-trained model tokenizer (vocabulary)
tokenizerLarge = BertTokenizer.from_pretrained('bert-large-uncased')
# install the packages for Hunspell
!sudo apt-get install libhunspell-1.6-0 libhunspell-dev
!pip install cyhunspell
from hunspell import Hunspell
import os
# download the en_GB dictionary for hunspell
download_file_from_google_drive("1jC5BVF9iZ0gmRQNmDcZnhfFdEYv8RNok", "./en_GB-large.dic")
download_file_from_google_drive("1g8PO8kdw-YmyOY_HxjnJ5FfdJFX4bsPv", "./en_GB-large.aff")
gb = Hunspell("en_GB-large", hunspell_data_dir=".")
# List of common determiners
# det = ["", "the", "a", "an"]
det = ['the', 'a', 'an', 'this', 'that', 'these', 'those', 'my', 'your', 'his',
'her', 'its', 'our', 'their', 'all', 'both', 'half', 'either', 'neither',
'each', 'every', 'other', 'another', 'such', 'what', 'rather', 'quite']
# List of common prepositions
prep = ["about", "at", "by", "for", "from", "in", "of", "on", "to", "with",
"into", "during", "including", "until", "against", "among",
"throughout", "despite", "towards", "upon", "concerning"]
# List of helping verbs
helping_verbs = ['am', 'is', 'are', 'was', 'were', 'being', 'been', 'be',
'have', 'has', 'had', 'do', 'does', 'did', 'will', 'would',
'shall', 'should', 'may', 'might', 'must', 'can', 'could']
# test sentences
org_text = []
org_text.append("They drank the pub .")
org_text.append("I am looking forway to see you soon .")
org_text.append("The cat sat at mat .")
org_text.append("Giant otters is an apex predator .")
org_text.append('There is no a doubt, tracking system has brought many benefits in this information age .')
import spacy
import numpy as np
def create_spelling_set(org_text):
""" Create a set of sentences which have possible corrected spellings
"""
sent = org_text
sent = sent.lower()
sent = sent.strip().split()
nlp = spacy.load("en")
proc_sent = nlp.tokenizer.tokens_from_list(sent)
nlp.tagger(proc_sent)
sentences = []
for tok in proc_sent:
# check for spelling for alphanumeric
if tok.text.isalpha() and not gb.spell(tok.text):
new_sent = sent[:]
# append new sentences with possible corrections
for sugg in gb.suggest(tok.text):
new_sent[tok.i] = sugg
sentences.append(" ".join(new_sent))
spelling_sentences = sentences
# retain new sentences which have a
# minimum chance of correctness using BERT GED
new_sentences = []
for sent in spelling_sentences:
no_error, prob_val = check_GE([sent])
exps = [np.exp(i) for i in prob_val[0]]
sum_of_exps = sum(exps)
softmax = [j/sum_of_exps for j in exps]
if(softmax[1] > 0.6):
new_sentences.append(sent)
# if no corrections, append the original sentence
if len(spelling_sentences) == 0:
spelling_sentences.append(" ".join(sent))
# eliminate duplicates
[spelling_sentences.append(sent) for sent in new_sentences]
spelling_sentences = list(dict.fromkeys(spelling_sentences))
return spelling_sentences
def create_grammar_set(spelling_sentences):
""" create a new set of sentences with deleted determiners,
prepositions & helping verbs
"""
new_sentences = []
for text in spelling_sentences:
sent = text.strip().split()
for i in range(len(sent)):
new_sent = sent[:]
if new_sent[i] not in list(set(det + prep + helping_verbs)):
continue
del new_sent[i]
text = " ".join(new_sent)
# retain new sentences which have a
# minimum chance of correctness using BERT GED
no_error, prob_val = check_GE([text])
exps = [np.exp(i) for i in prob_val[0]]
sum_of_exps = sum(exps)
softmax = [j/sum_of_exps for j in exps]
if(softmax[1] > 0.6):
new_sentences.append(text)
# eliminate duplicates
[spelling_sentences.append(sent) for sent in new_sentences]
spelling_sentences = list(dict.fromkeys(spelling_sentences))
return spelling_sentences
def create_mask_set(spelling_sentences):
"""For each input sentence create 2 sentences
(1) [MASK] each word
(2) [MASK] for each space between words
"""
sentences = []
for sent in spelling_sentences:
sent = sent.strip().split()
for i in range(len(sent)):
# (1) [MASK] each word
new_sent = sent[:]
new_sent[i] = '[MASK]'
text = " ".join(new_sent)
new_sent = '[CLS] ' + text + ' [SEP]'
sentences.append(new_sent)
# (2) [MASK] for each space between words
new_sent = sent[:]
new_sent.insert(i, '[MASK]')
text = " ".join(new_sent)
new_sent = '[CLS] ' + text + ' [SEP]'
sentences.append(new_sent)
return sentences
import math
from difflib import SequenceMatcher
def check_grammar(org_sent, sentences, spelling_sentences):
""" check grammar for the input sentences
"""
n = len(sentences)
# what is the tokenized value of [MASK]. Usually 103
text = '[MASK]'
tokenized_text = tokenizerLarge.tokenize(text)
mask_token = tokenizerLarge.convert_tokens_to_ids(tokenized_text)[0]
LM_sentences = []
new_sentences = []
i = 0 # current sentence number
l = len(org_sent.strip().split())*2 # number of masked sentences generated per input sentence
mask = False # flag indicating if we are processing space MASK
for sent in sentences:
i += 1
print(".", end="")
if i%50 == 0:
print("")
# tokenize the text
tokenized_text = tokenizerLarge.tokenize(sent)
indexed_tokens = tokenizerLarge.convert_tokens_to_ids(tokenized_text)
# Create the segments tensors.
segments_ids = [0] * len(tokenized_text)
# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
# Predict all tokens
with torch.no_grad():
predictions = model(tokens_tensor, segments_tensors)
# index of the masked token
mask_index = (tokens_tensor == mask_token).nonzero()[0][1].item()
# predicted token
predicted_index = torch.argmax(predictions[0, mask_index]).item()
predicted_token = tokenizerLarge.convert_ids_to_tokens([predicted_index])[0]
# second best prediction. Can be used to create more options
# second_index = torch.topk(predictions[0, mask_index], 2).indices[1].item()
# second_prediction = tokenizer.convert_ids_to_tokens([second_index])[0]
text = sent.strip().split()
mask_index = text.index('[MASK]')
if not mask:
# case of MASKed words
mask = True
text[mask_index] = predicted_token
try:
# retrieve original word
org_word = spelling_sentences[i//l].strip().split()[mask_index-1]
# print(">>> " + org_word)
except:
# print(spelling_sentences[i%l - 1])
# print(tokenized_text)
# print("{0} {1} {2}".format(i, l, mask_index))
print("!", end="")
continue
# print("{0} - {1}".format(org_word, predicted_token))
# check if the prediction is an inflection of the original word
# if org_word.isalpha() and predicted_token not in gb_infl[org_word]:
# continue
# use SequenceMatcher to see if predicted word is similar to original word
if SequenceMatcher(None, org_word, predicted_token).ratio() < 0.6:
if org_word not in list(set(det + prep + helping_verbs)) or predicted_token not in list(set(det + prep + helping_verbs)):
continue
if org_word == predicted_token:
continue
else:
# case for MASKed spaces
mask = False
# print("{0}".format(predicted_token))
# only allow determiners / prepositions / helping verbs in spaces
if predicted_token in list(set(det + prep + helping_verbs)) :
text[mask_index] = predicted_token
else:
continue
# if org_word == "in":
# print(">>>>>> " + predicted_token)
# print(tokenized_text)
# print(mask_index)
text.remove('[SEP]')
text.remove('[CLS]')
new_sent = " ".join(text)
# print(new_sent)
# retain new sentences which have a
# minimum chance of correctness using BERT GED
no_error, prob_val = check_GE([new_sent])
exps = [np.exp(i) for i in prob_val[0]]
sum_of_exps = sum(exps)
softmax = [j/sum_of_exps for j in exps]
if no_error and softmax[1] > 0.996:
# print(org_word)
# print(predicted_token)
# print(SequenceMatcher(None, org_word, predicted_token).ratio())
# print("{0} - {1}, {2}".format(prob_val[0][1], prob_val[0][0], prob_val[0][1] - prob_val[0][0]))
# print("{0} - {1:.2f}".format(new_sent, softmax[1]*100) )
print("*", end="")
new_sentences.append(new_sent)
# print("{0}\t{1}".format(predicted_token, second_prediction))
print("")
# remove duplicate suggestions
spelling_sentences = []
[spelling_sentences.append(sent) for sent in new_sentences]
spelling_sentences = list(dict.fromkeys(spelling_sentences))
return spelling_sentences
# org_text = []
# with open("./drive/My Drive/Colab Notebooks/S89A/CoNLL_2013_DS.txt") as file:
# org_text = file.readlines()
# predict for each of the test samples
for sent in org_text:
print("Input Sentence >>> " + sent)
sentences = create_spelling_set(sent)
spelling_sentences = create_grammar_set(sentences)
sentences = create_mask_set(spelling_sentences)
print("processing {0} possibilities".format(len(sentences)))
sentences = check_grammar(sent, sentences, spelling_sentences)
print("Suggestions & Probabilities")
if len(sentences) == 0:
print("None")
continue
no_error, prob_val = check_GE(sentences)
for i in range(len(prob_val)):
exps = [np.exp(v) for v in prob_val[i]]
sum_of_exps = sum(exps)
softmax = [e/sum_of_exps for e in exps]
print("{0} - {1:0.4f}%".format(sentences[i], softmax[1]*100))
print("-"*60)
print()
```
# _*Max-Cut and Traveling Salesman Problem*_
## Introduction
Many problems in quantitative fields such as finance and engineering are optimization problems. Optimization problems lie at the core of complex decision-making and the definition of strategies.
Optimization (or combinatorial optimization) means searching for an optimal solution in a finite or countably infinite set of potential solutions. Optimality is defined with respect to some criterion function, which is to be minimized or maximized. This is typically called the cost function or objective function.
**Typical optimization problems**
- Minimization: cost, distance, length of a traversal, weight, processing time, material, energy consumption, number of objects
- Maximization: profit, value, output, return, yield, utility, efficiency, capacity, number of objects
We consider here max-cut problems of practical interest in many fields, and show how they can be mapped onto quantum computers manually and how Qiskit's optimization module supports this.
### Weighted Max-Cut
Max-Cut is an NP-complete problem, with applications in clustering, network science, and statistical physics. To grasp how practical applications are mapped into given Max-Cut instances, consider a system of many people that can interact and influence each other. Individuals can be represented by vertices of a graph, and their interactions seen as pairwise connections between vertices of the graph, or edges. With this representation in mind, it is easy to model typical marketing problems. For example, suppose that it is assumed that individuals will influence each other's buying decisions, and knowledge is given about how strong they will influence each other. The influence can be modeled by weights assigned on each edge of the graph. It is possible then to predict the outcome of a marketing strategy in which products are offered for free to some individuals, and then ask which is the optimal subset of individuals that should get the free products, in order to maximize revenues.
The formal definition of this problem is the following:
Consider an $n$-node undirected graph *G = (V, E)* where *|V| = n* with edge weights $w_{ij}>0$, $w_{ij}=w_{ji}$, for $(i, j)\in E$. A cut is defined as a partition of the original set V into two subsets. The cost function to be optimized is in this case the sum of weights of edges connecting points in the two different subsets, *crossing* the cut. By assigning $x_i=0$ or $x_i=1$ to each node $i$, one tries to maximize the global profit function (here and in the following summations run over indices 0,1,...n-1)
$$\tilde{C}(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j).$$
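As a concrete check, $\tilde{C}$ can be evaluated by brute force on a small graph. The 4-node weight matrix below is an illustrative example, not taken from the text:

```python
# Brute-force maximization of C~(x) = sum_{i,j} w_ij * x_i * (1 - x_j)
# on a small, illustrative 4-node weighted graph.
import numpy as np
from itertools import product

w = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 1.],
              [2., 1., 0., 3.],
              [0., 1., 3., 0.]])   # symmetric: w_ij = w_ji

def cut_value(x, w):
    # sum over all ordered pairs (i, j); for symmetric w this equals
    # the total weight of edges crossing the cut defined by x
    x = np.asarray(x, dtype=float)
    return float(x @ w @ (1 - x))

best = max(product([0, 1], repeat=4), key=lambda x: cut_value(x, w))
print(best, cut_value(best, w))
```

Note that a cut and its complement have the same value, so the maximizer is never unique.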
In our simple marketing model, $w_{ij}$ represents the probability that the person $j$ will buy a product after $i$ gets a free one. Note that the weights $w_{ij}$ can in principle be greater than $1$ (or even negative), corresponding to the case where the individual $j$ will buy more than one product. Maximizing the total buying probability corresponds to maximizing the total future revenues. In the case where the profit probability will be greater than the cost of the initial free samples, the strategy is a convenient one. An extension to this model has the nodes themselves carry weights, which can be regarded, in our marketing model, as the likelihood that a person granted with a free sample of the product will buy it again in the future. With this additional information in our model, the objective function to maximize becomes
$$C(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j)+\sum_i w_i x_i. $$
In order to find a solution to this problem on a quantum computer, one needs first to map it to an Ising Hamiltonian. This can be done with the assignment $x_i\rightarrow (1-Z_i)/2$, where $Z_i$ is the Pauli Z operator that has eigenvalues $\pm 1$. Doing this we find that
$$C(\textbf{Z}) = \sum_{i,j} \frac{w_{ij}}{4} (1-Z_i)(1+Z_j) + \sum_i \frac{w_i}{2} (1-Z_i) = -\frac{1}{2}\left( \sum_{i<j} w_{ij} Z_i Z_j +\sum_i w_i Z_i\right)+\mathrm{const},$$
where $\mathrm{const} = \sum_{i<j}w_{ij}/2+\sum_i w_i/2 $. In other terms, the weighted Max-Cut problem is equivalent to minimizing the Ising Hamiltonian
$$ H = \sum_i w_i Z_i + \sum_{i<j} w_{ij} Z_iZ_j.$$
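As a sanity check on this mapping, one can verify numerically that $C(\textbf{x}) = \mathrm{const} - H(\textbf{z})/2$ under the substitution $z_i = 1-2x_i$, for every assignment. A minimal sketch with random weights (plain NumPy, independent of Qiskit):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
w = rng.random((n, n)); w = (w + w.T) / 2; np.fill_diagonal(w, 0)  # symmetric edge weights
wn = rng.random(n)                                                 # node weights

const = w[np.triu_indices(n, 1)].sum() / 2 + wn.sum() / 2

for b in range(2 ** n):
    x = np.array([(b >> i) & 1 for i in range(n)])
    z = 1 - 2 * x                                   # x_i -> (1 - z_i)/2
    C = (w * np.outer(x, 1 - x)).sum() + wn @ x     # profit function C(x)
    H = wn @ z + sum(w[i, j] * z[i] * z[j] for i in range(n) for j in range(i + 1, n))
    assert np.isclose(C, const - H / 2)
print("C(x) = const - H(z)/2 verified for all", 2 ** n, "assignments")
```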
Qiskit's optimization module can generate the Ising Hamiltonian for the first profit function $\tilde{C}$.
To this end, the function $\tilde{C}$ can be modeled as a `QuadraticProgram`, which provides the `to_ising()` method.
### Approximate Universal Quantum Computing for Optimization Problems
There has been considerable recent interest in using quantum computers to solve combinatorial optimization problems. It is important to note that, given the classical nature of combinatorial problems, an exponential speedup over the best classical algorithms is not guaranteed. However, due to the nature and importance of the target problems, it is worth investigating heuristic approaches on a quantum computer that could indeed speed up some problem instances. Here we demonstrate an approach based on the *Quantum Approximate Optimization Algorithm* (QAOA) by Farhi, Goldstone, and Gutmann (2014). We frame the algorithm in the context of *approximate quantum computing*, given its heuristic nature.
The algorithm works as follows:
1. Choose the $w_i$ and $w_{ij}$ in the target Ising problem. In principle, even higher powers of Z are allowed.
1. Choose the depth of the quantum circuit $m$. Note that the depth can be modified adaptively.
1. Choose a set of controls $\theta$ and make a trial function $|\psi(\boldsymbol\theta)\rangle$, built using a quantum circuit made of C-Phase gates and single-qubit Y rotations, parameterized by the components of $\boldsymbol\theta$.
1. Evaluate
$$C(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)~|H|~\psi(\boldsymbol\theta)\rangle = \sum_i w_i \langle\psi(\boldsymbol\theta)~|Z_i|~\psi(\boldsymbol\theta)\rangle+ \sum_{i<j} w_{ij} \langle\psi(\boldsymbol\theta)~|Z_iZ_j|~\psi(\boldsymbol\theta)\rangle$$
by sampling the outcome of the circuit in the Z-basis and adding the expectation values of the individual Ising terms together. In general, different control points around $\boldsymbol\theta$ have to be estimated, depending on the classical optimizer chosen.
1. Use a classical optimizer to choose a new set of controls.
1. Continue until $C(\boldsymbol\theta)$ reaches a minimum, close enough to the solution $\boldsymbol\theta^*$.
1. Use the last $\boldsymbol\theta$ to generate a final set of samples from the distribution $|\langle z_i~|\psi(\boldsymbol\theta)\rangle|^2\;\forall i$ to obtain the answer.
It is our belief that the difficulty of finding good heuristic algorithms will come down to the choice of an appropriate trial wavefunction. For example, one could consider a trial function whose entanglement best aligns with the target problem, or simply make the amount of entanglement a variable. In this tutorial, we will consider a simple trial function of the form
$$|\psi(\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$$
where $U_\mathrm{entangler}$ is a collection of C-Phase gates (fully entangling gates), and $U_\mathrm{single}(\boldsymbol\theta) = \prod_{i=1}^n Y(\theta_{i})$, where $n$ is the number of qubits and $m$ is the depth of the quantum circuit. The motivation for this choice is that, for these classical problems, it allows us to search over the space of quantum states with only real coefficients, while still exploiting entanglement to potentially converge faster to the solution.
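This trial function can be simulated exactly for a toy instance. The sketch below (plain NumPy, a hypothetical two-qubit Max-Cut instance with a single unit-weight edge, so $H = Z_0 Z_1$) builds one layer of the ansatz — the CZ entangler on $|{+}{+}\rangle$ followed by single-qubit Y rotations — and grid-searches the two angles to minimize $\langle Z_0 Z_1\rangle$; it is a sketch of the idea, not the Qiskit implementation used later:

```python
import numpy as np

def ry(theta):
    """Single-qubit Y rotation matrix (real-valued)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# |++> followed by the CZ entangler (phase flip on |11>); basis order |00>,|01>,|10>,|11>
plus2 = np.full(4, 0.5)
entangled = np.diag([1, 1, 1, -1]) @ plus2

zz = np.diag([1.0, -1.0, -1.0, 1.0])  # Z0 Z1 operator

def energy(t0, t1):
    psi = np.kron(ry(t0), ry(t1)) @ entangled
    return psi @ zz @ psi  # amplitudes are real, so no conjugation needed

# crude classical optimizer: exhaustive grid search over the two angles
angles = np.linspace(0, 2 * np.pi, 41)
best = min((energy(t0, t1), t0, t1) for t0 in angles for t1 in angles)
print("minimal <Z0Z1> found:", round(best[0], 6))  # reaches -1, the exact minimum
```

The minimum $\langle Z_0Z_1\rangle = -1$ corresponds to the maximum cut of the single edge, illustrating that even this shallow real-amplitude ansatz can represent the optimal state.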
One advantage of using this sampling method compared to adiabatic approaches is that the target Ising Hamiltonian does not have to be implemented directly on hardware, so the algorithm is not limited by the connectivity of the device. Furthermore, higher-order terms in the cost function, such as $Z_iZ_jZ_k$, can also be sampled efficiently, whereas in adiabatic or annealing approaches they are generally impractical to deal with.
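Any diagonal term, of any order, is estimated from the same Z-basis samples: the expectation of $Z_iZ_j\cdots$ is just the average of the product of the sampled $\pm 1$ values. A minimal sketch (the `samples` array below is made up for illustration):

```python
import numpy as np

# four hypothetical Z-basis shots on 4 qubits, entries in {+1, -1}
samples = np.array([
    [ 1, -1,  1, -1],
    [ 1,  1, -1, -1],
    [-1, -1, -1,  1],
    [ 1, -1,  1,  1],
])

def expval(samples, qubits):
    """Estimate <Z_{q1} Z_{q2} ...> from sampled spin configurations."""
    return samples[:, list(qubits)].prod(axis=1).mean()

print(expval(samples, [0]))        # <Z_0>
print(expval(samples, [0, 1]))     # <Z_0 Z_1>
print(expval(samples, [0, 1, 2]))  # <Z_0 Z_1 Z_2>, a higher-order term at no extra cost
```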
References:
- A. Lucas, Frontiers in Physics 2, 5 (2014)
- E. Farhi, J. Goldstone, S. Gutmann e-print arXiv 1411.4028 (2014)
- D. Wecker, M. B. Hastings, M. Troyer Phys. Rev. A 94, 022309 (2016)
- E. Farhi, J. Goldstone, S. Gutmann, H. Neven e-print arXiv 1703.06199 (2017)
```
# useful additional packages
import matplotlib.pyplot as plt
import matplotlib.axes as axes
%matplotlib inline
import numpy as np
import networkx as nx
from qiskit import Aer
from qiskit.tools.visualization import plot_histogram
from qiskit.circuit.library import TwoLocal
from qiskit.optimization.applications.ising import max_cut, tsp
from qiskit.aqua.algorithms import VQE, NumPyMinimumEigensolver
from qiskit.aqua.components.optimizers import SPSA
from qiskit.aqua import aqua_globals
from qiskit.aqua import QuantumInstance
from qiskit.optimization.applications.ising.common import sample_most_likely
from qiskit.optimization.algorithms import MinimumEigenOptimizer
from qiskit.optimization.problems import QuadraticProgram
# setup aqua logging
import logging
from qiskit.aqua import set_qiskit_aqua_logging
# set_qiskit_aqua_logging(logging.DEBUG) # choose INFO, DEBUG to see the log
```
## Max-Cut problem
```
# Generating a graph of 4 nodes
n=4 # Number of nodes in graph
G=nx.Graph()
G.add_nodes_from(np.arange(0,n,1))
elist=[(0,1,1.0),(0,2,1.0),(0,3,1.0),(1,2,1.0),(2,3,1.0)]
# tuple is (i,j,weight) where (i,j) is the edge
G.add_weighted_edges_from(elist)
colors = ['r' for node in G.nodes()]
pos = nx.spring_layout(G)
def draw_graph(G, colors, pos):
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
edge_labels = nx.get_edge_attributes(G, 'weight')
nx.draw_networkx_edge_labels(G, pos=pos, edge_labels=edge_labels)
draw_graph(G, colors, pos)
# Computing the weight matrix from the random graph
w = np.zeros([n,n])
for i in range(n):
for j in range(n):
temp = G.get_edge_data(i,j,default=0)
if temp != 0:
w[i,j] = temp['weight']
print(w)
```
### Brute force approach
Try all possible $2^n$ combinations. For $n = 4$, as in this example, one deals with only 16 combinations, but for $n = 1000$ one has about $1.07\times 10^{301}$ combinations, which is impractical to handle with a brute-force approach.
```
best_cost_brute = 0
for b in range(2**n):
x = [int(t) for t in reversed(list(bin(b)[2:].zfill(n)))]
cost = 0
for i in range(n):
for j in range(n):
cost = cost + w[i,j]*x[i]*(1-x[j])
if best_cost_brute < cost:
best_cost_brute = cost
xbest_brute = x
print('case = ' + str(x)+ ' cost = ' + str(cost))
colors = ['r' if xbest_brute[i] == 0 else 'c' for i in range(n)]
draw_graph(G, colors, pos)
print('\nBest solution = ' + str(xbest_brute) + ' cost = ' + str(best_cost_brute))
```
### Mapping to the Ising problem
Qiskit provides functionality to directly generate the Ising Hamiltonian as well as create the corresponding `QuadraticProgram`.
```
qubitOp, offset = max_cut.get_operator(w)
print('Offset:', offset)
print('Ising Hamiltonian:')
print(qubitOp.print_details())
# mapping Ising Hamiltonian to Quadratic Program
qp = QuadraticProgram()
qp.from_ising(qubitOp, offset)
qp.to_docplex().prettyprint()
# solving Quadratic Program using exact classical eigensolver
exact = MinimumEigenOptimizer(NumPyMinimumEigensolver())
result = exact.solve(qp)
print(result)
```
Since the problem was cast as a minimization, the optimal value of $-4$ corresponds to the optimum, i.e. a maximum cut of weight $4$.
### Checking that the full Hamiltonian gives the right cost
```
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = NumPyMinimumEigensolver(qubitOp)
result = ee.run()
x = sample_most_likely(result.eigenstate)
print('energy:', result.eigenvalue.real)
print('max-cut objective:', result.eigenvalue.real + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'c' for i in range(n)]
draw_graph(G, colors, pos)
```
### Running it on a quantum computer
We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
```
aqua_globals.random_seed = 123  # random_seed expects an integer seed
seed = 10598
backend = Aer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
# construct VQE
spsa = SPSA(maxiter=300)
ry = TwoLocal(qubitOp.num_qubits, 'ry', 'cz', reps=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa, quantum_instance=quantum_instance)
# run VQE
result = vqe.run(quantum_instance)
# print results
x = sample_most_likely(result.eigenstate)
print('energy:', result.eigenvalue.real)
print('time:', result.optimizer_time)
print('max-cut objective:', result.eigenvalue.real + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))
# plot results
colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'c' for i in range(n)]
draw_graph(G, colors, pos)
# create minimum eigen optimizer based on VQE
vqe_optimizer = MinimumEigenOptimizer(vqe)
# solve quadratic program
result = vqe_optimizer.solve(qp)
print(result)
colors = ['r' if result.x[i] == 0 else 'c' for i in range(n)]
draw_graph(G, colors, pos)
```
## Traveling Salesman Problem
In addition to being a notorious NP-complete problem that has drawn the attention of computer scientists and mathematicians for well over a century, the Traveling Salesman Problem (TSP) has important bearings on finance and marketing, as its name suggests. Colloquially speaking, the traveling salesman is a person who goes from city to city to sell merchandise. The objective in this case is to find the shortest path that would enable the salesman to visit all the cities and return to his hometown, i.e. the city where he started traveling. By doing this, the salesman gets to maximize potential sales in the least amount of time.
The problem derives its importance from its "hardness" and ubiquitous equivalence to other relevant combinatorial optimization problems that arise in practice.
The mathematical formulation, with some early analysis, goes back to W.R. Hamilton in the 19th century. Mathematically the problem is, as in the case of Max-Cut, best abstracted in terms of graphs. The TSP on the nodes of a graph asks for the shortest *Hamiltonian cycle* that can be taken through each of the nodes. A Hamiltonian cycle is a closed path that uses every vertex of a graph exactly once. The general solution is unknown, and an algorithm that finds it efficiently (e.g., in polynomial time) is not expected to exist.
Find the shortest Hamiltonian cycle in a graph $G=(V,E)$ with $n=|V|$ nodes and distances $w_{ij}$ (distance from vertex $i$ to vertex $j$). A Hamiltonian cycle is described by $n^2$ binary variables $x_{i,p}$, where $i$ represents the node and $p$ its order in a prospective cycle: $x_{i,p}=1$ if node $i$ occupies position $p$ in the cycle. We require that every node appears exactly once in the cycle, and that each position is occupied by exactly one node. This amounts to the two constraints (here and in the following, whenever not specified, indices run over $0,1,\ldots,n-1$)
$$\sum_{i} x_{i,p} = 1 ~~\forall p$$
$$\sum_{p} x_{i,p} = 1 ~~\forall i.$$
For nodes in our prospective ordering, if $x_{i,p}$ and $x_{j,p+1}$ are both 1, then there should be an energy penalty if $(i,j) \notin E$ (not connected in the graph). The form of this penalty is
$$\sum_{(i,j)\notin E}\sum_{p} x_{i,p}x_{j,p+1}>0,$$
where the cyclic boundary condition for Hamiltonian cycles, $(p=n)\equiv (p=0)$, is assumed. Here, however, we assume a fully connected graph and do not include this term. The distance that needs to be minimized is
$$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}.$$
Putting this all together in a single objective function to be minimized, we get the following:
$$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}+ A\sum_p\left(1- \sum_i x_{i,p}\right)^2+A\sum_i\left(1- \sum_p x_{i,p}\right)^2,$$
where $A$ is a free parameter. One needs to ensure that $A$ is large enough so that these constraints are respected. One way to do this is to choose $A$ such that $A > \mathrm{max}(w_{ij})$.
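To make the role of the penalty concrete, the sketch below evaluates this objective for a hypothetical 3-city instance (the distance matrix is made up), comparing a valid tour encoding against an assignment that violates the one-node-per-position constraints:

```python
import numpy as np

w = np.array([[0., 2., 9.],
              [2., 0., 6.],
              [9., 6., 0.]])       # hypothetical symmetric distances
n = len(w)
A = 1 + w.max()                    # penalty weight, chosen so that A > max(w_ij)

def tsp_objective(x):
    """x[i, p] = 1 if node i occupies position p in the cycle."""
    cost = sum(w[i, j] * x[i, p] * x[j, (p + 1) % n]
               for i in range(n) for j in range(n) for p in range(n))
    penalty = A * (((1 - x.sum(axis=0)) ** 2).sum() +
                   ((1 - x.sum(axis=1)) ** 2).sum())
    return cost + penalty

tour = np.eye(n)                   # the identity permutation: 0 -> 1 -> 2 -> 0
bad = tour.copy(); bad[2, 2] = 0   # node 2 is never visited
print(tsp_objective(tour))         # tour length only, no penalty
print(tsp_objective(bad))          # shorter path, but the penalty dominates
```

The valid tour scores its plain length, while the constraint-violating assignment picks up $2A$ of penalty, so any minimizer is pushed toward feasible permutations.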
Once again, it is easy to map the problem in this form to a quantum computer, and the solution will be found by minimizing an Ising Hamiltonian.
```
# Generating a graph of 3 nodes
n = 3
num_qubits = n ** 2
ins = tsp.random_tsp(n, seed=123)
print('distance\n', ins.w)
# Draw the graph
G = nx.Graph()
G.add_nodes_from(np.arange(0, ins.dim, 1))
colors = ['r' for node in G.nodes()]
for i in range(0, ins.dim):
for j in range(i+1, ins.dim):
G.add_edge(i, j, weight=ins.w[i,j])
pos = {k: v for k, v in enumerate(ins.coord)}
draw_graph(G, colors, pos)
```
### Brute force approach
```
from itertools import permutations
def brute_force_tsp(w, N):
a=list(permutations(range(1,N)))
last_best_distance = 1e10
for i in a:
distance = 0
pre_j = 0
for j in i:
distance = distance + w[j,pre_j]
pre_j = j
distance = distance + w[pre_j,0]
order = (0,) + i
if distance < last_best_distance:
best_order = order
last_best_distance = distance
print('order = ' + str(order) + ' Distance = ' + str(distance))
return last_best_distance, best_order
best_distance, best_order = brute_force_tsp(ins.w, ins.dim)
print('Best order from brute force = ' + str(best_order) + ' with total distance = ' + str(best_distance))
def draw_tsp_solution(G, order, colors, pos):
G2 = nx.DiGraph()
G2.add_nodes_from(G)
n = len(order)
for i in range(n):
j = (i + 1) % n
G2.add_edge(order[i], order[j], weight=G[order[i]][order[j]]['weight'])
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G2, node_color=colors, edge_color='b', node_size=600, alpha=.8, ax=default_axes, pos=pos)
edge_labels = nx.get_edge_attributes(G2, 'weight')
nx.draw_networkx_edge_labels(G2, pos, font_color='b', edge_labels=edge_labels)
draw_tsp_solution(G, best_order, colors, pos)
```
### Mapping to the Ising problem
```
qubitOp, offset = tsp.get_operator(ins)
print('Offset:', offset)
print('Ising Hamiltonian:')
print(qubitOp.print_details())
qp = QuadraticProgram()
qp.from_ising(qubitOp, offset, linear=True)
qp.to_docplex().prettyprint()
result = exact.solve(qp)
print(result)
```
### Checking that the full Hamiltonian gives the right cost
```
#Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = NumPyMinimumEigensolver(qubitOp)
result = ee.run()
print('energy:', result.eigenvalue.real)
print('tsp objective:', result.eigenvalue.real + offset)
x = sample_most_likely(result.eigenstate)
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
```
### Running it on a quantum computer
We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
```
aqua_globals.random_seed = 123  # random_seed expects an integer seed
seed = 10598
backend = Aer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
spsa = SPSA(maxiter=300)
ry = TwoLocal(qubitOp.num_qubits, 'ry', 'cz', reps=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa, quantum_instance=quantum_instance)
result = vqe.run(quantum_instance)
print('energy:', result.eigenvalue.real)
print('time:', result.optimizer_time)
x = sample_most_likely(result.eigenstate)
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
aqua_globals.random_seed = 123  # random_seed expects an integer seed
seed = 10598
backend = Aer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)
# create minimum eigen optimizer based on VQE
vqe_optimizer = MinimumEigenOptimizer(vqe)
# solve quadratic program
result = vqe_optimizer.solve(qp)
print(result)
z = tsp.get_tsp_solution(result.x)  # use the optimizer's solution, not the stale x
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
https://www.kaggle.com/muzzzdy/sms-spam-detection-with-various-classifiers

https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/

```
import numpy as np

from google.colab import drive
drive.mount("/content/drive")
```

### data examination

```
%cd "/content/drive/My Drive"
!ls
import pandas

mydata = pandas.read_csv("spam.csv", encoding="latin-1")
mydata.head()

mydata = mydata.drop(labels=["Unnamed: 2", "Unnamed: 3", "Unnamed: 4"], axis=1)
mydata = mydata.rename(columns={"v1": "label", "v2": "inp"})
mydata.describe()

mydata["length"] = mydata["inp"].apply(len)
mydata.head()

import matplotlib as mat

mat.rcParams["patch.force_edgecolor"] = True
mat.pyplot.style.use("seaborn-bright")
mydata.hist(column="length", by="label", bins=50, figsize=(8, 3))

mydata['label'] = mydata['label'].astype('category')
mydata['label'].cat.codes.corr(mydata['length'])

mydata['label'].value_counts() / mydata.shape[0]
```

### preprocessing

```
import string

def text_process(text):
    text = text.translate(str.maketrans('', '', string.punctuation))
    text = [word.lower() for word in text.split()]
    return " ".join(text)

inpcol = mydata['inp'].copy()
preprocessed = inpcol.apply(text_process)
preprocessed[0]

maxlen = 0
for document in preprocessed:
    document_len = len(document.split())
    if document_len > maxlen:
        maxlen = document_len

print(maxlen)
print(document.split())

from keras.preprocessing.text import one_hot

vocab_size = 20000
encoded_docs = [one_hot(document, vocab_size) for document in preprocessed]
print(encoded_docs[0])

from keras.preprocessing.sequence import pad_sequences

padded_docs = pad_sequences(encoded_docs, maxlen=maxlen, padding='post')
print(padded_docs)
print(np.shape(padded_docs))

mydata2 = []
for i in range(len(mydata)):
    myex = [None, None]
    myex[0] = padded_docs[i]
    if mydata.loc[i][0] == 'ham':
        myex[1] = 0
    elif mydata.loc[i][0] == 'spam':
        myex[1] = 1
    mydata2.append(myex)

print(np.shape(mydata2))
print(mydata2[0])
```

### Network

```
from sklearn.model_selection import train_test_split

xtrain, xtest, ytrain, ytest = train_test_split(padded_docs[:],
                                                np.array(mydata2)[:, 1],
                                                test_size=0.3,
                                                random_state=22)
print(np.shape(xtrain), np.shape(ytrain))
xtrain[0]

# from keras.optimizers import Adam
# myopt = Adam(lr=0.00001)
from keras.layers import LSTM, BatchNormalization, Dropout, Input, Dense, Embedding, Flatten
from keras.models import Sequential

model = Sequential()
model.add(Embedding(vocab_size, 512, input_length=maxlen))
model.add(Flatten())
model.add(Dense(64))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])

model.fit(xtrain, ytrain, epochs=100, batch_size=128, validation_split=0.3)
```
# Redshift ML BYOM Remote Inference using Amazon SageMaker Random Cut Forests
_**Run Predictions from your Amazon Redshift cluster on a model trained and deployed on Amazon Sagemaker**_
---
---
## Contents
1. [Introduction](#Introduction)
2. [Setup Parameters](#Setup-Parameters)
3. [Training](#Training)
4. [Inference](#Inference)
5. [Redshift ML BYOM Remote Inference](#Redshift-ML-BYOM-Remote-Inference)
6. [Conclusion](#Conclusion)
---
# Introduction
***
Amazon SageMaker Random Cut Forest (RCF) is an algorithm designed to detect anomalous data points within a dataset. Examples of when anomalies are important to detect include when website activity uncharacteristically spikes, when temperature data diverges from a periodic behavior, or when changes to public transit ridership reflect the occurrence of a special event.
In this notebook, we will use the SageMaker RCF algorithm to train an RCF model on the Numenta Anomaly Benchmark (NAB) NYC Taxi dataset, which records the amount of New York City taxi ridership over the course of six months. We will then use this model to predict anomalous events by emitting an "anomaly score" for each data point. The main goals of this notebook are:
* to learn how to obtain, transform, and store data for use in Amazon SageMaker;
* to create an Amazon SageMaker training job on a data set to produce an RCF model;
* to use the RCF model to perform inference with an Amazon SageMaker endpoint.
The following are ***not*** goals of this notebook:
* deeply understand the RCF model,
* understand how the Amazon SageMaker RCF algorithm works.
If you would like to know more please check out the [SageMaker RCF Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/randomcutforest.html).
# Setup Parameters
***
Please set the input parameters below:
1. REDSHIFT_IAM_ROLE: The ARN of the IAM role attached to the Redshift cluster.
2. REDSHIFT_USER: Database user to run SQL commands.
3. REDSHIFT_ENDPOINT: Redshift cluster endpoint.
4. SAGEMAKER_S3_BUCKET: S3 bucket to store training input/output.
```
REDSHIFT_ENDPOINT = 'redshift-cluster.xxxxxxxxxx.us-east-1.redshift.amazonaws.com:5439/dev'
REDSHIFT_USER="awsuser"
REDSHIFT_IAM_ROLE='your-amazon-redshift-sagemaker-iam-role-arn'
SAGEMAKER_S3_BUCKET='your-s3-bucket'
import boto3
import botocore
import sagemaker
import sys
bucket = SAGEMAKER_S3_BUCKET
prefix = "sagemaker/rcf-benchmarks"
execution_role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket where the original data is downloaded and stored.
downloaded_data_bucket = "sagemaker-sample-files"
downloaded_data_prefix = "datasets/tabular/anomaly_benchmark_taxi"
def check_bucket_permission(bucket):
# check if the bucket exists
permission = False
try:
boto3.Session().client("s3").head_bucket(Bucket=bucket)
except botocore.exceptions.ParamValidationError as e:
print(
"Hey! You either forgot to specify your S3 bucket"
" or you gave your bucket an invalid name!"
)
except botocore.exceptions.ClientError as e:
if e.response["Error"]["Code"] == "403":
print(f"Hey! You don't have permission to access the bucket, {bucket}.")
elif e.response["Error"]["Code"] == "404":
print(f"Hey! Your bucket, {bucket}, doesn't exist!")
else:
raise
else:
permission = True
return permission
if check_bucket_permission(downloaded_data_bucket):
print(
f"Downloaded training data will be read from s3://{downloaded_data_bucket}/{downloaded_data_prefix}"
)
```
## Obtain and Inspect Example Data
Our data comes from the Numenta Anomaly Benchmark (NAB) NYC Taxi dataset [[1](https://github.com/numenta/NAB/blob/master/data/realKnownCause/nyc_taxi.csv)]. We downloaded the data from [here](https://raw.githubusercontent.com/numenta/NAB/master/data/realKnownCause/nyc_taxi.csv) and stored it in an S3 bucket. The data consists of the number of New York City taxi passengers over the course of six months, aggregated into 30-minute buckets. We know, a priori, that anomalous events occur during the NYC marathon, Thanksgiving, Christmas, New Year's Day, and on the day of a snow storm.
> [1] https://github.com/numenta/NAB/blob/master/data/realKnownCause/nyc_taxi.csv
```
%%time
import pandas as pd
data_filename = "NAB_nyc_taxi.csv"
s3 = boto3.client("s3")
s3.download_file(downloaded_data_bucket, f"{downloaded_data_prefix}/{data_filename}", data_filename)
taxi_data = pd.read_csv(data_filename, delimiter=",")
```
Before training any models it is important to inspect our data, first. Perhaps there are some underlying patterns or structures that we could provide as "hints" to the model or maybe there is some noise that we could pre-process away. The raw data looks like this:
```
taxi_data.head()
```
Human beings are visual creatures so let's take a look at a plot of the data.
```
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams["figure.dpi"] = 100
taxi_data.plot()
```
Human beings are also extraordinarily good at perceiving patterns. Note, for example, that something uncharacteristic occurs at around datapoint number 6000. Additionally, as we might expect with taxi ridership, the passenger count appears more or less periodic. Let's zoom in to not only examine this anomaly but also to get a better picture of what the "normal" data looks like.
```
taxi_data[5500:6500].plot()
```
Here we see that the number of taxi trips taken is mostly periodic with one mode of length approximately 50 data points. In fact, the mode is length 48 since each datapoint represents a 30-minute bin of ridership count. Therefore, we expect another mode of length $336 = 48 \times 7$, the length of a week. Smaller frequencies over the course of the day occur, as well.
For example, here is the data across the day containing the above anomaly:
```
taxi_data[5952:6000]
```
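The daily period can also be recovered programmatically rather than by eye. A quick autocorrelation sketch on a synthetic stand-in for the ridership series (a clean daily cycle, so the result is deterministic):

```python
import numpy as np

# synthetic stand-in: a daily cycle sampled every 30 minutes (48 bins/day) over two weeks
t = np.arange(48 * 14)
x = np.sin(2 * np.pi * t / 48)
x = x - x.mean()

acf = np.correlate(x, x, mode="full")[x.size - 1:]  # autocorrelation for lags >= 0
first_neg = int(np.argmax(acf < 0))                 # skip past the trivial peak at lag 0
period = first_neg + int(np.argmax(acf[first_neg:96]))
print("estimated period:", period)                  # 48 half-hour bins = one day
```

Applied to `taxi_data["value"]` instead of the synthetic series, the same recipe recovers the dominant daily mode.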
# Training
***
Next, we configure a SageMaker training job to train the Random Cut Forest (RCF) algorithm on the taxi cab data.
## Hyperparameters
Particular to a SageMaker RCF training job are the following hyperparameters:
* **`num_samples_per_tree`** - the number of randomly sampled data points sent to each tree. As a general rule, `1/num_samples_per_tree` should approximate the expected ratio of anomalies to normal points in the dataset.
* **`num_trees`** - the number of trees to create in the forest. Each tree learns a separate model from different samples of data. The full forest model uses the mean predicted anomaly score from each constituent tree.
* **`feature_dim`** - the dimension of each data point.
In addition to these RCF model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. Note that,
* Recommended instance type: `ml.m4`, `ml.c4`, or `ml.c5`
* Current limitations:
* The RCF algorithm does not take advantage of GPU hardware.
```
from sagemaker import RandomCutForest
session = sagemaker.Session()
# specify general training job information
rcf = RandomCutForest(
role=execution_role,
instance_count=1,
instance_type="ml.m4.xlarge",
data_location=f"s3://{bucket}/{prefix}/",
output_path=f"s3://{bucket}/{prefix}/output",
num_samples_per_tree=512,
num_trees=50,
)
# automatically upload the training data to S3 and run the training job
rcf.fit(rcf.record_set(taxi_data.value.to_numpy().reshape(-1, 1)))
```
If you see the message
> `===== Job Complete =====`
at the bottom of the output logs then that means training successfully completed and the output RCF model was stored in the specified output path. You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the "Jobs" tab and select the training job matching the name printed below:
```
print(f"Training job name: {rcf.latest_training_job.job_name}")
```
# Inference
***
A trained Random Cut Forest model does nothing on its own. We now want to use the model we computed to perform inference on data. In this case, it means computing anomaly scores from input time series data points.
We create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference is computed as well as an initial number of instances to spin up. We recommend using the `ml.c5` instance type as it provides the fastest inference time at the lowest cost.
```
rcf_inference = rcf.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
```
Congratulations! You now have a functioning SageMaker RCF inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below:
```
print(f"Endpoint name: {rcf_inference.endpoint}")
```
## Data Serialization/Deserialization
We can pass data in a variety of formats to our inference endpoint. In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `CSVSerializer` and `JSONDeserializer` when configuring the inference endpoint.
```
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer
rcf_inference.serializer = CSVSerializer()
rcf_inference.deserializer = JSONDeserializer()
```
Let's pass the training dataset, in CSV format, to the inference endpoint so we can automatically detect the anomalies we saw with our eyes in the plots, above. Note that the serializer and deserializer will automatically take care of the datatype conversion from Numpy NDArrays.
For starters, let's only pass in the first six datapoints so we can see what the output looks like.
```
taxi_data_numpy = taxi_data.value.to_numpy().reshape(-1, 1)
print(taxi_data_numpy[:6])
results = rcf_inference.predict(
taxi_data_numpy[:6], initial_args={"ContentType": "text/csv", "Accept": "application/json"}
)
```
## Computing Anomaly Scores
Now, let's compute and plot the anomaly scores from the entire taxi dataset.
```
results = rcf_inference.predict(taxi_data_numpy)
scores = [datum["score"] for datum in results["scores"]]
# add scores to taxi data frame and print first few values
taxi_data["score"] = pd.Series(scores, index=taxi_data.index)
taxi_data.head()
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
#
# *Try this out* - change `start` and `end` to zoom in on the
# anomaly found earlier in this notebook
#
start, end = 0, len(taxi_data)
# start, end = 5500, 6500
taxi_data_subset = taxi_data[start:end]
ax1.plot(taxi_data_subset["value"], color="C0", alpha=0.8)
ax2.plot(taxi_data_subset["score"], color="C1")
ax1.grid(which="major", axis="both")
ax1.set_ylabel("Taxi Ridership", color="C0")
ax2.set_ylabel("Anomaly Score", color="C1")
ax1.tick_params("y", colors="C0")
ax2.tick_params("y", colors="C1")
ax1.set_ylim(0, 40000)
ax2.set_ylim(min(scores), 1.4 * max(scores))
fig.set_figwidth(10)
```
Note that the anomaly score spikes where our eyeball-norm method suggests there is an anomalous data point as well as in some places where our eyeballs are not as accurate.
Below we print and plot any data points with scores greater than 3 standard deviations (approx 99.9th percentile) from the mean score.
```
score_mean = taxi_data["score"].mean()
score_std = taxi_data["score"].std()
score_cutoff = score_mean + 3 * score_std
anomalies = taxi_data_subset[taxi_data_subset["score"] > score_cutoff]
anomalies
```
The following is a list of known anomalous events which occurred in New York City within this timeframe:
* `2014-11-02` - NYC Marathon
* `2015-01-01` - New Year's Day
* `2015-01-27` - Snowstorm
Note that our algorithm managed to capture these events along with quite a few others. Below we add these anomalies to the score plot.
```
ax2.plot(anomalies.index, anomalies.score, "ko")
fig
```
With the current hyperparameter choices we see that the three-standard-deviation threshold, while able to capture the known anomalies as well as the ones apparent in the ridership plot, is rather sensitive to fine-grained perturbations and anomalous behavior. Adding more trees to the SageMaker RCF model, or training on a larger data set, could smooth out the results.
# Redshift ML BYOM Remote Inference
## Setup: Run SQL function using the Redshift Data API to get SQL query output directly into a pandas DataFrame
In this step, we create the function `run_sql`, which we will use to get SQL query output directly into a pandas DataFrame. We will also use this function to run DDL statements.
```
import boto3
import time
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
session = boto3.session.Session()
region = session.region_name
def run_sql(sql_text):
client = boto3.client("redshift-data")
res = client.execute_statement(Database=REDSHIFT_ENDPOINT.split('/')[1], DbUser=REDSHIFT_USER, Sql=sql_text,
ClusterIdentifier=REDSHIFT_ENDPOINT.split('.')[0])
query_id = res["Id"]
done = False
while not done:
time.sleep(1)
status_description = client.describe_statement(Id=query_id)
status = status_description["Status"]
if status == "FAILED":
raise Exception('SQL query failed:' + query_id + ": " + status_description["Error"])
elif status == "FINISHED":
if status_description['ResultRows']>0:
results = client.get_statement_result(Id=query_id)
column_labels = []
for i in range(len(results["ColumnMetadata"])): column_labels.append(results["ColumnMetadata"][i]['label'])
records = []
for record in results.get('Records'):
records.append([list(rec.values())[0] for rec in record])
df = pd.DataFrame(np.array(records), columns=column_labels)
return df
else:
return query_id
```
## Data Preparation Script
Data preparation script to be run on Redshift.
We will create the table that inference will be run on.
```
setup_script = """
DROP TABLE IF EXISTS public.rcf_taxi_data CASCADE;
CREATE TABLE public.rcf_taxi_data
(
ride_timestamp timestamp,
nbr_passengers int
);
COPY public.rcf_taxi_data
FROM 's3://sagemaker-sample-files/datasets/tabular/anomaly_benchmark_taxi/NAB_nyc_taxi.csv'
IAM_ROLE '{}' ignoreheader 1 csv delimiter ',';
"""
```
## Run data preparation script in Redshift
```
sql_stmt = setup_script.split(";")
for sql_text in sql_stmt[:-1]:
run_sql(sql_text.format(REDSHIFT_IAM_ROLE));
```
## Run Redshift ML Create Model statement using Sagemaker Endpoint for Remote Inference
```
SAGEMAKER_ENDPOINT = rcf_inference.endpoint
print(SAGEMAKER_ENDPOINT)
sql_text = ("drop model if exists public.remote_random_cut_forest; "
            "CREATE MODEL public.remote_random_cut_forest "
            "FUNCTION remote_fn_rcf (int) "
            "RETURNS decimal(10,6) "
            "SAGEMAKER '{}' "
            "IAM_ROLE '{}'")
df=run_sql(sql_text.format(SAGEMAKER_ENDPOINT,REDSHIFT_IAM_ROLE))
print(df)
```
## Show Model
```
df = run_sql("SHOW MODEL public.remote_random_cut_forest")
df
```
## Computing Anomaly Scores
Now, let's compute and plot the anomaly scores from the entire taxi dataset.
```
df = run_sql("""
select ride_timestamp, nbr_passengers, public.remote_fn_rcf(nbr_passengers) as score
from public.rcf_taxi_data;
""");
df
```
Note that the anomaly score spikes where our eyeball-norm method suggests there is an anomalous data point as well as in some places where our eyeballs are not as accurate.
Below we print any data points with scores greater than 3 standard deviations (approx 99.9th percentile) from the mean score.
```
df = run_sql("""
with score_cutoff as
(select stddev(public.remote_fn_rcf(nbr_passengers)) as std, avg(public.remote_fn_rcf(nbr_passengers)) as mean, ( mean + 3 * std ) as score_cutoff_value
from public.rcf_taxi_data)
select ride_timestamp, nbr_passengers, public.remote_fn_rcf(nbr_passengers) as score
from public.rcf_taxi_data
where score > (select score_cutoff_value from score_cutoff)
""");
df
```
# Conclusion
---
We used Amazon SageMaker Random Cut Forest to detect anomalous datapoints in a taxi ridership dataset. In these data the anomalies occurred when ridership was uncharacteristically high or low. However, the RCF algorithm is also capable of detecting when, for example, data breaks periodicity or uncharacteristically changes global behavior.
We then used Redshift ML to demonstrate how you can run inference with unsupervised algorithms (such as Random Cut Forest). This lets you democratize machine learning by making predictions with Redshift SQL commands.
# Midterm Exam 02
- This is a closed book exam
- You should only ever have a SINGLE browser tab open
- The exam lasts 75 minutes, and Sakai will not accept late submissions
- You may use the following:
- TAB completion
- SHIFT-TAB completion for function arguments
- help(func), `?func`, `func?` to get help on `func`
- To create a new cell, use `ESC-A` or `ESC-B`
Note that there are 5 questions, each worth 25 points. The maximum grade you can have is 100.
**Honor Code: By taking this exam, you agree to abide by the Duke Honor Code.**
----
**1**. (25 points)
The Collatz sequence is defined by the following rules for finding the next number
```
if the current number is even, divide by 2
if the current number is odd, multiply by 3 and add 1
if the current number is 1, stop
```
- Find the starting number and length of the longest Collatz sequence for starting integers in `range(1, 10001)` (15 points)
- Make a scatter plot of the sequence length against starting number for starting integers in `range(1, 10001)`. Use a size of 1 (s=1) in the argument to scatter function. (10 points)
Note: The Collatz sequence is defined only for positive integers. For example, if the starting number is 3, the Collatz sequence is `[3, 10, 5, 16, 8, 4, 2, 1]`.
```
```
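The answer cell above is left empty in the exam; as a hedged sketch (not an official solution), the first part could be computed like this:

```python
def collatz_length(n):
    """Number of terms in the Collatz sequence starting at n (including n and the final 1)."""
    length = 1
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        length += 1
    return length

# Starting number with the longest sequence for starting integers in range(1, 10001)
best_start = max(range(1, 10001), key=collatz_length)
best_length = collatz_length(best_start)
```

The scatter plot in the second part would then plot `[collatz_length(n) for n in range(1, 10001)]` against the starting numbers with `plt.scatter(..., s=1)`.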
**2**. (25 points)
The Newton-Raphson algorithm finds zeros of a function using the update
$$
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}
$$
- Use the Newton-Raphson algorithm to find all solutions to $x^3 = 1$. Use $2, -2+3i, -2-3i$ as the starting conditions. You can use a brute force function that always terminates after 100 iterations. (15 points)
- Find the solutions by finding the companion matrix and performing an eigendecomposition. (10 points)
Note: Python uses $j$ for the imaginary part, not $i$
```
```
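A hedged sketch of one possible approach (the function and variable names here are mine, not part of the exam):

```python
import numpy as np

def newton(f, fprime, x0, iters=100):
    # Brute-force Newton-Raphson: always run a fixed number of iterations
    x = x0
    for _ in range(iters):
        x = x - f(x) / fprime(x)
    return x

f = lambda z: z**3 - 1
fprime = lambda z: 3 * z**2
roots = [newton(f, fprime, z0) for z0 in [2, -2 + 3j, -2 - 3j]]

# Alternative: eigendecomposition of the companion matrix of z^3 + 0z^2 + 0z - 1
C = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
eigs = np.linalg.eigvals(C)
```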
**3**. (25 points)
You are given the following data
```python
A = np.array([[1, 8, 0, 7],
[0, 2, 9, 4],
[2, 8, 8, 3],
[4, 8, 6, 1],
[2, 1, 9, 6],
[0, 7, 0, 1],
[4, 0, 2, 4],
[1, 4, 9, 5],
[6, 2, 6, 6],
[9, 9, 6, 3]], dtype='float')
b = np.array([[2],
[5],
[0],
[0],
[6],
[7],
[2],
[6],
[7],
[9]], dtype='float')
```
- Using SVD directly (not via `lstsq`), find the least squares solution to $Ax = b$ (10 points)
- Use SVD to find the best rank 3 approximation of A (10 points)
- Calculate the approximation error in terms of the Frobenius norm (5 points)
```
```
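One hedged sketch using the data above (SVD via NumPy; not an official solution):

```python
import numpy as np

A = np.array([[1, 8, 0, 7], [0, 2, 9, 4], [2, 8, 8, 3], [4, 8, 6, 1],
              [2, 1, 9, 6], [0, 7, 0, 1], [4, 0, 2, 4], [1, 4, 9, 5],
              [6, 2, 6, 6], [9, 9, 6, 3]], dtype='float')
b = np.array([[2], [5], [0], [0], [6], [7], [2], [6], [7], [9]], dtype='float')

# Least-squares solution via the pseudoinverse A+ = V S^-1 U^T
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ np.diag(1 / s) @ (U.T @ b)

# Best rank-3 approximation: keep the three largest singular values (Eckart-Young)
A3 = (U[:, :3] * s[:3]) @ Vt[:3, :]

# The Frobenius-norm error equals the discarded singular value here
err = np.linalg.norm(A - A3, 'fro')
```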
**4**. (25 points)
We observe some data points $(x_i, y_i)$, and believe that an appropriate model for the data is that
$$
f(x) = b_0 + b_1 x + b_2 x^2
$$
with some added noise.
- Find optimal values of the parameters $\beta = (b_0, b_1, b_2)$ that minimize $\Vert y - f(x) \Vert^2$ using gradient descent and starting with an initial value of $\beta_0 = \begin{bmatrix}1 & 1 &1 \end{bmatrix}$. Use a learning rate of 0.0001 and 10,000 iterations (20 points)
- Plot the fitted quadratic together with the data points (5 points)
Remember to use column vectors for $x$ and $y$.
Data
```
x = np.array([[0],
[1],
[2],
[3],
[4],
[5],
[6],
[7],
[8],
[9]])
y = np.array([[ 4.70612107],
[ 4.63393704],
[ 6.49770138],
[ 12.11243273],
[ 20.51575619],
[ 34.23493694],
[ 53.1814074 ],
[ 74.20612958],
[101.24176654],
[131.85009012]])
```
```
```
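A hedged sketch of the first part, assuming the gradient convention that absorbs the constant factor into the learning rate (with these data, applying 0.0001 to the full gradient $2X^\top(X\beta - y)$ can diverge):

```python
import numpy as np

x = np.arange(10, dtype=float).reshape(-1, 1)
y = np.array([[4.70612107], [4.63393704], [6.49770138], [12.11243273],
              [20.51575619], [34.23493694], [53.1814074], [74.20612958],
              [101.24176654], [131.85009012]])

X = np.hstack([np.ones_like(x), x, x**2])  # design matrix for b0 + b1*x + b2*x^2
beta = np.ones((3, 1))                     # initial value [1, 1, 1]^T
lr = 0.0001
for _ in range(10000):
    grad = X.T @ (X @ beta - y)            # gradient of the squared error (constant absorbed into lr)
    beta -= lr * grad
```

Plotting would then be `plt.scatter(x, y)` followed by `plt.plot(x, X @ beta)`.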
**5**. (25 points)
Recall that the page rank of a node is given by the equation
$$
PR(p_i) = \frac{1 - d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}
$$
and at steady state, we have the page rank vector $R$
$$
R = \frac{1 - d}{N} \mathbf{1} + d M R
$$
where $d$ is the damping factor, $N$ is the number of nodes, $1$ is a vector of ones, and
$$
M_{ij} = \begin{cases} 1 / L(p_j) & \text{if } p_j \text{ links to } p_i \\ 0 & \text{otherwise} \end{cases}
$$
where $L(p_j)$ is the number of outgoing links from node $p_j$.
Consider the graph
*(figure: the link graph for this problem; image not preserved)*
If $d = 0.9$ find the page rank of each node
- By solving a linear system (15 points)
- By eigendecomposition (10 points)
Note: The Markov matrix constructed as instructed does not follow the usual convention. Here the columns of our Markov matrix are probability vectors, and the page rank is considered to be a column vector of the steady state probabilities.
```
```
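Since the graph figure is not reproduced here, the following sketch uses a small made-up three-node graph (1→2, 1→3, 2→3, 3→1) purely to illustrate both approaches; the exam's actual graph would replace `M`:

```python
import numpy as np

d, N = 0.9, 3
# Column j of M holds 1/L(p_j) for each node that p_j links to (made-up graph)
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

# (a) Linear system: (I - d M) R = (1 - d)/N * 1
R = np.linalg.solve(np.eye(N) - d * M, (1 - d) / N * np.ones(N))

# (b) Eigendecomposition of A = d M + (1 - d)/N * 11^T: R is the eigenvector for eigenvalue 1
A = d * M + (1 - d) / N * np.ones((N, N))
vals, vecs = np.linalg.eig(A)
v = np.real(vecs[:, np.argmax(np.real(vals))])
R2 = v / v.sum()
```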
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Version Check
Note: Parallel Coordinates Plots are available in version <b>2.0.6+</b><br>
Run `pip install plotly --upgrade` to update your Plotly version
```
import plotly
plotly.__version__
```
### Adding Dimensions
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Parcoords(
line = dict(color = 'blue'),
dimensions = list([
dict(range = [1,5],
constraintrange = [1,2],
label = 'A', values = [1,4]),
dict(range = [1.5,5],
tickvals = [1.5,3,4.5],
label = 'B', values = [3,1.5]),
dict(range = [1,5],
tickvals = [1,2,4,5],
label = 'C', values = [2,4],
ticktext = ['text 1', 'text 2', 'text 3', 'text 4']),
dict(range = [1,5],
label = 'D', values = [4,2])
])
)
]
py.iplot(data, filename = 'parcoord-dimensions')
```
Parallel coordinates are richly interactive by default. Drag the lines along the axes to filter regions and drag the axis names across the plot to rearrange variables.

### Basic Parallel Coordinates Plot
```
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/bcdunbar/datasets/master/iris.csv")
data = [
go.Parcoords(
line = dict(color = df['species_id'],
colorscale = [[0,'#D7C16B'],[0.5,'#23D8C3'],[1,'#F3F10F']]),
dimensions = list([
dict(range = [0,8],
constraintrange = [4,8],
label = 'Sepal Length', values = df['sepal_length']),
dict(range = [0,8],
label = 'Sepal Width', values = df['sepal_width']),
dict(range = [0,8],
label = 'Petal Length', values = df['petal_length']),
dict(range = [0,8],
label = 'Petal Width', values = df['petal_width'])
])
)
]
layout = go.Layout(
plot_bgcolor = '#E5E5E5',
paper_bgcolor = '#E5E5E5'
)
fig = go.Figure(data = data, layout = layout)
py.iplot(fig, filename = 'parcoords-basic')
```
### Advanced Parallel Coordinates Plot
```
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/bcdunbar/datasets/master/parcoords_data.csv")
data = [
go.Parcoords(
line = dict(color = df['colorVal'],
colorscale = 'Jet',
showscale = True,
reversescale = True,
cmin = -4000,
cmax = -100),
dimensions = list([
dict(range = [32000,227900],
constraintrange = [100000,150000],
label = 'Block Height', values = df['blockHeight']),
dict(range = [0,700000],
label = 'Block Width', values = df['blockWidth']),
dict(tickvals = [0,0.5,1,2,3],
ticktext = ['A','AB','B','Y','Z'],
                 label = 'Cylinder Material', values = df['cycMaterial']),
dict(range = [-1,4],
tickvals = [0,1,2,3],
label = 'Block Material', values = df['blockMaterial']),
dict(range = [134,3154],
visible = True,
label = 'Total Weight', values = df['totalWeight']),
dict(range = [9,19984],
label = 'Assembly Penalty Weight', values = df['assemblyPW']),
dict(range = [49000,568000],
label = 'Height st Width', values = df['HstW']),
dict(range = [-28000,196430],
label = 'Min Height Width', values = df['minHW']),
dict(range = [98453,501789],
label = 'Min Width Diameter', values = df['minWD']),
dict(range = [1417,107154],
label = 'RF Block', values = df['rfBlock'])
])
)
]
py.iplot(data, filename = 'parcoords-advanced')
```
#### Reference
See https://plot.ly/python/reference/#parcoords for more information and chart attribute options!
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'parcoords.ipynb', 'python/parallel-coordinates-plot/', 'Parallel Coordinates Plot | plotly',
    'How to make parallel coordinates plots in Python with Plotly.',
title = 'Parallel Coordinates Plot | plotly',
name = 'Parallel Coordinates Plot',
has_thumbnail='true', thumbnail='thumbnail/parcoords.jpg',
language='python',
# page_type='example_index', // note this is only if you want the tutorial to appear on the main page: plot.ly/python
display_as='scientific', order=11.5,
ipynb= '~notebook_demo/142')
```
# Introduction to XGBoost-Spark Cross Validation with GPU
The goal of this notebook is to show you how to leverage the GPU to accelerate XGBoost-Spark cross validation for hyperparameter tuning. The best model for the given hyperparameters will be returned.
Here we take the 'Taxi' application as an example.
A few libraries are required for this notebook:
1. NumPy
2. cudf jar
3. xgboost4j jar
4. xgboost4j-spark jar
#### Import the Required Libraries
```
from ml.dmlc.xgboost4j.scala.spark import XGBoostRegressionModel, XGBoostRegressor
from ml.dmlc.xgboost4j.scala.spark.rapids import CrossValidator
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.tuning import ParamGridBuilder
from pyspark.sql import SparkSession
from pyspark.sql.types import FloatType, IntegerType, StructField, StructType
from time import time
import os
```
As shown above, `CrossValidator` is imported here from the package `ml.dmlc.xgboost4j.scala.spark.rapids`, not Spark's `tuning.CrossValidator`.
#### Create a Spark Session
```
spark = SparkSession.builder.appName("taxi-cv-gpu-python").getOrCreate()
```
#### Specify the Data Schema and Load the Data
```
label = 'fare_amount'
schema = StructType([
StructField('vendor_id', FloatType()),
StructField('passenger_count', FloatType()),
StructField('trip_distance', FloatType()),
StructField('pickup_longitude', FloatType()),
StructField('pickup_latitude', FloatType()),
StructField('rate_code', FloatType()),
StructField('store_and_fwd', FloatType()),
StructField('dropoff_longitude', FloatType()),
StructField('dropoff_latitude', FloatType()),
StructField(label, FloatType()),
StructField('hour', FloatType()),
StructField('year', IntegerType()),
StructField('month', IntegerType()),
StructField('day', FloatType()),
StructField('day_of_week', FloatType()),
StructField('is_weekend', FloatType()),
])
features = [ x.name for x in schema if x.name != label ]
# You need to update them to your real paths!
dataRoot = os.getenv("DATA_ROOT", "/data")
train_data = spark.read.parquet(dataRoot + '/taxi/parquet/train')
trans_data = spark.read.parquet(dataRoot + '/taxi/parquet/eval')
```
#### Build a XGBoost-Spark CrossValidator
```
# First build a regressor of GPU version using *setFeaturesCols* to set feature columns
params = {
'eta': 0.05,
'maxDepth': 8,
'subsample': 0.8,
'gamma': 1.0,
'numRound': 100,
'numWorkers': 1,
'treeMethod': 'gpu_hist',
}
regressor = XGBoostRegressor(**params).setLabelCol(label).setFeaturesCols(features)
# Then build the evaluator and the hyperparameters
evaluator = (RegressionEvaluator()
.setLabelCol(label))
param_grid = (ParamGridBuilder()
.addGrid(regressor.maxDepth, [3, 6])
.addGrid(regressor.numRound, [100, 200])
.build())
# Finally the cross validator
cross_validator = (CrossValidator()
.setEstimator(regressor)
.setEvaluator(evaluator)
.setEstimatorParamMaps(param_grid)
.setNumFolds(3))
```
#### Start Cross Validation by Fitting Data to CrossValidator
```
def with_benchmark(phrase, action):
start = time()
result = action()
end = time()
print('{} takes {} seconds'.format(phrase, round(end - start, 2)))
return result
model = with_benchmark('Cross-Validation', lambda: cross_validator.fit(train_data)).bestModel
```
#### Transform On the Best Model
```
def transform():
result = model.transform(trans_data).cache()
result.foreachPartition(lambda _: None)
return result
result = with_benchmark('Transforming', transform)
result.select(label, 'prediction').show(5)
```
#### Evaluation
```
accuracy = with_benchmark(
'Evaluation',
lambda: RegressionEvaluator().setLabelCol(label).evaluate(result))
print('RMSE is ' + str(accuracy))
spark.stop()
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Datasets/Vectors/us_census_counties.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Vectors/us_census_counties.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Datasets/Vectors/us_census_counties.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Vectors/us_census_counties.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
Map.setCenter(-110, 40, 5)
states = ee.FeatureCollection('TIGER/2018/States')
# .filter(ee.Filter.eq('STUSPS', 'MN'))
# // Turn the strings into numbers
states = states.map(lambda f: f.set('STATEFP', ee.Number.parse(f.get('STATEFP'))))
state_image = ee.Image().float().paint(states, 'STATEFP')
visParams = {
'palette': ['purple', 'blue', 'green', 'yellow', 'orange', 'red'],
'min': 0,
'max': 50,
'opacity': 0.8,
};
counties = ee.FeatureCollection('TIGER/2016/Counties')
image = ee.Image().paint(states, 0, 2)
Map.setCenter(-99.844, 37.649, 5)
# Map.addLayer(image, {'palette': 'FF0000'}, 'TIGER/2018/States')
Map.addLayer(image, visParams, 'TIGER/2016/States');
Map.addLayer(ee.Image().paint(counties, 0, 1), {}, 'TIGER/2016/Counties')
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
# Analyzing patient data (extended version)
## Preliminaries
Load the required modules.
## Data set description
The data consists of two spreadsheets.
The first is data on patients gathered during an experimental trial. The patients' temperature is measured at one-hour intervals, and each hour, the patients receive a dose of a drug to reduce fever. The first column is a patient ID, the second is the dose, the third the datetime information, and the fourth the body temperature.
The second spreadsheet is metadata on the patients. The first column is the patient ID, which should be consistent with that in the first spreadsheet, the second column denotes the gender of the patient, the third an experimental condition.
## Experiment data
Read the data set as a pandas `DataFrame` from an Excel spreadsheet.
Let's explore the data set a bit.
The data contains 4 columns, and 62 rows, and there seems to be missing data for `dose` and `temperature` since these columns have only 61 entries.
Let's check the first few and last few rows.
A very rough statistical overview is also useful.
It seems that we have 9 patients, but lets verify that.
Let's check whether the number of measurements is equal for all 9 patients.
For all patients but one there are 7 measurements; for one there are only six. Let's compare the timestamps for two patients.
It is quite clear we will have to deal with missing data.
## Analyzing experiment data as a time series
We can now rearrange the data for a time series analysis by using the date as an index, and show the patient data as individual columns.
Note that the missing data for patient 6 has been represented as `NaN`.
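The original code cells are not included in this text; a minimal sketch of the reshaping with a tiny made-up frame (column names assumed from the description above) might be:

```python
import pandas as pd

# Made-up stand-in for the experiment data read from the spreadsheet
df = pd.DataFrame({
    "patient_id": [1, 1, 2, 2],
    "datetime": pd.to_datetime(["2020-01-01 10:00", "2020-01-01 11:00"] * 2),
    "temperature": [38.5, 38.1, 39.0, 38.7],
})
# Use the date as index, one column per patient
ts = df.pivot(index="datetime", columns="patient_id", values="temperature")
```

A timestamp missing for one patient would show up as `NaN` in that patient's column, as described for patient 6.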
Hm, something seems fishy with the data for patient 3, let's visualize it individually.
The temperature for patient 3 was not measured at 13:00:00, but the dose was recorded. Let's check for other missing data more systematically.
We should take a look at the measurements for 13:00:00 and 14:00:00. First for `13:00:00`:
This was the temperature we were aware of, but at 14:00:00, it seems the dose was not recorded for patient 4.
From the plots, it seems reasonable to do a linear interpolation for both missing data points.
Let's do the interpolation on the time series in place.
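A sketch of what this step likely looks like (made-up values; `interpolate` also extends a trailing gap with the last observed value):

```python
import numpy as np
import pandas as pd

ts = pd.DataFrame({"temperature": [38.0, np.nan, 39.0]})
# Linear interpolation, modifying the frame in place
ts.interpolate(method="linear", inplace=True)
```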
We should have no more missing data, let's verify.
Let's verify what happened for patient 6 with the missing data at the end of the experiment.
The value is the same for 16h as for 15h, which is reasonable.
## Derived data
Let's add a new column to the time series that represents the average temperature over all patients.
Let's also add a column for each patient that shows the cumulative dose for that patient.
Let's check the maximum temperature and the total dose per patient using a pivot table. The index is the patient ID, and the values the dose and temperature, aggregated as sum and maximum respectively.
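A hedged sketch of such a pivot table on a made-up frame (sum of the dose, maximum of the temperature, per patient):

```python
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 1, 2, 2],
    "dose": [10.0, 10.0, 5.0, 5.0],
    "temperature": [38.5, 39.2, 37.9, 38.1],
})
summary = pd.pivot_table(df, index="patient_id",
                         values=["dose", "temperature"],
                         aggfunc={"dose": "sum", "temperature": "max"})
```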
Visualized as a scatter plot:
It seems possible that there is a linear relationship between the maximum temperature, and the total dose administered during the experiment.
## Metadata
There is a second file with metadata on the patients, let's read that as well. The file is `data/patient_metadata.xlsx`.
How many distinct genders and conditions do we have?
Gender and condition are categorical data, let's use `describe` on those columns.
Which patients are male, and have condition A?
## Merging dataframes
The patient IDs in the experimental data and the metadata should correspond, so let's merge the dataframes. However, let's first check whether the patient IDs are the same in both dataframes.
So metadata is missing for patient 4, and there is metadata for patients that were not involved in the experiment (10 and 11). Since we are not interested in the latter, and should keep the data on patient 4 regardless of metadata, we do a left-join of the data of the experiment, and the metadata.
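A sketch of the left join with made-up frames mirroring the situation described (patient 4 lacks metadata; patients 10 and 11 lack experiment data):

```python
import pandas as pd

experiment = pd.DataFrame({"patient_id": [1, 2, 4], "temperature": [38.5, 39.0, 38.2]})
metadata = pd.DataFrame({"patient_id": [1, 2, 10, 11], "gender": ["M", "F", "F", "M"]})

# Left join keeps every experiment row, even without matching metadata
merged = experiment.merge(metadata, on="patient_id", how="left")
```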
Since gender and condition are categorical data, we set the type appropriately.
How many patients involved in our experiment are male, and how many female?
Which patients had a temperature higher than $39.5^{\circ}$, and when? Let's also display the gender.
Suspiciously many males, let's see how many males versus females had a temperature (at any point).
## Gender differences?
There might be a gender influence here. Let's split our data set into male and female patients, again as a time series with the timestamp as index, but now with a multi-level column for gender and patient ID.
Note that now the data for patient 4 is missing, since the gender is unknown.
Let's compute the average temperature for male and female patients, compare.
These are two `Series`; we can concatenate them into a single `DataFrame`, and set the column names.
Let's make a plot of both average temperatures.
Looks like we are on to something, pity this is not real data.
## Conditions
However, we didn't take the patients' condition into account yet. Let's check how many patients have a specific condition.
What is the distribution of the condition with respect to the patients' gender?
Create a bar plot of this.
## From numbers to categories
Let's create an extra column in the `DataFrame` that is categorical, and represents the status of the patients in terms of fever.
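A sketch using `pd.cut` (the bin edges and labels here are my assumption; the text above suggests 39.5° as a relevant cutoff):

```python
import pandas as pd

temperatures = pd.Series([36.8, 38.2, 39.9])
# Assumed thresholds: <= 37.5 normal, <= 39.5 fever, above that high fever
status = pd.cut(temperatures, bins=[0, 37.5, 39.5, 45],
                labels=["normal", "fever", "high fever"])
```

A query like the earlier one can then select rows where `status == "high fever"`.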
A similar query to the one we did before can now be done on the `status` attribute.
The patients with high fever are given by:
# Convolutional Neural Networks
(c) Deniz Yuret, 2018
* Objectives: See the effect of sparse and shared weights implemented by convolutional networks.
* Prerequisites: MLP models (04.mlp.ipynb), KnetArray, param, param0, dropout, relu, nll
* Knet: conv4, pool, mat (explained)
* Knet: dir, gpu, minibatch, KnetArray (used by mnist.jl)
* Knet: SGD, train!, Train, load, save (used by trainresults)
```
using Pkg
for p in ("Knet","Plots","ProgressMeter")
haskey(Pkg.installed(),p) || Pkg.add(p)
end
```
## Introduction to convolution
```
# Convolution operator in Knet
using Knet: conv4
@doc conv4
# Convolution in 1-D
@show w = reshape([1.0,2.0,3.0], (3,1,1,1))
@show x = reshape([1.0:7.0...], (7,1,1,1))
@show y = conv4(w, x); # size Y = X - W + 1 = 5 by default
# Padding
@show y2 = conv4(w, x, padding=(1,0)); # size Y = X + 2P - W + 1 = 7 with padding=1
# To preserve input size (Y=X) for a given W, what padding P should we use?
# Stride
@show y3 = conv4(w, x; padding=(1,0), stride=3); # size Y = 1 + floor((X+2P-W)/S)
# Mode
@show y4 = conv4(w, x, mode=0); # Default mode (convolution) inverts w
@show y5 = conv4(w, x, mode=1); # mode=1 (cross-correlation) does not invert w
# Convolution in more dimensions
x = reshape([1.0:9.0...], (3,3,1,1))
w = reshape([1.0:4.0...], (2,2,1,1))
y = conv4(w, x)
# Convolution with multiple channels, filters, and instances
# size X = [X1,X2,...,Xd,Cx,N] where d is the number of dimensions, Cx is channels, N is instances
x = reshape([1.0:18.0...], (3,3,2,1))
# size W = [W1,W2,...,Wd,Cx,Cy] where d is the number of dimensions, Cx is input channels, Cy is output channels
w = reshape([1.0:24.0...], (2,2,2,3));
# size Y = [Y1,Y2,...,Yd,Cy,N] where Yi = 1 + floor((Xi+2Pi-Wi)/Si), Cy is channels, N is instances
y = conv4(w,x)
```
See http://cs231n.github.io/assets/conv-demo/index.html for an animated example.
## Introduction to Pooling
```
# Pooling operator in Knet
using Knet: pool
@doc pool
# 1-D pooling example
@show x = reshape([1.0:6.0...], (6,1,1,1))
@show pool(x);
# Window size
@show pool(x; window=3); # size Y = floor(X/W)
# Padding
@show pool(x; padding=(1,0)); # size Y = floor((X+2P)/W)
# Stride
@show x = reshape([1.0:10.0...], (10,1,1,1));
@show pool(x; stride=4); # size Y = 1 + floor((X+2P-W)/S)
# Mode (using KnetArray here; not all modes are implemented on the CPU)
using Knet: KnetArray
x = KnetArray(reshape([1.0:6.0...], (6,1,1,1)))
@show x
@show pool(x; padding=(1,0), mode=0) # max pooling
@show pool(x; padding=(1,0), mode=1) # avg pooling
@show pool(x; padding=(1,0), mode=2); # avg pooling excluding padded values (is not implemented on CPU)
# More dimensions
x = reshape([1.0:16.0...], (4,4,1,1))
pool(x)
# Multiple channels and instances
x = reshape([1.0:32.0...], (4,4,2,1))
# each channel and each instance is pooled separately
pool(x) # size Y = (Y1,...,Yd,Cx,N) where Yi are spatial dims, Cx and N are identical to input X
```
## Experiment setup
```
# Load data (see 02.mnist.ipynb)
using Knet: Knet, KnetArray, gpu, minibatch
include(Knet.dir("data","mnist.jl")) # Load data
dtrn,dtst = mnistdata(); # dtrn and dtst = [ (x1,y1), (x2,y2), ... ] where xi,yi are minibatches of 100
(x,y) = first(dtst)
summary.((x,y))
# For running experiments
using Knet: SGD, train!, nll, zeroone
import ProgressMeter
function trainresults(file,model; o...)
if (print("Train from scratch? ");readline()[1]=='y')
results = Float64[]; updates = 0; prog = ProgressMeter.Progress(60000)
function callback(J)
if updates % 600 == 0
push!(results, nll(model,dtrn), nll(model,dtst), zeroone(model,dtrn), zeroone(model,dtst))
ProgressMeter.update!(prog, updates)
end
return (updates += 1) <= 60000
end
train!(model, dtrn; callback=callback, optimizer=SGD(lr=0.1), o...)
Knet.save(file,"results",reshape(results, (4,:)))
end
isfile(file) || download("http://people.csail.mit.edu/deniz/models/tutorial/$file",file)
results = Knet.load(file,"results")
println(minimum(results,dims=2))
return results
end
```
## A convolutional neural network model for MNIST
```
# Redefine Linear layer (See 03.lin.ipynb):
using Knet: param, param0
struct Linear; w; b; end
(f::Linear)(x) = (f.w * mat(x) .+ f.b)
mat(x)=reshape(x,:,size(x)[end]) # Reshapes 4-D tensor to 2-D matrix so we can use matmul
Linear(inputsize::Int,outputsize::Int) = Linear(param(outputsize,inputsize),param0(outputsize))
# Define a convolutional layer:
struct Conv; w; b; end
(f::Conv)(x) = pool(conv4(f.w,x) .+ f.b)
Conv(w1,w2,cx,cy) = Conv(param(w1,w2,cx,cy), param0(1,1,cy,1))
# Define a convolutional neural network:
struct CNN; layers; end
# Weight initialization for a multi-layer convolutional neural network
# h[i] is an integer for a fully connected layer, a triple of integers for convolution filters and tensor inputs
# use CNN(x,h1,h2,...,hn,y) for a n hidden layer model
function CNN(h...)
w = Any[]
x = h[1]
for i=2:length(h)
if isa(h[i],Tuple)
(x1,x2,cx) = x
(w1,w2,cy) = h[i]
push!(w, Conv(w1,w2,cx,cy))
x = ((x1-w1+1)÷2,(x2-w2+1)÷2,cy) # assuming conv4 with p=0, s=1 and pool with p=0,w=s=2
elseif isa(h[i],Integer)
push!(w, Linear(prod(x),h[i]))
x = h[i]
else
error("Unknown layer type: $(h[i])")
end
end
CNN(w)
end;
using Knet: dropout, relu
function (m::CNN)(x; pdrop=0)
for (i,layer) in enumerate(m.layers)
p = (i <= length(pdrop) ? pdrop[i] : pdrop[end])
x = dropout(x, p)
x = layer(x)
x = (layer == m.layers[end] ? x : relu.(x))
end
return x
end
lenet = CNN((28,28,1), (5,5,20), (5,5,50), 500, 10)
summary.(l.w for l in lenet.layers)
using Knet: nll
(x,y) = first(dtst)
nll(lenet,x,y)
```
## CNN vs MLP
```
using Plots; default(fmt=:png,ls=:auto)
ENV["COLUMNS"] = 92
@time cnn = trainresults("cnn.jld2", lenet; pdrop=(0,0,.3)); # 406s [8.83583e-5, 0.017289, 0.0, 0.0048]
mlp = Knet.load("mlp.jld2","results");
# Comparison to MLP shows faster convergence, better generalization
plot([mlp[1,:], mlp[2,:], cnn[1,:], cnn[2,:]],ylim=(0.0,0.1),
labels=[:trnMLP :tstMLP :trnCNN :tstCNN],xlabel="Epochs",ylabel="Loss")
plot([mlp[3,:], mlp[4,:], cnn[3,:], cnn[4,:]],ylim=(0.0,0.03),
labels=[:trnMLP :tstMLP :trnCNN :tstCNN],xlabel="Epochs",ylabel="Error")
```
## Convolution vs Matrix Multiplication
```
# Convolution and matrix multiplication can be implemented in terms of each other.
# Convolutional networks have no additional representational power, only statistical efficiency.
# Our original 1-D example
@show w = reshape([1.0,2.0,3.0], (3,1,1,1))
@show x = reshape([1.0:7.0...], (7,1,1,1))
@show y = conv4(w, x); # size Y = X - W + 1 = 5 by default
# Convolution as matrix multiplication (1)
# Turn w into a (Y,X) sparse matrix
w2 = Float64[3 2 1 0 0 0 0; 0 3 2 1 0 0 0; 0 0 3 2 1 0 0; 0 0 0 3 2 1 0; 0 0 0 0 3 2 1]
@show y2 = w2 * mat(x);
# Convolution as matrix multiplication (2)
# Turn x into a (W,Y) dense matrix (aka the im2col operation)
# This is used to speed up convolution with known efficient matmul algorithms
x3 = Float64[1 2 3 4 5; 2 3 4 5 6; 3 4 5 6 7]
@show w3 = [3.0 2.0 1.0]
@show y3 = w3 * x3;
# Matrix multiplication as convolution
# This could be used to make a fully connected network accept variable sized inputs.
w = reshape([1.0:6.0...], (2,3))
x = reshape([1.0:3.0...], (3,1))
y = w * x
# Consider w with size (Y,X)
# Treat each of the Y rows of w as a convolution filter
w2 = copy(reshape(Array(w)', (3,1,1,2)))
# Reshape x for convolution
x2 = reshape(x, (3,1,1,1))
# Use conv4 for matrix multiplication
y2 = conv4(w2, x2; mode=1)
# So there is no difference between the class of functions representable with an MLP vs CNN.
# Sparse connections and weight sharing give CNNs more generalization power with images.
# Number of parameters in MLP256: (256x784)+256+(10x256)+10 = 203530
# Number of parameters in LeNet: (5*5*1*20)+20+(5*5*20*50)+50+(500*800)+500+(10*500)+10 = 431080
```
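The 1-D example above is easy to verify numerically. A NumPy sketch (NumPy's `convolve` flips the filter, matching the convolution convention used by `conv4` here; treat this as an illustration, not a Knet equivalent):

```python
import numpy as np

w = np.array([1.0, 2.0, 3.0])        # filter, as in the example above
x = np.arange(1.0, 8.0)              # x = 1..7
y = np.convolve(x, w, mode="valid")  # -> [10. 16. 22. 28. 34.]

# (1) same result as the (Y, X) sparse banded matrix times x
W2 = np.array([[3, 2, 1, 0, 0, 0, 0],
               [0, 3, 2, 1, 0, 0, 0],
               [0, 0, 3, 2, 1, 0, 0],
               [0, 0, 0, 3, 2, 1, 0],
               [0, 0, 0, 0, 3, 2, 1]], dtype=float)
assert np.allclose(W2 @ x, y)

# (2) same result as a row vector times the (W, Y) im2col matrix of x
X3 = np.array([[1, 2, 3, 4, 5],
               [2, 3, 4, 5, 6],
               [3, 4, 5, 6, 7]], dtype=float)
assert np.allclose(np.array([3.0, 2.0, 1.0]) @ X3, y)
```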
```
import copy
from functools import reduce
import pandas as pd
from typing import List, Tuple
import pandas as pd
from matplotlib import pyplot as plt
import mpl_finance as mpf
with open('./resources/ticks/last_file.csv', 'r', encoding='utf-8') as f:
prices = f.readlines()
with open('./resources/ticks/volume.csv', 'r', encoding='utf-8') as f:
volumes = f.readlines()
def make_linear(lines: List[str]) -> List[float]:
records = []
for line in lines:
records.extend(line.split(','))
return list(map(lambda x: float(x), filter(lambda y: not str.isspace(y), records)))
def chunk_tick(size: int, records: List[float]) -> List[List[float]]:
chunks = []
for i in range(0, len(records), size):
chunks.append(records[i:i+size])
return chunks
def chunk_volume(volume: float, records: List[float], volumes: List[float]) -> List[List[float]]:
chunks = []
current_chunk = []
current_volume = 0.0
volume_counter = copy.deepcopy(volumes)
i = 0
while i<len(records):
current_chunk.append(records[i])
next_volume = current_volume + volume_counter[i]
if next_volume>=volume:
volume_counter[i] = next_volume - volume
current_volume = 0
chunks.append(current_chunk)
current_chunk = []
if next_volume==volume:
i+=1
else:
current_volume = next_volume
i+=1
if len(current_chunk)!=0:
chunks.append(current_chunk)
return chunks
def chunk_dollar(dollar: float, records: List[float], volumes: List[float]) -> List[List[float]]:
dollars = list(map(lambda x: x[0]*x[1], zip(records, volumes)))
return chunk_volume(dollar, records, dollars)
def show_bar_chart(bar: List[Tuple[int, float, float, float, float]]):
    dfcvs = pd.DataFrame(bar)
    dfcvs.columns = ['time', 'open', 'high', 'low', 'close']
    data_mat = dfcvs.values
    fig, ax = plt.subplots(figsize=(1200/72, 480/72))
    fig.subplots_adjust(bottom=0.1)
    # candlestick_ohlc expects each quote as (time, open, high, low, close)
    mpf.candlestick_ohlc(ax=ax, quotes=data_mat, colordown='#53c156',
                         colorup='#ff1717', width=0.3, alpha=1)
def extract_bar_features(raw_bars: List[List[float]]) -> List[Tuple[int, float, float, float, float]]:
    res = []
    for i, raw_bar in enumerate(raw_bars):
        start = raw_bar[0]   # open
        end = raw_bar[-1]    # close
        minv = min(raw_bar)  # low
        maxv = max(raw_bar)  # high
        # keep the (time, open, high, low, close) order required by candlestick_ohlc
        res.append((i, start, maxv, minv, end))
    return res
linear_prices = make_linear(prices)
linear_volumes = make_linear(volumes)
tick_bars = extract_bar_features(chunk_tick(100, linear_prices))
show_bar_chart(tick_bars)
volume_bars = extract_bar_features(chunk_volume(10e7, linear_prices, linear_volumes))
show_bar_chart(volume_bars)
dollar_bars = extract_bar_features(
chunk_dollar(3 * 10e10, linear_prices, linear_volumes))
show_bar_chart(dollar_bars)
```
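For reference, the splitting rule used by `chunk_volume` above (a tick whose cumulative volume overflows the threshold closes the current bar and carries both its leftover volume and its price into the next one) can be condensed into a short self-contained sketch with hypothetical names:

```python
def volume_bars(prices, volumes, threshold):
    """Group prices into bars, closing a bar once cumulative volume reaches threshold.
    A tick whose volume overflows the threshold also opens the next bar."""
    bars, bar, cum = [], [], 0.0
    vols = list(volumes)          # local copy: leftover volume is written back
    i = 0
    while i < len(prices):
        bar.append(prices[i])
        cum += vols[i]
        if cum >= threshold:
            vols[i] = cum - threshold   # leftover volume stays on this tick
            bars.append(bar)
            bar, cum = [], 0.0
            if vols[i] == 0:
                i += 1                  # tick fully consumed
        else:
            i += 1
    if bar:
        bars.append(bar)                # trailing partial bar
    return bars

volume_bars([10, 11, 12], [3, 3, 3], 5)  # -> [[10, 11], [11, 12]]
```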
# Capture camera stream
```
%%html
<div>
<button id='button-record'>Start Recording</button>
</div>
<video muted autoplay loop controls style='visibility:hidden' id='player'></video>
<script src="https://www.WebRTC-Experiment.com/RecordRTC.js"></script>
<script>
function upload(blob) {
var reader = new FileReader();
reader.readAsDataURL(blob);
reader.onloadend = function() {
var base64data = reader.result;
base64data = base64data.replace(/data:.*base64,/i, '');
var oReq = new XMLHttpRequest();
var name = 'recording.webm';
var url = window.location.href.replace(/notebooks\/.*/i, "api/contents/"+name);
console.log("url: " + url);
oReq.open("PUT", url, true);
//oReq.setRequestHeader("Authorization", "token 120a04206f287316c647b74b18518f42c115c3d21e8ab9ca")
oReq.setRequestHeader("X-XSRFToken", "2|d85f43d5|0f5d0c7fa8cda08f3205f05f29262fd1|1590509612");
oReq.onload = console.log;
oReq.onerror = console.log;
payload = JSON.stringify({
'content':base64data,
'name': name,
'path': name,
'format': 'base64',
'type':'file'
})
oReq.send(payload);
}
}
function download(url) {
let a = document.createElement('a');
a.href = url;
a.download = 'recording.webm';
a.click();
}
recordbn = document.querySelector('#button-record');
recordbn.state = 0;
recordbn.addEventListener('click', (e)=>{
if (recordbn.state == 1) {
recordbn.textContent = "Start Recording";
recordbn.state = 0;
recordbn.recorder.stopRecording(function() {
let v = document.querySelector('#player');
v.src = v.srcObject = null;
v.muted = false;
videoBlob = recordbn.recorder.getBlob();
v.src = URL.createObjectURL(videoBlob);
        recordbn.stream.getTracks().forEach(t => t.stop()); // MediaStream.stop() is deprecated
recordbn.recorder.destroy();
recordbn.recorder = null;
download(v.src);
});
} else {
recordbn.textContent = "Stop Recording";
recordbn.state = 1;
navigator.mediaDevices.getUserMedia({
video: true,
audio: true
}).then(async function(stream) {
let v = document.querySelector('#player');
v.srcObject = stream;
v.style.visibility = 'visible';
recordbn.recorder = RecordRTC(stream, {
type: 'video',
video: v
});
recordbn.recorder.startRecording();
recordbn.stream = stream;
});
}
});
</script>
```
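For reference, the JSON body that the script PUTs to the Jupyter contents API can be built the same way from Python; a sketch (the server URL, token, and the commented-out `requests` call are placeholders for your own deployment):

```python
import base64
import json

def contents_payload(data: bytes, name: str) -> str:
    """JSON body for PUT /api/contents/<name> (Jupyter contents API, base64 file upload)."""
    return json.dumps({
        "content": base64.b64encode(data).decode("ascii"),
        "name": name,
        "path": name,
        "format": "base64",
        "type": "file",
    })

payload = contents_payload(b"\x1aE\xdf\xa3...webm bytes...", "recording.webm")
# An actual upload could then look like (server/TOKEN are placeholders):
# requests.put(server + "/api/contents/recording.webm", data=payload,
#              headers={"Authorization": "token " + TOKEN})
```

The `Authorization`/`X-XSRFToken` headers in the script above are specific to the notebook server it was written against.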
People always ask: "Can you randomize several times and use the proportion of selections, instead of just one randomization?"
Let's try to figure this out.
```
import numpy as np
import regreg.api as rr
import seaborn as sns
%matplotlib inline
%load_ext rpy2.ipython
import matplotlib.pyplot as plt
import scipy.stats
import statsmodels.api as sm
from selection.distributions.discrete_family import discrete_family
ntries, sigma, q = 21, 1, 0.3
def algorithm(Z, ntries=ntries, q=q):
proportion = 0
for _ in range(ntries):
proportion += ((Z + sigma * np.random.standard_normal() > 2) +
(Z + sigma * np.random.standard_normal() < -2)) > 0
proportion /= ntries
return proportion > q
Z = np.linspace(-8, 8, 1001)
def fit_algorithm(algorithm, B=5000, ntries=ntries, q=q, Zval=Z, link='logit'):
Z = np.random.standard_normal(B) * 2
Z = np.hstack([Z,
np.random.standard_normal(B),
np.random.standard_normal(B) * 3,
np.random.standard_normal(B) * 0.5])
Y = np.array([algorithm(z, ntries=ntries, q=q) for z in Z])
%R -i Y,Z,Zval
%R Z = as.numeric(Z*1)
if link == 'probit':
%R M2 = glm(Y ~ poly(Z, 2), family=binomial(link=probit))
else:
%R M2 = glm(Y ~ poly(Z, 2), family=binomial(link=logit))
%R W = predict(M2, newdata=data.frame(Z=Zval), type='link')
W = %R W
if link == 'probit':
return scipy.stats.norm.cdf(W)
else:
return np.exp(W) / (1 + np.exp(W))
def simulate(ntries=ntries, sigma=sigma, truth=0):
while True:
Z = np.random.standard_normal() + truth
if algorithm(Z, ntries, q=q):
return Z
Z = np.linspace(-8, 8, 1001)
W1 = fit_algorithm(algorithm, ntries=ntries, q=q, Zval=Z)
plt.plot(Z, np.log(W1))
selective_law1 = discrete_family(Z, W1 * scipy.stats.norm.pdf(Z))
def pivot1(z, truth=0):
return 1 - selective_law1.cdf(truth, z)
P0 = []
for _ in range(1000):
P0.append((pivot1(simulate()),
1 - scipy.stats.norm.cdf(simulate())))
P0 = np.array(P0)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(P0[:,0])(U), 'c', label='fit')
plt.plot(U, sm.distributions.ECDF(P0[:,1])(U), 'y', label='naive')
plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
PA = []
truth = 1
for _ in range(1000):
PA.append((pivot1(simulate(truth=truth), truth=truth),
1 - scipy.stats.norm.cdf(simulate() - 1)))
PA = np.array(PA)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(PA[:,0])(U), 'c', label='fit')
plt.plot(U, sm.distributions.ECDF(PA[:,1])(U), 'y', label='naive')
plt.legend()
plt.plot([0, 1], [0, 1], 'k--')
Z0 = np.linspace(-2,2,501)
LU1 = []
for z in Z0:
selective_law = discrete_family(Z, W1 * scipy.stats.norm.pdf(Z))
LU1.append(selective_law.equal_tailed_interval(z))
LU1 = np.array(LU1)
plt.plot(Z0, LU1[:,0], 'c', label='fit')
plt.plot(Z0, LU1[:,1], 'c')
plt.legend()
coverage, ncover, truth = 0, 500, 0
lengths = []
for _ in range(ncover):
z = simulate(truth=truth)
selective_law = discrete_family(Z, W1 * scipy.stats.norm.pdf(Z))
L, U = selective_law.equal_tailed_interval(z)
coverage += (L < truth) * (U > truth)
lengths.append(U-L)
coverage / ncover, np.mean(lengths), np.std(lengths)
coverage, ncover, truth = 0, 500, 2.5
lengths = []
for _ in range(ncover):
z = simulate(truth=truth)
selective_law = discrete_family(Z, W1 * scipy.stats.norm.pdf(Z))
L, U = selective_law.equal_tailed_interval(z)
coverage += (L < truth) * (U > truth)
lengths.append(U-L)
coverage / ncover, np.mean(lengths), np.std(lengths)
coverage, ncover, truth = 0, 500, -1.
lengths = []
for _ in range(ncover):
z = simulate(truth=truth)
selective_law = discrete_family(Z, W1 * scipy.stats.norm.pdf(Z))
L, U = selective_law.equal_tailed_interval(z)
coverage += (L < truth) * (U > truth)
lengths.append(U-L)
coverage / ncover, np.mean(lengths), np.std(lengths)
```
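The three coverage loops above repeat the same pattern; a small helper makes the intent explicit (illustrative only; it takes the `simulate` and interval callables as arguments so the sketch is self-contained):

```python
import numpy as np

def coverage_and_length(simulate, interval, truth, n=500):
    """Empirical coverage and length of equal-tailed intervals at a given truth."""
    covered, lengths = 0, []
    for _ in range(n):
        z = simulate(truth)
        L, U = interval(z)
        covered += (L < truth) and (U > truth)
        lengths.append(U - L)
    return covered / n, float(np.mean(lengths)), float(np.std(lengths))

# With the notebook's objects this would be called as, e.g.:
# coverage_and_length(lambda t: simulate(truth=t),
#                     selective_law.equal_tailed_interval, truth=0)
```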
# Increasing number of tries
```
ntries, sigma, q = 41, 0.5, 0.65
Z = np.linspace(-8, 8, 1001)
def pivot(z, truth=0):
return 1 - selective_law.cdf(truth, z)
def pivot0(z, truth=0):
return 1 - selective_law0.cdf(truth, z)
def algorithm(Z, ntries=ntries, q=q):
proportion = 0
for _ in range(ntries):
proportion += ((Z + sigma * np.random.standard_normal() > 2) +
(Z + sigma * np.random.standard_normal() < -2)) > 0
proportion /= ntries
return proportion > q
W1 = fit_algorithm(algorithm, ntries=ntries, q=q, Zval=Z)
selective_law1 = discrete_family(Z, W1 * scipy.stats.norm.pdf(Z))
def pivot1(z, truth=0):
return 1 - selective_law1.cdf(truth, z)
pivot(simulate())
P0 = []
truth = 0.05
for _ in range(1000):
P0.append((pivot1(simulate(ntries=ntries, sigma=sigma, truth=truth)),
1-scipy.stats.norm.cdf(simulate(ntries=ntries, sigma=sigma, truth=truth) - truth)))
P0 = np.array(P0)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(P0[:,0])(U), 'c', label='fit')
plt.plot(U, sm.distributions.ECDF(P0[:,1])(U), 'y', label='naive')
plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
truth = -1
PA = []
for _ in range(1000):
PA.append((pivot1(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
1-scipy.stats.norm.cdf(simulate(ntries=ntries, sigma=sigma, truth=truth) - truth)))
PA = np.array(PA)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(PA[:,0])(U), 'c', label='fit', linewidth=2)
plt.plot(U, sm.distributions.ECDF(PA[:,1])(U), 'y', label='naive')
plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
truth = -2
PA = []
for _ in range(1000):
PA.append((pivot1(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
1-scipy.stats.norm.cdf(simulate(ntries=ntries, sigma=sigma, truth=truth) - truth)))
PA = np.array(PA)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(PA[:,0])(U), 'c', label='fit', linewidth=2)
plt.plot(U, sm.distributions.ECDF(PA[:,1])(U), 'y', label='naive')
plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
truth = 1
PA = []
for _ in range(1000):
PA.append((pivot1(simulate(ntries=ntries, sigma=sigma, truth=truth), truth=truth),
1-scipy.stats.norm.cdf(simulate(ntries=ntries, sigma=sigma, truth=truth) - truth)))
PA = np.array(PA)
U = np.linspace(0, 1, 101)
plt.plot(U, sm.distributions.ECDF(PA[:,0])(U), 'c', label='fit', linewidth=2)
plt.plot(U, sm.distributions.ECDF(PA[:,1])(U), 'y', label='naive')
plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
```
# **🛠 CenterNet Fixed For Google Colab**
[Docs](https://mehrdad-dev.ir/CenterNet-Fixed-For-Colab/)
[GitHub](https://github.com/mehrdad-dev/CenterNet-Fixed-For-Colab)
## **Clone CenterNet**
```
! git clone https://github.com/mehrdad-dev/CenterNet-Fixed-For-Colab.git
```
## **Install Conda**
```
! wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh
! chmod +x Miniconda3-py37_4.8.2-Linux-x86_64.sh
! bash ./Miniconda3-py37_4.8.2-Linux-x86_64.sh -b -f -p /usr/local
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
! conda install pytorch=1.4.0 torchvision -c pytorch
! conda install -c intel mkl=2021
```
## **Install Packages**
```
! pwd
%cd /content/CenterNet-Fixed-For-Colab/
! pip install -r requirements.txt
```
## **Builds**
```
%cd src/lib/models/networks/DCNv2/
! sh make.sh
! pwd
%cd ../../../../
%cd lib/external
! python setup.py build_ext --inplace
```
## **Run models on your dataset & Save results**
```
%cd ../..
!mkdir /content/CenterNet-Fixed-For-Colab/src/cache/debug
```
## **Hourglass**
```
! python demo.py ctdet --demo /content/drive/MyDrive/Data/veerasense/challaenge-data/ \
--load_model /content/drive/MyDrive/models/ctdet_coco_hg.pth --arch hourglass
! zip -r /content/ctdet_coco_hg.zip /content/CenterNet-Fixed-For-Colab/src/cache/debug
!mv /content/CenterNet-Fixed-For-Colab/src/cache/debug/ /content/CenterNet-Fixed-For-Colab/src/cache/ctdet_coco_hg
!mkdir /content/CenterNet-Fixed-For-Colab/src/cache/debug
```
## **dla 1x**
```
! python demo.py ctdet --demo /content/drive/MyDrive/Data/veerasense/challaenge-data/ \
--load_model /content/drive/MyDrive/models/ctdet_coco_dla_1x.pth
!zip -r /content/ctdet_coco_dla_1x.zip /content/CenterNet-Fixed-For-Colab/src/cache/debug
!mv /content/CenterNet-Fixed-For-Colab/src/cache/debug/ /content/CenterNet-Fixed-For-Colab/src/cache/ctdet_coco_dla_1x
!mkdir /content/CenterNet-Fixed-For-Colab/src/cache/debug
```
## **dla 2x**
```
! python demo.py ctdet --demo /content/drive/MyDrive/Data/veerasense/challaenge-data/ \
--load_model /content/drive/MyDrive/models/ctdet_coco_dla_2x.pth
!zip -r /content/ctdet_coco_dla_2x.zip /content/CenterNet-Fixed-For-Colab/src/cache/debug
!mv /content/CenterNet-Fixed-For-Colab/src/cache/debug/ /content/CenterNet-Fixed-For-Colab/src/cache/ctdet_coco_dla_2x
!mkdir /content/CenterNet-Fixed-For-Colab/src/cache/debug
```
## **resdcn 101**
```
! python demo.py ctdet --demo /content/drive/MyDrive/Data/veerasense/challaenge-data/ \
--load_model /content/drive/MyDrive/models/ctdet_coco_resdcn101.pth --arch resdcn_101
!zip -r /content/ctdet_coco_resdcn101.zip /content/CenterNet-Fixed-For-Colab/src/cache/debug
!mv /content/CenterNet-Fixed-For-Colab/src/cache/debug/ /content/CenterNet-Fixed-For-Colab/src/cache/ctdet_coco_resdcn101
!mkdir /content/CenterNet-Fixed-For-Colab/src/cache/debug
```
## **resdcn 18**
```
! python demo.py ctdet --demo /content/drive/MyDrive/Data/veerasense/challaenge-data/ \
--load_model /content/drive/MyDrive/models/ctdet_coco_resdcn18.pth --arch resdcn_18
!zip -r /content/ctdet_coco_resdcn18.zip /content/CenterNet-Fixed-For-Colab/src/cache/debug
!mv /content/CenterNet-Fixed-For-Colab/src/cache/debug/ /content/CenterNet-Fixed-For-Colab/src/cache/ctdet_coco_resdcn18
!mkdir /content/CenterNet-Fixed-For-Colab/src/cache/debug
```
<font color=gray>Oracle Cloud Infrastructure Data Science Demo Notebook
Copyright (c) 2021 Oracle, Inc.<br>
Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
</font>
# Validation of the CNN Model
```
%load_ext autoreload
%autoreload 2
import keras
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras.utils import plot_model
from matplotlib import pyplot as plt
import numpy as np
import json
import urllib
from zipfile import ZipFile
import skimage as ski
import os
import pandas as pd
import glob
from numpy import random as random
import urllib
import tensorflow as tf
from sklearn.metrics import confusion_matrix
from skimage import transform
from seaborn import heatmap
from utilities import display_xray_image, evaluate_model_performance
path_to_train_dataset = "./data/chest_xray/train/"
path_to_test_dataset = "./data/chest_xray/test/"
model_artifact_path = "./model_artifact"
model_file = "xray_predictor4-march21.hdf5"
model_path = os.path.join(model_artifact_path, model_file)
# Pulling some statistics about the test dataset:
pneumonia_test_list = glob.glob(path_to_test_dataset+'PNEUMONIA/*')
normal_test_list = glob.glob(path_to_test_dataset+'NORMAL/*')
test_list = pneumonia_test_list + normal_test_list
print("Test sample size = {}, Pneumonia = {}, Normal = {}".format(len(test_list),
len(pneumonia_test_list),
len(normal_test_list)))
# Building out the dataframe that will contain all the metadata about the x-ray images
test_df = pd.DataFrame(data={"path":test_list})
test_df["observed_class"] = test_df["path"].apply(lambda x: 0 if "/NORMAL/" in x else 1 )
test_df["extension"] = test_df["path"].apply(lambda x: os.path.splitext(x)[1])
print(test_df.shape)
test_df.head()
display_xray_image(test_df['path'].iloc[0])
```
## Image Transformations
```
# Defining those image transformations:
def image_transformations(image_path, dims=(200, 300)):
"""
"""
# Resize the original image. Consistent with training dataset:
image = transform.resize(ski.io.imread(image_path), output_shape=dims)
# Take the first channel only:
image = image[:,:,0] if len(image.shape)>2 else image
return image
# Applying transformations to images and observed labels:
test_df['resized_image'] = test_df['path'].apply(lambda x: image_transformations(x))
# encoding the class as a numpy array:
test_df['y'] = test_df['observed_class'].apply(lambda x: np.array([0, 1])
if x==1 else np.array([1, 0]))
Xtest = test_df['resized_image'].values
Ytest = test_df['y'].values
Xtest = np.asarray([i.reshape(200,300,1) for i in Xtest])
Ytest = np.asarray([i.reshape(2) for i in Ytest])
print("Xtest shape: {}, Ytest shape: {}".format(Xtest.shape, Ytest.shape))
display_xray_image(test_df.iloc[0]['resized_image'])
```
# Evaluating the CNN model
```
model = keras.models.load_model(model_path)
evaluate_model_performance(model_path, Xtest, Ytest, test_df['observed_class'].values,
labels=["normal", "pneumonia"])
```
# The inverted pendulum model of the human standing
Marcos Duarte
Despite the enormous complexity of the human body, part of its mechanical behavior during the standing still posture, namely the displacements of the center of gravity ($COG$) and center of pressure ($COP$) in the anterior-posterior direction, can be elegantly portrayed by a physical-mathematical model of an inverted pendulum with rigid segments articulated by joints.
Using such a model, it's possible to estimate the COG vertical projection (COGv) from the COP displacement. The Python function `cogve.py` (code at the end of this text) performs this estimation. The function signature is:
```python
cogv = cogve(COP, freq, mass, height, show=False, ax=None)
```
Let's now derive the inverted pendulum model of the human standing posture implemented in this function.
## Derivation of the inverted pendulum model
In the simplest version of the model, the human body in the sagittal plane is reduced to a two-link inverted pendulum (the feet plus the rest of the body) articulated by a single joint, the ankle. Let's deduce the equations for such an inverted pendulum model as a representation, at the sagittal plane, of the human standing still posture. The inverted pendulum model and the corresponding free-body diagrams (FBDs) are shown in Figure 1.
<div><figure><img src="./../images/invpendulum.png" width=400 alt="onelink"/><figcaption><b>Figure 1.</b> <i>Model of a two-link inverted pendulum and the external forces acting on it for the representation at the sagittal plane of the human standing still posture, and the corresponding free-body diagrams. $COG$: body center of gravity; $COG_v$: $COG$ vertical projection (at the horizontal plane) in relation to the ankle joint; $COP$: body center of pressure in relation to the ankle joint; $GRF$: ground reaction force (typically measured by a force plate); $\alpha$: angle of the body in relation to the vertical direction; $m$: mass of the body minus feet; $g$: acceleration of gravity; $F_a$ and $T_a$: resultant force and torque at the ankle joint; $h$: height of the $COG$ in relation to the ankle joint; $m_f$ and $h_f$: mass and height of the feet.</i></figcaption></figure></div>
The equations of motion for each FBD of the feet and rest-of-body segments at the sagittal plane ($xy$ plane) can be expressed in the form of the Newton-Euler equations.
<br>
<div style="background-color:#FBFBEF;border:1px solid black;padding:10px;">
<b>The Newton-Euler equations</b>
<br />
The <a href="http://en.wikipedia.org/wiki/Newton%E2%80%93Euler_equations">Newton-Euler equations</a> are a formalism to describe the combined translational and rotational dynamics of a rigid body.
For a two-dimensional movement (in the $xy$ plane), its general form is given by:
<br />
$$ \sum \mathbf{F} = m \mathbf{\ddot{r}}_{cm} $$
$$ \sum \mathbf{T}_z = I_{cm} \mathbf{\ddot{\alpha}}_z $$
Where the movement is considered around the center of mass ($cm$) of the body, $\mathbf{F}$ and $\mathbf{T}$ are, respectively, the forces and torques acting on the body, $\mathbf{\ddot{r}}$ and $\mathbf{\ddot{\alpha}}$ are, respectively, the linear and angular accelerations, and $I$ is the body moment of inertia around the $z$ axis passing through the body center of mass.
It can be convenient to describe the rotation of the body around a point other than the center of mass. In such cases, we express the moment of inertia around a reference point $o$ instead of around the body center of mass, and the equation for the torque gains an additional term:
<br />
$$ \sum \mathbf{T}_{z,O} = I_{o} \mathbf{\ddot{\alpha}}_z + \mathbf{r}_{cm,o}\times m \mathbf{\ddot{r}}_o $$
Where $\mathbf{r}_{cm,o}$ is the position vector of the center of mass in relation to the reference point $o$ and $\mathbf{\ddot{r}}_o$ is the linear acceleration of this reference point.
<a href="http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/FreeBodyDiagram.ipynb">See this notebook about free-body diagram</a>.
</div>
For an inverted pendulum representing, at the sagittal plane, the body in the standing still posture, let's solve the Newton-Euler equations considering the rotation around the ankle joint (because it simplifies the problem), and let's adopt the simplifications that the feet don't move (including the ankle, so $\mathbf{\ddot{r}}_o=0$ in the Newton-Euler equation) and that their mass is negligible in relation to the mass of the rest of the body.
For the feet, we have:
$$ \begin{array}{l l}
-F_{ax} + GRF_x = 0 \\
\\
-F_{ay} + GRF_y = 0 \\
\\
-T_a + COP \cdot GRF_y + h_f \cdot GRF_x = 0
\end{array} $$
And for the rest of the body:
$$ \begin{array}{l l}
F_{ax} = m\ddot{x}_{cm} \\
\\
F_{ay} - mg = m\ddot{y}_{cm} \\
\\
T_a - COG_v \cdot mg = I_a \ddot{\alpha}
\end{array} $$
Where $I_a$ is the moment of inertia of the whole body around the ankle joint.
During the standing still posture, the $GRF$ horizontal component is typically much smaller than the $GRF$ vertical component and the torque of the former can be neglected. In addition, the magnitude of the $GRF$ vertical component is approximately constant and equal to the body weight.
Considering these approximations, the ankle joint torque using the equation for the feet is given by:
$$ T_a \approx COP \cdot mg $$
If now we substitute the ankle joint torque term in the equation for the torques calculated for the rest-of-body segment, we have:
$$ COP - COG_v \approx \frac{I_a}{mg} \ddot{\alpha} $$
That is, the angular acceleration of the body is proportional to the difference between $COP$ and $COG_v$ displacements (with respect to the ankle joint position).
We can continue with the deduction and substitute the angular displacement by a term proportional to $COG_v$ using the following trigonometric relation (see figure above): $\sin\alpha = COG_v/h$. During the standing still posture $\alpha$ is typically very small, so we can approximate $\sin\alpha \approx \alpha$ and hence $\alpha \approx COG_v/h$. However, bear in mind that $\alpha$ is defined as counterclockwise positive while $COG_v$ is positive when pointing to the right. This means that in fact $\alpha \approx -COG_v/h$. As $h$ is constant, the second derivative of $\alpha$ with respect to time is simply the second derivative of $COG_v$ divided by $h$.
Finally, the last equation can be expressed in the following form:
$$ COG_v - COP \approx \frac{I_a}{mgh} \ddot{COG}_v $$
Or simply:
$$ COG_v - COP \approx k \, \ddot{COG}_v $$
Where $k = I_a/(mgh)$.
If the human body is represented as a rigid bar, its moment of inertia will be approximately equal to $1.33mh^2$, so $k \approx 1.33h/g$.
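As a quick numeric illustration of the cutoff this implies (assuming, for the sake of example, a COG height of $h = 1$ m):

```python
from math import pi, sqrt

h, g = 1.0, 9.8        # COG height in m (illustrative) and gravity in m/s^2
k = 1.33 * h / g       # ~0.136 s^2
w0 = 1 / sqrt(k)       # cutoff angular frequency, ~2.7 rad/s
f0 = w0 / (2 * pi)     # ~0.43 Hz: COGv retains only the slow components of COP
```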
In turn, from the Newton-Euler equations, the horizontal acceleration of $COG$ is equal to the horizontal component of $GRF$ divided by the body mass and the equation above can be expressed as:
$$ COG_v - COP \approx \frac{k}{m} GRF_x $$
## Implications of the inverted pendulum model
These two last equations express a very simple relation between body segment parameters, $COG_v$, $COP$, body acceleration, and horizontal force.
Solely based on these equations it is possible to predict some interesting relations among the variables in these equations, which have been experimentally observed:
- $COG_v-COP$ is positively correlated with the horizontal ground reaction force in the anterior-posterior direction (Winter et al. 1998; Morasso and Schieppati 1999; Zatsiorsky and Duarte 2000);
- $COG_v$ behaves as a low-pass filtered version of the $COP$ signal and this fact has been used in a method to derive $COG_v$ from the $COP$ signal (Winter 1995; Caron et al. 1997; Morasso and Schieppati 1999). This method produces results similar to those of other methods (Lafond et al. 2004);
- For a continuously regulated inverted pendulum (like the standing still posture), the common frequencies of $COG_v$ and $COP$ signals are in phase (Morasso and Schieppati 1999);
- When the horizontal force is zero, $COG_v$ and $COP$ coincide and this fact has been used as a method to derive $COG_v$ from the $COP$ displacement and the horizontal $GRF$ (King and Zatsiorsky 1997; Zatsiorsky and Duarte 2000).
Note that the four predictions made above are based entirely on the mechanical derivation of the inverted pendulum model. Nothing has been said about what type of neural control is being used for the regulation of the standing posture. This means that the statements above are consequence of the mechanical nature of the modeled phenomenon.
Obviously, the most straightforward prediction of the single inverted pendulum model concerning the **kinematics of the segments** of the human body would be that we should observe motion only at the ankle joint and none at the other joints. This prediction is not borne out experimentally (see, for example, Pinter et al. 2008 and Gunther et al. 2009). During standing still, we seem to use all our joints, and this is task dependent.
However, one important point to consider is that even if the inverted pendulum fails as a model of the **kinematics of the segments**, it succeeds as a model of the **kinematics of global body variables**, such as $COG_v$ and $COP$, and of their relation to kinetic variables, the external forces acting on the body.
Certainly everyone agrees that the inverted pendulum model is insufficient to capture all the essential characteristics of posture during standing. Nevertheless, the greatest power of the single inverted pendulum model is its simplicity, and it is somewhat surprising to note how much of the investigated phenomenon this simple model can capture.
## Estimation of COGv from the COP signal
Based on the inverted pendulum model, it's possible to estimate the $COG_v$ displacement from the $COP$ displacement after some mathematical manipulation we show next.
Back to the relation between $COG_v$ and $COP$ displacements, it has the form:
$$ y(t) - x(t) = k\,\ddot{y}(t) $$
Where $y(t)$ stands for the $COG_v$ signal and $x(t)$ for the $COP$ signal, which are functions of time, and $k = I_a/(mgh)$.
The equation above is a linear ordinary differential equation of second order. This equation is solvable in the time domain, but if we transform it to the frequency domain using the Fourier transform, we will find a simpler relation between $COG_v$ and $COP$.
<br>
<div style="background-color:#FBFBEF;border:1px solid black;padding:10px;">
<b>The Fourier transform</b>
The <a href="http://en.wikipedia.org/wiki/Fourier_transform">Fourier transform</a> is a mathematical operation to transform a signal which is function of time, $g(t)$, into a signal which is function of frequency, $G(f)$, and it is defined by:
<br />
$$ \mathcal{F}[g(t)] = G(f) = \int_{-\infty}^{\infty} g(t) e^{-i2\pi ft} dt $$
Its inverse operation is:
<br />
$$ \mathcal{F}^{-1}[G(f)] = g(t) = \int_{-\infty}^{\infty} G(f) e^{i2\pi ft} df $$
The function $G(f)$ is the representation in the frequency domain of the time-domain signal, $g(t)$, and vice-versa. The functions $g(t)$ and $G(f)$ are referred to as a Fourier integral pair, or Fourier transform pair, or simply the Fourier pair.
<a href="http://www.thefouriertransform.com/transform/fourier.php">See here for an introduction to Fourier transform</a> and <a href="http://www.thefouriertransform.com/applications/differentialequations.php">see here for the use of Fourier transform to solve differential equations</a>.
</div>
<br>
Let's apply the Fourier transform to the differential equation with $COG_v$ and $COP$:
$$ Y(j\omega) - X(j\omega) = -k\,\omega^2Y(j\omega) $$
Where we defined $y(t) \Leftrightarrow Y(j\omega)$ and $x(t) \Leftrightarrow X(j\omega)$ as the Fourier pairs, $j$ is the imaginary unit, and $\omega$ is the angular frequency, $2\pi f$.
The reason why we use the Fourier transform is because we started with a second order differential equation and ended with the simple algebraic equation above. Rearranging the equation above:
$$ \frac{Y(j\omega)}{X(j\omega)} = \frac{\omega_0^2}{\omega_0^2 + \omega^2} $$
Where $ \omega_0 = 1/\sqrt{k}$.
If we imagine a system where the $COP$ is the input and the $COG_v$ the output, the right side of the equation above is known as the <a href="http://en.wikipedia.org/wiki/Transfer_function">transfer function</a> of such system, the ratio between the output and the input.
Analysing the transfer function given in the equation above, we see that it has the form of a low-pass filter (with $\omega_0$ as the cutoff frequency); because of that, we can say that the $COG_v$ signal is a low-pass filtered version of the $COP$ signal.
We can implement such a low-pass filter in order to determine the $COG_v$ using the $COP$ signal. For that, we simply have to estimate the Fourier transform of the $COP$ signal, multiply it by the transfer function (the right side of the equation above), and calculate the inverse Fourier transform of this result. The Python function `cogve.py` (code at the end of this text) estimates the $COG_v$ using the $COP$ data based on this algorithm. Let's test this function; first we have to import the necessary Python libraries and configure the environment:
```
# Import the necessary libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import sys
sys.path.insert(1, r'./../functions')
from cogve import cogve
```
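The frequency-domain filtering just described can also be sketched directly with NumPy; a minimal illustration of the core idea (the actual `cogve` function handles further details, so treat this only as a sketch):

```python
import numpy as np

def cog_lowpass(cop, freq, height, g=9.8):
    """Estimate COGv by low-pass filtering COP with H(w) = w0^2 / (w0^2 + w^2)."""
    k = 1.33 * height / g                 # rigid-bar approximation of Ia/(m*g*h)
    w0 = 1.0 / np.sqrt(k)
    w = 2 * np.pi * np.fft.fftfreq(cop.size, d=1.0 / freq)
    H = w0**2 / (w0**2 + w**2)            # the transfer function derived above
    return np.real(np.fft.ifft(np.fft.fft(cop) * H))

# A constant offset passes unchanged (H = 1 at w = 0); a fast 3 Hz sway is attenuated:
t = np.arange(0, 10, 0.01)
cop = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)
cogv = cog_lowpass(cop, freq=100, height=1.0)
```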
Let's use stabilographic data found on the internet:
```
import pandas as pd # use Pandas to read data from a website
fileUrl = 'http://www.udel.edu/biology/rosewc/kaap686/reserve/cop/copdata.txt'
COP = pd.read_table(fileUrl, skipinitialspace=True, sep=None, engine='python')
COP = COP.values / 10 # mm to cm
freq = 100
print('COP shape: ', COP.shape)
fig, ax = plt.subplots(1, 1, figsize=(8, 5))
cogv = cogve(COP[:, 0], freq=100, mass=70, height=175, ax=ax, show=True) # guess mass, height
```
## References
- Caron O, Faure B, et al. (1997) [Estimating the centre of gravity of the body on the basis of the centre of pressure in standing posture](http://www.ncbi.nlm.nih.gov/pubmed/9456386). J. Biomech. 30, 1169-1171.
- Lafond D, Duarte M, et al. (2004) [Comparison of three methods to estimate the center of mass during balance assessment](http://ebm.ufabc.edu.br/publications/md/JB03.pdf). J. Biomech. 37, 1421-1426.
- King D, Zatsiorsky VM (1997) [Extracting gravity line displacement from stabilographic recordings](http://www.sciencedirect.com/science/article/pii/S0966636296011010). Gait & Posture 6, 27-38.
- Morasso PG, Spada G, et al. (1999) [Computing the COM from the COP in postural sway movements](http://www.sciencedirect.com/science/article/pii/S0167945799000391). Human Movement Science 18, 759-767.
- Winter DA (1995) [A.B.C. (Anatomy, Biomechanics and Control) of Balance during Standing and Walking](https://books.google.com.br/books?id=0lSqQgAACAAJ&). Waterloo, Waterloo Biomechanics.
- Winter DA, Patla AE, et al. (1998) [Stiffness control of balance in quiet standing](http://www.ncbi.nlm.nih.gov/pubmed/9744933). J. Neurophysiol. 80, 1211-1221.
- Zatsiorsky VM, Duarte M (2000) [Rambling and trembling in quiet standing](http://ebm.ufabc.edu.br/publications/md/MC00.pdf). Motor Control 4, 185-200.
## Function cogve.py
```
# %load ./../functions/cogve.py
"""COGv estimation using COP data based on the inverted pendulum model."""
from __future__ import division, print_function
import numpy as np
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = "1.0.2"
__license__ = "MIT"
def cogve(COP, freq, mass, height, show=False, ax=None):
"""COGv estimation using COP data based on the inverted pendulum model.
This function estimates the center of gravity vertical projection (COGv)
displacement from the center of pressure (COP) displacement at the
anterior-posterior direction during quiet upright standing. COP and COGv
displacements are measurements useful to quantify the postural sway of a
person while standing.
The COGv displacement is estimated by low-pass filtering the COP
displacement in the frequency domain according to the person's moment
of rotational inertia as a single inverted pendulum [1]_.
Parameters
----------
COP : 1D array_like
center of pressure data [cm]
freq : float
sampling frequency of the COP data
mass : float
body mass of the subject [kg]
height : float
height of the subject [cm]
show : bool, optional (default = False)
True (1) plots data and results in a matplotlib figure
False (0) to not plot
ax : matplotlib.axes.Axes instance, optional (default = None)
Returns
-------
COGv : 1D array
center of gravity vertical projection data [cm]
References
----------
.. [1] http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/IP_Model.ipynb
Examples
--------
>>> from cogve import cogve
>>> y = np.cumsum(np.random.randn(3000))/50
>>> cogv = cogve(y, freq=100, mass=70, height=170, show=True)
"""
from scipy.signal._arraytools import odd_ext
import scipy.fftpack
COP = np.asarray(COP)
height = height / 100 # cm to m
g = 9.8 # gravity acceleration in m/s2
# height of the COG w.r.t. ankle (McGinnis, 2005; Winter, 2005)
hcog = 0.56 * height - 0.039 * height
# body moment of inertia around the ankle
# (Breniere, 1996), (0.0572 for the ml direction)
I = mass * 0.0533 * height ** 2 + mass * hcog ** 2
# Newton-Euler equation of motion for the inverted pendulum
# COGv'' = w02*(COGv - COP)
# where w02 is the squared pendulum natural frequency
w02 = mass * g * hcog / I
# add (pad) data and remove mean to avoid problems at the extremities
COP = odd_ext(COP, n=freq)
COPm = np.mean(COP)
COP = COP - COPm
# COGv is estimated by filtering the COP data in the frequency domain
# using the transfer function for the inverted pendulum equation of motion
N = COP.size
COPfft = scipy.fftpack.fft(COP, n=N) / N # COP fft
w = 2 * np.pi * scipy.fftpack.fftfreq(n=N, d=1 / freq) # angular frequency
# transfer function
TF = w02 / (w02 + w ** 2)
COGv = np.real(scipy.fftpack.ifft(TF * COPfft) * N)
COGv = COGv[0: N]
# get back the mean and pad off data
COP, COGv = COP + COPm, COGv + COPm
COP, COGv = COP[freq: -freq], COGv[freq: -freq]
if show:
_plot(COP, COGv, freq, ax)
return COGv
def _plot(COP, COGv, freq, ax):
"""Plot results of the cogve function, see its help."""
try:
import matplotlib.pyplot as plt
except ImportError:
print('matplotlib is not available.')
else:
time = np.linspace(0, COP.size / freq, COP.size)
if ax is None:
_, ax = plt.subplots(1, 1)
ax.plot(time, COP, color=[0, 0, 1, .8], lw=2, label='COP')
ax.plot(time, COGv, color=[1, 0, 0, .8], lw=2, label='COGv')
ax.legend(fontsize=14, loc='best', framealpha=.5, numpoints=1)
ax.set_xlabel('Time [s]', fontsize=14)
ax.set_ylabel('Amplitude [cm]', fontsize=14)
ax.set_title('COGv estimation using the COP data', fontsize=16)
ax.set_xlim(time[0], time[-1])
plt.grid()
plt.show()
```
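To see the algorithm without the padding and plotting details, the frequency-domain filtering at the heart of `cogve` can be isolated in a few lines (synthetic random-walk data standing in for a real COP recording, and an assumed ~0.5 Hz cutoff):

```python
import numpy as np

rng = np.random.RandomState(0)
freq = 100.0                           # sampling frequency [Hz]
w0 = 2 * np.pi * 0.5                   # assumed pendulum cutoff [rad/s]
cop = np.cumsum(rng.randn(3000)) / 50  # synthetic COP-like signal [cm]

w = 2 * np.pi * np.fft.fftfreq(cop.size, d=1 / freq)  # angular frequencies
tf = w0 ** 2 / (w0 ** 2 + w ** 2)      # inverted-pendulum transfer function
cogv = np.real(np.fft.ifft(tf * np.fft.fft(cop)))
print(np.std(cogv) < np.std(cop))      # the low-pass output sways less: True
```

The gain is 1 at zero frequency and decreases with $\omega$, so the filtered signal keeps the mean of the COP but attenuates its fast fluctuations, exactly the qualitative relation between $COG_v$ and $COP$ described above.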
# Overfitting demo
## Create a dataset based on a true sinusoidal relationship
Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$:
```
import graphlab
import math
import random
import numpy
from matplotlib import pyplot as plt
%matplotlib inline
```
Create random values for x in interval [0,1)
```
random.seed(98103)
n = 30
x = graphlab.SArray([random.random() for i in range(n)]).sort()
```
Compute y
```
y = x.apply(lambda x: math.sin(4*x))
```
Add random Gaussian noise to y
```
random.seed(1)
e = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
```
### Put data into an SFrame to manipulate later
```
data = graphlab.SFrame({'X1':x,'Y':y})
data
```
### Create a function to plot the data, since we'll do it many times
```
def plot_data(data):
plt.plot(data['X1'],data['Y'],'k.')
plt.xlabel('x')
plt.ylabel('y')
plot_data(data)
```
## Define some useful polynomial regression functions
Define a function to create our features for a polynomial regression model of any degree:
```
def polynomial_features(data, deg):
data_copy=data.copy()
for i in range(1,deg):
data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']
return data_copy
```
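If GraphLab Create is not available, the same incremental feature construction can be sketched with pandas (assumed installed), mirroring the column-naming convention above:

```python
import pandas as pd

def polynomial_features_pd(data, deg):
    """Add columns X2..X<deg>, each the previous power times X1."""
    out = data.copy()
    for i in range(1, deg):
        out['X' + str(i + 1)] = out['X' + str(i)] * out['X1']
    return out

df = polynomial_features_pd(pd.DataFrame({'X1': [1.0, 2.0, 3.0]}), deg=3)
print(df)  # gains X2 = X1**2 and X3 = X1**3
```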
Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data":
```
def polynomial_regression(data, deg):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,l1_penalty=0.,
validation_set=None,verbose=False)
return model
```
Define function to plot data and predictions made, since we are going to use it many times.
```
def plot_poly_predictions(data, model):
plot_data(data)
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Create 200 points in the x axis and compute the predicted value for each point
x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})
y_pred = model.predict(polynomial_features(x_pred,deg))
# plot predictions
plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')
plt.legend(loc='upper left')
plt.axis([0,1,-1.5,2])
```
Create a function that prints the polynomial coefficients in a pretty way :)
```
def print_coefficients(model):
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Get learned parameters as a list
w = list(model.coefficients['value'])
# Numpy has a nifty function to print out polynomials in a pretty way
# (We'll use it, but it needs the parameters in the reverse order)
print 'Learned polynomial for degree ' + str(deg) + ':'
w.reverse()
print numpy.poly1d(w)
```
## Fit a degree-2 polynomial
Fit our degree-2 polynomial to the data generated above:
```
model = polynomial_regression(data, deg=2)
```
Inspect learned parameters
```
print_coefficients(model)
```
Form and plot our predictions along a grid of x values:
```
plot_poly_predictions(data,model)
```
## Fit a degree-4 polynomial
```
model = polynomial_regression(data, deg=4)
print_coefficients(model)
plot_poly_predictions(data,model)
```
## Fit a degree-16 polynomial
```
model = polynomial_regression(data, deg=16)
print_coefficients(model)
```
### Whoa!!! Those coefficients are *crazy*! On the order of $10^6$.
```
plot_poly_predictions(data,model)
```
### Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.
#
#
#
#
# Ridge Regression
Ridge regression aims to avoid overfitting by adding to the RSS term of standard least squares a cost that depends on the 2-norm of the coefficients, $\|w\|_2$. The result penalizes fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controlled by a parameter lambda (here called "L2_penalty").
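The shrinkage effect can also be sketched outside GraphLab with scikit-learn (assumed available); its `alpha` plays the role of lambda, and growing it shrinks the coefficient norm on the same kind of noisy sin(4x) data:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(98103)
x = np.sort(rng.rand(30))
y = np.sin(4 * x) + rng.randn(30) / 3.0
X = np.vander(x, N=17, increasing=True)[:, 1:]  # degree-16 features: x..x^16

norms = []
for alpha in (1e-9, 1e-3, 1e2):
    fit = Ridge(alpha=alpha).fit(X, y)
    norms.append(float(np.linalg.norm(fit.coef_)))
print(norms)  # the coefficient norm shrinks as alpha (lambda) grows
```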
Define our function to solve the ridge objective for a polynomial regression model of any degree:
```
def polynomial_ridge_regression(data, deg, l2_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=l2_penalty,
validation_set=None,verbose=False)
return model
```
## Perform a ridge fit of a degree-16 polynomial using a *very* small penalty strength
```
model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)
print_coefficients(model)
plot_poly_predictions(data,model)
```
## Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
```
model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)
print_coefficients(model)
plot_poly_predictions(data,model)
```
## Let's look at fits for a sequence of increasing lambda values
```
for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:
model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)
print 'lambda = %.2e' % l2_penalty
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('Ridge, lambda = %.2e' % l2_penalty)
data
```
## Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
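As an aside, refitting one model per held-out point is expensive. For ridge regression, a linear smoother, the LOO residuals have an exact closed form, $e_i/(1-H_{ii})$ with hat matrix $H = X(X^TX+\lambda I)^{-1}X^T$, so the whole LOO curve costs one linear solve per lambda. A minimal NumPy sketch on synthetic data (no intercept; this is not the GraphLab API used below):

```python
import numpy as np

def ridge_loo_mse(X, y, alpha):
    """Exact leave-one-out MSE for ridge (no intercept): e_i / (1 - H_ii)."""
    d = X.shape[1]
    H = X @ np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T)  # hat matrix
    resid = y - H @ y
    return float(np.mean((resid / (1.0 - np.diag(H))) ** 2))

rng = np.random.RandomState(0)
X = rng.randn(30, 5)
y = X @ rng.randn(5) + 0.1 * rng.randn(30)
print(ridge_loo_mse(X, y, alpha=1.0))  # one solve instead of 30 refits
```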
```
# LOO cross validation -- return the average MSE
def loo(data, deg, l2_penalty_values):
# Create polynomial features
data = polynomial_features(data, deg)
# Create as many folds for cross validation as number of data points
num_folds = len(data)
folds = graphlab.cross_validation.KFold(data,num_folds)
# for each value of l2_penalty, fit a model for each fold and compute average MSE
l2_penalty_mse = []
min_mse = None
best_l2_penalty = None
for l2_penalty in l2_penalty_values:
next_mse = 0.0
for train_set, validation_set in folds:
# train model
model = graphlab.linear_regression.create(train_set,target='Y',
l2_penalty=l2_penalty,
validation_set=None,verbose=False)
# predict on validation set
y_test_predicted = model.predict(validation_set)
# compute squared error
next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()
# save squared error in list of MSE for each l2_penalty
next_mse = next_mse/num_folds
l2_penalty_mse.append(next_mse)
if min_mse is None or next_mse < min_mse:
min_mse = next_mse
best_l2_penalty = l2_penalty
return l2_penalty_mse,best_l2_penalty
```
Run LOO cross validation for "num" values of lambda, on a log scale
```
l2_penalty_values = numpy.logspace(-4, 10, num=10)
l2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)
```
Plot results of estimating LOO for each value of lambda
```
plt.plot(l2_penalty_values,l2_penalty_mse,'k-')
plt.xlabel('$\ell_2$ penalty')
plt.ylabel('LOO cross validation error')
plt.xscale('log')
plt.yscale('log')
```
Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
```
best_l2_penalty
model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)
print_coefficients(model)
plot_poly_predictions(data,model)
```
#
#
#
#
# Lasso Regression
Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for a sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm penalty on the coefficients, $\|w\|_1$.
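The sparsity effect can be sketched with scikit-learn's `Lasso` (assumed available) on synthetic data where only two of ten features matter; as `alpha` (lambda) grows, more coefficients are driven exactly to zero:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(1)
X = rng.randn(100, 10)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.randn(100)

nnz = {}
for alpha in (0.001, 0.1, 1.0):
    coef = Lasso(alpha=alpha, max_iter=10000).fit(X, y).coef_
    nnz[alpha] = int(np.sum(coef != 0))
print(nnz)  # the nonzero count drops as alpha grows
```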
Define our function to solve the lasso objective for a polynomial regression model of any degree:
```
def polynomial_lasso_regression(data, deg, l1_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,
l1_penalty=l1_penalty,
validation_set=None,
solver='fista', verbose=False,
max_iterations=3000, convergence_threshold=1e-10)
return model
```
## Explore the lasso solution as a function of a few different penalty strengths
We refer to lambda in the lasso case below as "l1_penalty"
```
for l1_penalty in [0.0001, 0.01, 0.1, 10]:
model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)
print 'l1_penalty = %e' % l1_penalty
print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))
```
Above: We see that as lambda increases, we get sparser and sparser solutions. However, even for our non-sparse case for lambda=0.0001, the fit of our high-order polynomial is not too wild. This is because, like in ridge, coefficients included in the lasso solution are shrunk relative to those of the least squares (unregularized) solution. This leads to better behavior even without sparsity. Of course, as lambda goes to 0, the amount of this shrinkage decreases and the lasso solution approaches the (wild) least squares solution.
# Warm-up
1. Review this code for 1 minute, then:
1. Identify how an "Electric-type" Pokemon object would get access to its base statistics
1. Attempt to write a method for `Electric` that will check its `HP` after every action
<img src='../assets/inherit_warmup.png' width=500 align='left' />
---
# Learning Objectives
1. Students will be able to visually identify `class` inheritance
1. Students will be able to write basic `class`es that inherit from others
1. Students will be able to use basic decorators to create `dataclasses`
---
# Object-Oriented Programming Seminar: Expanding Classes
The last major lesson in OOP is class inheritance. Class inheritance is the act of one object "gaining" all of the functionality of another object. [GeeksforGeeks](https://www.geeksforgeeks.org/inheritance-in-python/) states that the main purposes of class inheritance are:
> 1. Represents real-world relationships well
> 1. Provides reusability of code
> 1. It is transitive in nature
## The Big Idea
Any Python object worth anything should exercise inheritance because it allows for **extensibility**, **reusability**, and _clarity_. Just like a single function should do a single job, a single `class` should do a specific thing. However, we have already expanded the work of a single function before by using nested functions (a function that calls another function). Likewise, we can expand a `class` by "nesting" it with other `class`es.
<img src='../assets/nourdine-diouane-4YJkvZGDcyU-unsplash.jpg' width=700/>
---
# Last Class
For a quick reminder of where we left off last class:
```
import class_demo as demo
%psource demo.Pileup
```
---
# Class Inheritance
Class inheritance is when one object takes/gives attributes and methods to another object upon its instantiation.
<img src='../assets/pokegeny.jpg' />
[Shelomi et al. 2012. A Phylogeny and Evolutionary History of the Pokémon. Annals of Improbable Research](../assets/Phylogeny-Pokemon.pdf)
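Stripped of the Pokemon specifics, the mechanism can be sketched with two toy classes (the names here are made up for illustration):

```python
class Creature:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return 'I am ' + self.name

class Dragon(Creature):            # Dragon inherits Creature's attributes/methods
    def __init__(self, name):
        super().__init__(name)     # delegate the shared setup to the parent
        self.can_fly = True        # then extend with subclass-specific state

spyro = Dragon('Spyro')
print(spyro.greet(), spyro.can_fly)  # inherited method, plus a new attribute
```

Note that `Dragon` never defines `greet`, yet every `Dragon` instance has it; that is the "gaining all of the functionality of another object" described above.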
## Salient Functions
```
import random

def generate_random_integers(total, n):
"""Generates a list of n integers that sum up to a given number
Adapted from http://sunny.today/generate-random-integers-with-fixed-sum/
Args:
total (int): the total all the integers are to sum up to
n (int): the number of integers
Returns:
(list): a list of integers that sum approximately to total
"""
μ = total / n
var = int(0.25 * μ)
min_v = μ - var
max_v = μ + var
vals = [min_v] * n
diff = total - min_v * n
while diff > 0:
a = random.randint(0, n - 1)
if vals[a] >= max_v:
continue
vals[a] += 1
diff -= 1
return [int(val) for val in vals]
```
---
# Let's play with the data
```
import pandas as pd
# Read in the pokemon csv
pokedex = pd.read_csv('../datasets/pokemon.csv')
# Show just Pichu's data
pokedex[pokedex.Name == 'Pichu']
```
What is Pichu's type?
## Exploring class inheritance by doing something productive: making Pokemon©
```
# Base class of Pokemon
class Pokemon:
def __init__(self, level = 1, name = None, given_name = None):
self.level = level
self.given_name = given_name
pokedex = pd.read_csv('../datasets/pokemon.csv')
self.name = name.title() if name else None
if name is None:
self.base_hp, \
self.base_attack, \
self.base_defense, \
self.base_sAttack, \
self.base_sDefense, \
self.base_speed = generate_random_integers(random.randint(125, 400), 6)
elif pokedex.Name.str.contains(self.name).any():
self.base_hp, \
self.base_attack, \
self.base_defense, \
self.base_sAttack, \
self.base_sDefense, \
self.base_speed = pokedex.loc[pokedex.Name == self.name, [
'HP',
'Attack',
'Defense',
'Sp. Atk',
'Sp. Def',
'Speed'
]].values[0]
else:
raise ValueError('unregistered Pokemon')
self.current_hp = self.base_hp
self.exp = 0
def __str__(self):
return f'Pokemon(level = {self.level}, name = {self.given_name if self.given_name else self.name if self.name else "MISSINGNO"})'
def __repr__(self):
return f'Pokemon(level = {self.level}, name = {self.name}, given_name = {self.given_name})'
def stats(self):
return pd.Series([self.base_hp, self.base_attack, self.base_defense, self.base_sAttack, self.base_sDefense, self.base_speed],
index = ['HP', 'Attack', 'Defense', 'Sp. Atk', 'Sp. Def', 'Speed'])
magikarp = Pokemon(name='Magikarp')
print(magikarp)
print(repr(magikarp))
magikarp.stats()
class Electric(Pokemon):
def __init__(self, level = 1, name = None, given_name = None):
Pokemon.__init__(self, level, name, given_name)
self.type = 'Electric'
self.weak_def = ('Ground',)
self.half_def = ('Electric', 'Flying')
self.strong_att = ('Flying', 'Water')
self.half_att = ('Dragon', 'Electric', 'Grass')
self.no_att = ('Ground',)
self.immune = ('Paralyze',)
def __repr__(self):
return super().__repr__().replace('Pokemon', 'Electric')
def __str__(self):
return super().__str__().replace('Pokemon', 'Electric')
pichu = Electric(name='Pichu')
pichu.immune
```
---
# Workshop
In groups of 3-4 people:
* Identify the different "types" of Pokemon (not including Electric)
* Choose one "type" (cannot be Electric)
* Write your own "type" subclass
---
```
class Pichu(Electric):
def __init__(self, level = 1, name = 'Pichu', given_name = None):
Electric.__init__(self, level, name, given_name)
self.name = name.title()
def __repr__(self):
return super().__repr__().replace('Electric', 'Pichu')
def __str__(self):
return super().__str__().replace('Electric', 'Pichu')
def thunder_shock(self):
ability_type = 'Electric'
self.thunder_shock_pp = 30
power = 40
accuracy = 1
effect = ('Paralyze', .1)
return (ability_type, effect, accuracy * power * self.base_sAttack)
def charm(self):
ability_type = 'Fairy'
self.charm_pp = 20
power = None
accuracy = None
effect = ('Decrease_Attack', 1)
return (ability_type, effect, None)
def tail_whip(self):
if self.level >= 5:
ability_type = None
self.tail_whip_pp = 30
power = 1
accuracy = 1
effect = None
return (ability_type, effect, accuracy * power * self.base_attack)
else:
raise IndexError('Move not available yet')
def sweet_kiss(self):
if self.level >= 10:
ability_type = 'Fairy'
self.sweet_kiss_pp = 10
power = None
accuracy = None
effect = ('Confusion', .75)
return (ability_type, effect, None)
else:
raise IndexError('Move not available yet')
def nasty_plot(self):
if self.level >= 13:
ability_type = 'Dark'
self.nasty_plot_pp = 20
power = None
accuracy = None
effect = ('Decrease_sAttack', 1)
return (ability_type, effect, None)
else:
raise IndexError('Move not available yet')
def thunder_wave(self):
if self.level >= 18:
ability_type = 'Electric'
self.thunder_wave_pp = 20
power = 40
accuracy = 0.9
effect = ('Paralyze', 1)
return(ability_type, effect, accuracy * power * self.base_sAttack)
else:
raise IndexError('Move not available yet')
```
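Finally, the third learning objective mentioned the `dataclass` decorator. It generates `__init__`, `__repr__`, and `__eq__` automatically, which could remove much of the boilerplate in the base `Pokemon` class above; here is a minimal sketch with toy fields (not the full stat block):

```python
from dataclasses import dataclass

@dataclass
class Stats:
    hp: int = 35
    attack: int = 55
    defense: int = 40

s = Stats(hp=40)
print(s)                       # Stats(hp=40, attack=55, defense=40)
print(s == Stats(40, 55, 40))  # True: __eq__ compares field by field
```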
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
```
# Embedding CPLEX in a ML Spark Pipeline
`Spark ML` provides a uniform set of high-level APIs that help users create and tune practical machine learning pipelines.
In this notebook, we show how to embed CPLEX as a Spark _transformer_ class.
DOcplex provides transformer classes that take a matrix `X` of constraints and a vector `y` of costs and solve a linear problem using CPLEX.
Transformer classes share a `transform(X, Y, **params)` method which expects:
- an X matrix containing the constraints of the linear problem
- a Y vector containing the cost coefficients.
The transformer classes require a Spark DataFrame for the 'X' matrix, and support various formats for the 'Y' vector:
- Python lists,
- numpy vector,
- pandas Series,
- Spark columns
The same formats are also supported to optionally specify upper bounds for decision variables.
## DOcplex transformer classes
There are two DOcplex transformer classes:
- __$CplexTransformer$__ expects to solve a linear problem in the classical form:
$$ \mathrm{minimize}\ C^{t} x\\ s.t.\\
Ax \le B$$
where $A$ is a (M,N) matrix describing the constraints, $B$ is a vector of size M containing the _right hand sides_ of the constraints, and $C$ is the _cost vector_ of size N. In this case the transformer expects a (M,N+1) matrix, where the last column contains the right hand sides.
- __$CplexRangeTransformer$__ expects to solve a linear problem expressed as a set of _range_ constraints:
$$ \mathrm{minimize}\ C^{t} x\\ s.t.\\
m \le Ax \le M$$
where $A$ is a (M,N) matrix describing the constraints, $m$ and $M$ are two vectors of size M containing the _minimum_ and _maximum_ values for the row expressions, and $C$ is the _cost vector_ of size N. In this case the transformer expects a (M,N+2) matrix, where the last two columns contain the minimum and maximum values (in this order).
```
try:
import numpy as np
except ImportError:
raise RuntimeError('This notebook requires numpy')
```
In the next section we illustrate the range transformer with the Diet Problem, from DOcplex distributed examples.
## The Diet Problem
The diet problem is delivered in the DOcplex examples.
Given a breakdown matrix of various foods into elementary nutrients, plus limitations on quantities for foods and nutrients, and food costs, the goal is to find the optimal quantity of each food for a balanced diet.
The __FOOD_NUTRIENTS__ data intentionally contains a missing value ($np.nan$) to illustrate the use of a pipeline involving a data cleansing stage.
```
# the baseline diet data as Python lists of tuples.
FOODS = [
("Roasted Chicken", 0.84, 0, 10),
("Spaghetti W/ Sauce", 0.78, 0, 10),
("Tomato,Red,Ripe,Raw", 0.27, 0, 10),
("Apple,Raw,W/Skin", .24, 0, 10),
("Grapes", 0.32, 0, 10),
("Chocolate Chip Cookies", 0.03, 0, 10),
("Lowfat Milk", 0.23, 0, 10),
("Raisin Brn", 0.34, 0, 10),
("Hotdog", 0.31, 0, 10)
]
NUTRIENTS = [
("Calories", 2000, 2500),
("Calcium", 800, 1600),
("Iron", 10, 30),
("Vit_A", 5000, 50000),
("Dietary_Fiber", 25, 100),
("Carbohydrates", 0, 300),
("Protein", 50, 100)
]
FOOD_NUTRIENTS = [
("Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0.0, 0.0, 42.2),
("Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2),
("Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1.0),
("Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21.0, 0.3),
("Grapes", 15.1, 3.4, 0.1, 24.0, 0.2, 4.1, 0.2),
("Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0.0, 9.3, 0.9),
("Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0.0, 11.7, 8.1),
("Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4.0, 27.9, 4.0),
("Hotdog", 242.1, 23.5, 2.3, 0.0, 0.0, 18.0, 10.4)
]
nb_foods = len(FOODS)
nb_nutrients = len(NUTRIENTS)
print('#foods={0}'.format(nb_foods))
print('#nutrients={0}'.format(nb_nutrients))
assert nb_foods == len(FOOD_NUTRIENTS)
```
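Before wiring CPLEX into Spark, it helps to see the range-constrained LP in its raw form. A toy instance can be solved with `scipy.optimize.linprog` (SciPy assumed available; the numbers below are illustrative, not the diet data above). A two-sided constraint $m \le Ax \le M$ is encoded as the two one-sided rows $Ax \le M$ and $-Ax \le -m$:

```python
import numpy as np
from scipy.optimize import linprog

# minimize c.x  subject to  m <= A x <= M  and  0 <= x <= ub
c = np.array([1.0, 2.0])       # "food" costs
A = np.array([[3.0, 5.0]])     # one "nutrient" content row
m = np.array([10.0])           # nutrient minimum
M = np.array([20.0])           # nutrient maximum

res = linprog(c, A_ub=np.vstack([A, -A]), b_ub=np.concatenate([M, -m]),
              bounds=[(0.0, 10.0)] * 2)
print(res.fun)  # 10/3: only the cheaper-per-nutrient food is bought
```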
### Creating a Spark session
```
try:
import findspark
findspark.init()
except ImportError:
# Ignore exception: the 'findspark' module is required when executing Spark in a Windows environment
pass
import pyspark # Only run after findspark.init() (if running in a Windows environment)
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
```
## Using the transformer with a Spark dataframe
In this section we show how to use a transformer with data stored in a Spark dataframe.
### Prepare the data as a numpy matrix
In this section we build a numpy matrix to be passed to the transformer.
First, we extract the food to nutrient matrix by stripping the names.
```
mat_fn = np.matrix([FOOD_NUTRIENTS[f][1:] for f in range(nb_foods)])
print('The food-nutrient matrix has shape: {0}'.format(mat_fn.shape))
```
Then we extract the two vectors of min/max for each nutrient. Each vector has nb_nutrients elements.
We also break the `FOODS` collection of tuples into columns
```
nutrient_mins = [NUTRIENTS[n][1] for n in range(nb_nutrients)]
nutrient_maxs = [NUTRIENTS[n][2] for n in range(nb_nutrients)]
food_names ,food_costs, food_mins, food_maxs = map(list, zip(*FOODS))
```
We are now ready to prepare the transformer matrix. This matrix has shape (7, 11) as we
have 7 nutrients and 9 foods, plus the additional `min` and `max` columns
```
# step 1. add two lines for nutrient mins, maxs
nf2 = np.append(mat_fn, np.matrix([nutrient_mins, nutrient_maxs]), axis=0)
mat_nf = nf2.transpose()
mat_nf.shape
```
### Populate a Spark dataframe with the matrix data
In this section we build a Spark dataframe matrix to be passed to the transformer.
Using a Spark dataframe will also allow us to chain multiple transformers in a pipeline.
```
from pyspark.sql import SQLContext
sc = spark.sparkContext
sqlContext = SQLContext(sc)
columns = food_names + ['min', 'max']
food_nutrients_df = sqlContext.createDataFrame(mat_nf.tolist(), columns)
```
Let's display the dataframe schema and content
```
food_nutrients_df.printSchema()
food_nutrients_df.show()
```
### Solving the Diet problem with the $CplexRangeTransformer$ in a Pipeline
To use the transformer, create an instance and pass the following parameters to the `transform` method
- the `X` matrix of size(M, N+2) containing coefficients for N column variables plus two addition column for range mins and maxs.
- the `Y` cost vector (using __"y"__ parameter id)
- whether one wants to solve a minimization (`min`) or maximization (`max`) problem (using __"sense"__ parameter id)
In addition, some data elements that can't be encoded in the matrix itself should be passed as keyword arguments:
- `ubs` denotes the upper bound for the column variables that are created. The expected size of this scalar vector is N (when matrix has size (M,N+2))
- `minCol` and `maxCol` are the names of the columns corresponding to the constraints min and max range in the `X` matrix
```
from docplex.mp.sparktrans.transformers import CplexRangeTransformer
from pyspark.ml import Pipeline
from pyspark.sql.functions import *
# Create the optimization transformer to calculate the optimal quantity for each food for a balanced diet.
cplexSolve = CplexRangeTransformer(minCol='min', maxCol='max', ubs=food_maxs)
# Make evaluation on input data. Additional parameters are specified using the 'params' dictionary
diet_df = cplexSolve.transform(food_nutrients_df, params={cplexSolve.y: food_costs, cplexSolve.sense: 'min'})
diet_df.orderBy(desc("value")).show()
```
### Example with CplexTransformer
To illustrate the usage of the __$CplexTransformer$__, let's remove the constraint on the minimum amount for nutrients, and reformulate the problem as a cost maximization.
First, let's define a new dataframe for the constraints matrix by removing the `min` column from the `food_nutrients_df` dataframe so that it is a well-formed input matrix for the __$CplexTransformer$__:
```
food_nutrients_LP_df = food_nutrients_df.select([item for item in food_nutrients_df.columns if item not in ['min']])
food_nutrients_LP_df.show()
from docplex.mp.sparktrans.transformers import CplexTransformer
# Create the optimization transformer to calculate the optimal quantity for each food for a balanced diet.
# Here, let's use the CplexTransformer by specifying only a maximum amount for each nutrient.
cplexSolve = CplexTransformer(rhsCol='max', ubs=food_maxs)
# Make evaluation on input data. Additional parameters are specified using the 'params' dictionary
# Since there is no lower range for decision variables, let's maximize cost instead! (otherwise, the result is all 0's)
diet_max_cost_df = cplexSolve.transform(food_nutrients_LP_df, params={cplexSolve.y: food_costs, cplexSolve.sense: 'max'})
diet_max_cost_df.orderBy(desc("value")).show()
%matplotlib inline
import matplotlib.pyplot as plt
def plot_radar_chart(labels, stats, **kwargs):
angles=np.linspace(0, 2*np.pi, len(labels), endpoint=False)
# close the plot
stats = np.concatenate((stats, [stats[0]]))
angles = np.concatenate((angles, [angles[0]]))
fig = plt.figure()
ax = fig.add_subplot(111, polar=True)
ax.plot(angles, stats, 'o-', linewidth=2, **kwargs)
ax.fill(angles, stats, alpha=0.30, **kwargs)
ax.set_thetagrids(angles * 180/np.pi, labels)
#ax.set_title([df.loc[386,"Name"]])
ax.grid(True)
diet = diet_df.toPandas()
plot_radar_chart(labels=diet['name'], stats=diet['value'], color='r')
diet_max_cost = diet_max_cost_df.toPandas()
plot_radar_chart(labels=diet_max_cost['name'], stats=diet_max_cost['value'], color='r')
```
# CBOE VXN Index
In this notebook, we'll take a look at the CBOE VXN Index dataset, available on the [Quantopian Store](https://www.quantopian.com/store). This dataset spans 02 Feb 2001 through the current day, with daily frequency. CBOE VXN measures market expectations of near-term volatility conveyed by NASDAQ-100 Index option prices.
## Notebook Contents
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
- <a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
- <a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.
### Limits
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
With preamble in place, let's get started:
<a id='interactive'></a>
# Interactive Overview
### Accessing the data with Blaze and Interactive on Research
Partner datasets are available on Quantopian Research through an API service known as [Blaze](http://blaze.pydata.org). Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
Blaze provides an important function for accessing these datasets. Some of these sets contain many millions of records. Bringing that data directly into Quantopian Research just is not viable. So Blaze provides a simple querying interface and shifts the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* [Query building for Blaze](http://blaze.readthedocs.io/en/latest/queries.html)
* [Pandas-to-Blaze dictionary](http://blaze.readthedocs.io/en/latest/rosetta-pandas.html)
* [SQL-to-Blaze dictionary](http://blaze.readthedocs.io/en/latest/rosetta-sql.html).
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrame using:
> `from odo import odo`
> `odo(expr, pandas.DataFrame)`
### To see how this data can be used in your algorithm, search for the `Pipeline Overview` section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>
```
# For use in Quantopian Research, exploring interactively
from quantopian.interactive.data.quandl import cboe_vxn as dataset
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
```
Let's go over the columns:
- **open**: open price for VXN
- **high**: daily high for VXN
- **low**: daily low for VXN
- **close**: close price for VXN
- **asof_date**: the timeframe to which this data applies
- **timestamp**: this is our timestamp for when we registered the data.
We've done much of the data processing for you. Fields like `timestamp` are standardized across all our Store Datasets, so the datasets are easy to combine.
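That standardization also matters for point-in-time work: `asof_date` is the date a value applies to, while `timestamp` records when it became known. A minimal pandas sketch with made-up values (only the column names come from the dataset):

```python
import pandas as pd

# Synthetic rows mimicking the dataset's asof_date / timestamp pair:
# asof_date is the date the value applies to; timestamp is when we first
# learned of it (always at or after asof_date).
df = pd.DataFrame({
    "asof_date": pd.to_datetime(["2014-01-02", "2014-01-03", "2014-01-06"]),
    "timestamp": pd.to_datetime(["2014-01-02", "2014-01-04", "2014-01-06"]),
    "open_":     [16.1, 15.8, 15.5],
})

# A point-in-time view as of 2014-01-03: only rows whose timestamp has
# already arrived are visible, which avoids lookahead bias.
sim_date = pd.Timestamp("2014-01-03")
visible = df[df["timestamp"] <= sim_date]
print(visible)
```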
We can select columns and rows with ease. Below, we'll do a simple plot.
```
# Plotting this DataFrame
df = odo(dataset, pd.DataFrame)
df.head(5)
# So we can plot it, we'll set the index as the `asof_date`
df['asof_date'] = pd.to_datetime(df['asof_date'])
df = df.set_index(['asof_date'])
df.head(5)
import matplotlib.pyplot as plt
df['open_'].plot(label=str(dataset))
plt.ylabel(str(dataset))
plt.legend()
plt.title("Graphing %s since %s" % (str(dataset), min(df.index)))
```
<a id='pipeline'></a>
# Pipeline Overview
### Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows:
Import the data set here
> `from quantopian.pipeline.data.quandl import cboe_vxn`
Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
> `pipe.add(cboe_vxn.open_.latest, 'open')`
Pipeline usage is very similar between the backtester and Research so let's go over how to import this data through pipeline and view its outputs.
```
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
from quantopian.pipeline.data.quandl import cboe_vxn
```
Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
```
print "Here is the list of available fields per dataset:"
print "---------------------------------------------------\n"

def _print_fields(dataset):
    print "Dataset: %s\n" % dataset.__name__
    print "Fields:"
    for field in list(dataset.columns):
        print "%s - %s" % (field.name, field.dtype)
    print "\n"

_print_fields(cboe_vxn)
print "---------------------------------------------------\n"
```
Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:
https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
```
pipe = Pipeline()
pipe.add(cboe_vxn.open_.latest, 'open_vxn')
# Setting some basic liquidity screens (just for good habit)
dollar_volume = AverageDollarVolume(window_length=20)
top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000
pipe.set_screen(top_1000_most_liquid & cboe_vxn.open_.latest.notnan())
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
```
Here, you'll notice that each security is mapped to the corresponding value, so you could grab any security to get what you need.
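Because the VXN value is broadcast to every security on a given day, recovering a clean daily series just means selecting one security or de-duplicating by date. A hedged sketch against a synthetic frame shaped like `run_pipeline` output (the index names and values here are illustrative):

```python
import pandas as pd

# Synthetic stand-in for run_pipeline output: a (date, security) MultiIndex
# with the same VXN value repeated for every security on each date.
idx = pd.MultiIndex.from_product(
    [pd.to_datetime(["2013-11-01", "2013-11-04"]), ["AAPL", "MSFT", "XOM"]],
    names=["date", "security"],
)
pipe_output = pd.DataFrame(
    {"open_vxn": [14.2, 14.2, 14.2, 13.9, 13.9, 13.9]}, index=idx)

# Grab the series for one security...
aapl = pipe_output.xs("AAPL", level="security")["open_vxn"]

# ...or collapse duplicates into one row per date.
daily = pipe_output.groupby(level="date")["open_vxn"].first()
print(daily)
```

Either route yields one VXN observation per trading day.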
Taking what we've seen from above, let's see how we'd move that into the backtester.
```
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# For use in your algorithms via the pipeline API
from quantopian.pipeline.data.quandl import cboe_vxn
def make_pipeline():
    # Create our pipeline
    pipe = Pipeline()

    # Screen out penny stocks and low liquidity securities.
    dollar_volume = AverageDollarVolume(window_length=20)
    is_liquid = dollar_volume.rank(ascending=False) < 1000

    # Create the mask that we will use for our percentile methods.
    base_universe = (is_liquid)

    # Add the datasets available
    pipe.add(cboe_vxn.open_.latest, 'vxn_open')

    # Set our pipeline screens
    pipe.set_screen(is_liquid)
    return pipe

def initialize(context):
    attach_pipeline(make_pipeline(), "pipeline")

def before_trading_start(context, data):
    results = pipeline_output('pipeline')
Now you can take that and begin to use it as a building block for your algorithms, for more examples on how to do that you can visit our <a href='https://www.quantopian.com/posts/pipeline-factor-library-for-data'>data pipeline factor library</a>
| github_jupyter |
```
# look at tools/set_up_magics.ipynb
yandex_metrica_allowed = True ; get_ipython().run_cell('# one_liner_str\n\nget_ipython().run_cell_magic(\'javascript\', \'\', \n \'// setup cpp code highlighting\\n\'\n \'IPython.CodeCell.options_default.highlight_modes["text/x-c++src"] = {\\\'reg\\\':[/^%%cpp/]} ;\'\n \'IPython.CodeCell.options_default.highlight_modes["text/x-cmake"] = {\\\'reg\\\':[/^%%cmake/]} ;\'\n \'IPython.CodeCell.options_default.highlight_modes["text/x-sql"] = {\\\'reg\\\':[/^%%sql/]} ;\'\n)\n\n# creating magics\nfrom IPython.core.magic import register_cell_magic, register_line_magic\nfrom IPython.display import display, Markdown, HTML\nimport argparse\nfrom subprocess import Popen, PIPE, STDOUT, check_output\nimport html\nimport random\nimport sys\nimport os\nimport re\nimport signal\nimport shutil\nimport shlex\nimport glob\nimport time\n\n@register_cell_magic\ndef save_file(args_str, cell, line_comment_start="#"):\n parser = argparse.ArgumentParser()\n parser.add_argument("fname")\n parser.add_argument("--ejudge-style", action="store_true")\n parser.add_argument("--under-spoiler-threshold", type=int, default=None)\n args = parser.parse_args(args_str.split())\n \n cell = cell if cell[-1] == \'\\n\' or args.no_eof_newline else cell + "\\n"\n cmds = []\n with open(args.fname, "w") as f:\n f.write(line_comment_start + " %%cpp " + args_str + "\\n")\n for line in cell.split("\\n"):\n line_to_write = (line if not args.ejudge_style else line.rstrip()) + "\\n"\n if not line.startswith("%"):\n f.write(line_to_write)\n else:\n f.write(line_comment_start + " " + line_to_write)\n run_prefix = "%run "\n md_prefix = "%MD "\n comment_prefix = "%" + line_comment_start\n if line.startswith(run_prefix):\n cmds.append(line[len(run_prefix):].strip())\n elif line.startswith(md_prefix):\n cmds.append(\'#<MD>\' + line[len(md_prefix):].strip())\n elif line.startswith(comment_prefix):\n cmds.append(\'#\' + line[len(comment_prefix):].strip())\n else:\n raise Exception("Unknown %%save_file subcommand: \'%s\'" % 
line)\n \n f.write("" if not args.ejudge_style else line_comment_start + r" line without \\n")\n for cmd in cmds:\n if cmd.startswith(\'#\'):\n if cmd.startswith(\'#<MD>\'):\n display(Markdown(cmd[5:]))\n else:\n display(Markdown("\\#\\#\\#\\# `%s`" % cmd[1:]))\n else:\n display(Markdown("Run: `%s`" % cmd))\n if args.under_spoiler_threshold:\n out = check_output(cmd, stderr=STDOUT, shell=True, universal_newlines=True)\n out = out[:-1] if out.endswith(\'\\n\') else out\n out = html.escape(out)\n if len(out.split(\'\\n\')) > args.under_spoiler_threshold:\n out = "<details> <summary> output </summary> <pre><code>%s</code></pre></details>" % out\n elif out:\n out = "<pre><code>%s</code></pre>" % out\n if out:\n display(HTML(out))\n else:\n get_ipython().system(cmd)\n\n@register_cell_magic\ndef cpp(fname, cell):\n save_file(fname, cell, "//")\n \n@register_cell_magic\ndef cmake(fname, cell):\n save_file(fname, cell, "#")\n\n@register_cell_magic\ndef asm(fname, cell):\n save_file(fname, cell, "//")\n \n@register_cell_magic\ndef makefile(fname, cell):\n fname = fname or "makefile"\n assert fname.endswith("makefile")\n save_file(fname, cell.replace(" " * 4, "\\t"))\n \n@register_line_magic\ndef p(line):\n line = line.strip() \n if line[0] == \'#\':\n display(Markdown(line[1:].strip()))\n else:\n try:\n expr, comment = line.split(" #")\n display(Markdown("`{} = {}` # {}".format(expr.strip(), eval(expr), comment.strip())))\n except:\n display(Markdown("{} = {}".format(line, eval(line))))\n \n \ndef show_log_file(file, return_html_string=False):\n obj = file.replace(\'.\', \'_\').replace(\'/\', \'_\') + "_obj"\n html_string = \'\'\'\n <!--MD_BEGIN_FILTER-->\n <script type=text/javascript>\n var entrance___OBJ__ = 0;\n var errors___OBJ__ = 0;\n function halt__OBJ__(elem, color)\n {\n elem.setAttribute("style", "font-size: 14px; background: " + color + "; padding: 10px; border: 3px; border-radius: 5px; color: white; "); \n }\n function refresh__OBJ__()\n {\n entrance___OBJ__ -= 
1;\n if (entrance___OBJ__ < 0) {\n entrance___OBJ__ = 0;\n }\n var elem = document.getElementById("__OBJ__");\n if (elem) {\n var xmlhttp=new XMLHttpRequest();\n xmlhttp.onreadystatechange=function()\n {\n var elem = document.getElementById("__OBJ__");\n console.log(!!elem, xmlhttp.readyState, xmlhttp.status, entrance___OBJ__);\n if (elem && xmlhttp.readyState==4) {\n if (xmlhttp.status==200)\n {\n errors___OBJ__ = 0;\n if (!entrance___OBJ__) {\n if (elem.innerHTML != xmlhttp.responseText) {\n elem.innerHTML = xmlhttp.responseText;\n }\n if (elem.innerHTML.includes("Process finished.")) {\n halt__OBJ__(elem, "#333333");\n } else {\n entrance___OBJ__ += 1;\n console.log("req");\n window.setTimeout("refresh__OBJ__()", 300); \n }\n }\n return xmlhttp.responseText;\n } else {\n errors___OBJ__ += 1;\n if (!entrance___OBJ__) {\n if (errors___OBJ__ < 6) {\n entrance___OBJ__ += 1;\n console.log("req");\n window.setTimeout("refresh__OBJ__()", 300); \n } else {\n halt__OBJ__(elem, "#994444");\n }\n }\n }\n }\n }\n xmlhttp.open("GET", "__FILE__", true);\n xmlhttp.setRequestHeader("Cache-Control", "no-cache");\n xmlhttp.send(); \n }\n }\n \n if (!entrance___OBJ__) {\n entrance___OBJ__ += 1;\n refresh__OBJ__(); \n }\n </script>\n\n <p id="__OBJ__" style="font-size: 14px; background: #000000; padding: 10px; border: 3px; border-radius: 5px; color: white; ">\n </p>\n \n </font>\n <!--MD_END_FILTER-->\n <!--MD_FROM_FILE __FILE__.md -->\n \'\'\'.replace("__OBJ__", obj).replace("__FILE__", file)\n if return_html_string:\n return html_string\n display(HTML(html_string))\n\n \nclass TInteractiveLauncher:\n tmp_path = "./interactive_launcher_tmp"\n def __init__(self, cmd):\n try:\n os.mkdir(TInteractiveLauncher.tmp_path)\n except:\n pass\n name = str(random.randint(0, 1e18))\n self.inq_path = os.path.join(TInteractiveLauncher.tmp_path, name + ".inq")\n self.log_path = os.path.join(TInteractiveLauncher.tmp_path, name + ".log")\n \n os.mkfifo(self.inq_path)\n open(self.log_path, 
\'w\').close()\n open(self.log_path + ".md", \'w\').close()\n\n self.pid = os.fork()\n if self.pid == -1:\n print("Error")\n if self.pid == 0:\n exe_cands = glob.glob("../tools/launcher.py") + glob.glob("../../tools/launcher.py")\n assert(len(exe_cands) == 1)\n assert(os.execvp("python3", ["python3", exe_cands[0], "-l", self.log_path, "-i", self.inq_path, "-c", cmd]) == 0)\n self.inq_f = open(self.inq_path, "w")\n interactive_launcher_opened_set.add(self.pid)\n show_log_file(self.log_path)\n\n def write(self, s):\n s = s.encode()\n assert len(s) == os.write(self.inq_f.fileno(), s)\n \n def get_pid(self):\n n = 100\n for i in range(n):\n try:\n return int(re.findall(r"PID = (\\d+)", open(self.log_path).readline())[0])\n except:\n if i + 1 == n:\n raise\n time.sleep(0.1)\n \n def input_queue_path(self):\n return self.inq_path\n \n def wait_stop(self, timeout):\n for i in range(int(timeout * 10)):\n wpid, status = os.waitpid(self.pid, os.WNOHANG)\n if wpid != 0:\n return True\n time.sleep(0.1)\n return False\n \n def close(self, timeout=3):\n self.inq_f.close()\n if not self.wait_stop(timeout):\n os.kill(self.get_pid(), signal.SIGKILL)\n os.waitpid(self.pid, 0)\n os.remove(self.inq_path)\n # os.remove(self.log_path)\n self.inq_path = None\n self.log_path = None \n interactive_launcher_opened_set.remove(self.pid)\n self.pid = None\n \n @staticmethod\n def terminate_all():\n if "interactive_launcher_opened_set" not in globals():\n globals()["interactive_launcher_opened_set"] = set()\n global interactive_launcher_opened_set\n for pid in interactive_launcher_opened_set:\n print("Terminate pid=" + str(pid), file=sys.stderr)\n os.kill(pid, signal.SIGKILL)\n os.waitpid(pid, 0)\n interactive_launcher_opened_set = set()\n if os.path.exists(TInteractiveLauncher.tmp_path):\n shutil.rmtree(TInteractiveLauncher.tmp_path)\n \nTInteractiveLauncher.terminate_all()\n \nyandex_metrica_allowed = bool(globals().get("yandex_metrica_allowed", False))\nif yandex_metrica_allowed:\n 
display(HTML(\'\'\'<!-- YANDEX_METRICA_BEGIN -->\n <script type="text/javascript" >\n (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)};\n m[i].l=1*new Date();k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)})\n (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym");\n\n ym(59260609, "init", {\n clickmap:true,\n trackLinks:true,\n accurateTrackBounce:true\n });\n </script>\n <noscript><div><img src="https://mc.yandex.ru/watch/59260609" style="position:absolute; left:-9999px;" alt="" /></div></noscript>\n <!-- YANDEX_METRICA_END -->\'\'\'))\n\ndef make_oneliner():\n html_text = \'("В этот ноутбук встроен код Яндекс Метрики для сбора статистики использований. Если вы не хотите, чтобы по вам собиралась статистика, исправьте: yandex_metrica_allowed = False" if yandex_metrica_allowed else "")\'\n html_text += \' + "<""!-- MAGICS_SETUP_PRINTING_END -->"\'\n return \'\'.join([\n \'# look at tools/set_up_magics.ipynb\\n\',\n \'yandex_metrica_allowed = True ; get_ipython().run_cell(%s);\' % repr(one_liner_str),\n \'display(HTML(%s))\' % html_text,\n \' #\'\'MAGICS_SETUP_END\'\n ])\n \n\n');display(HTML(("В этот ноутбук встроен код Яндекс Метрики для сбора статистики использований. Если вы не хотите, чтобы по вам собиралась статистика, исправьте: yandex_metrica_allowed = False" if yandex_metrica_allowed else "") + "<""!-- MAGICS_SETUP_PRINTING_END -->")) #MAGICS_SETUP_END
```
# FUSE
<p><a href="https://www.youtube.com/watch?v=s7PEnBFX1AA&list=PLjzMm8llUm4AmU6i_hPU0NobgA4VsBowc&index=27" target="_blank">
<h3>Seminar video recording</h3>
</a></p>
[Yakovlev's reading on working with directories, time, and a few other things](https://github.com/victor-yacovlev/mipt-diht-caos/tree/master/practice/posix_dirent_time)
<br>[Yakovlev's reading on FUSE](https://github.com/victor-yacovlev/mipt-diht-caos/tree/master/practice/fuse)
On today's agenda:
* <a href="#fs_posix" style="color:#856024"> Working with the POSIX file system </a>
* <a href="#opendir" style="color:#856024"> Listing a directory's contents with glob-pattern filtering </a>
* <a href="#glob" style="color:#856024"> glob, or what happens when you type *.cpp in the terminal </a>
* <a href="#ftw" style="color:#856024"> Recursive traversal (albeit with a deprecated function) </a>
* <a href="#fs_stat" style="color:#856024"> File system information </a>
* <a href="#fusepy" style="color:#856024"> Mounting a JSON document as a read-only file system: Python + fusepy </a>
* <a href="#fuse_c" style="color:#856024"> A single-file file system in C </a>
[FUSE on Wikipedia](https://ru.wikipedia.org/wiki/FUSE_(модуль_ядра))
https://habr.com/ru/post/315654/ - a Python walkthrough
https://engineering.facile.it/blog/eng/write-filesystem-fuse/
<a href="#hw" style="color:#856024">Homework notes</a>
## <a name="fs_posix"></a> Working with the file system in POSIX
Header files that contain file system functions ([wiki source](https://en.wikipedia.org/wiki/C_POSIX_library)):
| Header file | Description |
|-------------|-------------|
| `<fcntl.h>` | File opening, locking and other operations |
| `<fnmatch.h>` | Filename matching |
| `<ftw.h>` | File tree traversal |
| `<sys/stat.h>` | File information (stat et al.) |
| `<sys/statvfs.h>` | File System information |
| `<dirent.h>` | Directories opening, traversing |
read, write, stat, fstat: we have already covered all of these earlier.
## <a name="opendir"></a> Listing a directory's contents with glob-pattern filtering
```
%%cpp traverse_dir.c
%run gcc -Wall -Werror -fsanitize=address traverse_dir.c -lpthread -o traverse_dir.exe
%run ./traverse_dir.exe ..
#include <stdio.h>
#include <dirent.h>
#include <assert.h>
#include <fnmatch.h>
int main(int argc, char** argv) {
    assert(argc == 2);
    const char* dir_path = argv[1];
    DIR *pDir = opendir(dir_path);
    if (pDir == NULL) {
        fprintf(stderr, "Cannot open directory '%s'\n", dir_path);
        return 1;
    }
    int limit = 4;
    for (struct dirent *pDirent; (pDirent = readdir(pDir)) != NULL && limit > 0;) {
        // + glob-style pattern matching
        if (fnmatch("sem2*", pDirent->d_name, 0) == 0) {
            printf("%s\n", pDirent->d_name);
            --limit;
        }
    }
    closedir(pDir);
    return 0;
}
```
## <a name="glob"></a> glob, or what happens when you type *.cpp in the terminal
This is not strictly about the file system, but it is interesting nonetheless.
glob pairs well with exec; there is an example at http://man7.org/linux/man-pages/man3/glob.3.html
```
%%cpp traverse_dir.c
%run gcc -Wall -Werror -fsanitize=address traverse_dir.c -lpthread -o traverse_dir.exe
%run ./traverse_dir.exe .. | head -n 5
#include <stdio.h>
#include <assert.h>
#include <glob.h>
int main() {
    glob_t globbuf = {0};
    glob("*.c", 0, NULL, &globbuf);
    glob("../*/*.c", GLOB_APPEND, NULL, &globbuf);
    for (char** path = globbuf.gl_pathv; *path; ++path) {
        printf("%s\n", *path);
    }
    globfree(&globbuf);
    return 0;
}
import glob
glob.glob("../*/*.c")[:4]
```
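The same glob-then-exec pattern also works from Python: expand the pattern in-process, then hand the matches to a child process as argv. A small sketch (the scratch `.c` files are created just for the demo; `wc` is assumed to be on PATH, as on the class Linux boxes):

```python
import glob
import os
import subprocess
import tempfile

# Create a scratch directory with a couple of .c files to match against.
tmp = tempfile.mkdtemp()
for name in ("a.c", "b.c", "notes.txt"):
    open(os.path.join(tmp, name), "w").close()

# Shell-style expansion of *.c, done in-process instead of by the shell...
matches = sorted(glob.glob(os.path.join(tmp, "*.c")))

# ...then passed as argv to a child process, much like glob() + execvp() in C.
result = subprocess.run(["wc", "-c"] + matches, capture_output=True, text=True)
print(matches)
print(result.stdout)
```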
## <a name="ftw"></a> Recursive traversal (albeit with a deprecated function)
```
%%cpp traverse_dir_2.c
%run gcc -Wall -Werror -fsanitize=address traverse_dir_2.c -lpthread -o traverse_dir_2.exe
%run ./traverse_dir_2.exe .
#include <stdio.h>
#include <ftw.h>
#include <assert.h>
int limit = 4;

int callback(const char* fpath, const struct stat* sb, int typeflag) {
    printf("%s %ld\n", fpath, sb->st_size);
    return (--limit == 0);
}

int main(int argc, char** argv) {
    assert(argc == 2);
    const char* dir_path = argv[1];
    ftw(dir_path, callback, 0);
    return 0;
}
```
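The C cell above uses the deprecated `ftw()`; on the Python side the same recursive traversal is `os.walk`. A small sketch over a throwaway tree (all paths here are illustrative):

```python
import os
import tempfile

# Build a small tree to walk: root/top.txt and root/sub/deep.txt
root = tempfile.mkdtemp()
sub = os.path.join(root, "sub")
os.mkdir(sub)
for path in (os.path.join(root, "top.txt"), os.path.join(sub, "deep.txt")):
    with open(path, "w") as f:
        f.write("x")

# os.walk is the Python analogue of ftw(): it visits every directory
# under root, yielding (dirpath, dirnames, filenames) tuples.
found = []
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        full = os.path.join(dirpath, name)
        found.append((os.path.relpath(full, root), os.path.getsize(full)))
print(sorted(found))
```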
## <a name="fs_stat"></a> File system information
```
%%cpp fs_stat.c
%run gcc -Wall -Werror -fsanitize=address fs_stat.c -lpthread -o fs_stat.exe
%run ./fs_stat.exe /home
%run ./fs_stat.exe /dev/shm
%run ./fs_stat.exe /dev
#include <stdio.h>
#include <sys/statvfs.h>
#include <assert.h>
int main(int argc, char** argv) {
    assert(argc == 2);
    const char* dir_path = argv[1];
    struct statvfs stat;
    statvfs(dir_path, &stat);
    // Use f_frsize (the fundamental block size) for capacity math, as df does.
    printf("Free 1K-blocks %lu/%lu\n", stat.f_bavail * stat.f_frsize / 1024, stat.f_blocks * stat.f_frsize / 1024);
    return 0;
}
!df
```
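The same information is reachable from Python: `shutil.disk_usage` sits on top of the statvfs data, and `os.statvfs` exposes the raw structure. A quick sketch:

```python
import os
import shutil

# shutil.disk_usage returns total/used/free in bytes for the file system
# containing the given path.
usage = shutil.disk_usage("/")
print("Free %d/%d KiB" % (usage.free // 1024, usage.total // 1024))

# The raw statvfs structure is also available; f_bavail counts the blocks
# available to unprivileged processes (what df reports as "Available").
st = os.statvfs("/")
avail_kib = st.f_bavail * st.f_frsize // 1024
print("Available:", avail_kib, "KiB")
```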
# FUSE
Important options:
* `-f` - run in the foreground (without this option a daemon is created and the program itself exits almost immediately)
* `-s` - run in single-threaded mode.
This is probably a good place to say something about daemonization.
## <a name="fusepy"></a> Python + fusepy
Installation: `pip2 install --user fusepy`
```
%%writefile fuse_json.py
from __future__ import print_function
import logging
import os
import json
from errno import EIO, ENOENT, EROFS
from stat import S_IFDIR, S_IFREG
from sys import argv, exit
from time import time
from fuse import FUSE, FuseOSError, LoggingMixIn, Operations
NOW = time()
DIR_ATTRS = dict(st_mode=(S_IFDIR | 0o555), st_nlink=2)
FILE_ATTRS = dict(st_mode=(S_IFREG | 0o444), st_nlink=1)
def find_json_path(j, path):
    for part in path.split('/'):
        if len(part) > 0:
            if part == '__json__':
                return json.dumps(j)
            if part not in j:
                return None
            j = j[part]
    return j

class FuseOperations(LoggingMixIn, Operations):
    def __init__(self, j):
        self.j = j
        self.fd = 0

    def open(self, path, flags):
        self.fd += 1
        return self.fd

    def read(self, path, size, offset, fh):
        logging.debug("Read %r %r %r", path, size, offset)
        node = find_json_path(self.j, path)
        if not isinstance(node, str):
            raise FuseOSError(EIO)
        return node[offset:offset + size]

    def readdir(self, path, fh):
        logging.debug("Readdir %r %r", path, fh)
        node = find_json_path(self.j, path)
        if node is None:
            raise FuseOSError(EROFS)
        return ['.', '..', '__json__'] + list(node.keys())

    def getattr(self, path, fh=None):
        node = find_json_path(self.j, path)
        if isinstance(node, dict):
            return DIR_ATTRS
        elif isinstance(node, str):
            attrs = dict(FILE_ATTRS)
            attrs["st_size"] = len(node)
            return attrs
        else:
            raise FuseOSError(ENOENT)

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    j = {
        'a': 'b',
        'c': {
            'c1': '234'
        }
    }
    FUSE(FuseOperations(j), "./fuse_json", foreground=True)
!mkdir fuse_json 2>&1 | grep -v "File exists" || true
a = TInteractiveLauncher("python2 fuse_json.py example.txt fuse_json 2>&1")
!ls fuse_json
!tree fuse_json
!cat fuse_json/__json__ && echo
!cat fuse_json/c/__json__ && echo
%%bash
echo -n -e "\n" > new_line
exec 2>&1 ; set -o xtrace
tree fuse_json --noreport
cat fuse_json/__json__ new_line
cat fuse_json/a new_line
cat fuse_json/c/__json__ new_line
!fusermount -u fuse_json
a.close()
```
`sudo apt install tree`
```
%%bash
tree fuse_json --noreport
```
## <a name="fuse_c"></a> FUSE + C
You will need to install `libfuse3-dev`. If for some reason you cannot install fuse3 but fuse2 works, last year's notebook shows how to write compatible code.
```
!mkdir fuse_c_example 2>&1 | grep -v "File exists" || true
```
Alternatively, if you follow the script below, a CMake file like this may help:
```
%%cmake fuse_c_example/CMakeLists.txt
cmake_minimum_required(VERSION 3.15)
project(fuse-example)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fsanitize=address -fsanitize=leak -g")
find_package(PkgConfig REQUIRED)
pkg_check_modules(FUSE REQUIRED fuse3)
include_directories(${FUSE_INCLUDE_DIRS})
add_executable(fuse-example main.c)
target_link_libraries(fuse-example ${FUSE_LIBRARIES})
```
For users to be able to work with your FUSE module, you need to implement the basic interaction operations. They are provided as callbacks that FUSE invokes when the user performs the corresponding action.
In C/C++ this is done by filling in the [fuse_operations](http://libfuse.github.io/doxygen/structfuse__operations.html) structure.
```
%%cpp fuse_c_example/main.c
%run mkdir fuse_c_example/build 2>&1 | grep -v "File exists"
%run cd fuse_c_example/build && cmake .. > /dev/null && make
#include <string.h>
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#define FUSE_USE_VERSION 30
#include <fuse.h>
typedef struct {
    char* filename;
    char* filecontent;
    char* log;
} my_options_t;

my_options_t my_options;

void print_cwd() {
    if (my_options.log) {
        FILE* f = fopen(my_options.log, "at");
        char buffer[1000];
        getcwd(buffer, sizeof(buffer));
        fprintf(f, "Current working dir: %s\n", buffer);
        fclose(f);
    }
}

// The most important callback. It is invoked first, before any other callback,
// and fills in the stbuf structure.
int getattr_callback(const char* path, struct stat* stbuf, struct fuse_file_info *fi) {
    (void) fi;
    if (strcmp(path, "/") == 0) {
        // st_mode (file type plus access permissions)
        // st_nlink (number of links to the file)
        // Fun fact: a directory's link count is 2 + n, where n is the number of subdirectories.
        *stbuf = (struct stat) {.st_nlink = 2, .st_mode = S_IFDIR | 0755};
        return 0;
    }
    if (path[0] == '/' && strcmp(path + 1, my_options.filename) == 0) {
        *stbuf = (struct stat) {.st_nlink = 2, .st_mode = S_IFREG | 0777, .st_size = (__off_t)strlen(my_options.filecontent)};
        return 0;
    }
    return -ENOENT; // On error we return (-errno) instead of setting errno.
}

// filler(buf, filename, stat, flags) fills in information about a file and appends it to buf.
int readdir_callback(
    const char* path, void* buf,
    fuse_fill_dir_t filler,
    off_t offset,
    struct fuse_file_info* fi,
    enum fuse_readdir_flags flags
) {
    (void) offset; (void) fi; (void) flags; // unused parameters
    filler(buf, ".", NULL, 0, (enum fuse_fill_dir_flags)0);
    filler(buf, "..", NULL, 0, (enum fuse_fill_dir_flags)0);
    filler(buf, my_options.filename, NULL, 0, (enum fuse_fill_dir_flags)0);
    return 0;
}

// Called after a successful open.
int read_callback(const char* path, char* buf, size_t size, off_t offset, struct fuse_file_info* fi) {
    // "/"
    if (strcmp(path, "/") == 0) {
        return -EISDIR;
    }
    print_cwd();
    // "/my_file"
    if (path[0] == '/' && strcmp(path + 1, my_options.filename) == 0) {
        size_t len = strlen(my_options.filecontent);
        if (offset >= len) {
            return 0;
        }
        size = (offset + size <= len) ? size : (len - offset);
        memcpy(buf, my_options.filecontent + offset, size);
        return size;
    }
    return -EIO;
}

// The structure of callbacks.
struct fuse_operations fuse_example_operations = {
    .getattr = getattr_callback,
    .read = read_callback,
    .readdir = readdir_callback,
};

// typedef struct {
//     char* filename;
//     char* filecontent;
//     char* log;
// } my_options_t;
struct fuse_opt opt_specs[] = {
    { "--file-name %s", offsetof(my_options_t, filename), 0 },
    { "--file-content %s", offsetof(my_options_t, filecontent), 0 },
    { "--log %s", offsetof(my_options_t, log), 0 },
    FUSE_OPT_END // A zero-filled entry, i.e. a typical zero-terminated array.
};

int main(int argc, char** argv) {
    struct fuse_args args = FUSE_ARGS_INIT(argc, argv);
    /*
     * IMPORTANT: the fields being filled in must be zero-initialized.
     * (Otherwise fuse3 may do something very bad. TODO)
     */
    fuse_opt_parse(&args, &my_options, opt_specs, NULL);
    print_cwd();
    int ret = fuse_main(args.argc, args.argv, &fuse_example_operations, NULL);
    fuse_opt_free_args(&args);
    return ret;
}
```
Let's run it in synchronous (foreground) mode: the program keeps running until `fusermount -u` is issued.
```
!mkdir fuse_c 2>&1 | grep -v "File exists" || true
!fusermount -u fuse_c
!truncate --size=0 err.txt || true
a = TInteractiveLauncher("fuse_c_example/build/fuse-example fuse_c -f "
"--file-name my_file --file-content 'My file content\n' --log `pwd`/err.txt")
%%bash
exec 2>&1 ; set -o xtrace
tree fuse_c --noreport
cat fuse_c/my_file
!fusermount -u fuse_c
a.close()
%%bash
tree fuse_c --noreport
cat err.txt
```
And now in asynchronous mode (as a daemon; there is no `-f` among the launch parameters):
```
!mkdir fuse_c 2>&1 | grep -v "File exists" || true
!fusermount -u fuse_c
!truncate --size=0 err.txt || true
a = TInteractiveLauncher("fuse_c_example/build/fuse-example fuse_c "
"--file-name my_file --file-content 'My file content\n' --log `pwd`/err.txt")
%%bash
exec 2>&1 ; set -o xtrace
tree fuse_c --noreport
cat fuse_c/my_file
fusermount -u fuse_c
a.close()
%%bash
tree fuse_c --noreport
cat err.txt
```
Ta-da, the current working directory has changed! Keep this in mind in the homework.
# <a name="hw"></a> Homework notes
* Example input for the first task:
```
2
a.txt 3
b.txt 5
AaAbBbBb
```
* In ejudge, FUSE is launched without the `-f` option, so the current directory will change and relative paths may become invalid. Recommended reading: `man 3 realpath`
1) In the FUSE tasks the main goal is to implement 3 methods (read, readdir, getattr).
For that you may need to save your data into some global variable and fetch it from there inside the callbacks.
2) In 23-1, to keep things simple, you can walk the directories on every call.
The task then reduces to looking up the specified file in each directory from the statement and picking the last of those files.
Alternatively, in the readdir case, you can run opendir/readdir/closedir on each path and build a dict of the unique files across the directories.
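That readdir-style merge, together with the `realpath` advice above, can be sketched in Python (the directories here are throwaway temp dirs created just for the demo):

```python
import os
import tempfile

# Two overlay directories; a file in a later directory shadows an earlier one.
d1, d2 = tempfile.mkdtemp(), tempfile.mkdtemp()
for d, names in ((d1, ["a.txt", "b.txt"]), (d2, ["b.txt", "c.txt"])):
    for name in names:
        open(os.path.join(d, name), "w").close()

# Resolve search roots to absolute paths up front, before a daemonized FUSE
# process changes the current working directory behind our back.
roots = [os.path.realpath(p) for p in (d1, d2)]

# readdir-style merge: later directories win, duplicate names collapse.
merged = {}
for root in roots:
    for name in sorted(os.listdir(root)):
        merged[name] = os.path.join(root, name)

print(sorted(merged))
```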
| github_jupyter |
# WIP: Comparing ICESat-2 Altimetry Elevations with DEM
This notebook compares elevations from ICESat-2 to those from a DEM.
Note that this notebook was created for a specific event using not-publicly available files.
Thus, it is provided as an example workflow but needs to be updated to use a public DEM and icepyx data read-in capabilities.
### Setup
#### The Notebook was run on ICESat2 Hackweek 2019 pangeo image
#### For full functionality,
- Please install [icepyx](https://github.com/icesat2py/icepyx), [topolib](https://github.com/ICESAT-2HackWeek/topohack), [contextily](https://github.com/darribas/contextily) using `git clone xxxxx`, `pip install -e .` workflow (see below; **you must restart your kernel after installing the packages**)
- Download the [NASA ASP](https://github.com/NeoGeographyToolkit/StereoPipeline) tarball and unzip it; we execute the commands from the notebook, using the path to the untarred bin folder for the given commands.
```
%%bash
cd ~
# git clone https://github.com/icesat2py/icepyx.git
# git clone https://github.com/ICESAT-2HackWeek/topohack.git
# git clone https://github.com/darribas/contextily.git
cd contextily
pip install -e .
cd ../topohack
pip install -e .
cd ../icepyx
pip install -e .
%cd ~
#needs to be wherever icepyx, contextily, and topolib are installed in the previous step (ideally $HOME)
# %pwd
```
### ICESat-2 product being explored : [ATL08](https://nsidc.org/data/atl08)
- Along-track heights for canopy (land and vegetation) and terrain
- Terrain heights are aggregated over every 100 m along-track interval; the output contains "h_te_best_fit: height from best fit algorithm for all photons in the range", the median height, and others. Here we use h_te_best_fit.
- See this preliminary introduction and quality assessment [paper](https://www.mdpi.com/2072-4292/11/14/1721) for more detail
## Import packages, including icepyx
```
import icepyx as ipx
import os
import shutil
import h5py
import xarray as xr
# dependencies
import getpass
#from topolib.subsetDat import subsetBBox;
from topolib import icesat2_data
import glob
import rasterio
from topolib import gda_lib
from topolib import dwnldArctic
import numpy as np
import geopandas as gpd
from multiprocessing import Pool
import contextily as ctx
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
%cd ~/icepyx/doc/examples/
```
## Preprocess #1
- Download using icepyx
### Create an ICESat-2 data object with the desired search parameters
- See the ICESat-2 DAAC Data Access notebook for more details on downloading data from the NSIDC
```
region_a = ipx.Query('ATL08', [-73.9, 10.7, -73.4, 11.1], ['2018-12-01','2019-09-01'], \
start_time='00:00:00', end_time='23:59:59')
```
## Finding and downloading data
In order to download any data from NSIDC, we must first authenticate ourselves using a valid Earthdata login (available for free on their website). This will create a valid token to interface with the DAAC as well as start an active logged-in session to enable data download. The token is attached to the data object and stored, but the session must be passed to the download function. Then we can order the granules.
### Log in to Earthdata
```
earthdata_uid = 'Jessica.scheick'
email = 'jessica.scheick@maine.edu'
region_a.earthdata_login(earthdata_uid, email)
#search for available granules
region_a.avail_granules()
region_a.granules.avail
```
### Place the order
```
region_a.order_granules(subset=False)
#region_a.order_granules(verbose=True)
#view a short list of order IDs
region_a.granules.orderIDs
```
### Download the order
Finally, we can download our order to a specified directory (which needs to have a full path but doesn't have to point to an existing directory) and the download status will be printed as the program runs. Additional information is again available by using the optional boolean keyword 'verbose'.
```
wd=%pwd
path = wd + '/download'
region_a.download_granules(path)
```
### Clean up the download folder by removing individual order folders:
```
# Clean up the download folder by removing individual granule folders
for root, dirs, files in os.walk(path, topdown=False):
for file in files:
try:
shutil.move(os.path.join(root, file), path)
except OSError:
pass
for root, dirs, files in os.walk(path):
for name in dirs:
os.rmdir(os.path.join(root, name))
```
## Preprocess #2
- Convert the data into a geopandas GeoDataFrame, which allows for basic geospatial operations
```
# glob to list of files (run block of code creating wd and path variables if starting processing here)
ATL08_list = sorted(glob.glob(path+'/*.h5'))
```
## Examine the content of one ATL08 HDF5 file
```
filename = ATL08_list[5]
with h5py.File(filename, 'r') as f:
# List all groups
pairs=[1, 2, 3]
beams=['l','r']
print("Keys: %s" % f.keys())
a_group_key = list(f.keys())[0]
#
ATL08_list
# dict containing the data entries to retrieve
dataset_dict = {'land_segments':['delta_time','longitude','latitude','atl06_quality_summary','quality','terrain_flg'], 'land_segments/terrain':['h_te_best_fit']}
#gda_lib.ATL08_to_dict(ATL08_list[0],dataset_dict)
## the data can be converted to geopandas dataframe, see ATL08_2_gdf function in topolib gda_lib
temp_gdf = gda_lib.ATL08_2_gdf(ATL08_list[0],dataset_dict)
temp_gdf.head()
%matplotlib inline
temp_gdf.plot()
len(temp_gdf)
colombia_crs = {'init':'epsg:32618'}
plot_web = {'init':'epsg:3857'}
temp_gdf.keys()
```
## Convert the list of hdf5 files into more familiar Pandas Dataframe
```
gdf_list = [(gda_lib.ATL08_2_gdf(x,dataset_dict)) for x in ATL08_list]
gdf_colombia = gda_lib.concat_gdf(gdf_list)
```
## Preprocess #3
- Visualise data footprints
```
fig,ax = plt.subplots(figsize=(10,10))
temp_web = gdf_colombia.to_crs(plot_web)
clim = np.percentile(temp_web['h_te_best_fit'].values,(2,98))
temp_web.plot('h_te_best_fit',ax=ax,s=3,legend=True,cmap='inferno',vmin=clim[0],vmax=clim[1])
ctx.add_basemap(ax=ax)
ax.set_xticks([])
ax.set_yticks([])
```
## We will use the TANDEM-X global DEM for our comparison. The resolution of the globally available product is 90 m, with *horizontal* and *vertical* accuracy better than 2 to 3 m.
- TANDEM-X DEM for the region was downloaded and preprocessed, filtered using scripts from the [tandemx](https://github.com/dshean/tandemx) repository
```
dem_file = os.path.join(wd,'supporting_files/TDM1_DEM_90m_colombia_DEM_masked_aea.tif')
hs_file = os.path.splitext(dem_file)[0]+'_hs.tif'
dem_ds = rasterio.open(dem_file)
! gdaldem hillshade $dem_file $hs_file
hs_ds = rasterio.open(hs_file)
def gdf_on_raster(gdf,ds,ax,hs_ds=None,cmap='inferno'):
gdf = gdf.to_crs(ds.crs)
xmin,ymin,xmax,ymax = ds.bounds
ndv = gda_lib.get_ndv(ds)
img = ds.read(1)
img = np.ma.masked_less_equal(img,ndv)
clim = np.nanpercentile(img,(2,98))
if hs_ds:
hs = hs_ds.read(1)
ndv = gda_lib.get_ndv(hs_ds)
hs = np.ma.masked_less_equal(hs,ndv)
ax.imshow(hs,cmap='gray',extent=[xmin,xmax,ymin,ymax])
im = ax.imshow(img,alpha=0.6,cmap=cmap,extent=[xmin,xmax,ymin,ymax])
print(clim)
else:
im = ax.imshow(img,cmap=cmap,vmin=clim[0],vmax=clim[1],extent=[xmin,xmax,ymin,ymax])
gdf.plot('p_b',ax=ax,s=1)
plt.colorbar(im,ax=ax,extend='both',label='Elevation (m)')
xmin,ymin,xmax,ymax = dem_ds.bounds
## Filter points based on DEM extent
gdf_colombia['x_atc'] = gdf_colombia['delta_time']
gdf_colombia_dem_extent = gdf_colombia.to_crs(dem_ds.crs).cx[xmin:xmax,ymin:ymax]
fig,ax = plt.subplots(figsize=(10,5))
gdf_on_raster(gdf_colombia_dem_extent,dem_ds,ax,hs_ds)
```
## Section 1
- This section demonstrates an elevation profile along one track, which has 6 beams
```
### Picking out 1 track
### check with different ATL08 inputs
test_track = ATL08_list[3]
print(test_track)
test_gdf = gda_lib.ATL08_2_gdf(test_track,dataset_dict).to_crs(dem_ds.crs).cx[xmin:xmax,ymin:ymax]
fig,ax = plt.subplots(figsize=(10,5))
gdf_on_raster(test_gdf,dem_ds,ax,hs_ds)
## Working with the track from 20190105 to show how we can plot elevation values along the collection using only ICESat-2
np.unique(test_gdf['p_b'].values)
# Limit analysis to 1 pair beam combination
mask = test_gdf['p_b']== '1.0_0.0'
test_gdf_pb = test_gdf[mask]
fig,ax = plt.subplots(figsize=(5,4))
plot_var = test_gdf_pb['h_te_best_fit'].values
ax.scatter(np.arange(len(plot_var)),plot_var,s=1)
ax.set_xlabel('Along-track id')
ax.set_ylabel('ATL08-Terrain Height')
ax.grid('--')
# or do it for all 6 pair-beam combinations
track_identifier = list(np.unique(test_gdf['p_b'].values))
fig,axa = plt.subplots(3,2,figsize=(10,10))
ax = axa.ravel()
for idx,track in enumerate(track_identifier):
mask = test_gdf['p_b']== track
test_gdf_pb = test_gdf[mask]
plot_var = test_gdf_pb['h_te_best_fit'].values
ax[idx].scatter(np.arange(len(plot_var)),plot_var,s=1)
ax[idx].set_xlabel('Along-track id')
ax[idx].set_ylabel('ATL08-Terrain Height')
ax[idx].grid('--')
ax[idx].set_title('Track: {} Beam: {}'.format(track.split('_',15)[0],track.split('_',15)[1]))
plt.tight_layout()
```
## Section 2:
- Compare ICESat-2 Elevation with that of reference DEM (in this case TANDEM-X)
### Sample elevations from the DEM at ATL08 locations using a nearest-neighbour algorithm
```
del_time,elev = gda_lib.sample_near_nbor(dem_ds,gdf_colombia_dem_extent)
gdf_colombia_dem_extent['dem_z'] = elev
```
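`gda_lib.sample_near_nbor` is not shown here; the underlying idea — snap each projected point to its nearest pixel and read that cell — can be sketched with plain numpy. The raster, its corner coordinates, and the 90 m resolution below are illustrative assumptions:

```python
import numpy as np

def sample_nearest(img, x0, y0, res, xs, ys):
    """Nearest-neighbour lookup: img is a 2-D north-up raster whose upper-left
    corner is (x0, y0), with square pixels of size `res` map units."""
    cols = np.floor((np.asarray(xs) - x0) / res).astype(int)
    rows = np.floor((y0 - np.asarray(ys)) / res).astype(int)
    return img[rows, cols]

# 3x3 demo raster with 90 m pixels, upper-left corner at (0, 270)
dem = np.arange(9, dtype=float).reshape(3, 3)
vals = sample_nearest(dem, x0=0.0, y0=270.0, res=90.0,
                      xs=[45.0, 225.0], ys=[225.0, 45.0])
print(vals)  # first point falls in row 0 / col 0, second in row 2 / col 2
```

The real implementation additionally has to handle nodata cells and points falling outside the raster bounds.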
### Plot elevation differences (ICESat-2 minus TANDEM-X) as a function of elevation
```
gdf_colombia_dem_extent['z_diff'] = gdf_colombia_dem_extent['h_te_best_fit'] - gdf_colombia_dem_extent['dem_z']
fig,ax = plt.subplots(figsize=(5,4))
# Sort elevation values
gdf_colombia_dem_extent.sort_values(by='dem_z',inplace=True)
gdf_colombia_dem_extent_filt = gdf_colombia_dem_extent[gdf_colombia_dem_extent['z_diff']<1000]
ax.scatter(gdf_colombia_dem_extent_filt.dem_z.values,gdf_colombia_dem_extent_filt.z_diff.values,s=1)
ax.set_ylim(-50,50)
ax.set_xlabel('Elevation (TANDEM-X) (m)')
ax.set_ylabel('Elevation difference (m)')
```
- The differences above may be noise or real signal (the ICESat-2 footprints date from December 2018 to March 2019, while TANDEM-X contains a mosaic of elevations acquired between 2012 and 2014)
- It's hard to make out anything from the above plot; let's try a box plot
```
dem_bins = list(np.arange(0,5500,500))
# mask out differences larger than 100 m ?
filt_lim = (-100,100)
mask = (gdf_colombia_dem_extent['z_diff']<=100) & (gdf_colombia_dem_extent['z_diff']>=-100)
gdf_colombia_dem_extent_filt_box = gdf_colombia_dem_extent[mask]
gdf_colombia_dem_extent_filt_box['bin'] = pd.cut(gdf_colombia_dem_extent_filt_box['dem_z'],bins=dem_bins)
fig,ax = plt.subplots(figsize=(5,4))
gdf_colombia_dem_extent_filt_box.boxplot(column='z_diff',by='bin',ax=ax)
ax.set_xlabel('Elevation (TANDEM-X) (m)')
ax.set_xticklabels(dem_bins)
#ax.set_ylabel('Elevation difference (m)')
ax.set_title('')
ax.set_ylabel('ICESat-2 minus TANDEM-X (m)')
#plt.tight_layout()
```
- The x labels in the plot are the lower bounds of the boxplot intervals. We see that the median differences are close to zero for most elevation ranges, with a maximum difference of about -10 m. We also see a constant negative bias across all elevation bins. This may be due to an offset between the two datasets, which might come down after coregistration.
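The per-bin medians that the boxplot summarizes can also be computed directly with a `groupby`. A sketch on synthetic numbers — the column names mirror the ones above, and the -5 m bias is injected purely for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'dem_z': rng.uniform(0, 5000, 1000),               # synthetic DEM elevations
    'z_diff': rng.normal(-5, 10, 1000),                # injected -5 m bias + noise
})
df['bin'] = pd.cut(df['dem_z'], bins=np.arange(0, 5500, 500))
medians = df.groupby('bin', observed=True)['z_diff'].median()
print(medians.round(1))  # per-bin medians hover around the injected -5 m bias
```

A table like this is often easier to quote in a report than reading medians off the boxplot whiskers.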
## Section 3
- Application of ICESat-2 as control surface for DEMs coregistration
- Or, to find offsets and align ICESat-2 tracks to a control surface
## Going fancy, include only if you want to :)
### Application of ICESat-2 as control for DEM co-registration ?
- We can use point cloud alignment techniques to align DEMs to points; for now, as a starting point, we use the transformation matrix to estimate the horizontal and vertical offset between ICESat-2 tracks and DEMs
- We will be using a flavor of the Iterative Closest Point alignment algorithm, implemented in [Ames Stereo Pipeline](https://github.com/NeoGeographyToolkit/StereoPipeline)
```
gdf_colombia_dem_extent.keys()
### Save the geodataframe in the format expected by Ames Stereo Pipeline
icesat2_pc = '/home/jovyan/icesat2/icesat2_colombia_pc.csv'
gdf_colombia_dem_extent[['latitude','longitude','h_te_best_fit']].to_csv(icesat2_pc,header=False,index=None)
gdf_colombia_dem_extent.head()
### Save the geodataframe with named columns, as expected by Ames Stereo Pipeline
icesat2_pc = '/home/jovyan/icesat2/icesat2_colombia_pc.csv'
pc_rename_dict = {'latitude':'lat','longitude':'lon','h_te_best_fit':'height_above_datum'}
gdf_colombia_dem_extent = gdf_colombia_dem_extent.rename(columns=pc_rename_dict)
#gdf_colombia_dem_extent['height_above_datum'] = gdf_colombia_dem_extent['h_te_best_fit']
gdf_colombia_dem_extent[['lon','lat','height_above_datum']].to_csv(icesat2_pc,header=True,index=None)
!export PATH="/home/jovyan/icesat2/StereoPipeline/bin:$PATH"
! ls
align_fol = '/home/jovyan/icesat2/align/run'
#max-displacement is set to 10, given ICESat-2 reported operational accuracy
pc_align_opts="--csv-format '1:lon 2:lat 3:height_above_datum' --max-displacement 10 --save-transformed-source-points --alignment-method point-to-point --datum WGS84"
!/home/jovyan/icesat2/StereoPipeline/bin/pc_align $pc_align_opts $icesat2_pc $dem_file -o $align_fol
```
- Alignment results suggest that there is an offset of ~5.4 m between the ICESat-2 points and TANDEM-X DEM, so that could have contributed to the offsets which we see above
```
## Let's rerun the analysis with the transformed DEM to see whether the alignment improved anything
## Regrid the transformed pointcloud into DEM at 90 m posting
!/home/jovyan/icesat2/StereoPipeline/bin/point2dem --tr 90 --t_srs EPSG:32618 $align_fol-trans_source.tif
gdf_colombia_dem_extent = gdf_colombia_dem_extent.loc[:,~gdf_colombia_dem_extent.columns.duplicated()]
gdf_colombia_dem_extent['height_above_datum'].values[5]
trans_dem_file = '/home/jovyan/icesat2/align/run-trans_source-DEM.tif'
trans_dem_ds = rasterio.open(trans_dem_file)
del_time,elev = gda_lib.sample_near_nbor(trans_dem_ds,gdf_colombia_dem_extent)
gdf_colombia_dem_extent['trans_dem_z'] = elev
dem_bins = list(np.arange(0,5500,500))
# mask out differences larger than 100 m ?
filt_lim = (-100,100)
gdf_colombia_dem_extent['trans_z_diff'] = gdf_colombia_dem_extent.height_above_datum - gdf_colombia_dem_extent.trans_dem_z
mask = (gdf_colombia_dem_extent['trans_z_diff']<=100) & (gdf_colombia_dem_extent['trans_z_diff']>=-100)
gdf_colombia_dem_extent_filt_box = gdf_colombia_dem_extent[mask]
gdf_colombia_dem_extent_filt_box['bin'] = pd.cut(gdf_colombia_dem_extent_filt_box['dem_z'],bins=dem_bins)
fig,ax = plt.subplots(figsize=(5,4))
gdf_colombia_dem_extent_filt_box.boxplot(column='trans_z_diff',by='bin',ax=ax)
ax.set_xlabel('Elevation (TANDEM-X) (m)')
ax.set_xticklabels(dem_bins)
ax.set_title('')
ax.set_ylabel('ICESat-2 minus TANDEM-X DEM after coregistration (m)')
```
- We see that after coregistration the bias is reduced to an extent. Note that this is a very preliminary analysis; results will improve after filtering the ATL08 points based on quality metrics and finding truly static surfaces (snow-free during the acquisition time of the ICESat-2 points)
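As a sketch of such quality filtering, one could keep only segments whose flags indicate the best class. The thresholds below are illustrative assumptions applied to the fields already requested in `dataset_dict`, not the official ATL08 recommendation:

```python
import pandas as pd

# Hypothetical segment table with the quality fields requested in dataset_dict;
# keeping only flag value 0 is an assumed "best quality" convention -- consult
# the ATL08 product documentation for the recommended thresholds.
df = pd.DataFrame({
    'h_te_best_fit':         [812.0, 955.0, 1200.0, 430.0],
    'atl06_quality_summary': [0,     1,     0,      0],
    'terrain_flg':           [0,     0,     3,      0],
})
good = df[(df['atl06_quality_summary'] == 0) & (df['terrain_flg'] == 0)]
print(len(good))  # 2 segments survive the filter
```

The same boolean masking works unchanged on the GeoDataFrames used throughout this notebook.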
#### Credits
* notebook by: [Jessica Scheick](https://github.com/JessicaS11) and [Shashank Bhushan](https://github.com/ShashankBice)
```
from imctools.converters import ome2analysis
from imctools.converters import ome2histocat
from imctools.converters import mcdfolder2imcfolder
from imctools.converters import exportacquisitioncsv
import sys
print(sys.path)
print(sys.executable)
import os
import pathlib
import shutil
import re
```
# The IMC preprocessing pipeline for multiplexed image analysis
This is a pipeline to segment IMC data using Ilastik pixel classification as well as CellProfiler.
To run it, install the conda `imctools` environment found in `Setup/conda_imctools.yml`:
-> Install conda
-> In a conda console, type: `conda env create -f setup/conda_imctools.yml`
Start a Jupyter instance in this environment to run this notebook.
This notebook will automatically download example data.
The dataset consists of zipped folders of the `.mcd` and all `.txt` files corresponding to one acquisition session.
This is my recommended data format, as it preserves all original metadata and enforces a consistent naming scheme.
Note that the `description` image name can be found in the `..._Acquisition_meta.csv` generated together with the ome tiffs
as well as in the `cpinp` folder later in the script.
After analysis, the `Image.csv` metadata file generated in CellProfiler will also contain the `Description` as well as other important metadata for each
image, such as acquisition frequency, time, location etc.
For working with `.txt` files, please look at the older examples.
For any feedback please contact: Vito, vito.zanotelli@uzh.ch
```
# the folders with the zipped acquisition files for the analysis
folders_path_inputs = ['../example_data']
# part that all considered files need to have in common
input_file_regexp = '.*.zip'
# output for OME tiffs
folder_path_base = '../analysis'
# panel
file_path_csv_panel = '../config/example_panel.csv'
csv_panel_metal = 'Metal Tag'
csv_panel_ilastik = 'ilastik'
csv_panel_full = 'full'
folder_path_base = pathlib.Path(folder_path_base)
folders_path_inputs = [pathlib.Path(f) for f in folders_path_inputs]
# parameters for resizing the images for ilastik
folder_path_analysis = folder_path_base / 'tiffs'
folder_path_ilastik= folder_path_base / 'ilastik'
folder_path_ome= folder_path_base / 'ometiff'
folder_path_cp = folder_path_base / 'cpout'
folder_path_cp_input = folder_path_base / 'cpinp'
folder_path_histocat = folder_path_base / 'histocat'
# Other output
file_path_cp_csv = folder_path_cp / 'panel.csv'
file_path_full_channels_csv = folder_path_cp_input / 'full_channelmeta.csv'
file_path_prob_channels_csv = folder_path_cp_input / 'probab_channelmeta_manual.csv'
suffix_full = '_full'
suffix_ilastik = '_ilastik'
suffix_ilastik_scale = '_s2'
suffix_mask = '_mask.tiff'
suffix_probablities = '_Probabilities'
failed_images = list()
```
Generate all the folders if necessary
```
for fol in [folder_path_base, folder_path_analysis, folder_path_ilastik,
folder_path_ome, folder_path_cp, folder_path_histocat,
folder_path_cp_input]:
if not fol.exists():
fol.mkdir(parents=True)
## This will download the example data - remove if you work with your own data!
import urllib.request
fol_example = folders_path_inputs[0]
fol_example.mkdir(exist_ok=True)
urls = [('20170905_Fluidigmworkshopfinal_SEAJa.zip',
'https://www.dropbox.com/s/awyq9p7n7dexgyt/20170905_Fluidigmworkshopfinal_SEAJa.zip?dl=1') ,
('20170906_FluidigmONfinal_SE.zip',
'https://www.dropbox.com/s/0pdt1ke4b07v7zd/20170906_FluidigmONfinal_SE.zip?dl=1')]
for fn, url in urls:
fn = fol_example / fn
if not fn.exists():
urllib.request.urlretrieve(url, fn)
```
Convert the folders containing `.mcd` acquisitions into IMC folders
```
%%time
failed_images = list()
re_fn = re.compile(input_file_regexp)
for fol in folders_path_inputs:
for fn in fol.glob('*'):
if re_fn.match(fn.name):
mcdfolder2imcfolder.mcdfolder_to_imcfolder(fn, output_folder=folder_path_ome,
create_zip=False)
```
Generate a csv with all the acquisition metadata
```
exportacquisitioncsv.export_acquisition_csv(folder_path_ome, output_folder=folder_path_cp_input)
```
Export a copy of the panel to the output folder
```
shutil.copy(file_path_csv_panel, file_path_cp_csv)
```
Convert ome.tiffs to a HistoCAT compatible format, e.g. to do some visualization and channel checking.
-> Only required if HistoCAT is used as an image browser
```
%%time
for fol in folder_path_ome.iterdir():
if fol.is_dir():
ome2histocat.omefolder_to_histocatfolder(fol, folder_path_histocat)
```
Generate the analysis stacks
```
list_analysis_stacks =[
(csv_panel_ilastik, suffix_ilastik, 0),
(csv_panel_full, suffix_full, 0)]
%%time
ome2analysis.omefolder_to_analysisfolder(folder_path_ome, folder_path_analysis, panel_csv_file=file_path_csv_panel,
analysis_stacks=(list_analysis_stacks), metalcolumn=csv_panel_metal)
```
Copy one csv containing the channel order of the full stack in to the cellprofiler input folder
```
fn = next(folder_path_analysis.glob(f'*{suffix_full}.csv'))
shutil.copy(fn, file_path_full_channels_csv)
```
Generate channel metadata for the probability stack
```
probab_meta = ["CellCenter", "CellBorder", "Background"]
with open(file_path_prob_channels_csv, 'w') as f:
f.write('\n'.join(probab_meta))
```
# Next steps
This concludes the conversion of the IMC rawdata into usable TIFFs.
The pipelines can be found in the `cp4_pipeline` folder in this repository. They were tested with `cellprofiler 4.0.6`.
The next steps are:
### A) Cellprofiler: 1_prepare_ilastik
In this module we prepare the data for Ilastik pixel classification by first removing strong outlier pixels, then scaling the images 2x, and then taking random 500x500 crops used to train the pixel classifier.
Note: for large datasets, 250x250 crops or smaller should suffice!
The following parts of this module need to be adapted:
1) File list: choose all files in the `tiff` subfolder
2) Default Output Folder: Choose the `ilastik` subfolder
No further parts need to be adapted.
On our 16-core machine this step takes ca. 5 min for the example dataset.
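The three operations this module performs can be sketched in numpy — the actual work happens inside CellProfiler, and the 99.9th-percentile clip and nearest-neighbour upscaling are simplifying assumptions on my part:

```python
import numpy as np

rng = np.random.default_rng(42)
img = rng.poisson(5, size=(600, 600)).astype(float)
img[100, 100] = 1e6                       # a strong outlier pixel (hot pixel)

# 1) remove strong outlier pixels by clipping at an assumed 99.9th percentile
img = np.minimum(img, np.percentile(img, 99.9))
# 2) scale the image 2x by pixel repetition (CellProfiler interpolates instead)
img2 = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
# 3) take a random 500x500 crop for pixel-classifier training
r = rng.integers(0, img2.shape[0] - 500)
c = rng.integers(0, img2.shape[1] - 500)
crop = img2[r:r + 500, c:c + 500]
print(crop.shape)
```

Training on small random crops instead of full images keeps interactive labeling in Ilastik responsive.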
### B) Ilastik: Train a pixel classifier
This uses the random crops generated in the last step.
1) Make a new `pixel classification project`. -> An example project that works with the example data can be found in the 'analysis' subfolder.
2) Add the `.h5` random crops: Raw data -> Add Separate Images -> Select all `.h5` images in the `ilastik` subfolder.
3) Proceed to `Feature Selection`
4) Select suitable features (or just everything >= 1 pixels)
5) Proceed to the classification:
- Add 3 labels:
- 1: Nuclei
- 2: Cytoplasm/membrane
- 3: Background
- -> For large datasets adding the labels can take a while
- Start labeling:
- The box next to `Input Data` can change the channels. What each channel corresponds to can be seen by looking in any of the `..._ilastik.csv` files in the `tiff` folder. Channel 0 corresponds to the sum of all channels, which is very useful for labeling the background.
- Use window leveling to change the contrast. Right-clicking on `Input Data` -> `Adjust Thresholds` is also very useful
- Label opinionated: if you see in the nucleus channel that two nuclei are stuck together but have a faint dip in intensity in between, label this dip as 2: Cytoplasm. Encircle nuclei with cytoplasm
- Disable `Live Update` for performance
- Frequently check the `Uncertainties`: these indicate which pixels the classifier would profit most from having labeled. A well-trained classifier has low uncertainty within class regions (e.g. nuclei) and high uncertainty at class borders (e.g. between nuclei and cytoplasm).
- If you think the classifier is well trained, export the probabilities:
- Export Settings -> Source: Probabilities -> Choose Export Image Settings:
- Convert to datatype: Unsigned Integer 16 bit
- Renormalize: check
- Format: Tiff
- File: leave default
- Export all: This generates `_Probabilities.tiff` in the `ilastik` folder. They can be checked using any image viewer
- To generate uncertainty maps (good to identify regions that need training),
run the `Convert probabilities to uncertainties` section `#For training` below. This will put uncertainties in the uncertainty folder.
-> Well trained classifiers have low uncertainty (transparent) everywhere but at class borders which should be white.
- Optional: Train again regions with high uncertainty, then proceed.
- Batch processing: -> Select raw data files -> select all `_s2.h5` files in the `tiff` folder. (sort by filetype, select all `H5` files).
-> This step takes a while and is computationally intensive!
-> Ca. 15 min on 10 cores for the example data
- Optional: use the `#For the data` probability-to-uncertainty section below to convert all probabilities to uncertainties, check whether there are any regions of high uncertainty, and optionally crop the corresponding image part in ImageJ and add it to the training data.
- Note: store the `ilastik` folder with all the random crops and the trained classifier for reproducibility reasons.
- A trained
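The probability-to-uncertainty conversion referenced above can be sketched with a simple stand-in: given per-pixel class probabilities, take one minus the winning class probability (this is an assumption about what the conversion section computes, but it captures the idea of "uncertain near class borders"):

```python
import numpy as np

# Probabilities for 3 classes (nuclei / cytoplasm / background) per pixel,
# shape (height, width, n_classes); each pixel's probabilities sum to 1.
probs = np.array([[[0.98, 0.01, 0.01],    # confident pixel inside a nucleus
                   [0.40, 0.35, 0.25]]])  # uncertain pixel near a class border
uncertainty = 1.0 - probs.max(axis=-1)
print(uncertainty)  # low for the confident pixel, high near the border
```

Rendering this array as a semi-transparent overlay reproduces the "transparent everywhere but white at class borders" picture described above.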
### C) Cellprofiler: 2_segment_ilastik
This step will segment the probabilities into masks.
Things to adapt:
1) File list: choose again all files from the `tiffs` folder
2) It is important to check the `IdentifyPrimaryObjects` step, if the segmentation settings are suitable!
This might vary strongly between cell/tissue/training and needs attention! Use the test mode and try various settings.
Also note the `smooth` step immediately before: this can also be removed; I just happen to get good results with this additional step.
3) Also the `MeasureObjectSizeShape` combined with `FilterObjects` is just some personal preference of mine, feel free to change
4) `IdentifySecondaryObjects`: Here the mask is expanded to the full cell.
5) `Rescale objects`: note that our segmentation was done on 2x upscaled images, this scales the masks down again. Note that potentially also the nuclei mask could be scaled down and further exported and used.
6) The `Default Output Path` does not need to be adapted for this module.
Note 1: Separating mask generation from mask measurement adds modularity and is thus highly recommended, as generating masks is one of the most resource-intensive steps.
### D) Cellprofiler: 3_measure_mask
This step is not necessary for `HistoCat` only analysis. If `HistoCat` should be used, use the `Generate the histocat folder with masks` section below.
#### 3_measure_mask_basic
This module measures without considering spillover correction.
1) File list: choose again all files from the `tiffs` folder
2) View Output settings: set the `Default output folder` to the `cpout` folder and the
`Default input folder` to the `cpinp` folder.
3) Metadata: update - this will automatically merge the mcd metadata .csv generated earlier in the script with your images.
4) Names and types: click update
5) `Measure Object Intensity Multichannel`: Adapt the channel numbers. Check the `_full.csv` files in the `tiffs` folder to see how many channels the stacks have and adapt accordingly.
6) `Measure Image Intensity Multichannel`: Adapt the channel numbers. Check the `_full.csv` files in the `tiffs` folder to see how many channels the stacks have and adapt accordingly.
Notes:
- In this pipeline all the intensities are scaled by `1/(2**16)`
- The mapping between channel number c1, c2, c3 corresponds to the position in the `_full.csv`s found in the `tiffs` folder.
- The original acquisition description, acquisition frequencies etc. can be found in the `Image.csv` output as `Metadata_...` columns.
- This outputs a lot of measurements that are actually of little interest - usually we only look at `meanintensity` per channel and cell.
To reduce the outputs, select in `Export To Spreadsheet` -> `Select Measurements to Export` -> Only the measurements you want (usually all Image measurements and only the `MeanIntensity` fullstack measurements).
- The `FullStack` can also be not measured, as it is almost identical to the `FullStackFiltered`.
#### 3_measure_mask_compensated
This will also have a spillover corrections step - stay tuned!
### E) Pipeline output
The pipeline output is all in the `cpout` folder.
Files and folders:
- Image.csv: Image level metadata
- var_Image.csv: Metadata for the columns in Image.csv.
This contains also metadata from the IMC such as acquisition coordinates.
- {object}.csv: eg cell.csv, contains cell slice level measurements
- var_{object}.csv: eg var_cell.csv: contains metadata for the object measurements
- panel.csv: a copy of the panel used for the input
- Object relationships.csv: Object neighbourhood and other relationships
- Experiment.csv: Metadata about the actual measurement run (eg pipeline used,...)
## Generate the histocat folder with masks
```
%%time
for fol in folder_path_ome.glob('*'):
ome2histocat.omefolder_to_histocatfolder(fol, folder_path_histocat,
mask_folder=folder_path_analysis, mask_suffix=suffix_mask, dtype='uint16')
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from chmp.bayes import Model, sample_latent, sample_prior
from chmp.ds import mpl_set, get_color_cycle, colormap, define
from chmp.experiment import Loop
from chmp.ml import get_variables, Sample
```
# Implicit Variational Inference using GAN-style training
Based on:
- D Tran, R Ranganath, DM Blei, ["Hierarchical Implicit Models and Likelihood-Free Variational Inference"](https://arxiv.org/abs/1702.08896), 2017.
- F Huszár, ["Variational Inference using Implicit Models"](http://www.inference.vc/variational-inference-using-implicit-models/).
$$
\begin{align}
\mathcal{L}_q
&= \mathbb{E}_{q(z)} \log \frac{p(x, z)}{q(z)} \\
&= \mathbb{E}_{q(z)} \log p(x|z) + \mathbb{E}_{q(z)} \log \frac{p(z)}{q(z)} \\
&\approx \mathbb{E}_{q(z)} \log p(x|z) + \mathbb{E}_{q(z)} r(z)
\end{align}
$$
Where $r$ *maximizes*:
$$
\mathcal{L}_r =
\mathbb{E}_{p(z)} \big[ \log \sigma(r(z)) \big] +
\mathbb{E}_{q(z)} \big[ \log\big( 1 - \sigma(r(z)) \big) \big]
$$
At optimality $r^\star(z) = \log p(z) - \log q(z)$.
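The optimality condition follows by maximizing $\mathcal{L}_r$ pointwise in $r(z)$, using $\frac{d}{dr}\log\sigma(r) = 1 - \sigma(r)$ and $\frac{d}{dr}\log(1 - \sigma(r)) = -\sigma(r)$:

$$
\frac{\partial \mathcal{L}_r}{\partial r(z)}
\propto p(z)\big(1 - \sigma(r(z))\big) - q(z)\,\sigma(r(z)) = 0
\quad\Rightarrow\quad
\sigma\big(r^\star(z)\big) = \frac{p(z)}{p(z) + q(z)}
\quad\Rightarrow\quad
r^\star(z) = \log \frac{p(z)}{q(z)}
$$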
## Example: Gaussian
Target density:
$$
p(x) = \mathcal{N}(x | \mu_p = 5, \sigma_p = 2)
$$
Generators:
$$
x = \mu_q + \sigma_q \epsilon \sim q(x) \\
\epsilon \sim \mathcal{N}(0, 1)
$$
Log-ratio approximation:
$$
r(x) = b_0 + b_1 x + b_2 x^2
$$
Note that $r(x)$ can fit the log ratio exactly.
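This is because the log ratio of two Gaussian densities is itself quadratic in $x$:

$$
\log \frac{\mathcal{N}(x \mid \mu_p, \sigma_p)}{\mathcal{N}(x \mid \mu_q, \sigma_q)}
= \log \frac{\sigma_q}{\sigma_p}
+ \frac{(x - \mu_q)^2}{2 \sigma_q^2}
- \frac{(x - \mu_p)^2}{2 \sigma_p^2}
$$

so suitable coefficients $(b_0, b_1, b_2)$ reproduce it exactly.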
```
tf.reset_default_graph()
n_samples = 10
p_ = tf.random_normal((n_samples,), mean=5.0, stddev=2.0)
def sample_q():
with tf.variable_scope('q', reuse=tf.AUTO_REUSE):
mu_ = tf.get_variable('mu', initializer=0.0, dtype=tf.float32)
sigma_ = tf.nn.softplus(tf.get_variable('sigma', initializer=1.0, dtype=tf.float32))
return mu_ + sigma_ * tf.random_normal((n_samples,))
def apply_r(x):
with tf.variable_scope('r', reuse=tf.AUTO_REUSE):
b0_ = tf.get_variable('b', initializer=0.0, dtype=tf.float32)
b1_ = tf.get_variable('b1', initializer=1.0, dtype=tf.float32)
b2_ = tf.get_variable('b2', initializer=0.5, dtype=tf.float32)
return b0_ + b1_ * x + b2_ * (x ** 2.0)
q_ = sample_q()
ratio_loss_ = -tf.reduce_mean(
tf.log(tf.sigmoid(apply_r(p_))) +
tf.log(1 - tf.sigmoid(apply_r(sample_q())))
)
kl_loss_ = -tf.reduce_mean(apply_r(sample_q()))
train_q_ = tf.train.AdamOptimizer(0.01).minimize(kl_loss_, var_list=get_variables('q/'))
train_r_ = tf.train.AdamOptimizer(0.05).minimize(ratio_loss_, var_list=get_variables('r/'))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for _ in range(10_000):
sess.run(train_r_)
sess.run(train_q_)
q = [sess.run(q_) for _ in range(100)]
p = [sess.run(p_) for _ in range(100)]
c0, c1 = get_color_cycle(2)
plt.hist(np.reshape(q, -1), bins=21, density=True, alpha=0.3, label='x ~ q', color=c1)
plt.hist(np.reshape(p, -1), bins=21, density=True, alpha=0.3, label='x ~ p', color=c0)
t = np.linspace(-3, 13, 100)
plt.plot(
t, np.exp(-0.5 * (t - 5.) ** 2.0 / (4.)) / np.sqrt(2. * np.pi * 4.),
label='p(x)', color=c0,
)
mpl_set(xlabel='x', ylabel='Frequency', legend=True)
pass
```
## Mixture of Gaussians
```
tf.reset_default_graph()
n_samples = 10
u_ = tf.cast(tf.random_uniform([n_samples]) > 0.4, tf.float32)
p_ = (
u_ * tf.random_normal([n_samples], mean=-2, stddev=1.0) +
(1 - u_) * tf.random_normal([n_samples], mean=4.0, stddev=0.5)
)
def sample_q():
with tf.variable_scope('q', reuse=tf.AUTO_REUSE):
mu_ = tf.get_variable('mu', initializer=np.random.uniform(-5., 5.0, size=(4,)).astype(np.float32), dtype=tf.float32)
sigma_ = tf.nn.softplus(tf.get_variable('sigma', initializer=5.0 * np.ones(4, dtype=np.float32), dtype=tf.float32))
theta_ = tf.get_variable('theta', initializer=0.1 * np.ones(4, dtype=np.float32), dtype=tf.float32)
u_ = tf.contrib.distributions.RelaxedOneHotCategorical(0.1, logits=theta_).sample([n_samples])
mu_ = tf.expand_dims(mu_, axis=0)
sigma_ = tf.expand_dims(sigma_, axis=0)
return tf.reduce_sum(
u_ * (mu_ + sigma_ * tf.random_normal((n_samples, 4))),
axis=-1,
)
b_init_ = np.random.uniform(-1e-3, +1e-3, size=(3, 10)).astype(np.float32)
mu_init_ = np.random.uniform(-10., 10., size=(10,)).astype(np.float32)
sigma_init_ = (0.01 + np.random.normal(size=(10,)) ** 2.0).astype(np.float32)
def apply_r(x):
with tf.variable_scope('r', reuse=tf.AUTO_REUSE):
b0_ = tf.get_variable('b0', initializer=[1., 0.01, 0.01], dtype=tf.float32)
b_ = tf.get_variable('b', initializer=b_init_, dtype=tf.float32)
mu_ = tf.get_variable('mu', initializer=mu_init_, dtype=tf.float32)
sigma_ = tf.get_variable('sigma', initializer=sigma_init_, dtype=tf.float32)
x1 = tf.expand_dims(x, axis=-1)
x2 = tf.expand_dims(x1, axis=-1)
b_ = tf.expand_dims(b_, axis=0)
n = tf.expand_dims(tf.expand_dims(tf.range(3, dtype=tf.float32), axis=0), axis=-1)
mu_ = tf.expand_dims(mu_, axis=0)
sigma_ = tf.expand_dims(sigma_, axis=0)
base = b0_[0] + b0_[1] * x + b0_[2] * x ** 2.
delta = tf.reduce_mean(
tf.reduce_sum(b_ * (x2 - mu_) ** n, axis=1) * (tf.exp(-0.5 * (x1 - mu_) ** 2.0 / sigma_ ** 2.0) / sigma_),
axis=1,
)
return base + delta
q_ = sample_q()
ratio_loss_ = -tf.reduce_mean(
tf.log(tf.sigmoid(apply_r(p_))) +
tf.log(1 - tf.sigmoid(apply_r(sample_q())))
)
kl_loss_ = -tf.reduce_mean(apply_r(sample_q()))
train_q_ = tf.train.AdamOptimizer(0.01).minimize(kl_loss_, var_list=get_variables('q/'))
train_r_ = tf.train.AdamOptimizer(0.1).minimize(ratio_loss_, var_list=get_variables('r/'))
t_ = tf.linspace(-10., 10., 100)
r_ = apply_r(t_)
ratios = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for loop, iteration in Loop.over(range(10_000)):
if iteration % 200 == 0:
ratios.append(sess.run([t_, r_]))
sess.run(tf.verify_tensor_all_finite(q_, 'nans detected'))
sess.run(train_r_)
sess.run(train_q_)
print(f'{loop}'.ljust(120), end='\r')
q = [sess.run(q_) for _ in range(100)]
p = [sess.run(p_) for _ in range(100)]
ratios = np.asarray(ratios)
c0, c1 = get_color_cycle(2)
plt.figure(figsize=(16, 4))
plt.subplot(1, 3, 1)
plt.hist(np.reshape(q, -1), range=(-10, +10), bins=31, density=True, alpha=0.3, label='x ~ q', color=c1)
plt.hist(np.reshape(p, -1), range=(-10, +10), bins=31, density=True, alpha=0.3, label='x ~ p', color=c0)
t = np.linspace(-10, 10, 100)
plt.plot(
t,
0.6 * np.exp(-0.5 * (t + 2.) ** 2.0) / np.sqrt(2. * np.pi) +
0.4 * np.exp(-0.5 * ((t - 4) / 0.5) ** 2.0) / np.sqrt(2.0 * np.pi * 0.5 ** 2.0),
label='p(x)', color=c0,
)
mpl_set(xlabel='x', ylabel='Frequency', legend=True)
plt.subplot(1, 3, 2)
plt.plot(np.sort(np.reshape(p, -1)), np.linspace(0, 1, np.size(p)), label='p', color=c0)
plt.plot(np.sort(np.reshape(q, -1)), np.linspace(0, 1, np.size(q)), label='q', color=c1)
plt.axvline(x=-2, color='0.5')
plt.axvline(x=4, color='0.5')
mpl_set(xlabel='x', ylabel='Empirical CDF', legend=True)
plt.subplot(1, 3, 3)
for color, ratio in zip(colormap(range(len(ratios))), ratios):
plt.plot(ratio[0], ratio[1], color=color, alpha=0.2)
plt.plot(ratios[-1, 0], ratios[-1, 1], color='0.2', label='final')
plt.axhline(y=0, c='0.5')
mpl_set(xlabel='x', ylabel='Estimated log ratio r(x)', legend=True)
pass
```
## Linear regression
```
n_features = 10
n_samples = 100
np.random.seed(1234)
w = np.random.normal(scale=4, size=n_features)
x = np.random.uniform(-10, +10, size=(n_samples, n_features))
y = x @ np.random.normal(w, scale=0.2)
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.bar(np.arange(len(w)), w)
plt.subplot(1, 2, 2)
plt.hist(y - x @ w)
pass
def build_implicit_losses(model, discriminator):
model_scope = model.build(ensure_loss=False, latent_strategy=sample_prior)
inference_scope = model.build(ensure_loss=False, latent_strategy=sample_latent)
if model_scope['loss'] is not None or inference_scope['loss'] is not None:
raise RuntimeError('cannot handle custom losses ...')
latent = {*model_scope['latent'], *inference_scope['latent']}
observed = {*model_scope['observed'], *inference_scope['observed']} & {*model_scope['p'], *inference_scope['p']}
ratio_loss = 0
for k in latent:
ratio_loss += tf.reduce_mean(
tf.log(tf.sigmoid(discriminator[k](model_scope['latent'][k]))) +
tf.log(1 - tf.sigmoid(discriminator[k](inference_scope['latent'][k])))
)
kl_loss = 0
for k in observed:
px = inference_scope['p'][k]
x = inference_scope['observed'][k]
kl_loss += tf.reduce_mean(px.log_prob(x))
for k in latent:
kl_loss += tf.reduce_mean(discriminator[k](inference_scope['latent'][k]))
return -kl_loss, -ratio_loss
tf.reset_default_graph()
def discriminator(weights):
with tf.variable_scope('discriminator', reuse=tf.AUTO_REUSE):
x = tf.layers.dense(weights, 10, activation=tf.nn.relu, name='layer1')
x = tf.layers.dense(x, 10, activation=tf.nn.relu, name='layer2')
x = tf.layers.dense(x, 1, name='layer3')
return x
with Model() as model:
@model.observe
def _(s):
s.x = tf.placeholder(name='x', dtype=tf.float32, shape=(n_samples, n_features))
s.y = tf.placeholder(name='y', dtype=tf.float32, shape=n_samples)
@model.define
def _(s):
s.p.z = tf.distributions.Normal(loc=tf.zeros([n_samples, n_features]), scale=2.0)
s.p.y = tf.distributions.Normal(tf.reduce_sum(s.x * s.z, axis=1), 1.0)
@model.inference
def _(s):
pz = tf.distributions.Normal(
tf.get_variable('mean', n_features, dtype=tf.float32),
tf.nn.softplus(tf.get_variable('scale', n_features, dtype=tf.float32)),
)
s.q.z = Sample(pz.sample(n_samples))
kl_loss_, ratio_loss_ = build_implicit_losses(model, dict(z=discriminator))
x_, y_, z_ = model.get('x', 'y', 'z')
# NOTE: supplying var_list is important
train_r_ = tf.train.AdamOptimizer(0.01).minimize(ratio_loss_, var_list=get_variables('discriminator'))
train_q_ = tf.train.AdamOptimizer(0.001).minimize(kl_loss_, var_list=get_variables('inference'))
losses = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for loop, iteration in Loop.over(range(20_000)):
sess.run([train_r_])
sess.run(train_q_, {y_: y, x_: x})
if iteration % 50 == 0:
_, kl_loss, ratio_loss = sess.run([
tf.verify_tensor_all_finite(z_, 'nans detected'), kl_loss_, ratio_loss_
], {y_: y, x_: x})
losses.append((kl_loss, ratio_loss))
print(f'{loop}'.ljust(120), end='\r')
z = sess.run(z_)
losses = np.asarray(losses)
@define
def _():
c0, c1 = get_color_cycle(2)
plt.figure(figsize=(16, 4))
plt.subplot(1, 3, 1)
plt.bar(np.arange(n_features) - 0.2, np.mean(z, axis=0), width=0.4, label='mean inferred')
plt.bar(np.arange(n_features) + 0.2, w, width=0.4, label='true')
plt.axhline(y=0, color='0.5', alpha=0.5, lw=0.75)
mpl_set(legend=True, xlabel='Feature', ylabel='Weight')
plt.subplot(1, 3, 2)
plt.violinplot(z - w[None, :])
plt.axhline(y=0, color='0.5', alpha=0.5, lw=0.75)
mpl_set(xlabel='Feature', ylabel='Residual weight (inferred - true)')
plt.subplot(1, 3, 3)
plt.plot(losses[:, 0], color=c0)
plt.yscale('log')
plt.twinx()
plt.plot(losses[:, 1], color=c1)
plt.yscale('log')
```
# Implicit Variational Inference using Denoisers
Based on F Huszár, ["Variational Inference using Implicit Models IV"](http://www.inference.vc/variational-inference-using-implicit-models-part-iv-denoisers-instead-of-discriminators/) (2017).
Train a denoiser $F(z) = z - r(z)$ that maps a corrupted sample from the posterior, $z + \epsilon$, back to the original value $z$. It is trained via the loss
$$
\begin{align}
\mathcal{L}_r
&= \int\mathrm{d}z\; \mathrm{d}\epsilon\; q(z) f_\sigma(\epsilon)\; |F(z + \epsilon) - z|^2 \\
&= \int\mathrm{d}z\; \mathrm{d}\epsilon\; q(z) f_\sigma(\epsilon)\; |r(z + \epsilon) - \epsilon|^2
\end{align}
$$
where $f_\sigma(\epsilon)$ is a zero-centered Gaussian with standard deviation $\sigma$. The optimal $r$ behaves asymptotically (for small $\sigma$) as:
$$
\begin{align}
F(z) &\approx z + \sigma^2 \frac{\partial}{\partial z} \log q(z) \\
r(z) &\approx -\sigma^2 \frac{\partial}{\partial z} \log q(z)
\end{align}
$$
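In fact, with the noise-smoothed density $q_\sigma = q * f_\sigma$ in place of $q$, the relation $F(y) = y + \sigma^2 \,\partial_y \log q_\sigma(y)$ is exact (Tweedie's formula). For a Gaussian $q$ the optimal denoiser is linear, which gives a quick numerical sanity check independent of the TensorFlow code below; all names and parameter values here are local to this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, s, sigma = 1.5, 2.0, 0.5

# Draw z ~ q = N(mu, s^2) and corrupt: y = z + eps, eps ~ N(0, sigma^2)
z = rng.normal(mu, s, size=200_000)
y = z + rng.normal(0.0, sigma, size=z.shape)

# Least-squares linear denoiser F(y) = a*y + b minimizing E|F(y) - z|^2
A = np.stack([y, np.ones_like(y)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, z, rcond=None)

# Tweedie: F(y) = y + sigma^2 * d/dy log q_sigma(y) with q_sigma = N(mu, s^2 + sigma^2),
# which for this Gaussian case gives the closed-form coefficients below
a_true = s**2 / (s**2 + sigma**2)
b_true = sigma**2 * mu / (s**2 + sigma**2)
print(a, a_true, b, b_true)
```

With enough samples, the fitted coefficients match the closed form to a few decimal places.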
Reparametrize $q(z)$ as $z = t(\eta)$ with $\eta \sim p(\eta)$, so that $z \sim q(z)$.
Using these results, we can rewrite the gradient of the VI objective as:
$$
\begin{align}
\frac{\partial}{\partial \phi} \mathcal{L}_q
&= \frac{\partial}{\partial \phi} \mathbb{E}_{q(z)} \log \frac{p(x, z)}{q(z)} \\
&= \mathbb{E}_{p(\eta)} \left[
\frac{\partial}{\partial \phi} \log p(x, t(\eta)) +
\frac{1}{\sigma^2} r(t(\eta)) \cdot \frac{\partial}{\partial \phi} t(\eta)
\right]
\end{align}
$$
```
n_features = 10
n_samples = 100
np.random.seed(1234)
w = np.random.normal(scale=4, size=n_features)
x = np.random.uniform(-10, +10, size=(n_samples, n_features))
y = x @ np.random.normal(w, scale=0.2)
def build_denoiser_losses(model, denoiser, sigma=0.1):
scope = model.build(ensure_loss=False, latent_strategy=sample_latent)
if scope['loss'] is not None:
raise RuntimeError('cannot handle custom losses ...')
denoiser_loss = 0
for k in scope['latent']:
z = scope['latent'][k]
eps = tf.random_normal(tf.shape(z), stddev=sigma)
denoised = denoiser[k](z + eps) - eps
denoiser_loss += tf.reduce_mean(denoised * denoised)
kl_loss = 0
for k in {*scope['observed']} & {*scope['p']}:
px = scope['p'][k]
x = scope['observed'][k]
kl_loss += tf.reduce_mean(px.log_prob(x))
for k in scope['latent']:
pz = scope['p'][k]
z = scope['latent'][k]
kl_loss += tf.reduce_mean(pz.log_prob(z))
kl_loss += tf.reduce_mean(tf.stop_gradient(denoiser[k](z)) * z) / (sigma ** 2.0)
return -kl_loss, denoiser_loss
tf.reset_default_graph()
def denoiser(weights):
with tf.variable_scope('denoiser', reuse=tf.AUTO_REUSE):
x = tf.layers.dense(weights, 20, activation=tf.nn.relu, name='layer1')
x = tf.layers.dense(x, 20, activation=tf.nn.relu, name='layer2')
x = tf.layers.dense(x, n_features, name='layer3')
return x
with Model() as model:
@model.observe
def _(s):
s.x = tf.placeholder(name='x', dtype=tf.float32, shape=(n_samples, n_features))
s.y = tf.placeholder(name='y', dtype=tf.float32, shape=n_samples)
@model.define
def _(s):
s.p.z = tf.distributions.Normal(loc=tf.zeros([n_samples, n_features]), scale=2.0)
s.p.y = tf.distributions.Normal(tf.reduce_sum(s.x * s.z, axis=1), 1.0)
@model.inference
def _(s):
pz = tf.distributions.Normal(
tf.get_variable('mean', n_features, dtype=tf.float32),
tf.nn.softplus(tf.get_variable('scale', n_features, dtype=tf.float32)),
)
s.q.z = Sample(pz.sample(n_samples))
kl_loss_, denoiser_loss_ = build_denoiser_losses(model, dict(z=denoiser))
x_, y_, z_ = model.get('x', 'y', 'z')
# NOTE: supplying var_list is important
train_r_ = tf.train.AdamOptimizer(0.01).minimize(denoiser_loss_, var_list=get_variables('denoiser'))
train_q_ = tf.train.AdamOptimizer(0.001).minimize(kl_loss_, var_list=get_variables('inference'))
losses = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for loop, iteration in Loop.over(range(20_000)):
sess.run([train_r_])
sess.run(train_q_, {y_: y, x_: x})
if iteration % 50 == 0:
_, kl_loss, denoiser_loss = sess.run([
tf.verify_tensor_all_finite(z_, 'nans detected'), kl_loss_, denoiser_loss_
], {y_: y, x_: x})
losses.append((kl_loss, denoiser_loss))
print(f'{loop}'.ljust(120), end='\r')
z = sess.run(z_)
losses = np.asarray(losses)
@define
def _():
c0, c1 = get_color_cycle(2)
plt.figure(figsize=(16, 4))
plt.subplot(1, 3, 1)
plt.bar(np.arange(n_features) - 0.2, np.mean(z, axis=0), width=0.4, label='mean inferred')
plt.bar(np.arange(n_features) + 0.2, w, width=0.4, label='true')
plt.axhline(y=0, color='0.5', alpha=0.5, lw=0.75)
mpl_set(legend=True, xlabel='Feature', ylabel='Weight')
plt.subplot(1, 3, 2)
plt.violinplot(z - w[None, :])
plt.axhline(y=0, color='0.5', alpha=0.5, lw=0.75)
mpl_set(xlabel='Feature', ylabel='Residual weight (inferred - true)')
plt.subplot(1, 3, 3)
plt.plot(losses[:, 0], color=c0)
plt.yscale('log')
plt.twinx()
plt.plot(losses[:, 1], color=c1)
plt.yscale('log')
```
# GAN-like inference for entropy only
Approximate the entropy term as a ratio estimation problem:
$$
\begin{eqnarray}
\mathcal{L}_q
&=& \mathbb{E}_{q(z)} \left[ \log \frac{p(x|z) p(z)}{q(z)} \right] \\
&=& \mathbb{E}_{q(z)} \left[ \log p(x|z) + \log \frac{p(z)}{t(z)} + \log \frac{t(z)}{q(z)} \right] \\
&\approx& \mathbb{E}_{q(z)} \left[ \log p(x|z) + \log \frac{p(z)}{t(z)} + \log r(z) \right]
\end{eqnarray}
$$
Determine $r$ from the contrastive criterion:
$$
\begin{eqnarray}
\mathcal{L}_r
&=& \mathbb{E}_{t(z)} \big[ \log \sigma(r(z)) \big] +
\mathbb{E}_{q(z)} \big[ \log\big( 1 - \sigma(r(z)) \big) \big]
\end{eqnarray}
$$
Fit $t$ by maximum likelihood, e.g., a Gaussian approximation to the posterior:
$$
\begin{eqnarray}
\mathcal{L}_t
&=& \mathbb{E}_{q(z)} \log t(z)
\end{eqnarray}
$$
```
# https://github.com/LMescheder/AdversarialVariationalBayes/blob/master/notebooks/Stan-example.ipynb
# copied from https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started
# NOTE the distribution is bimodal, since tau can be positive or negative.
# Accordingly, eta will change sign too.
model_code = """
data {
int<lower=0> J;
real y[J];
real<lower=0> sigma[J];
}
parameters {
real mu;
real tau;
real eta[J];
}
transformed parameters {
real theta[J];
for (j in 1:J) {
theta[j] = mu + tau * eta[j];
}
}
model {
y ~ normal(theta, sigma);
eta ~ normal(0, 1);
mu ~ normal(0, 1);
tau ~ normal(0, 1);
}
"""
data = dict(
J=8,
y=[2.8, 0.8, -0.3, 0.7, -0.1, 0.1, 1.8, 1.2],
sigma=[0.8, 0.5, 0.8, 0.6, 0.5, .6, 0.5, 0.4],
)
import pystan
m = pystan.StanModel(model_code=model_code)
# use settings from Adversarial Variational Bayes:
# Unifying Variational Autoencoders and Generative Adversarial Networks
trace = m.sampling(data=data, iter=500_000, thin=50)
plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.hist2d(trace['mu'], trace['tau'], bins=(31, 31), range=((-1, +3), (-3, +3)))
mpl_set(xlabel='mu', ylabel='tau')
plt.subplot(1, 2, 2)
plt.hist2d(trace['tau'], trace['eta'][:, 0], bins=(31, 31), range=((-3, +3), (-3, +3)))
mpl_set(xlabel='tau', ylabel='eta_0')
pass
def gaussian_reference(items):
with tf.variable_scope('reference', reuse=tf.AUTO_REUSE):
return {
k: tf.distributions.Normal(
loc=tf.get_variable(name=f'{k}_loc', initializer=tf.zeros_like(v)),
scale=tf.nn.softplus(tf.get_variable(name=f'{k}_scale', initializer=tf.ones_like(v))),
)
for k, v in items.items()
}
def build_approx_entropy_losses(model, discriminator, reference=gaussian_reference):
scope = model.build(ensure_loss=False, latent_strategy=sample_latent)
# TODO: make this configurable
reference_dists = reference(scope['latent'])
reference_samples = {k: p.sample() for k, p in reference_dists.items()}
if scope['loss'] is not None:
raise RuntimeError('cannot handle custom losses ...')
latent = {*scope['latent']}
observed = {*scope['observed']} & {*scope['p']}
# the discriminator acts on all latent variables jointly, so a single term suffices
ratio_loss = tf.reduce_mean(
tf.log(tf.sigmoid(discriminator(reference_samples))) +
tf.log(1 - tf.sigmoid(discriminator(scope['latent'])))
)
kl_loss = 0
for k in observed:
px = scope['p'][k]
x = scope['observed'][k]
kl_loss += tf.reduce_mean(px.log_prob(x))
for k in latent:
x = scope['latent'][k]
px = scope['p'][k]
tx = reference_dists[k]
kl_loss += tf.reduce_mean(px.log_prob(x)) - tf.reduce_mean(tx.log_prob(x))
# add the entropy term approximation relative to the reference
kl_loss += tf.reduce_mean(discriminator(scope['latent']))
reference_loss = 0
for k in reference_dists:
reference_loss += tf.reduce_mean(reference_dists[k].log_prob(scope['latent'][k]))
return -kl_loss, -ratio_loss, -reference_loss
tf.reset_default_graph()
def discriminator(items):
with tf.variable_scope('discriminator', reuse=tf.AUTO_REUSE):
x = tf.concat([[items['mu'], items['tau']], items['eta']], axis=0)
x = tf.reshape(x, [1, 10])
x = tf.layers.dense(x, 10, activation=tf.nn.relu, name='layer1')
x = tf.layers.dense(x, 10, activation=tf.nn.relu, name='layer2')
x = tf.layers.dense(x, 1, name='layer3')
return x
with Model() as model:
@model.observe
def _(s):
s.y = tf.placeholder(name='y', dtype=tf.float32, shape=[1, 8])
s.sigma = tf.placeholder(name='sigma', dtype=tf.float32, shape=[1, 8])
@model.define
def _(s):
s.p.eta = tf.distributions.Normal(loc=tf.zeros([8]), scale=1.0)
s.p.mu = tf.distributions.Normal(loc=0.0, scale=1.0)
s.p.tau = tf.distributions.Normal(loc=0.0, scale=1.0)
s.p.y = tf.distributions.Normal(
loc=s.mu + s.tau * s.eta,
scale=s.sigma,
)
@model.inference
def _(s):
x = tf.random_normal(shape=[10, 1])
x = tf.layers.dense(x, 10, activation=tf.nn.relu)
u = tf.random_normal(shape=[10, 1])
u = tf.layers.dense(u, 10, activation=tf.nn.sigmoid)
s.q.eta = Sample(u[:8, 0] * x[:8, 0])
s.q.mu = Sample(u[8, 0] * x[8, 0])
s.q.tau = Sample(u[9, 0] * x[9, 0])
kl_loss_, ratio_loss_, reference_loss_ = build_approx_entropy_losses(model, discriminator)
y_, sigma_, eta_, mu_, tau_ = model.get('y', 'sigma', 'eta', 'mu', 'tau')
# NOTE: supplying var_list is important
train_r_ = tf.train.AdamOptimizer(0.01).minimize(ratio_loss_, var_list=get_variables('discriminator'))
train_q_ = tf.train.AdamOptimizer(0.001).minimize(kl_loss_, var_list=get_variables('inference'))
train_reference_ = tf.train.AdamOptimizer(0.001).minimize(reference_loss_, var_list=get_variables('reference'))
```
# Misc
## Cauchy Reparametrization
```
gamma = 0.5
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
u = np.random.uniform(size=10_000)
x = gamma * np.tan(np.pi * (u - 0.5))
_x = np.linspace(-gamma * 10, gamma * 10, 200)
_p = 1.0 / (np.pi * gamma * (1 + (_x / gamma) ** 2.0))
plt.hist(x, range=(-gamma * 10, gamma * 10), bins=31, density=True, alpha=0.3)
plt.plot(_x, _p)
mpl_set(xlabel='x', ylabel='p(x)', title='Cauchy')
plt.subplot(1, 2, 2)
u = np.random.uniform(size=10_000)
x = gamma * np.tan(np.pi * 0.5 * u)
_x = np.linspace(0, gamma * 10, 200)
_p = 2.0 / (np.pi * gamma * (1 + (_x / gamma) ** 2.0))
plt.hist(x, range=(0, gamma * 10), bins=30, density=True, alpha=0.3)
plt.plot(_x, _p)
mpl_set(xlabel='x', ylabel='p(x)', title='Half Cauchy')
```
## Ratio Estimation Derivation
Note: $r = r(z)$
$$
\begin{eqnarray}
\sigma(r) + \sigma(-r)
&=& \frac{1}{1 + e^{-r}} + \frac{1}{1 + e^{+r}}
\\
&=& \frac{1 + e^{-r} + 1 + e^{+r} }{(1 + e^{-r}) (1 + e^{+r})}
\\
&=& 1
\\
\frac{\partial}{\partial r} \log \sigma(r)
&=& \frac{e^{-r}}{1 + e^{-r}}
\\
&=& \sigma(-r)
\\
\frac{\partial}{\partial r} \log \big( 1 - \sigma(r) \big)
&=& \frac{\partial}{\partial r} \log \sigma(-r)
\\
&=& -\sigma(r)
\end{eqnarray}
$$
$$
\begin{eqnarray}
\frac{\partial}{\partial r} \int \mathrm{d}z\; \left[
p(z) \log \sigma(r) + q(z) \log \big( 1 - \sigma(r) \big)
\right] &=& 0
\\
\int \mathrm{d}z\; \left[ p(z) \sigma(-r) - q(z) \sigma(r) \right] &=& 0
\\
\int \mathrm{d}z\; \left[ p(z) \sigma(-r) - q(z) (1 - \sigma(-r))\right] &=& 0
\\
p(z) \sigma(-r) - q(z) (1 - \sigma(-r)) &=& 0
\\
(p(z) + q(z)) \sigma(-r) &=& q(z)
\\
\frac{1}{1 + e^r} &=& \frac{q(z)}{p(z) + q(z)}
\\
e^r &=& \frac{p(z)}{q(z)}
\\
r &=& \log \frac{p(z)}{q(z)}
\end{eqnarray}
$$
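The fixed point above can be checked numerically: train a logistic classifier between samples of $p$ and $q$ and compare its logit with the analytic log ratio. A minimal NumPy sketch, independent of the TensorFlow code in this notebook; the two Gaussians and the step size are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
zp = rng.normal(0.0, 1.0, n)   # samples from p = N(0, 1)
zq = rng.normal(1.0, 1.0, n)   # samples from q = N(1, 1)

# Logistic regression r(z) = w*z + c, trained to classify p (label 1) vs q (label 0)
z = np.concatenate([zp, zq])
t = np.concatenate([np.ones(n), np.zeros(n)])
w, c = 0.0, 0.0
for _ in range(2000):
    logits = w * z + c
    g = t - 1.0 / (1.0 + np.exp(-logits))   # gradient of the mean log-likelihood
    w += 0.1 * np.mean(g * z)
    c += 0.1 * np.mean(g)

# Analytically, log p(z)/q(z) = 0.5 - z for these two Gaussians,
# so the learned logit should approach w = -1, c = 0.5
print(w, c)
```

The learned coefficients converge to the analytic log ratio, confirming that the contrastive criterion recovers $r = \log p/q$.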

This notebook shows how **CKG** can be used to download data from the Proteomics Identifications Database - PRIDE - (https://www.ebi.ac.uk/pride/) and quickly format it so you can start analyzing it with the functionality in the analytics core.
```
import os
import ckg.ckg_utils as ckg_utils
from ckg.graphdb_builder import builder_utils
from ckg.graphdb_builder.experiments.parsers import proteomicsParser
from ckg.analytics_core.analytics import analytics
```
##### CKG path
```
ckg_location = '/Users/albertosantos/Development/Clinical_Proteomics_Department/ClinicalKnowledgeGraph(CKG)/code'
```
##### Define where the data should be downloaded
```
# note: no leading slash in the second argument, or os.path.join would discard ckg_location
analysis_dir = os.path.join(ckg_location, 'data/tmp/Deshmukh2019')
ckg_utils.checkDirectory(analysis_dir)
```
##### Specify the PRIDE identifier and file to be downloaded
```
pxd_id = 'PXD008541'
file_name='SearchEngineResults_secretome.zip.rar'
```
##### Download data
We can use functionality in graphdb_builder to directly download data files from EBI's PRIDE database (https://www.ebi.ac.uk/pride/). For that you just need to specify the PRIDE identifier for the project (PXD...) and the name of the file to download. In this case, the project identifier is **PXD008541** and the file we will use is **SearchEngineResults_secretome.zip.rar**, a RAR compressed file with the output files from MaxQuant.
```
builder_utils.download_PRIDE_data(pxd_id=pxd_id,
file_name=file_name,
to=analysis_dir)
```
## Read Data In
### Decompress File
```
builder_utils.unrar(filepath=os.path.join(analysis_dir, file_name), to=analysis_dir)
```
The files within the compressed folder can be listed using the listDirectoryFiles function in graphdb_builder.
```
builder_utils.listDirectoryFiles(analysis_dir)
```
We use the proteinGroups file that contains the proteomics data processed using MaxQuant software.
```
proteinGroups_file = os.path.join(analysis_dir, 'proteinGroups.txt')
```
### Parse Contents
CKG has parsers for MaxQuant and Spectronaut output files. The default configuration needed to parse these files has to be updated with the names of the columns containing the protein quantifications for each sample. The default configuration can also be adapted to the experiment by selecting specific filters or removing unused columns. For example, in this study the output file did not have the columns Score and Q-value, so we removed them from the configuration, and the column 'Potential contaminant' was renamed to 'Contaminant', so we changed the name in the filters.
```
#d = pd.read_csv(proteinGroups_file, sep='\t')
#d.columns.tolist()
columns = ['LFQ intensity BAT_NE1',
'LFQ intensity BAT_NE2',
'LFQ intensity BAT_NE3',
'LFQ intensity BAT_NE4',
'LFQ intensity BAT_NE5',
'LFQ intensity BAT_woNE1',
'LFQ intensity BAT_woNE2',
'LFQ intensity BAT_woNE3',
'LFQ intensity BAT_woNE4',
'LFQ intensity BAT_woNE5',
'LFQ intensity WAT_NE1',
'LFQ intensity WAT_NE2',
'LFQ intensity WAT_NE3',
'LFQ intensity WAT_NE4',
'LFQ intensity WAT_NE5',
'LFQ intensity WAT_woNE1',
'LFQ intensity WAT_woNE2',
'LFQ intensity WAT_woNE3',
'LFQ intensity WAT_woNE4',
'LFQ intensity WAT_woNE5',
'Contaminant']
configuration = proteomicsParser.update_configuration(data_type='proteins',
processing_tool='maxquant',
value_col='LFQ intensity',
columns=columns,
drop_cols=['Score', 'Q-value', 'Potential contaminant'],
filters=['Reverse', 'Only identified by site', 'Contaminant'])
configuration
```
When we parse the data, we obtain an edge list following CKG's graph format: sample, protein, relationship_type, value, protein_group_id, is_razor
```
data = proteomicsParser.parser_from_file(proteinGroups_file, configuration=configuration, data_type='proteins', is_standard=False)[('proteins', 'w')]
data.head()
data.columns = ['sample', 'identifier', 'relationship', 'LFQ intensity', 'id', 'is_razor']
data.head()
data.shape
data = data[data.is_razor]
data.shape
```
We can use the sample names to extract the group information: BAT_NE, WAT_NE, BAT_woNE, WAT_woNE
With this last column, we obtain the **original dataframe** used as starting point in CKG' analysis pipelines.
```
import re
data['group'] = data['sample'].apply(lambda x: re.sub(r'\d', '', x))
data.head()
original = data[['group', 'sample', 'identifier', 'LFQ intensity']]
```
##### --> the original dataframe is the starting point in CKG's proteomics analysis.
## Data Preparation
In order to prepare the data we follow these steps:
1) Filtering based on missing values
2) Imputation of missing values using a mixed-model strategy: KNN and MinProb
These steps generate the **processed dataframe**, a complete matrix that can be used in the exploratory and statistical analysis.
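A rough sketch of what these two steps do, using plain pandas/NumPy. This is a simplified stand-in, not CKG's implementation: the `min_valid` filter is applied globally rather than per group, the KNN step is omitted, and the MinProb draw is reduced to a down-shifted, shrunken normal per protein; the toy matrix and column names are invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Toy wide matrix: 6 samples x 4 proteins, log-scale intensities with missing values
X = pd.DataFrame(rng.normal(25, 2, size=(6, 4)),
                 columns=[f'P{i}' for i in range(4)])
X.iloc[0:4, 0] = np.nan          # P0 is missing in most samples
X.iloc[1, 2] = np.nan            # P2 has a single missing value

# 1) Filtering: keep proteins quantified in at least `min_valid` samples
min_valid = 3
X = X.loc[:, X.notna().sum() >= min_valid].copy()

# 2) MinProb-style imputation: draw remaining missing values from a
#    down-shifted, shrunken normal (cf. shift=1.8, nstd=0.3 in the call below)
shift, nstd = 1.8, 0.3
for col in X.columns[X.isna().any()]:
    mu, sd = X[col].mean(), X[col].std()
    n_miss = X[col].isna().sum()
    X.loc[X[col].isna(), col] = rng.normal(mu - shift * sd, nstd * sd, n_miss)

print(X.columns.tolist())        # P0 dropped; no NaNs remain
```

After these two steps the matrix is complete: the sparse protein is gone and the isolated missing value has been filled with a low-intensity draw.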
```
processed_data = analytics.get_proteomics_measurements_ready(original,
index_cols=['group', 'sample'],
drop_cols=['sample'],
group='group',
identifier='identifier',
extra_identifier=None,
imputation=True,
method='mixed',
knn_cutoff=0.4,
missing_method='at_least_x',
missing_per_group=True,
min_valid=3,
value_col='LFQ intensity',
shift=1.8,
nstd=0.3)
processed_data.head()
```
# Two showcases of fraud detection models
The prospective fraud detection model will comprise on the order of ten separate fraud detection models. This document describes two such models in some detail, to give an idea of how they work, what they do with the data and what they result in.
Both models use modern analytics techniques, as will the majority, but not all, of the other models. We deliberately chose to develop two very different models. In fraud detection for healthcare insurance we often do not have a large set of fraudulent claims, or of healthcare providers who have committed fraud before, so many of the machine learning models are of the so-called unsupervised kind. We cannot train such a model to recognize fraud, because we cannot show it what fraud looks like; the models can only mine the data for deviant patterns of any kind. Some of these deviant patterns are specifically looked for, based on business knowledge or clearly defined risks (for example, excessive costs per patient compared to similar care providers), and some are "data driven", meaning that large deviations are detected independent of an explanation of what may be wrong with that specific provider. The two models in this report are both unsupervised machine learning models: one has a clearly defined definition of the financial risk, the other just looks at billing patterns to see what is in there.
```
# Initialize the necessary software
import numpy as np
import numpy.random as rn
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
Axes3D
import seaborn as sns
import datetime as dt
import pickle, os, time, sys, itertools, string
%matplotlib inline
sns.set_context('talk')
sns.set_style('white')
rn.seed(42)
from IPython.display import display
from IPython.display import Image
from IPython.display import HTML
import IPython.core.display as di
# This line will hide code by default when the notebook is exported as HTML
di.display_html('<script>jQuery(function() {if (jQuery("body.notebook_app").length == 0) { jQuery(".input_area").toggle(); jQuery(".prompt").toggle();}});</script>', raw=True)
# This line will add a button to toggle visibility of code blocks, for use with the HTML export version
# di.display_html('''<button onclick="jQuery('.input_area').toggle();jQuery('.prompt').toggle();">Toggle code</button>''', raw=True)
# Filter warnings (mostly deprecation anyway...)
import warnings
warnings.filterwarnings('ignore')
```
As a reminder, the data set looks like this, with one line per procedure:
```
# Load the data
datafile = './data/mock_healthcare.pickle'
df = open(datafile, 'rb')
data = pickle.load(df)
df.close()
data.head()
```
## Outliers in cost per patient
The first model looks at the average cost per patient (patients of the provider are members of the insurance company; both terms will be used here) that providers claim. This is nothing other than the total billed amount divided by the number of unique members for whom it was claimed. That number is built up from a combination of treatment prices, the number of treatments per person, the mix of treatments typically given, etc., and may therefore show a large variation.
The GUI gives the user the opportunity to pick an option for the grouping of providers when running this model. The reason is that in practice an investigator wants to compare providers only to comparable providers, e.g. of the same specialism. Sometimes this is not known beforehand (the "Prov_specialism" column will then be missing, empty or meaningless). In such cases you can determine the outliers of the whole distribution of all available providers, or you can let an algorithm decide the peer groups based on the mix of procedure codes they have billed. This removes any outliers in that distribution, as the algorithm will not know to which group to compare those providers. The clustering of providers cannot be checked on the fly, so this option should be used with caution (outliers that you want to find may well be excluded based on their billing pattern; for the method, see the second example below).
The other option that can be set in the GUI is the method for determining an outlier. Three options are available:
- Above 90th percentile: this will generally result in many outliers. Roughly the upper 10% of providers will be flagged. One reason not to use a very strict limit, but one that results in many flags, is that this is very obviously a financial risk that any insurance company will want to have a handle on.
- Above 95th percentile: a common way of determining outliers is to take everyone above the 95th percentile.
- Statistical outlier: this method is based on the actual distribution function of the providers. Whereas the method "above the 95th percentile" makes a lot of sense for well-behaved, near-normal distributions, this method does not assume anything about the underlying distribution. The upper limit of acceptable values depends on the interquartile distance (the difference between the 25th and 75th percentiles). This results in points that are certainly deviant, but often yields only a few outliers.
In the example below we use the known specialisms, and flag every provider that is above the 90th percentile of its own reference group. In the plot there is a panel for each group (the difference between these distributions shows the necessity of using reference groups). The solid red line is the upper acceptable limit, and every provider above it gets flagged. The highest occurring cost per member is, for convenience, indicated with the dotted red line.
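Each of the three threshold rules reduces to a few lines. A minimal sketch on synthetic costs (the lognormal parameters and sample size are arbitrary, chosen only to mimic a right-skewed cost distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic, right-skewed cost-per-member values for 500 providers
cost_per_member = rng.lognormal(mean=5.0, sigma=0.4, size=500)

p90 = np.percentile(cost_per_member, 90)
p95 = np.percentile(cost_per_member, 95)
q1, q3 = np.percentile(cost_per_member, [25, 75])
fence = q3 + 1.5 * (q3 - q1)      # Tukey upper inner fence ("statistical outlier")

for name, lim in [('90th pct', p90), ('95th pct', p95), ('Tukey fence', fence)]:
    print(name, round(lim, 1), 'flags', int((cost_per_member > lim).sum()))
```

By construction the percentile rules flag a fixed fraction of providers, while the number flagged by the Tukey fence depends on the shape of the distribution.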
```
# Option to do overall, by known reference group, or by determined reference group
option_group = 'Per specialism' # 'Overall', 'Per specialism', 'By determined peer group'
option_outlier = 'Above 90th percentile' # 'Above 90th percentile', 'Statistical outlier', 'Above 95th percentile'
if option_outlier == 'Above 90th percentile': percentile = 90
elif option_outlier == 'Above 95th percentile': percentile = 95
elif option_outlier == 'Statistical outlier': pass # Outside upper inner fence = q3+ 1.5*(q3-q1)
else: print("Not yet implemented!")
if option_group == 'By determined peer group':
# Groups need to be determined
piv_proc = pd.pivot_table(data, values='Paid_amt', index='Provider_ID', columns='Procedure_code', aggfunc='sum')
piv_proc.replace(np.nan, 0, inplace=True)
fractional_proc = piv_proc.div(piv_proc.sum(axis=1), axis=0)
from sklearn.decomposition import PCA
pca=PCA()
manifolds = pca.fit_transform(fractional_proc)[:,:3]
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
X = StandardScaler().fit_transform(manifolds)
results = DBSCAN().fit(X)
#This is one label per provider, in order, so I have to join these to the dataframe
tojoin = pd.DataFrame({'IDs':piv_proc.index, 'refgroup':results.labels_})
data = data.merge(tojoin, how='left', left_on='Provider_ID', right_on='IDs')
elif option_group == 'Overall': data['refgroup'] = np.zeros(len(data.index))
elif option_group == 'Per specialism': data['refgroup'] = data.Prov_specialism
else:
print("The option for reference groups is not recognized! Overall is used.")
refgroup = np.zeros(len(data.index))
data['refgroup'] = refgroup
refgroups = np.unique(data.refgroup)
outliers = []
score = []
money = []
plt.figure(figsize=(12,8))
plt.subplots_adjust(hspace=0.000)
number_of_subplots=len(refgroups)
maxval = 0
minval = 1e9
for iref, ref in enumerate(refgroups):
if ref == -1 and option_group == 'By determined peer group': continue # These are outliers determined by DBSCAN
thisgroup = data[data.refgroup == ref]
prov_pat = thisgroup.groupby(['Provider_ID', 'Member_ID'])
cost_all_patients = pd.DataFrame(prov_pat.Paid_amt.sum())
cost_all_patients.reset_index(inplace=True)
per_prov = cost_all_patients.groupby('Provider_ID')
cost_per_member = per_prov.Paid_amt.mean()
number_patients = per_prov.Paid_amt.count()
if cost_per_member.max() > maxval: maxval = cost_per_member.max()
if cost_per_member.min() < minval: minval = cost_per_member.min()
ax = plt.subplot(number_of_subplots, 1, iref+1)
ax.set_yticks([])
if option_group == 'Per specialism': plabel = str('Reference group ')+str(ref)
elif option_group == 'By determined peer group': plabel = str('Reference group'+str(ref+1))
elif option_group == 'Overall': plabel = ''
else: plabel = ''
sns.distplot(cost_per_member, kde=False, label=plabel)
if not plabel == '': ax.legend(loc='best')
if iref == number_of_subplots-1:
ax.set_xlabel("Cost per member")
else: ax.set_xticks([])
if iref == 0: ax.set_title("Distribution, with outliers between red solid and dotted lines")
# Overplot an outlier with a line.
if option_outlier in ['Above 90th percentile', 'Above 95th percentile']:
limval = np.percentile(cost_per_member, percentile)
elif option_outlier == 'Statistical outlier':
q1, q3 = np.percentile(cost_per_member, [25, 75])
limval = q3 + 1.5 * (q3-q1)
else:
print("Outlier option not yet implemented!", option_outlier, "Using 90th percentile")
limval = np.percentile(cost_per_member, 90)
median = np.median(cost_per_member)
ylims = ax.get_ylim()
ax.plot([limval, limval], ylims, color='red')
ax.plot([cost_per_member.max(), cost_per_member.max()], ylims, 'r:' )
toomuch = cost_per_member[cost_per_member > limval]
scoring_entities = toomuch.index
outliers.extend(list(scoring_entities))
score.extend(list((toomuch - limval) / np.abs(limval - median)))
npats = number_patients[scoring_entities].values
money.extend(list((toomuch-limval)*npats))
ax.set_ylim([ylims[0], ylims[1]])
# Adjust all axis ranges
for axi in range(1, number_of_subplots+1):
ax = plt.subplot(number_of_subplots, 1, axi)
ax.set_xlim([.9*minval, 1.1*maxval])
if axi == 3:
ys = ax.get_ylim()
ax.plot([toomuch.values[-1], toomuch.values[-1]], [ys[0],ys[1]], "k-")
ax.text( 1.01* toomuch.values[-1], .6*ys[1], toomuch.index[-1])
# ax.set_ylim([ys[0], ys[1]])
```
For all providers between the red vertical lines we calculate a score and a corresponding monetary amount. The score is based on how far above the limit a provider is, corrected for the width of the distribution (being equally far above the limit of a narrower distribution is more severe than of a wider one). The monetary amount equals the difference in cost per member between the upper limit (vertical solid red line) and the cost per member of the flagged provider, multiplied by the number of members that provider has billed for.
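As a concrete illustration of this scoring, with hypothetical numbers not taken from the data:

```python
# Hypothetical flagged provider: cost per member above the group limit
limit, median = 1200.0, 800.0      # upper acceptable limit and group median
cost, n_members = 1500.0, 40       # provider's cost per member and member count

score = (cost - limit) / abs(limit - median)   # excess over the limit, scaled by spread
money = (cost - limit) * n_members             # excess amount over all billed members
print(score, money)                            # 0.75 and 12000.0
```

A provider 300 above the limit of a group whose limit sits 400 above the median thus scores 0.75, with 12,000 in flagged monetary value.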
For this particular example we arrive at a list with the following properties. For other settings this may be different!
```
print("The total number of providers that is flagged is", len(outliers))
print("Scores vary from", np.round(np.min(score), 3), "to", np.round(np.max(score), 2), "with an average of", np.round(np.mean(score), 2))
```
These scores are, in all models, constructed so that their normalization is roughly equal: the same score should roughly correspond to an equally "severe" outlier. This ensures that the weights (which can be set in the GUI) can be regarded as true relative weights of the different model ingredients.
## Billing patterns
In the data, we have included a total of 18 different procedure codes. Which of these is billed (and, presumably, corresponds to the actual treatment patients received) depends strongly on the specialism, but even within specialisms there is an appreciable spread. Some of these procedure codes correspond to fixed-price treatments, whereas in other cases there is a free (perhaps hourly) billing scheme. Therefore, the fraction of a provider's revenue that is due to any particular procedure code can take a large range of values.
By billing pattern we mean the set of fractions of a provider's revenue that are due to each of the different procedures. These may include zeros for procedures the provider never performs, and they add up to 1 (the full revenue is due to all the billed procedure codes together). A provider can thus be represented by a point in an 18-dimensional space, where along each dimension the coordinate is between zero and one.
Finding groups in an 18-dimensional space is not a trivial task, as it cannot be visualized easily. Even for an algorithm, clustering in so many dimensions is hard for a variety of reasons that we won't go into in detail here.
Billing patterns are expected to be fairly similar within a specialism, whereas the different specialisms differ from one another. We therefore use dimensionality reduction techniques to project the 18-dimensional space of billing patterns onto a three-dimensional one, in order to visualize it more easily. We then find clusters in that three-dimensional representation and flag all providers that are not in a dense group. The number of dense groups is expected to equal the number of specialisms. This is not guaranteed, as some specialisms may look alike (in the full 18-dimensional space, or in its three-dimensional representation), and sub-groups within a specialism may have different billing patterns as well.
In the images below, the members of the same clusters have the same color. The outliers are overplotted with slightly larger symbols in red. These providers are too far from the cluster centers where all other providers reside, which implies that their mix of procedure codes is far off the norm for any of the known specialisms.
```
# Create a pivot table with amounts per procedure code for all providers, then normalize
piv_proc = pd.pivot_table(data, values='Paid_amt', index='Provider_ID', columns='Procedure_code', aggfunc='sum')
piv_proc.replace(np.nan, 0, inplace=True)
fractional_proc = piv_proc.div(piv_proc.sum(axis=1), axis=0)
# Create a lookup for the specialism
prov_spec = data.loc[:,['Provider_ID', 'Prov_specialism']].drop_duplicates()
prov_spec.set_index('Provider_ID', inplace=True)
specs = np.array(prov_spec.values)
from sklearn.decomposition import PCA
pca = PCA()
pcas = pca.fit_transform(fractional_proc)
from sklearn.cluster import DBSCAN
# First attempt: cluster the first two components directly (superseded below)
scanner = DBSCAN(eps=2)
results = scanner.fit(pcas[:, :2])
from sklearn.preprocessing import StandardScaler
# Standardize the first three components so one eps radius fits all axes
X = StandardScaler().fit_transform(pcas[:, :3])
scanner = DBSCAN(eps=0.5)
results = scanner.fit(X)
fig=plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pcas[:, 0], pcas[:, 1], pcas[:, 2], c=results.labels_, cmap=plt.cm.Spectral)
ax.view_init(40, -122)
ax.scatter(pcas[results.labels_ == -1, 0], pcas[results.labels_ == -1, 1], pcas[results.labels_ == -1, 2], c='red', s=50)
ax.scatter(pcas[results.labels_ == -1, 0][0], pcas[results.labels_ == -1, 1][0], pcas[results.labels_ == -1, 2][0], c='red', s=300)
ax.text(.2, .1, .1, piv_proc.index[results.labels_ == -1][0], color='red')
ax.set_xticks([])
ax.set_yticks([])
zt = ax.set_zticks([])
# plt.savefig('billing_pattern_3D.png', dpi=400)
```
The three-dimensional representation above is already clear enough, but for completeness, we also show two projections of this distribution below. Numbers along these axes would not mean much.
```
plt.figure(figsize=(12, 8))
plt.subplots_adjust(wspace=0.000)
ax = plt.subplot(121)
plt.scatter(pcas[:,0], pcas[:,1], c=results.labels_, s=30, cmap=plt.cm.Spectral)
plt.scatter(pcas[results.labels_ == -1,0], pcas[results.labels_ == -1,1], c='red', s=50)
plt.scatter(pcas[results.labels_ == -1,0][0], pcas[results.labels_ == -1,1][0], c='red', s=300)
xs, ys = ax.get_xlim(), ax.get_ylim()
ax.text(.7*xs[1], .9*ys[1], piv_proc.index[results.labels_ == -1][0], color='red')
ax.set_xticks([])
ax.set_yticks([])
ax = plt.subplot(122)
plt.scatter(pcas[:,2], pcas[:,1], c=results.labels_, s=30, cmap=plt.cm.Spectral)
plt.scatter(pcas[results.labels_ == -1,2], pcas[results.labels_ == -1,1], c='red', s=50)
plt.scatter(pcas[results.labels_ == -1,2][0], pcas[results.labels_ == -1,1][0], c='red', s=300)
ax.set_xticks([])
ticks = ax.set_yticks([])
print("The number of outliers based on their billing pattern is", len(results.labels_[results.labels_ == -1]))
```
The outliers here get a score that depends on their distance to the nearest cluster center, relative to the cluster size (so scores are lower if an outlier is closer to a group, and for the same distance to the cluster center the score is lower if the nearest group is more diffuse). A monetary score is not very meaningful for this model, so all corresponding monetary amounts will be set to zero.
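A minimal sketch of that scoring rule, with illustrative cluster centers and spreads (not the values computed above):

```
import numpy as np

# Hedged sketch: score an outlier by its distance to the nearest cluster
# center, divided by that cluster's spread. All values are illustrative.
def billing_pattern_score(point, centers, spreads):
    dists = np.linalg.norm(centers - point, axis=1)
    nearest = np.argmin(dists)
    # closer to a group, or a more diffuse group, means a lower score
    return dists[nearest] / spreads[nearest]

centers = np.array([[0.0, 0.0], [10.0, 0.0]])
spreads = np.array([1.0, 2.0])
score = billing_pattern_score(np.array([0.0, 3.0]), centers, spreads)  # 3.0
```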
## Where does this all go?
In the overall model, the user can set all model parameters that are useful for a user to vary. These models are then run with those particular settings. When all models are run, the resulting scores and corresponding monetary amounts are collected and a "hit list" of providers is constructed based on the (weighted) total score and (weighted) monetary amount. This list should, after proper tweaking of model weights and model options, represent a fair assessment of the fraud risk of this database of (healthcare) bills.
# 206 Optimizers
For more tutorials, visit my tutorial page: https://morvanzhou.github.io/tutorials/
My Youtube Channel: https://www.youtube.com/user/MorvanZhou
Dependencies:
* torch: 0.1.11
* matplotlib
```
import torch
import torch.utils.data as Data
import torch.nn.functional as F
from torch.autograd import Variable
import matplotlib.pyplot as plt
%matplotlib inline
torch.manual_seed(1) # reproducible
LR = 0.01
BATCH_SIZE = 32
EPOCH = 12
```
### Generate some fake data
```
# fake dataset
x = torch.unsqueeze(torch.linspace(-1, 1, 1000), dim=1)
y = x.pow(2) + 0.1*torch.normal(torch.zeros(*x.size()))
# plot dataset
plt.scatter(x.numpy(), y.numpy())
plt.show()
```
### Put dataset into torch dataset
```
torch_dataset = Data.TensorDataset(data_tensor=x, target_tensor=y)
loader = Data.DataLoader(
dataset=torch_dataset,
batch_size=BATCH_SIZE,
shuffle=True, num_workers=2,)
```
### Default network
```
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(1, 20)   # hidden layer
        self.predict = torch.nn.Linear(20, 1)  # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))  # activation function for hidden layer
        x = self.predict(x)         # linear output
        return x
```
### Different nets
```
net_SGD = Net()
net_Momentum = Net()
net_RMSprop = Net()
net_Adam = Net()
nets = [net_SGD, net_Momentum, net_RMSprop, net_Adam]
```
### Different optimizers
```
opt_SGD = torch.optim.SGD(net_SGD.parameters(), lr=LR)
opt_Momentum = torch.optim.SGD(net_Momentum.parameters(), lr=LR, momentum=0.8)
opt_RMSprop = torch.optim.RMSprop(net_RMSprop.parameters(), lr=LR, alpha=0.9)
opt_Adam = torch.optim.Adam(net_Adam.parameters(), lr=LR, betas=(0.9, 0.99))
optimizers = [opt_SGD, opt_Momentum, opt_RMSprop, opt_Adam]
loss_func = torch.nn.MSELoss()
losses_his = [[], [], [], []] # record loss
# training
for epoch in range(EPOCH):
    print('Epoch: ', epoch)
    for step, (batch_x, batch_y) in enumerate(loader):  # for each training step
        b_x = Variable(batch_x)
        b_y = Variable(batch_y)
        for net, opt, l_his in zip(nets, optimizers, losses_his):
            output = net(b_x)              # get output for every net
            loss = loss_func(output, b_y)  # compute loss for every net
            opt.zero_grad()                # clear gradients for next train
            loss.backward()                # backpropagation, compute gradients
            opt.step()                     # apply gradients
            l_his.append(loss.data[0])     # loss recorder
labels = ['SGD', 'Momentum', 'RMSprop', 'Adam']
for i, l_his in enumerate(losses_his):
    plt.plot(l_his, label=labels[i])
plt.legend(loc='best')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.ylim((0, 0.2))
plt.show()
```
# Project 5: NLP on Financial Statements
## Instructions
Each problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity.
## Packages
When you implement the functions, you'll only need to you use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code.
The other packages that we're importing are `project_helper` and `project_tests`. These are custom packages built to help you solve the problems. The `project_helper` module contains utility functions and graph functions. The `project_tests` contains the unit tests for all the problems.
### Install Packages
```
import sys
!{sys.executable} -m pip install -r requirements.txt
```
### Load Packages
```
import nltk
import numpy as np
import pandas as pd
import pickle
import pprint
import project_helper
import project_tests
from tqdm import tqdm
```
### Download NLP Corpora
You'll need two corpora to run this project: the stopwords corpus for removing stopwords and wordnet for lemmatizing.
```
nltk.download('stopwords')
nltk.download('wordnet')
```
## Get 10ks
We'll be running NLP analysis on 10-K documents. To do that, we first need to download the documents. For this project, we'll download 10-Ks for a few companies. To look up documents for these companies, we'll use their CIK (Central Index Key). If you would like to run this against other stocks, we've provided the dict `additional_cik` for more stocks. However, the more stocks you try, the longer it will take to run.
```
cik_lookup = {
'AMZN': '0001018724',
'BMY': '0000014272',
'CNP': '0001130310',
'CVX': '0000093410',
'FL': '0000850209',
'FRT': '0000034903',
'HON': '0000773840'}
additional_cik = {
'AEP': '0000004904',
'AXP': '0000004962',
'BA': '0000012927',
'BK': '0001390777',
'CAT': '0000018230',
'DE': '0000315189',
'DIS': '0001001039',
'DTE': '0000936340',
'ED': '0001047862',
'EMR': '0000032604',
'ETN': '0001551182',
'GE': '0000040545',
'IBM': '0000051143',
'IP': '0000051434',
'JNJ': '0000200406',
'KO': '0000021344',
'LLY': '0000059478',
'MCD': '0000063908',
'MO': '0000764180',
'MRK': '0000310158',
'MRO': '0000101778',
'PCG': '0001004980',
'PEP': '0000077476',
'PFE': '0000078003',
'PG': '0000080424',
'PNR': '0000077360',
'SYY': '0000096021',
'TXN': '0000097476',
'UTX': '0000101829',
'WFC': '0000072971',
'WMT': '0000104169',
'WY': '0000106535',
'XOM': '0000034088'}
```
### Get list of 10-ks
The SEC has a limit on the number of calls you can make to the website per second. In order to avoid hitting that limit, we've created the `SecAPI` class. This will cache data from the SEC and prevent you from going over the limit.
```
sec_api = project_helper.SecAPI()
```
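The real `SecAPI` class lives in the `project_helper` package; as a rough, network-free illustration of the idea (caching plus throttling), a sketch might look like this — the class name and the `min_interval` parameter are invented here:

```
import time

class CachedRateLimitedGetter:
    """Hedged sketch: cache responses by URL and throttle outgoing requests.
    The fetcher is injected, so nothing here touches the network."""

    def __init__(self, fetch, min_interval=0.1):
        self._fetch = fetch              # e.g. lambda url: requests.get(url).text
        self._min_interval = min_interval
        self._cache = {}
        self._last_call = 0.0

    def get(self, url):
        if url in self._cache:           # cached: no request, no delay
            return self._cache[url]
        wait = self._min_interval - (time.monotonic() - self._last_call)
        if wait > 0:
            time.sleep(wait)             # throttle outgoing requests
        self._last_call = time.monotonic()
        self._cache[url] = self._fetch(url)
        return self._cache[url]

calls = []
getter = CachedRateLimitedGetter(lambda u: calls.append(u) or u.upper(), min_interval=0.0)
first = getter.get('http://example.com/a')
second = getter.get('http://example.com/a')  # served from cache, no second fetch
```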
With the class constructed, let's pull a list of filed 10-Ks from the SEC for each company.
```
from bs4 import BeautifulSoup
def get_sec_data(cik, doc_type, start=0, count=60):
    newest_pricing_data = pd.to_datetime('2018-01-01')
    rss_url = 'https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany' \
        '&CIK={}&type={}&start={}&count={}&owner=exclude&output=atom' \
        .format(cik, doc_type, start, count)
    sec_data = sec_api.get(rss_url)
    feed = BeautifulSoup(sec_data.encode('ascii'), 'xml').feed
    entries = [
        (
            entry.content.find('filing-href').getText(),
            entry.content.find('filing-type').getText(),
            entry.content.find('filing-date').getText())
        for entry in feed.find_all('entry', recursive=False)
        if pd.to_datetime(entry.content.find('filing-date').getText()) <= newest_pricing_data]
    return entries
```
Let's pull the list using the `get_sec_data` function, then display some of the results. For displaying some of the data, we'll use Amazon as an example.
```
example_ticker = 'AMZN'
sec_data = {}
for ticker, cik in cik_lookup.items():
    sec_data[ticker] = get_sec_data(cik, '10-K')
pprint.pprint(sec_data[example_ticker][:5])
```
### Download 10-ks
As you can see, this is a list of URLs. These URLs point to a file that contains metadata related to each filing. Since we don't care about the metadata, we'll pull the filing itself by replacing the index URL with the filing URL.
```
raw_fillings_by_ticker = {}
for ticker, data in sec_data.items():
    raw_fillings_by_ticker[ticker] = {}
    for index_url, file_type, file_date in tqdm(data, desc='Downloading {} Filings'.format(ticker), unit='filing'):
        if file_type == '10-K':
            file_url = index_url.replace('-index.htm', '.txt').replace('.txtl', '.txt')
            raw_fillings_by_ticker[ticker][file_date] = sec_api.get(file_url)
print('Example Document:\n\n{}...'.format(next(iter(raw_fillings_by_ticker[example_ticker].values()))[:1000]))
```
### Get Documents
With these filings downloaded, we want to break them into their associated documents. These documents are sectioned off in the filings with the tags `<DOCUMENT>` for the start of each document and `</DOCUMENT>` for the end of each document. There's no overlap with these documents, so each `</DOCUMENT>` tag should come after the `<DOCUMENT>` with no `<DOCUMENT>` tag in between.
Implement `get_documents` to return a list of these documents from a filling. Make sure not to include the tag in the returned document text.
```
import re
def get_documents(text):
    """
    Extract the documents from the text

    Parameters
    ----------
    text : str
        The text with the document strings inside

    Returns
    -------
    extracted_docs : list of str
        The document strings found in `text`
    """
    # TODO: Implement
    # list that will hold the extracted document bodies
    extract_doc = []
    # regexes matching the start and end tags
    start = re.compile(r'<DOCUMENT>')
    end = re.compile(r'</DOCUMENT>')
    # positions just after each start tag and just before each end tag
    document_start = [x.end() for x in re.finditer(start, text)]
    document_end = [x.start() for x in re.finditer(end, text)]
    # append the document body between each pair of tags
    for doc_start, doc_end in zip(document_start, document_end):
        extract_doc.append(text[doc_start:doc_end])
    return extract_doc
project_tests.test_get_documents(get_documents)
```
With the `get_documents` function implemented, let's extract all the documents.
```
filling_documents_by_ticker = {}
for ticker, raw_fillings in raw_fillings_by_ticker.items():
    filling_documents_by_ticker[ticker] = {}
    for file_date, filling in tqdm(raw_fillings.items(), desc='Getting Documents from {} Filings'.format(ticker), unit='filing'):
        filling_documents_by_ticker[ticker][file_date] = get_documents(filling)
print('\n\n'.join([
'Document {} Filed on {}:\n{}...'.format(doc_i, file_date, doc[:200])
for file_date, docs in filling_documents_by_ticker[example_ticker].items()
for doc_i, doc in enumerate(docs)][:3]))
```
### Get Document Types
Now that we have all the documents, we want to find the 10-k form in this 10-k filing. Implement the `get_document_type` function to return the type of document given. The document type is located on a line with the `<TYPE>` tag. For example, a form of type "TEST" would have the line `<TYPE>TEST`. Make sure to return the type as lowercase, so this example would be returned as "test".
```
def get_document_type(doc):
    """
    Return the document type lowercased

    Parameters
    ----------
    doc : str
        The document string

    Returns
    -------
    doc_type : str
        The document type lowercased
    """
    # TODO: Implement
    # match everything after the <TYPE> tag up to the end of the line
    text_case = re.compile(r'(?<=<TYPE>)\w+[^\n]+')
    document_lower = re.search(text_case, doc).group(0).lower()
    return document_lower
project_tests.test_get_document_type(get_document_type)
```
With the `get_document_type` function, we'll filter out all non 10-k documents.
```
ten_ks_by_ticker = {}
for ticker, filling_documents in filling_documents_by_ticker.items():
ten_ks_by_ticker[ticker] = []
for file_date, documents in filling_documents.items():
for document in documents:
if get_document_type(document) == '10-k':
ten_ks_by_ticker[ticker].append({
'cik': cik_lookup[ticker],
'file': document,
'file_date': file_date})
project_helper.print_ten_k_data(ten_ks_by_ticker[example_ticker][:5], ['cik', 'file', 'file_date'])
```
## Preprocess the Data
### Clean Up
As you can see, the text of the documents is very messy. To clean this up, we'll remove the HTML and lowercase all the text.
```
def remove_html_tags(text):
    text = BeautifulSoup(text, 'html.parser').get_text()
    return text


def clean_text(text):
    text = text.lower()
    text = remove_html_tags(text)
    return text
```
Using the `clean_text` function, we'll clean up all the documents.
```
for ticker, ten_ks in ten_ks_by_ticker.items():
    for ten_k in tqdm(ten_ks, desc='Cleaning {} 10-Ks'.format(ticker), unit='10-K'):
        ten_k['file_clean'] = clean_text(ten_k['file'])
project_helper.print_ten_k_data(ten_ks_by_ticker[example_ticker][:5], ['file_clean'])
```
### Lemmatize
With the text cleaned up, it's time to distill the verbs down. Implement the `lemmatize_words` function to lemmatize verbs in the list of words provided.
```
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet
def lemmatize_words(words):
    """
    Lemmatize words

    Parameters
    ----------
    words : list of str
        List of words

    Returns
    -------
    lemmatized_words : list of str
        List of lemmatized words
    """
    # TODO: Implement
    # lemmatize each word as a verb ('v')
    wnl = WordNetLemmatizer()
    l_words = [wnl.lemmatize(w, 'v') for w in words]
    return l_words
project_tests.test_lemmatize_words(lemmatize_words)
```
With the `lemmatize_words` function implemented, let's lemmatize all the data.
```
word_pattern = re.compile(r'\w+')
for ticker, ten_ks in ten_ks_by_ticker.items():
    for ten_k in tqdm(ten_ks, desc='Lemmatize {} 10-Ks'.format(ticker), unit='10-K'):
        ten_k['file_lemma'] = lemmatize_words(word_pattern.findall(ten_k['file_clean']))
project_helper.print_ten_k_data(ten_ks_by_ticker[example_ticker][:5], ['file_lemma'])
```
### Remove Stopwords
```
from nltk.corpus import stopwords
lemma_english_stopwords = lemmatize_words(stopwords.words('english'))
for ticker, ten_ks in ten_ks_by_ticker.items():
for ten_k in tqdm(ten_ks, desc='Remove Stop Words for {} 10-Ks'.format(ticker), unit='10-K'):
ten_k['file_lemma'] = [word for word in ten_k['file_lemma'] if word not in lemma_english_stopwords]
print('Stop Words Removed')
```
## Analysis on 10ks
### Loughran McDonald Sentiment Word Lists
We'll be using the Loughran and McDonald sentiment word lists. These word lists cover the following sentiments:
- Negative
- Positive
- Uncertainty
- Litigious
- Constraining
- Superfluous
- Modal
This will allow us to do the sentiment analysis on the 10-ks. Let's first load these word lists. We'll be looking into a few of these sentiments.
```
import os
sentiments = ['negative', 'positive', 'uncertainty', 'litigious', 'constraining', 'interesting']
sentiment_df = pd.read_csv(os.path.join('..', '..', 'data', 'project_5_loughran_mcdonald', 'loughran_mcdonald_master_dic_2016.csv'))
sentiment_df.columns = [column.lower() for column in sentiment_df.columns] # Lowercase the columns for ease of use
# Remove unused information
sentiment_df = sentiment_df[sentiments + ['word']]
sentiment_df[sentiments] = sentiment_df[sentiments].astype(bool)
sentiment_df = sentiment_df[(sentiment_df[sentiments]).any(1)]
# Apply the same preprocessing to these words as the 10-k words
sentiment_df['word'] = lemmatize_words(sentiment_df['word'].str.lower())
sentiment_df = sentiment_df.drop_duplicates('word')
sentiment_df.head()
```
### Bag of Words
Using the sentiment word lists, let's generate a sentiment bag of words from the 10-K documents. Implement `get_bag_of_words` to generate a bag of words that counts the number of sentiment words in each doc. You can ignore words that are not in `sentiment_words`.
```
from collections import defaultdict, Counter
from sklearn.feature_extraction.text import CountVectorizer
def get_bag_of_words(sentiment_words, docs):
    """
    Generate a bag of words from documents for a certain sentiment

    Parameters
    ----------
    sentiment_words: Pandas Series
        Words that signify a certain sentiment
    docs : list of str
        List of documents used to generate bag of words

    Returns
    -------
    bag_of_words : 2-d Numpy Ndarray of int
        Bag of words sentiment for each document
        The first dimension is the document.
        The second dimension is the word.
    """
    # TODO: Implement
    # restricting the vocabulary drops all words not in sentiment_words
    vectorizer = CountVectorizer(vocabulary=sentiment_words.values)
    # build the sparse document-term matrix
    words = vectorizer.fit_transform(docs)
    # convert it to a dense array
    bag_of_words = words.toarray()
    return bag_of_words
project_tests.test_get_bag_of_words(get_bag_of_words)
```
Using the `get_bag_of_words` function, we'll generate a bag of words for all the documents.
```
sentiment_bow_ten_ks = {}
for ticker, ten_ks in ten_ks_by_ticker.items():
    lemma_docs = [' '.join(ten_k['file_lemma']) for ten_k in ten_ks]
    sentiment_bow_ten_ks[ticker] = {
        sentiment: get_bag_of_words(sentiment_df[sentiment_df[sentiment]]['word'], lemma_docs)
        for sentiment in sentiments}
project_helper.print_ten_k_data([sentiment_bow_ten_ks[example_ticker]], sentiments)
```
### Jaccard Similarity
Using the bag of words, let's calculate the Jaccard similarity on the bag of words and plot it over time. Implement `get_jaccard_similarity` to return the Jaccard similarities between each tick in time. Since the input, `bag_of_words_matrix`, is a bag of words for each time period in order, you just need to compute the Jaccard similarities for each neighboring bag of words. Make sure to turn the bag of words into a boolean array when calculating the Jaccard similarity.
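For intuition, the Jaccard similarity of two boolean word vectors is the size of their intersection divided by the size of their union. A minimal sketch with made-up vectors (illustrative only; the project implementation below relies on scikit-learn instead):

```
import numpy as np

# Set-style Jaccard similarity on boolean presence/absence vectors
def jaccard(u, v):
    u, v = u.astype(bool), v.astype(bool)
    union = np.logical_or(u, v).sum()
    return np.logical_and(u, v).sum() / union if union else 1.0

sim = jaccard(np.array([1, 0, 1, 1]), np.array([1, 1, 0, 1]))  # 2 shared / 4 in union = 0.5
```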
```
from sklearn.metrics import jaccard_similarity_score
def get_jaccard_similarity(bag_of_words_matrix):
    """
    Get jaccard similarities for neighboring documents

    Parameters
    ----------
    bag_of_words : 2-d Numpy Ndarray of int
        Bag of words sentiment for each document
        The first dimension is the document.
        The second dimension is the word.

    Returns
    -------
    jaccard_similarities : list of float
        Jaccard similarities for neighboring documents
    """
    # TODO: Implement
    # turn counts into presence/absence before comparing
    bag_of_words = bag_of_words_matrix.astype(bool)
    # compute the similarity of each pair of neighboring documents
    jaccard_similarity = [jaccard_similarity_score(u, v) for u, v in zip(bag_of_words, bag_of_words[1:])]
    return jaccard_similarity
project_tests.test_get_jaccard_similarity(get_jaccard_similarity)
```
Using the `get_jaccard_similarity` function, let's plot the similarities over time.
```
# Get dates for the universe
file_dates = {
ticker: [ten_k['file_date'] for ten_k in ten_ks]
for ticker, ten_ks in ten_ks_by_ticker.items()}
jaccard_similarities = {
ticker: {
sentiment_name: get_jaccard_similarity(sentiment_values)
for sentiment_name, sentiment_values in ten_k_sentiments.items()}
for ticker, ten_k_sentiments in sentiment_bow_ten_ks.items()}
project_helper.plot_similarities(
[jaccard_similarities[example_ticker][sentiment] for sentiment in sentiments],
file_dates[example_ticker][1:],
'Jaccard Similarities for {} Sentiment'.format(example_ticker),
sentiments)
```
### TFIDF
Using the sentiment word lists, let's generate sentiment TFIDF values from the 10-K documents. Implement `get_tfidf` to generate TFIDF from each document, using sentiment words as the terms. You can ignore words that are not in `sentiment_words`.
```
from sklearn.feature_extraction.text import TfidfVectorizer
def get_tfidf(sentiment_words, docs):
    """
    Generate TFIDF values from documents for a certain sentiment

    Parameters
    ----------
    sentiment_words: Pandas Series
        Words that signify a certain sentiment
    docs : list of str
        List of documents used to generate bag of words

    Returns
    -------
    tfidf : 2-d Numpy Ndarray of float
        TFIDF sentiment for each document
        The first dimension is the document.
        The second dimension is the word.
    """
    # TODO: Implement
    # restricting the vocabulary filters out words not in sentiment_words
    vectorizer = TfidfVectorizer(vocabulary=sentiment_words.values)
    # build the TFIDF matrix
    tfidf = vectorizer.fit_transform(docs)
    # convert to a dense array
    x = tfidf.toarray()
    return x
project_tests.test_get_tfidf(get_tfidf)
```
Using the `get_tfidf` function, let's generate the TFIDF values for all the documents.
```
sentiment_tfidf_ten_ks = {}
for ticker, ten_ks in ten_ks_by_ticker.items():
    lemma_docs = [' '.join(ten_k['file_lemma']) for ten_k in ten_ks]
    sentiment_tfidf_ten_ks[ticker] = {
        sentiment: get_tfidf(sentiment_df[sentiment_df[sentiment]]['word'], lemma_docs)
        for sentiment in sentiments}
project_helper.print_ten_k_data([sentiment_tfidf_ten_ks[example_ticker]], sentiments)
```
### Cosine Similarity
Using the TFIDF values, we'll calculate the cosine similarity and plot it over time. Implement `get_cosine_similarity` to return the cosine similarities between each tick in time. Since the input, `tfidf_matrix`, is a TFIDF vector for each time period in order, you just need to compute the cosine similarities for each neighboring vector.
```
from sklearn.metrics.pairwise import cosine_similarity
def get_cosine_similarity(tfidf_matrix):
    """
    Get cosine similarities for each neighboring TFIDF vector/document

    Parameters
    ----------
    tfidf : 2-d Numpy Ndarray of float
        TFIDF sentiment for each document
        The first dimension is the document.
        The second dimension is the word.

    Returns
    -------
    cosine_similarities : list of float
        Cosine similarities for neighboring documents
    """
    # TODO: Implement
    # the first superdiagonal of the full similarity matrix holds the
    # similarities between each document and its successor
    cos_similarities = list(np.diag(cosine_similarity(tfidf_matrix, tfidf_matrix), k=1))
    return cos_similarities
project_tests.test_get_cosine_similarity(get_cosine_similarity)
```
Let's plot the cosine similarities over time.
```
cosine_similarities = {
ticker: {
sentiment_name: get_cosine_similarity(sentiment_values)
for sentiment_name, sentiment_values in ten_k_sentiments.items()}
for ticker, ten_k_sentiments in sentiment_tfidf_ten_ks.items()}
project_helper.plot_similarities(
[cosine_similarities[example_ticker][sentiment] for sentiment in sentiments],
file_dates[example_ticker][1:],
'Cosine Similarities for {} Sentiment'.format(example_ticker),
sentiments)
```
## Evaluate Alpha Factors
Just like we did in project 4, let's evaluate the alpha factors. For this section, we'll just be looking at the cosine similarities, but it can be applied to the jaccard similarities as well.
### Price Data
Let's get yearly pricing to run the factor against, since 10-Ks are produced annually.
```
pricing = pd.read_csv('../../data/project_5_yr/yr-quotemedia.csv', parse_dates=['date'])
pricing = pricing.pivot(index='date', columns='ticker', values='adj_close')
pricing
```
### Dict to DataFrame
The alphalens library uses dataframes, so we'll need to turn our dictionary into a dataframe.
```
cosine_similarities_df_dict = {'date': [], 'ticker': [], 'sentiment': [], 'value': []}
for ticker, ten_k_sentiments in cosine_similarities.items():
    for sentiment_name, sentiment_values in ten_k_sentiments.items():
        for idx, sentiment_value in enumerate(sentiment_values):
            cosine_similarities_df_dict['ticker'].append(ticker)
            cosine_similarities_df_dict['sentiment'].append(sentiment_name)
            cosine_similarities_df_dict['value'].append(sentiment_value)
            cosine_similarities_df_dict['date'].append(file_dates[ticker][1:][idx])
cosine_similarities_df = pd.DataFrame(cosine_similarities_df_dict)
cosine_similarities_df['date'] = pd.DatetimeIndex(cosine_similarities_df['date']).year
cosine_similarities_df['date'] = pd.to_datetime(cosine_similarities_df['date'], format='%Y')
cosine_similarities_df.head()
```
### Alphalens Format
In order to use a lot of the alphalens functions, we need to align the indices and convert the time to unix timestamps. In this next cell, we'll do just that.
```
import alphalens as al
factor_data = {}
skipped_sentiments = []
for sentiment in sentiments:
    cs_df = cosine_similarities_df[(cosine_similarities_df['sentiment'] == sentiment)]
    cs_df = cs_df.pivot(index='date', columns='ticker', values='value')
    try:
        data = al.utils.get_clean_factor_and_forward_returns(cs_df.stack(), pricing, quantiles=5, bins=None, periods=[1])
        factor_data[sentiment] = data
    except Exception:
        skipped_sentiments.append(sentiment)

if skipped_sentiments:
    print('\nSkipped the following sentiments:\n{}'.format('\n'.join(skipped_sentiments)))
factor_data[sentiments[0]].head()
```
### Alphalens Format with Unix Time
Alphalens' `factor_rank_autocorrelation` and `mean_return_by_quantile` functions require unix timestamps to work, so we'll also create factor dataframes with unix time.
```
unixt_factor_data = {
factor: data.set_index(pd.MultiIndex.from_tuples(
[(x.timestamp(), y) for x, y in data.index.values],
names=['date', 'asset']))
for factor, data in factor_data.items()}
```
### Factor Returns
Let's view the factor returns over time. We should be seeing it generally move up and to the right.
```
ls_factor_returns = pd.DataFrame()
for factor_name, data in factor_data.items():
    ls_factor_returns[factor_name] = al.performance.factor_returns(data).iloc[:, 0]
(1 + ls_factor_returns).cumprod().plot()
```
### Basis Points Per Day per Quantile
It is not enough to look just at the factor-weighted return. A good alpha is also monotonic in quantiles. Let's look at the basis points for the factor returns.
```
qr_factor_returns = pd.DataFrame()
for factor_name, data in unixt_factor_data.items():
    qr_factor_returns[factor_name] = al.performance.mean_return_by_quantile(data)[0].iloc[:, 0]
(10000*qr_factor_returns).plot.bar(
subplots=True,
sharey=True,
layout=(5,3),
figsize=(14, 14),
legend=False)
```
### Turnover Analysis
Without doing a full and formal backtest, we can analyze how stable the alphas are over time. Stability in this sense means that from period to period, the alpha ranks do not change much. Since trading is costly, we always prefer, all other things being equal, that the ranks do not change significantly per period. We can measure this with the **Factor Rank Autocorrelation (FRA)**.
```
ls_FRA = pd.DataFrame()
for factor, data in unixt_factor_data.items():
    ls_FRA[factor] = al.performance.factor_rank_autocorrelation(data)
ls_FRA.plot(title="Factor Rank Autocorrelation")
```
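Under the hood, FRA is the rank (Spearman) correlation between the factor values of consecutive periods. A hand-rolled sketch, assuming no tied values (the notebook itself relies on alphalens for this):

```
import numpy as np

# Spearman rank correlation between two periods' factor values (no ties assumed)
def rank_autocorrelation(prev_factor, curr_factor):
    r1 = np.argsort(np.argsort(prev_factor)).astype(float)  # ranks in period t-1
    r2 = np.argsort(np.argsort(curr_factor)).astype(float)  # ranks in period t
    return float(np.corrcoef(r1, r2)[0, 1])  # Pearson correlation of the ranks

# Identical orderings mean no turnover, so FRA = 1:
fra = rank_autocorrelation(np.array([0.1, 0.5, 0.3]), np.array([1.0, 9.0, 4.0]))  # 1.0
```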
### Sharpe Ratio of the Alphas
The last analysis we'll do on the factors is the Sharpe ratio. Let's see what the Sharpe ratios for the factors are. Generally, a Sharpe ratio near 1.0 or higher indicates an acceptable single alpha for this universe.
```
daily_annualization_factor = np.sqrt(252)
(daily_annualization_factor * ls_factor_returns.mean() / ls_factor_returns.std()).round(2)
```
That's it! You've successfully done sentiment analysis on 10-ks!
## Submission
Now that you're done with the project, it's time to submit it. Click the submit button in the bottom right. One of our reviewers will give you feedback on your project with a pass or not passed grade. You can continue to the next section while you wait for feedback.
#### Training Sample: train.csv with undersampling
#### Evaluation Sample: validation_under.csv
#### Method: OOB
#### Output: Best hyperparameters; Pr-curve; ROC AUC
# Training Part
```
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import auc,f1_score,make_scorer,classification_report, matthews_corrcoef, accuracy_score, average_precision_score, roc_auc_score, roc_curve, precision_recall_curve
import matplotlib.pyplot as plt
```
#### Input data is read and named as the following
```
transactions = pd.read_csv('../Data/train.csv')
X_train = transactions.drop(labels='Class', axis=1)
y_train = transactions.loc[:,'Class']
rus = RandomUnderSampler(sampling_strategy=0.8)
X_res, Y_res = rus.fit_resample(X_train, y_train)
```
#### Tuning parameters
```
test = 1
rf = RandomForestClassifier(n_jobs=-1, random_state=1)
if test == 0:
n_estimators = [75,150,800,1000,1200]
min_samples_split = [2, 5]
min_samples_leaf = [1, 5]
else:
n_estimators = [800]
min_samples_split = [2]
min_samples_leaf = [1]
param_grid_rf = {'n_estimators': n_estimators,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'oob_score': [True]
}
grid_rf = GridSearchCV(estimator=rf, param_grid=param_grid_rf,cv = 5,
n_jobs=-1, pre_dispatch='2*n_jobs', verbose=1, return_train_score=False)
grid_rf.fit(X_res, Y_res)
```
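The `'oob_score': [True]` entry makes each forest also report out-of-bag accuracy: every tree is trained on a bootstrap sample, so the rows a tree never saw form a free validation set. A small standalone sketch (the dataset here is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X_demo, y_demo = make_classification(n_samples=500, random_state=0)
rf_demo = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
rf_demo.fit(X_demo, y_demo)
# Accuracy measured only on the out-of-bag rows of each tree
print(round(rf_demo.oob_score_, 2))
```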
#### The best score and the estimator
```
grid_rf.best_score_
grid_rf.best_params_
y_pre = grid_rf.predict(X_res)
print('Classification Report for training')
print(classification_report(Y_res, y_pre))
```
# Evaluation Part
```
evaluation = pd.read_csv('../Data/validation_under.csv')
X_eval = evaluation.drop(labels='Class', axis=1)
y_eval = evaluation.loc[:,'Class']
def Random_Forest_eval(estimator, X_test, y_test):
y_pred = estimator.predict(X_test)
print('Classification Report')
print(classification_report(y_test, y_pred))
y_score = estimator.predict_proba(X_test)[:,1]
print('AUPRC', average_precision_score(y_test, y_score))
print('AUROC', roc_auc_score(y_test, y_score))
Random_Forest_eval(grid_rf, X_eval, y_eval)
```
### Receiver Operating Characteristic Curve
```
def Draw_ROC(Y_prob, Y_observed, model_name = 'Model'):
ns_probs = [0 for _ in range(len(Y_observed))]
# calculate scores
ns_auc = roc_auc_score(Y_observed, ns_probs)
lr_auc = roc_auc_score(Y_observed, Y_prob)
# summarize scores
print('Chance: ROC AUC=%.3f' % (ns_auc))
print('%s: ROC AUC=%.3f' % (model_name, lr_auc))
# calculate roc curves
ns_fpr, ns_tpr, _ = roc_curve(Y_observed, ns_probs, pos_label=1)
lr_fpr, lr_tpr, _ = roc_curve(Y_observed, Y_prob, pos_label=1)
# plot the roc curve for the model
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Chance')
plt.plot(lr_fpr, lr_tpr, marker='.', label=model_name)
# axis labels
plt.title('Receiver operating characteristic curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# show the legend
plt.legend()
# show the plot
plt.show()
y_score = grid_rf.predict_proba(X_eval)[:,1]
Draw_ROC(y_score, y_eval,'Random Forest')
```
### Precision Recall Curve
```
def Draw_PR(Y_prob, Y_predicted, Y_observed, model_name = 'Model'):
# predict class values
lr_precision, lr_recall, _ = precision_recall_curve(Y_observed, Y_prob, pos_label=1)
lr_f1, lr_auc = f1_score(Y_observed, Y_predicted), auc(lr_recall, lr_precision)
# summarize scores
print('Random Forest: f1=%.3f auc=%.3f' % (lr_f1, lr_auc))
# plot the precision-recall curves
no_skill = len(Y_observed[Y_observed==1]) / len(Y_observed)
plt.plot([0, 1], [no_skill, no_skill], linestyle='--', label='Chance')
plt.plot(lr_recall, lr_precision, marker='.', label=model_name)
# axis labels
plt.title('2-class Precision-Recall curve')
plt.xlabel('Recall')
plt.ylabel('Precision')
# show the legend
plt.legend()
# show the plot
plt.show()
Y_predicted = grid_rf.predict(X_eval)
Draw_PR(y_score, Y_predicted, y_eval,'Random Forest')
```
| github_jupyter |
# 05 - Data Preparation and Advanced Model Evaluation
by [Alejandro Correa Bahnsen](http://www.albahnsen.com/) & [Iván Torroledo](http://www.ivantorroledo.com/)
version 1.3, June 2018
## Part of the class [Applied Deep Learning](https://github.com/albahnsen/AppliedDeepLearningClass)
This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Special thanks goes to [Kevin Markham](https://github.com/justmarkham)
# Handling missing values
scikit-learn models expect that all values are **numeric** and **hold meaning**. Thus, missing values are not allowed by scikit-learn.
```
import pandas as pd
import zipfile
with zipfile.ZipFile('../datasets/titanic.csv.zip', 'r') as z:
f = z.open('titanic.csv')
titanic = pd.read_csv(f, sep=',', index_col=0)
titanic.head()
# check for missing values
titanic.isnull().sum()
```
One possible strategy is to **drop missing values**:
```
# drop rows with any missing values
titanic.dropna().shape
# drop rows where Age is missing
titanic[titanic.Age.notnull()].shape
```
Sometimes a better strategy is to **impute missing values**:
```
# mean Age
titanic.Age.mean()
# median Age
titanic.Age.median()
titanic.loc[titanic.Age.isnull()]
# most frequent Age
titanic.Age.mode()
# fill missing values for Age with the median age
titanic.Age.fillna(titanic.Age.median(), inplace=True)
```
Another strategy would be to build a **KNN model** just to impute missing values. How would we do that?
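One answer to the KNN question: scikit-learn ships a `KNNImputer` that fills each missing entry with the average of that feature over the k most similar rows (the toy array below is my own):

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([
    [1.0, 2.0],
    [3.0, 4.0],
    [np.nan, 6.0],
])
# distances are computed on the non-missing features only
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)
print(X_filled[2, 0])  # mean of the two neighbors' first feature: 2.0
```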
If values are missing from a categorical feature, we could treat the missing values as **another category**. Why might that make sense?
How do we **choose** between all of these strategies?
# Handling categorical features
How do we include a categorical feature in our model?
- **Ordered categories:** transform them to sensible numeric values (example: small=1, medium=2, large=3)
- **Unordered categories:** use dummy encoding (0/1)
```
titanic.head(10)
# encode Sex_Female feature
titanic['Sex_Female'] = titanic.Sex.map({'male':0, 'female':1})
# create a DataFrame of dummy variables for Embarked
embarked_dummies = pd.get_dummies(titanic.Embarked, prefix='Embarked')
embarked_dummies.drop(embarked_dummies.columns[0], axis=1, inplace=True)
# concatenate the original DataFrame and the dummy DataFrame
titanic = pd.concat([titanic, embarked_dummies], axis=1)
titanic.head(1)
```
- How do we **interpret** the encoding for Embarked?
- Why didn't we just encode Embarked using a **single feature** (C=0, Q=1, S=2)?
- Does it matter which category we choose to define as the **baseline**?
- Why do we only need **two dummy variables** for Embarked?
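On the last question: K categories need only K−1 dummies because the baseline category is implied when every dummy is 0. pandas can drop the baseline column directly (toy series below):

```python
import pandas as pd

s = pd.Series(['C', 'Q', 'S', 'S'])
# drop_first=True keeps K-1 columns; 'C' becomes the implicit baseline
# (a row with Embarked_Q == 0 and Embarked_S == 0 must be 'C')
dummies = pd.get_dummies(s, prefix='Embarked', drop_first=True)
print(list(dummies.columns))  # ['Embarked_Q', 'Embarked_S']
```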
```
# define X and y
feature_cols = ['Pclass', 'Parch', 'Age', 'Sex_Female', 'Embarked_Q', 'Embarked_S']
X = titanic[feature_cols]
y = titanic.Survived
# train/test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# train a logistic regression model
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
# calculate testing accuracy
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred_class))
```
# ROC curves and AUC
```
# predict probability of survival
y_pred_prob = logreg.predict_proba(X_test)[:, 1]
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (8, 6)
plt.rcParams['font.size'] = 14
# plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
# calculate AUC
print(metrics.roc_auc_score(y_test, y_pred_prob))
```
Besides allowing you to calculate AUC, seeing the ROC curve can help you to choose a threshold that **balances sensitivity and specificity** in a way that makes sense for the particular context.
```
# histogram of predicted probabilities grouped by actual response value
df = pd.DataFrame({'probability':y_pred_prob, 'actual':y_test})
df.hist(column='probability', by='actual', sharex=True, sharey=True)
# ROC curve using y_pred_class - WRONG!
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_class)
plt.plot(fpr, tpr)
# AUC using y_pred_class - WRONG!
print(metrics.roc_auc_score(y_test, y_pred_class))
```
If you use **y_pred_class**, it will interpret the zeros and ones as predicted probabilities of 0% and 100%.
# Cross-validation
## Review of model evaluation procedures
**Motivation:** Need a way to choose between machine learning models
- Goal is to estimate likely performance of a model on **out-of-sample data**
**Initial idea:** Train and test on the same data
- But, maximizing **training accuracy** rewards overly complex models which **overfit** the training data
**Alternative idea:** Train/test split
- Split the dataset into two pieces, so that the model can be trained and tested on **different data**
- **Testing accuracy** is a better estimate than training accuracy of out-of-sample performance
- But, it provides a **high variance** estimate since changing which observations happen to be in the testing set can significantly change testing accuracy
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics
# define X and y
feature_cols = ['Pclass', 'Parch', 'Age', 'Sex_Female', 'Embarked_Q', 'Embarked_S']
X = titanic[feature_cols]
y = titanic.Survived
# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# train a logistic regression model
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
# calculate testing accuracy
print(metrics.accuracy_score(y_test, y_pred_class))
# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
# train a logistic regression model
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
# calculate testing accuracy
print(metrics.accuracy_score(y_test, y_pred_class))
res=[]
for i in range(100):
# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3*i)
# train a logistic regression model
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
# calculate testing accuracy
res.append(metrics.accuracy_score(y_test, y_pred_class))
pd.Series(res).plot()
```
Train/test splitting yields noisy accuracy estimates because of the intrinsic randomness in how the sets are selected.
# K-fold cross-validation
1. Split the dataset into K **equal** partitions (or "folds").
2. Use fold 1 as the **testing set** and the union of the other folds as the **training set**.
3. Calculate **testing accuracy**.
4. Repeat steps 2 and 3 K times, using a **different fold** as the testing set each time.
5. Use the **average testing accuracy** as the estimate of out-of-sample accuracy.
Diagram of **5-fold cross-validation:**

```
# simulate splitting a dataset of 25 observations into 5 folds
from sklearn.model_selection import KFold
kf = KFold(n_splits=5, shuffle=False)
# print the contents of each training and testing set
print('{} {:^61} {}'.format('Iteration', 'Training set observations', 'Testing set observations'))
for iteration, data in enumerate(kf.split(range(25)), start=1):
print('{:^9} {} {:^25}'.format(str(iteration), str(data[0]), str(data[1])))
```
- Dataset contains **25 observations** (numbered 0 through 24)
- 5-fold cross-validation, thus it runs for **5 iterations**
- For each iteration, every observation is either in the training set or the testing set, **but not both**
- Every observation is in the testing set **exactly once**
```
# Create k-folds
kf = KFold(n_splits=10, shuffle=True, random_state=0)
results = []
for train_index, test_index in kf.split(X):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
# train a logistic regression model
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
# calculate testing accuracy
results.append(metrics.accuracy_score(y_test, y_pred_class))
pd.Series(results).describe()
from sklearn.model_selection import cross_val_score
logreg = LogisticRegression(C=1e9)
results = cross_val_score(logreg, X, y, cv=10, scoring='accuracy')
pd.Series(results).describe()
```
## Comparing cross-validation to train/test split
Advantages of **cross-validation:**
- More accurate estimate of out-of-sample accuracy
- More "efficient" use of data (every observation is used for both training and testing)
Advantages of **train/test split:**
- Runs K times faster than K-fold cross-validation
- Simpler to examine the detailed results of the testing process
## Cross-validation recommendations
1. K can be any number, but **K=10** is generally recommended
2. For classification problems, **stratified sampling** is recommended for creating the folds
- Each response class should be represented with equal proportions in each of the K folds
- scikit-learn's `cross_val_score` function does this by default
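For classifiers, `cross_val_score` uses `StratifiedKFold` under the hood; a quick demonstration that each fold preserves the class proportions (toy labels below are my own):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

y_demo = np.array([0] * 8 + [1] * 2)  # 80/20 class balance
skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
fold_counts = [np.bincount(y_demo[test_idx])
               for _, test_idx in skf.split(np.zeros((10, 1)), y_demo)]
# each test fold keeps the 80/20 proportion: 4 zeros and 1 one
print(fold_counts)
```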
## Improvements to cross-validation
**Repeated cross-validation**
- Repeat cross-validation multiple times (with **different random splits** of the data) and average the results
- More reliable estimate of out-of-sample performance by **reducing the variance** associated with a single trial of cross-validation
**Creating a hold-out set**
- "Hold out" a portion of the data **before** beginning the model building process
- Locate the best model using cross-validation on the remaining data, and test it **using the hold-out set**
- More reliable estimate of out-of-sample performance since hold-out set is **truly out-of-sample**
**Feature engineering and selection within cross-validation iterations**
- Normally, feature engineering and selection occurs **before** cross-validation
- Instead, perform all feature engineering and selection **within each cross-validation iteration**
- More reliable estimate of out-of-sample performance since it **better mimics** the application of the model to out-of-sample data
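In scikit-learn, the clean way to keep feature selection inside each iteration is to wrap it in a `Pipeline`, so the selector is refit on every training fold and the test fold never leaks into the selection step (synthetic data for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X_fs, y_fs = make_classification(n_samples=200, n_features=30,
                                 n_informative=5, random_state=0)
pipe = make_pipeline(SelectKBest(f_classif, k=5),
                     LogisticRegression(max_iter=1000))
# SelectKBest is refit on each training fold inside cross_val_score
scores = cross_val_score(pipe, X_fs, y_fs, cv=10, scoring='accuracy')
print(scores.mean().round(2))
```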
# Overfitting, Underfitting and Model Selection
Now that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
**If our estimator is underperforming, how should we move forward?**
- Use simpler or more complicated model?
- Add more features to each observed data point?
- Add more training samples?
The answer is often counter-intuitive. In particular, **Sometimes using a
more complicated model will give _worse_ results.** Also, **Sometimes adding
training data will not improve your results.** The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
### Illustration of the Bias-Variance Tradeoff
For this section, we'll work with a simple 1D regression problem. This will help us to
easily visualize the data and the model, and the results generalize easily to higher-dimensional
datasets. We'll explore a simple **linear regression** problem.
This can be accomplished within scikit-learn with the `sklearn.linear_model` module.
We'll create a simple nonlinear function that we'd like to fit
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def test_func(x, err=0.5):
y = 10 - 1. / (x + 0.1)
if err > 0:
y = np.random.normal(y, err)
return y
```
Now let's create a realization of this dataset:
```
def make_data(N=40, error=1.0, random_seed=1):
# randomly sample the data
np.random.seed(random_seed)
X = np.random.random(N)[:, np.newaxis]
y = test_func(X.ravel(), error)
return X, y
X, y = make_data(40, error=1)
plt.scatter(X.ravel(), y);
```
Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit:
```
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
model = LinearRegression()
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test,c='r')
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
```
We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is **biased**, or that it **under-fits** the data.
Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the ``PolynomialFeatures`` preprocessor, which can be pipelined with a linear regression.
Let's make a convenience routine to do this:
```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
```
Now we'll use this to fit a quadratic curve to the data.
```
X_poly = PolynomialFeatures(degree=2).fit_transform(X)
X_test_poly = PolynomialFeatures(degree=2).fit_transform(X_test)
model = LinearRegression()
model.fit(X_poly, y)
y_test = model.predict(X_test_poly)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test,c='r')
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X_poly), y)));
```
This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
```
X_poly = PolynomialFeatures(degree=30).fit_transform(X)
X_test_poly = PolynomialFeatures(degree=30).fit_transform(X_test)
model = LinearRegression()
model.fit(X_poly, y)
y_test = model.predict(X_test_poly)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test,c='r')
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X_poly), y)))
plt.ylim(-4, 14);
```
When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a **high-variance model**, and we say that it **over-fits** the data.
### Detecting Over-fitting with Validation Curves
Clearly, computing the error on the training data is not enough (we saw this previously). As above, we can use **cross-validation** to get a better handle on how the model fit is working.
Let's do this here, again using the ``validation_curve`` utility. To make things more clear, we'll use a slightly larger dataset:
```
X, y = make_data(120, error=1.0)
plt.scatter(X, y);
from sklearn.model_selection import validation_curve
def rms_error(model, X, y):
y_pred = model.predict(X)
return np.sqrt(np.mean((y - y_pred) ** 2))
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
degree = np.arange(0, 18)
val_train, val_test = validation_curve(PolynomialRegression(), X, y,
                                       param_name='polynomialfeatures__degree',
                                       param_range=degree, cv=7,
                                       scoring=rms_error)
```
Now let's plot the validation curves:
```
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
plot_with_err(degree, val_train, label='training scores')
plot_with_err(degree, val_test, label='validation scores')
plt.xlabel('degree'); plt.ylabel('rms error')
plt.legend();
```
Notice the trend here, which is common for this type of plot.
1. For a small model complexity, the training error and validation error are very similar. This indicates that the model is **under-fitting** the data: it doesn't have enough complexity to represent the data. Another way of putting it is that this is a **high-bias** model.
2. As the model complexity grows, the training and validation scores diverge. This indicates that the model is **over-fitting** the data: it has so much flexibility, that it fits the noise rather than the underlying trend. Another way of putting it is that this is a **high-variance** model.
3. Note that the training score (nearly) always improves with model complexity. This is because a more complicated model can fit the noise better, so the model improves. The validation data generally has a sweet spot, which here is around 5 terms.
Here's our best-fit model according to the cross-validation:
```
model = PolynomialRegression(4).fit(X, y)
plt.scatter(X, y)
plt.plot(X_test, model.predict(X_test),c='r');
```
### Detecting Data Sufficiency with Learning Curves
As you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of *learning curves*, which display this property.
The idea is to plot the mean-squared-error for the training and test set as a function of *Number of Training Points*
```
from sklearn.model_selection import learning_curve
def plot_learning_curve(degree=3):
train_sizes = np.linspace(0.05, 1, 20)
N_train, val_train, val_test = learning_curve(PolynomialRegression(degree),
                                              X, y, train_sizes=train_sizes,
                                              cv=5, scoring=rms_error)
plot_with_err(N_train, val_train, label='training scores')
plot_with_err(N_train, val_test, label='validation scores')
plt.xlabel('Training Set Size'); plt.ylabel('rms error')
plt.ylim(0, 3)
plt.xlim(5, 80)
plt.legend()
```
Let's see what the learning curves look like for a linear model:
```
plot_learning_curve(1)
```
This shows a typical learning curve: for very few training points, there is a large separation between the training and test error, which indicates **over-fitting**. Given the same model, for a large number of training points, the training and testing errors converge, which indicates potential **under-fitting**.
As you add more data points, the training error will never increase, and the testing error will never decrease (why do you think this is?)
It is easy to see that, in this plot, if you'd like to reduce the MSE down to the nominal value of 1.0 (which is the magnitude of the scatter we put in when constructing the data), then adding more samples will *never* get you there. For $d=1$, the two curves have converged and cannot move lower. What about for a larger value of $d$?
```
plot_learning_curve(3)
```
Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!
What if we get even more complex?
```
plot_learning_curve(10)
```
For an even more complex model, we still converge, but the convergence only happens for *large* amounts of training data.
So we see the following:
- you can **cause the lines to converge** by adding more points or by simplifying the model.
- you can **bring the convergence error down** only by increasing the complexity of the model.
Thus these curves can give you hints about how you might improve a sub-optimal model. If the curves are already close together, you need more model complexity. If the curves are far apart, you might also improve the model by adding more data.
To make this more concrete, imagine some telescope data in which the results are not robust enough. You must think about whether to spend your valuable telescope time observing *more objects* to get a larger training set, or *more attributes of each object* in order to improve the model. The answer to this question has real consequences, and can be addressed using these metrics.
# Recall, Precision and F1-Score
Intuitively, [precision](http://en.wikipedia.org/wiki/Precision_and_recall#Precision) is the ability
of the classifier not to label as positive a sample that is negative, and
[recall](http://en.wikipedia.org/wiki/Precision_and_recall#Recall) is the
ability of the classifier to find all the positive samples.
The [F-measure](http://en.wikipedia.org/wiki/F1_score>)
($F_\beta$ and $F_1$ measures) can be interpreted as a weighted
harmonic mean of the precision and recall. A
$F_\beta$ measure reaches its best value at 1 and its worst score at 0.
With $\beta = 1$, $F_\beta$ and
$F_1$ are equivalent, and the recall and the precision are equally important.
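To make the β weighting concrete (the labels below are made up):

```python
from sklearn.metrics import f1_score, fbeta_score

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0]  # precision = 1.0, recall = 1/3
# beta = 1 reduces F-beta to F1
print(round(f1_score(y_true, y_pred), 3))
print(round(fbeta_score(y_true, y_pred, beta=1), 3))
# beta > 1 weights recall more heavily, beta < 1 weights precision
print(round(fbeta_score(y_true, y_pred, beta=2), 3))
```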
```
import pandas as pd
import zipfile
with zipfile.ZipFile('../datasets/titanic.csv.zip', 'r') as z:
f = z.open('titanic.csv')
titanic = pd.read_csv(f, sep=',', index_col=0)
titanic.head()
# fill missing values for Age with the median age
titanic.Age.fillna(titanic.Age.median(), inplace=True)
# define X and y
feature_cols = ['Pclass', 'Parch', 'Age']
X = titanic[feature_cols]
y = titanic.Survived
# train/test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# train a logistic regression model
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred_class)
from sklearn.metrics import precision_score, recall_score, f1_score
print('precision_score ', precision_score(y_test, y_pred_class))
print('recall_score ', recall_score(y_test, y_pred_class))
```
### F1 Score
The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:
$$F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}.$$
```
print('f1_score ', f1_score(y_test, y_pred_class))
```
## Summary
We've gone over several useful tools for model validation
- The **Training Score** shows how well a model fits the data it was trained on. This is not a good indication of model effectiveness
- The **Validation Score** shows how well a model fits hold-out data. The most effective method is some form of cross-validation, where multiple hold-out sets are used.
- **Validation Curves** are a plot of validation score and training score as a function of **model complexity**:
+ when the two curves are close, it indicates *underfitting*
+ when the two curves are separated, it indicates *overfitting*
+ the "sweet spot" is in the middle
- **Learning Curves** are a plot of the validation score and training score as a function of **Number of training samples**
+ when the curves are close, it indicates *underfitting*, and adding more data will not generally improve the estimator.
+ when the curves are far apart, it indicates *overfitting*, and adding more data may increase the effectiveness of the model.
These tools are powerful means of evaluating your model on your data.
| github_jupyter |
```
from pymongo import MongoClient
client = MongoClient('mongodb+srv://AmazonianSentiments:6pVOMaDeacyVgrre@amazoniansentiments.duy3v.mongodb.net/AmazonianSentiments?retryWrites=true&w=majority')
mydb = client["AmazonianSentiments"] # AmazonianSentiments is the database
mycol = mydb["AllBeauty"] # AllBeauty is the collection
import nltk
import pandas as pd
import numpy as np
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
nltk.download('stopwords')
nltk.download('punkt')
from pymongo import MongoClient
from pandas.io.json import json_normalize
cursor=mycol.find()
df = pd.json_normalize(cursor)
df
# replace fields that are entirely space (or empty) with NaN
df = df.replace(r'^\s*$', np.nan, regex=True)
nltk.download('stopwords')
#Create column of review text with all lowercase, no punctuation, and no stopwords
nan_value = float("NaN") #Create na variable for blanks
df["reviewText"].replace("", nan_value, inplace=True) #Replace blanks with na variable
df.dropna(subset = ["reviewText"], inplace=True) #Drop all rows with na review text
df["ReviewNoFiller"] = df["reviewText"].str.replace(r'[^\w\s]', '', regex=True) #Create column with review text with no punctuation
df["ReviewNoFiller"] = df["ReviewNoFiller"].str.lower() #Make all words lowercase
stopwords = stopwords.words('english')
df["ReviewNoFiller"] = df["ReviewNoFiller"].apply(lambda x: ' '.join([word for word in x.split() if word not in (stopwords)])) #Remove stop words
df["ReviewNoFiller"].replace("", nan_value, inplace=True) #Replace blanks with na
df.dropna(subset = ["ReviewNoFiller"], inplace=True) #Drop all rows with na review text
#Create column of summary text with all lowercase, no punctuation, and no stopwords
df["SummaryNoFiller"] = df["summary"].str.replace(r'[^\w\s]', '', regex=True) #Create column with summary text with no punctuation
df["SummaryNoFiller"] = df["SummaryNoFiller"].str.lower() #Make column all lowercase
df["SummaryNoFiller"] = df["SummaryNoFiller"].fillna("") # Remove NA values
df["SummaryNoFiller"] = df["SummaryNoFiller"].apply(lambda x: ' '.join([word for word in x.split() if word not in (stopwords)]))
#Insert columns with tokenized review and summary
df["ReviewToken"] =df.apply(lambda row: word_tokenize(row["ReviewNoFiller"]), axis=1)
df["SummaryToken"] = df.apply(lambda row: word_tokenize(row["SummaryNoFiller"]), axis=1)
#Insert column with review word count
df["WordCount"] = df["ReviewToken"].apply(len)
#Add column with Date from converted Unix time, remove redundant columns
#df["Date"] = pd.to_datetime(PetSuppliesDF["unixReviewTime"], unit='s')
#df = df.drop(['reviewTime', 'unixReviewTime'], axis=1)
#Add Column for Numbers of Reviews for Reviewer
ReviewCount = df.groupby('reviewerID').asin.nunique().to_frame()
ReviewCount.reset_index(inplace=True)
ReviewCount = ReviewCount.rename(columns = {'asin':'ReviewNum'})
df = df.merge(ReviewCount, on = 'reviewerID')
#Add Column for Average Star Rating for Reviewer
ReviewAvg = df.groupby('reviewerID')['overall'].mean().to_frame()
ReviewAvg.reset_index(inplace=True)
ReviewAvg = ReviewAvg.rename(columns = {'overall':'AvgRating'})
df = df.merge(ReviewAvg, on = 'reviewerID')
#Determine the gender in df
import gender_guesser.detector as gender
d = gender.Detector()
first_names = []
for i in range(len(df)):
name = str(df['reviewerName'].values[i]).split(' ', 1)[0]
first_names.append(name)
# lowercase everything
first_names = [k.lower() for k in first_names]
# capitalize the first letter of every name
first_names = [i.capitalize() for i in first_names]
genders = []
for i in first_names[0:len(first_names)]:
if d.get_gender(i) == 'male':
genders.append('male')
elif d.get_gender(i) == 'female':
genders.append('female')
else:
genders.append('unknown')
df['genders'] = genders
df
#Split dataframe into train (60%), validate (20%), and test (20%) dataframes with rows randomized
#train, validate, test = \
#np.split(df.sample(frac=1, random_state=42),
#[int(.6*len(df)), int(.8*len(df))])
#Print some of the dataframe to verify work
#pd.set_option('display.max_columns', None) #So as not to truncate output
#pd.set_option('display.max_rows', None) #So as not to truncate output
#for col in df.columns: #Print column names
#print(col)
#print(PetSuppliesDF.head(1)) # Print first entry in dataframe
# Write final dataframe into csv
#df.to_csv(r'PetSupplies.csv', index = False)
df
df['genders'].value_counts()
word_sent = pd.read_csv("https://raw.githubusercontent.com/JULIELab/EmoMap/master/coling18/main/lexicon_creation/lexicons/Warriner_BE.tsv", sep='\t')
print(word_sent)
word_sent
df
dfE = df.explode('ReviewToken')
dfE
dfE = dfE.merge(word_sent, left_on='ReviewToken', right_on='Word')
dfE = dfE.drop(df.columns[[11, 12, 13,14,15,16,17,18,19,20,21,22,23,24,25]], axis=1)
dfE
dfRI = dfE.groupby(dfE['reviewerID']).mean().reset_index()
dfRI
dfG = dfE.groupby(dfE['genders']).mean().reset_index()
dfG
dfst = dfE.groupby(dfE['genders']).std().reset_index()
dfst
d
dfE.info()
dfE[['vote',]]
```
| github_jupyter |
```
#### Weights from Michael Guerzhoy and Davi Frossard
# http://www.cs.toronto.edu/~guerzhoy/tf_alexnet/
import tensorflow as tf
import numpy as np
variable_data = np.load("saved_models/bvlc_alexnet.npy", encoding='bytes', allow_pickle=True).item()
type(variable_data)
conv1_preW = variable_data["conv1"][0]
conv1_preb = variable_data["conv1"][1]
print(conv1_preW.shape)
print(conv1_preb.shape)
conv2_preW = variable_data["conv2"][0]
conv2_preb = variable_data["conv2"][1]
print(conv2_preW.shape)
print(conv2_preb.shape)
conv3_preW = variable_data["conv3"][0]
conv3_preb = variable_data["conv3"][1]
print(conv3_preW.shape)
print(conv3_preb.shape)
conv4_preW = variable_data["conv4"][0]
conv4_preb = variable_data["conv4"][1]
print(conv4_preW.shape)
print(conv4_preb.shape)
conv5_preW = variable_data["conv5"][0]
conv5_preb = variable_data["conv5"][1]
print(conv5_preW.shape)
print(conv5_preb.shape)
fc6_preW = variable_data["fc6"][0]
fc6_preb = variable_data["fc6"][1]
print(fc6_preW.shape)
print(fc6_preb.shape)
fc7_preW = variable_data["fc7"][0]
fc7_preb = variable_data["fc7"][1]
print(fc7_preW.shape)
print(fc7_preb.shape)
fc8_preW = variable_data["fc8"][0]
fc8_preb = variable_data["fc8"][1]
print(fc8_preW.shape)
print(fc8_preb.shape)
import cat_dog_queue
pixel_depth = 255.0
resized_height = 227
resized_width = 227
num_channels = 3
graph = tf.Graph()
with graph.as_default():
x = tf.placeholder(tf.uint8, [None, None, None, num_channels],
name='input')
to_float = tf.cast(x, tf.float32)
resized = tf.image.resize_images(to_float, resized_height, resized_width)
# Convolution 1
with tf.name_scope('conv1') as scope:
kernel = tf.Variable(conv1_preW, name='weights')
biases = tf.Variable(conv1_preb, name='biases')
conv = tf.nn.conv2d(resized, kernel, [1, 4, 4, 1], padding="SAME")
bias = tf.nn.bias_add(conv, biases)
conv1 = tf.nn.relu(bias, name=scope)
    # Local response normalization 1
radius = 2
alpha = 2e-05
beta = 0.75
bias = 1.0
lrn1 = tf.nn.local_response_normalization(conv1,
depth_radius=radius,
alpha=alpha,
beta=beta,
bias=bias)
# Maxpool 1
pool1 = tf.nn.max_pool(lrn1,
ksize=[1, 3, 3, 1],
strides=[1, 2, 2, 1],
padding='VALID',
name='pool1')
# Convolution 2
with tf.name_scope('conv2') as scope:
kernel = tf.Variable(conv2_preW, name='weights')
biases = tf.Variable(conv2_preb, name='biases')
input_a, input_b = tf.split(split_dim=3,
num_split=2,
value=pool1)
kernel_a, kernel_b = tf.split(split_dim=3,
num_split=2,
value=kernel)
with tf.name_scope('A'):
conv_a = tf.nn.conv2d(input_a, kernel_a, [1, 1, 1, 1], padding="SAME")
with tf.name_scope('B'):
conv_b = tf.nn.conv2d(input_b, kernel_b, [1, 1, 1, 1], padding="SAME")
conv = tf.concat(3, [conv_a, conv_b])
bias = tf.nn.bias_add(conv, biases)
conv2 = tf.nn.relu(bias, name=scope)
# Local response normalization 2
radius = 2
alpha = 2e-05
beta = 0.75
bias = 1.0
lrn2 = tf.nn.local_response_normalization(conv2,
depth_radius=radius,
alpha=alpha,
beta=beta,
bias=bias)
# Maxpool 2
pool2 = tf.nn.max_pool(lrn2,
ksize=[1, 3, 3, 1],
strides=[1, 2, 2, 1],
padding='VALID',
name='pool2')
with tf.name_scope('conv3') as scope:
kernel = tf.Variable(conv3_preW, name='weights')
biases = tf.Variable(conv3_preb, name='biases')
conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding="SAME")
bias = tf.nn.bias_add(conv, biases)
conv3 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv4') as scope:
kernel = tf.Variable(conv4_preW, name='weights')
biases = tf.Variable(conv4_preb, name='biases')
input_a, input_b = tf.split(split_dim=3,
num_split=2,
value=conv3)
kernel_a, kernel_b = tf.split(split_dim=3,
num_split=2,
value=kernel)
with tf.name_scope('A'):
conv_a = tf.nn.conv2d(input_a, kernel_a, [1, 1, 1, 1], padding="SAME")
with tf.name_scope('B'):
conv_b = tf.nn.conv2d(input_b, kernel_b, [1, 1, 1, 1], padding="SAME")
conv = tf.concat(3, [conv_a, conv_b])
bias = tf.nn.bias_add(conv, biases)
conv4 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv5') as scope:
kernel = tf.Variable(conv5_preW, name='weights')
biases = tf.Variable(conv5_preb, name='biases')
input_a, input_b = tf.split(split_dim=3,
num_split=2,
value=conv4)
kernel_a, kernel_b = tf.split(split_dim=3,
num_split=2,
value=kernel)
with tf.name_scope('A'):
conv_a = tf.nn.conv2d(input_a, kernel_a, [1, 1, 1, 1], padding="SAME")
with tf.name_scope('B'):
conv_b = tf.nn.conv2d(input_b, kernel_b, [1, 1, 1, 1], padding="SAME")
conv = tf.concat(3, [conv_a, conv_b])
bias = tf.nn.bias_add(conv, biases)
conv5 = tf.nn.relu(bias, name=scope)
    # Maxpool 5
pool5 = tf.nn.max_pool(conv5,
ksize=[1, 3, 3, 1],
strides=[1, 2, 2, 1],
padding='VALID',
name='pool5')
# Fully connected 6
with tf.name_scope('fc6'):
weights = tf.Variable(fc6_preW, name='fc6_weights')
bias = tf.Variable(fc6_preb, name='fc6_bias')
shape = tf.shape(pool5)
size = shape[1] * shape[2] * shape[3]
fc6 = tf.nn.relu_layer(tf.reshape(pool5, [-1, size]),
weights, bias, name='relu')
# Fully connected 7
with tf.name_scope('fc7'):
weights = tf.Variable(fc7_preW, name='weights')
bias = tf.Variable(fc7_preb, name='bias')
fc7 = tf.nn.relu_layer(fc6, weights, bias, name='relu')
# Fully connected 8
with tf.name_scope('fc8'):
weights = tf.Variable(fc8_preW, name='weights')
bias = tf.Variable(fc8_preb, name='bias')
# fc8 = tf.matmul(fc7, weights) + bias
fc8 = tf.nn.xw_plus_b(fc7, weights, bias)
softmax = tf.nn.softmax(fc8)
init = tf.initialize_all_variables()
sess = tf.Session(graph=graph)
sess.run(init)
writer = tf.train.SummaryWriter('tensorboard/alexnet', graph=graph)
writer.close()
```
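The split/concat pattern in `conv2`, `conv4`, and `conv5` reproduces AlexNet's two-GPU grouped convolution: the input activations and the kernel are each halved along the channel axis, each half is convolved independently, and the results are concatenated. A minimal NumPy sketch of just the channel bookkeeping (hypothetical shapes standing in for the actual convolutions):

```python
import numpy as np

# Hypothetical activation and kernel shapes: NHWC input, HWIO kernel.
pool1 = np.zeros((1, 27, 27, 96))           # 96 input channels
kernel = np.zeros((5, 5, 48, 256))          # each group sees 48 of them

# Split both along the channel axis into two groups.
input_a, input_b = np.split(pool1, 2, axis=3)     # (1, 27, 27, 48) each
kernel_a, kernel_b = np.split(kernel, 2, axis=3)  # (5, 5, 48, 128) each

# Each group would be convolved separately, then the outputs concatenated:
out_a = np.zeros((1, 27, 27, 128))  # stand-in for conv(input_a, kernel_a)
out_b = np.zeros((1, 27, 27, 128))  # stand-in for conv(input_b, kernel_b)
conv = np.concatenate([out_a, out_b], axis=3)     # (1, 27, 27, 256)
```

The older `tf.split(split_dim=3, num_split=2, value=...)` calls in the graph above perform exactly this channel split.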
# Exporting the Entire Model
[Check out the official tutorial for more details](https://www.tensorflow.org/versions/master/how_tos/meta_graph/index.html)
```
with graph.as_default():
saver = tf.train.Saver()
save_path = saver.save(sess, 'saved_models/alex_vars')
```
| github_jupyter |
## Title: Wood_Fiber_Concessions
### Description
“Wood fiber concession” refers to an area allocated by a government or other body for establishment of fast-growing tree plantations for the production of timber and wood pulp for paper and paper products.<br>
This data set displays wood fiber concessions as a single layer assembled by aggregating concession data for multiple countries. The data may come from government agencies, NGOs, or other organizations and varies by date and data sources. For more information on concession data for each country please visit the Open Data Portal.<br>
This dataset was updated in September 2018 to revise the boundaries for APRIL.<br>
If you are aware of concession data for additional countries, please email us here.
### FLINT
This dataset has been pre-processed/checked and is suitable for use in FLINT. Please adhere to individual dataset licence conditions and citations. Processed data can be accessed here: https://datasets.mojaglobal.workers.dev/
### Format
<b>Extent: </b>Indonesia, Republic of Congo, Malaysia<br>
<b>Format:</b> Vector polygon geoJSON<br>
<b>Coordinate system:</b> EPSG:4326 (WGS84)<br>
<b>Year:</b> April 2018<br>
<b>Size:</b>
### Original source
Accessed from Global Forest Watch Portal: https://data.globalforestwatch.org/datasets/eea64218200743a9ab5d5625089d7256_0 31/01/2021. Feature Service: https://services2.arcgis.com/g8WusZB13b9OegfU/arcgis/rest/services/Wood_Fiber_Concessions/FeatureServer/0
### Licence
The dataset has too many sub-components for a single overarching licence to apply. Please cite the source and any changes made.
### Citation
Global Forest Watch Portal (Accessed 2020). Wood Fiber Concessions. https://data.globalforestwatch.org/datasets/eea64218200743a9ab5d5625089d7256_0 Accessed 31/01/2021 (note the multiple sources of data in this dataset).
Attribution:<br>
Indonesia: Ministry of Forestry, Asia Pulp and Paper, APRIL<br>
Republic of Congo: WRI & Ministry of Agriculture<br>
Sarawak & Sabah, Malaysia: Earthsight Investigations & Global Witness
### Metadata
View https://data.globalforestwatch.org/datasets/eea64218200743a9ab5d5625089d7256_0
### Notes
Limitations of use: This layer is a compilation of concession data from various countries and sources. The quality of these data can vary depending on the source. This layer may not include all existing concessions in a country, and the location of certain concessions can be inaccurate.<br>
Significant overlap and errors in geometry (duplicate vertices and non-simple geometry); the Indonesian data appears to combine multiple input resolutions, which has led to sliver gaps and overlaps. All overlaps are fixed using the code below (one polygon wins the contested area over another based on the longest shared edge), and small slivers of less than 2 ha are removed.
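The "longest shared edge wins" rule applied when eliminating slivers can be illustrated with a minimal pure-Python sketch (hypothetical sliver/neighbor data, not arcpy):

```python
# Each sliver records the length of boundary it shares with each neighbouring
# polygon; the sliver is merged into the neighbour with the longest shared edge,
# which is what arcpy's Eliminate tool does with the "LENGTH" option.
def absorb_sliver(shared_edges):
    """shared_edges: dict mapping neighbour polygon id -> shared border length (m)."""
    return max(shared_edges, key=shared_edges.get)

# A small sliver touching three concessions (lengths are made up):
winner = absorb_sliver({'APRIL_block_7': 420.0, 'APP_block_2': 95.5, 'state_forest': 12.0})
# winner == 'APRIL_block_7'
```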
### Processing
See below for arcpy code to fix geometry, gaps and overlaps. It is recommended that additional manual cleaning be undertaken.
```
import arcpy
import os
# Input variables
in_folder = r"C:\Users\LennyJenny\Documents\ArcGIS\world\UNFCCC\downloads\WoodFibreConcessions\woodfibreoriginal.gdb"
out_folder = r"C:\Users\LennyJenny\Documents\ArcGIS\world\UNFCCC\downloads\WoodFibreConcessions\json"
fullfield = "source" #this needs to be a field in the original table that is fully populated
smallest = "20000" #smallest area to be fixed in m2 - gaps and slivers
scr = arcpy.CreateFileGDB_management(out_folder, "scratch")
scr_folder = os.path.join(out_folder, "scratch.gdb")
# Environments
workspace = in_folder
arcpy.env.workspace = workspace
arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(4326)
arcpy.env.outputZFlag = "Disabled"
arcpy.env.overwriteOutput = True
field = fullfield + " IS NULL or " + fullfield + " = ''"
arcpy.env.parallelProcessingFactor = "100%"
# List features to process
featureclasses = arcpy.ListFeatureClasses()
print(featureclasses)
# Repair/check topology and make FLINT ready
for fc in featureclasses:
fcname = os.path.join(os.path.splitext(fc)[0])
outjson = os.path.join(out_folder, fcname)
whereclause = "FID_" + fcname + " =-1 AND AREA_GEO <= " + smallest +" Or AREA_GEO IS NULL"
print(fcname + ' processing...')
fLayer = "project_Layer"
arcpy.management.MakeFeatureLayer(fc, fLayer)
geomRepair = arcpy.management.RepairGeometry(fLayer, "DELETE_NULL", "OGC")[0]
projectIntersect = os.path.join(scr_folder, "projectIntersect")
arcpy.analysis.Intersect(fLayer, projectIntersect, "ONLY_FID")
projectSingle = os.path.join(scr_folder, "projectSingle")
arcpy.management.MultipartToSinglepart(projectIntersect, projectSingle)
dissolveSlither = os.path.join(scr_folder, "dissolveSlither")
arcpy.management.Dissolve(projectSingle, dissolveSlither, None, None,"SINGLE_PART")
# Take action if overlaps
if arcpy.management.GetCount(dissolveSlither)[0] == "0":
print('no overlaps detected...checking for gaps...')
projectUnion = os.path.join(scr_folder, "projectUnion")
arcpy.analysis.Union(fLayer,projectUnion, "ALL", None, "NO_GAPS")
arcpy.management.AddGeometryAttributes(projectUnion, "AREA_GEODESIC", None, "SQUARE_METERS")
uniSelect = os.path.join(scr_folder, "uniSelect")
arcpy.analysis.Select(projectUnion, uniSelect, whereclause)
if arcpy.management.GetCount(uniSelect)[0] == "0":
# Progress report no error
print(fcname, 'No gaps and overlaps. Repairing geometry and conversion to json...')
# Process: Repair Geometry (non-simple geometry)
geomRepair = arcpy.management.RepairGeometry(fLayer, "DELETE_NULL", "OGC")[0]
# Process: Features To JSON
arcpy.conversion.FeaturesToJSON(fLayer, outjson, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
print(outjson, '.geojson complete')
else:
# Take action if gaps
print('gaps detected')
appendGap = arcpy.management.Append(uniSelect, fLayer, "NO_TEST")
selectGap = arcpy.management.SelectLayerByAttribute(fLayer, "NEW_SELECTION", field)
fixedlyr = os.path.join(scr_folder, "fixedlyr")
arcpy.management.Eliminate(selectGap, fixedlyr, "LENGTH")
# Progress report
print(fcname, 'No overlaps but gaps detected and repaired. Repairing geometry and conversion to json...')
# Process: Repair Geometry (non-simple geometry)
geomRepair = arcpy.management.RepairGeometry(fixedlyr, "DELETE_NULL", "OGC")[0]
# Process: Features To JSON
arcpy.conversion.FeaturesToJSON(fixedlyr, outjson, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
else:
print('Overlaps detected...')
# Fix overlaps
projectErase = os.path.join(scr_folder, "projectErase")
arcpy.analysis.Erase(fLayer, dissolveSlither, projectErase)
arcpy.management.Append(dissolveSlither, projectErase, "NO_TEST")
selectSlither = arcpy.management.SelectLayerByAttribute(projectErase, "NEW_SELECTION", field)
eliminateSlither = os.path.join(scr_folder, "eliminateSlither")
arcpy.management.Eliminate(selectSlither, eliminateSlither, "LENGTH")
print('Overlaps detected and fixed...checking for gaps...')
projectUnion = os.path.join(scr_folder, "projectUnion")
arcpy.analysis.Union(eliminateSlither, projectUnion, "ALL", None, "NO_GAPS")
arcpy.management.AddGeometryAttributes(projectUnion, "AREA_GEODESIC", None, "SQUARE_METERS")
uniSelect = os.path.join(scr_folder, "uniSelect")
whereUnion= "FID_eliminateSlither = -1 AND AREA_GEO <=" + smallest + " OR AREA_GEO IS NULL"
arcpy.analysis.Select(projectUnion, uniSelect, whereUnion)
if arcpy.management.GetCount(uniSelect)[0] == "0":
# Progress report no error
print(fcname, ' No gaps detected. Repairing geometry and conversion to json...')
# Process: Repair Geometry (non-simple geometry)
geomRepair = arcpy.management.RepairGeometry(eliminateSlither, "DELETE_NULL", "OGC")[0]
# Process: Features To JSON
arcpy.conversion.FeaturesToJSON(eliminateSlither, outjson, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
print(outjson, '.geojson complete')
else:
# Take action if gaps
appendGap = arcpy.management.Append(uniSelect, eliminateSlither, "NO_TEST")
selectGap = arcpy.management.SelectLayerByAttribute(eliminateSlither, "NEW_SELECTION", field)
fixedlyr = os.path.join(scr_folder, "fixedlyr")
arcpy.management.Eliminate(selectGap, fixedlyr, "LENGTH")
print('gaps detected and repaired')
# Progress report
print(fcname, 'Gaps and overlaps fixed. Repairing geometry and conversion to json...')
# Process: Repair Geometry (non-simple geometry)
geomRepair = arcpy.management.RepairGeometry(fixedlyr, "DELETE_NULL", "OGC")[0]
# Process: Features To JSON
arcpy.conversion.FeaturesToJSON(fixedlyr, outjson, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
MYS = arcpy.management.SelectLayerByAttribute(fixedlyr, "NEW_SELECTION", "COUNTRY = 'MYS'", None)
MYSOUTJSON = os.path.join(out_folder, fcname + "_MYS")
arcpy.conversion.FeaturesToJSON(MYS, MYSOUTJSON, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
IND = arcpy.management.SelectLayerByAttribute(fixedlyr, "NEW_SELECTION", "COUNTRY = 'IND'", None)
INDOUTJSON = os.path.join(out_folder, fcname + "_IND")
arcpy.conversion.FeaturesToJSON(IND, INDOUTJSON, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
COG = arcpy.management.SelectLayerByAttribute(fixedlyr, "NEW_SELECTION", "COUNTRY = 'COG'", None)
COGOUTJSON = os.path.join(out_folder, fcname + "_COG")
arcpy.conversion.FeaturesToJSON(COG, COGOUTJSON, "NOT_FORMATTED", "NO_Z_VALUES", "NO_M_VALUES", "GEOJSON", "WGS84", "USE_FIELD_NAME")
arcpy.AddMessage("All done!")
print('done')
```
| github_jupyter |
# **Installing and Initializing Spark on Google Colab**
```
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://downloads.apache.org/spark/spark-3.0.2/spark-3.0.2-bin-hadoop2.7.tgz
!tar xf spark-3.0.2-bin-hadoop2.7.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-3.0.2-bin-hadoop2.7"
import findspark
findspark.find()
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder\
.master("local")\
.appName("Colab")\
.config('spark.ui.port', '4050')\
.getOrCreate()
from pyspark.sql import SQLContext
sqlContext = SQLContext(spark)
spark
```
# **Reading CSV File using Spark**
```
df = spark.read.csv("/content/drive/MyDrive/spark_file/Airports2.csv", header=True, inferSchema=True)
df.registerTempTable('df')
```
# **Basic Insights into Data**
```
df.printSchema()
df.count()
df.describe().show()
```
# **Spark Transformation and Action Operations**
```
df.show(5)
df.select("Origin_airport","Destination_airport","Passengers","Seats").show(15)
from pyspark.sql import functions as F
from pyspark.sql.functions import col
from pyspark.sql.functions import desc
airportAgg_DF = df.groupBy("Origin_airport").agg(F.sum("Passengers"))
airportAgg_DF.show(10)
```
# **Spark SQL**
## **Highest Flight Departures Airport**
```
originAirports = sqlContext.sql("""select Origin_Airport, sum(Flights) as Flights
from df group by Origin_Airport order by sum(Flights) DESC limit 10""")
originAirports.show()
```
## **Highest Passenger Arrival Airport**
```
destinationAirports = sqlContext.sql("""select Destination_airport, sum(Passengers) as Passengers
from df group by Destination_airport order by sum(Passengers) DESC limit 10""")
destinationAirports.show()
```
## **Airports with Most Flights**
```
MostFlightsByAirports = sqlContext.sql("""with destination as (select Destination_airport as Airport, sum(Flights) as Out_Flights
from df group by Destination_airport),
origin as (select Origin_airport as Airport, sum(Flights) as In_Flights
from df group by Origin_airport)
select origin.Airport, (destination.Out_Flights+origin.In_Flights) as Total_Flights
from origin, destination
where origin.Airport = destination.Airport
order by (origin.In_Flights + destination.Out_Flights) DESC
limit 15""")
MostFlightsByAirports.show()
```
## **Airports with Most Passengers**
```
MostPassengersByAirports = sqlContext.sql("""with destination as (select Destination_airport as Airport, sum(Passengers) as Out_Passengers
from df group by Destination_airport),
origin as (select Origin_airport as Airport, sum(Passengers) as In_Passengers
from df group by Origin_airport)
select origin.Airport, (destination.Out_Passengers+origin.In_Passengers) as Total_Passengers
from origin, destination
where origin.Airport = destination.Airport
order by (origin.In_Passengers + destination.Out_Passengers) DESC
limit 15""")
MostPassengersByAirports.show()
```
## **Occupancy Rates for Routes with Most Flights**
```
distanceQuery = sqlContext.sql("""with table1 as
(select least(Origin_airport, Destination_airport) as Airport1,
greatest(Destination_airport, Origin_airport) as Airport2,
sum(Flights) as Flights,
sum(Passengers) as Passengers,
sum(Seats) as Seats
from df
group by least(Origin_airport, Destination_airport), greatest(Destination_airport, Origin_airport)
order by 1,2)
select t.*, (Passengers*100/Seats) as Occupancy_Rate
from table1 t
order by Flights DESC, Seats DESC, Passengers DESC, Occupancy_Rate DESC
limit 15;""")
distanceQuery = distanceQuery.filter((col("Occupancy_Rate").isNotNull()) & (col("Occupancy_Rate")<=100.0))
distanceQuery.show(15)
```
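The occupancy rate computed in the query above is simply `Passengers * 100 / Seats` per aggregated route, with a final filter dropping nulls and impossible rates above 100%. A pure-Python check with toy numbers (hypothetical route data, not the Airports2 dataset):

```python
# Toy per-route totals after grouping (made-up values).
routes = [
    {'route': ('ORD', 'LAX'), 'passengers': 18000, 'seats': 24000},
    {'route': ('JFK', 'SFO'), 'passengers': 9900,  'seats': 9000},   # bad data: >100%
]

for r in routes:
    r['occupancy_rate'] = r['passengers'] * 100 / r['seats']

# Mirror the DataFrame filter: keep only rates that are non-null and <= 100.
clean = [r for r in routes if r['occupancy_rate'] <= 100.0]
# clean contains only the ORD-LAX route, at 75% occupancy.
```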
## **Number of Flights vs Distance - Part 1**
```
distanceQuery = sqlContext.sql("""with table1 as
(select least(Origin_airport, Destination_airport) as Airport1,
greatest(Destination_airport, Origin_airport) as Airport2,
mean(Distance) as Distance,
sum(Flights) as Flights
from df
group by least(Origin_airport, Destination_airport), greatest(Destination_airport, Origin_airport)
order by 1,2)
select t.*
from table1 t
where Flights>0
order by Distance DESC
limit 15;""")
# distanceQuery = distanceQuery.filter((col("Occupancy_Rate").isNotNull()) & (col("Occupancy_Rate")<=100.0))
distanceQuery.show(15)
```
## **Number of Flights vs Distance - Part 2**
```
distanceQuery = sqlContext.sql("""with table1 as
(select least(Origin_airport, Destination_airport) as Airport1,
greatest(Destination_airport, Origin_airport) as Airport2,
mean(Distance) as Distance,
sum(Flights) as Flights
from df
group by least(Origin_airport, Destination_airport), greatest(Destination_airport, Origin_airport)
order by 1,2)
select t.*
from table1 t
where Flights>0
order by Flights DESC
limit 15;""")
# distanceQuery = distanceQuery.filter((col("Occupancy_Rate").isNotNull()) & (col("Occupancy_Rate")<=100.0))
distanceQuery.show(15)
```
| github_jupyter |
```
import numpy as np
from cluster_algorithms import base_kmeans
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import silhouette_score, davies_bouldin_score, calinski_harabasz_score
from scipy.spatial import Voronoi, voronoi_plot_2d
import time
data_files_path = '../data_files/data17_13TeV.AllPeriods.sgn.probes_lhmedium_EGAM2.bkg.VProbes_EGAM7.GRL_v97/'
file_name = 'data17_13TeV.AllPeriods.sgn.probes_lhmedium_EGAM2.bkg.VProbes_EGAM7.GRL_v97_et0_eta0.npz'
plots_path = '../clustering_plots/'
my_seed = 13
def add_subplot_axes(ax,rect,axisbg='w'):
fig = plt.gcf()
box = ax.get_position()
width = box.width
height = box.height
inax_position = ax.transAxes.transform(rect[0:2])
transFigure = fig.transFigure.inverted()
infig_position = transFigure.transform(inax_position)
x = infig_position[0]
y = infig_position[1]
width *= rect[2]
    height *= rect[3]
subax = fig.add_axes([x,y,width,height],facecolor=axisbg)
x_labelsize = subax.get_xticklabels()[0].get_size()
y_labelsize = subax.get_yticklabels()[0].get_size()
x_labelsize = rect[2]*0.5
y_labelsize = rect[3]*0.5
#subax.xaxis.set_tick_params(labelsize=x_labelsize)
#subax.yaxis.set_tick_params(labelsize=y_labelsize)
return subax
def plot_div_evo(al_object, breg_div, tag, path=plots_path):
plt.figure(figsize=(10,8))
ax = plt.gca()
ax.plot(range(al_object.get_last_iter()), al_object.get_sum_total_div(), '--o', c='g')
ax.set_title('Total sum of the %s divergence' %(breg_div), fontsize=18)
ax.set_ylabel(r'$D_{\phi}[C: D]$', fontsize=10)
    ax.set_xlabel(r'Iterations', fontsize=10)
ax.set_xticks(np.arange(1, al_object.get_last_iter()+ 1))
plt.grid()
ax2 = add_subplot_axes(ax, rect=[.3, .3, .6, .6])
ax2.plot(range(al_object.get_last_iter()), al_object.get_sum_total_div(), '--o', c='g')
ax2.set_ylabel(r'$D_{\phi}[C: D]$', fontsize=15)
    ax2.set_xlabel(r'Iterations', fontsize=15)
#ax2.set_xticks(np.arange(1, al_object.get_last_iter()+ 1))
ax2.set_xlim([0, 8])
ax2.grid()
plt.savefig(path+'sum_total_divergence_ev_'+tag, dpi=100)
plt.close()
def plot_voronoi2D_diagram(al_object, X, classes, divergence, tag, path=plots_path):
centers = al_object.get_centroids()
# Get the Voronoi diagrams
vor = Voronoi(centers)
ax_lim = [np.min(X, axis=0), np.max(X, axis=0)]
fig, axes = plt.subplots(1, 1, figsize=(10,8))
# Draw data using target to colorize them
dict_label = {
0 : ('red','Background'),
1 : ('blue','Signal')
}
for i in np.unique(classes):
axes.scatter(X[classes==i, 0], X[classes==i, 1], c=dict_label[i][0],
edgecolor='k', s=35, alpha=.5, label=dict_label[i][1])
# Draw the centroids
axes.plot(centers[:,0], centers[:,1], '^', c='black', markersize=15, label='Final Centroids')
# Draw voronoi
voronoi_plot_2d(vor, ax=axes, line_colors='darkorange', line_width=3, show_points=False, show_vertices=True)
plt.title('Obtained Clusters for %s divergence' %(divergence), fontsize=18)
plt.grid()
plt.legend(loc='best', fontsize='x-large')
plt.xlim([ax_lim[0][0], ax_lim[1][0]])
plt.ylim([ax_lim[0][1], ax_lim[1][1]])
plt.xticks(fontsize=13)
plt.yticks(fontsize=13)
plt.xlabel(r'$\langle\mu\rangle$', fontsize=15)
plt.ylabel(r'$E_T$', fontsize=13)
plt.savefig(path+'voronoi_diagram_'+tag, dpi=100)
plt.close()
jpsi_data = dict(np.load(data_files_path+file_name))
jpsi_data.keys()
list_of_features = list(jpsi_data['features'])
print(list_of_features)
var_indexes = [list_of_features.index('avgmu'),
list_of_features.index('L2Calo_et'),]
data_ = jpsi_data['data'][:, var_indexes]
my_filter = (data_[:,0] <= 80)
sgn_filter = jpsi_data['target'][my_filter]==1
bkg_filter = jpsi_data['target'][my_filter]==0
data_ = data_[my_filter,:]
y = jpsi_data['target'][my_filter]
print(data_.shape)
# Sample 800 signal and 800 background events; np.where maps the class masks
# to row indices of data_ so the sampled indices address the full array.
sgn_choices_filter = np.random.choice(np.where(sgn_filter)[0], size=800)
bkg_choices_filter = np.random.choice(np.where(bkg_filter)[0], size=800)
choices_filter = np.concatenate((sgn_choices_filter, bkg_choices_filter))
data_ = data_[choices_filter]
y = y[choices_filter]
print(data_.shape)
GeV = 1e3
epsilon = 1e-1
data_[:, 1] = data_[:, 1]/GeV
#data_[data_[:,0] == 0, 0] = data_[data_[:,0] == 0, 0] + epsilon
n_clusters = [3, 4, 5]
n_folds = 10
divs = ['euclidean', 'exp', 'itakura-saito', 'gen_kl']#, 'gen_kls', 'gen_js']
cluster_measures = {
'silhouette_score' : silhouette_score,
'davies_bouldin_score' : davies_bouldin_score,
'calinski_harabasz_score' : calinski_harabasz_score
}
kf = KFold(n_splits=n_folds, shuffle=True, random_state=13)
CVO = list(kf.split(data_))
cv_dict = {}
for idiv in divs:
cv_dict[idiv] = {}
for idx, ifold in enumerate(CVO):
trn_id, tst_id = ifold
scaler = MinMaxScaler(feature_range=(epsilon, 1))
scaler.fit(data_[trn_id])
norm_data = scaler.transform(data_)
cv_dict[idiv][idx] = {}
for icluster in n_clusters:
#print('Clustering with %i clusters using %s divergence in %i Fold...' %(icluster, idiv, idx))
cv_dict[idiv][idx][icluster] = {}
kmeans = base_kmeans(n_clusters=icluster)
kmeans.fit(norm_data, n_iter=50, tol=1e-3, breg_div=idiv)
plot_div_evo(kmeans, breg_div=idiv, tag='%s_%i_fold_%i_cluster' %(idiv, idx, icluster))
plot_voronoi2D_diagram(kmeans, X=norm_data, classes=y, divergence=idiv,
tag='%s_%i_fold_%i_cluster' %(idiv, idx, icluster))
predicted_labels = kmeans.predict_cluster(norm_data[tst_id])
for imeasure in cluster_measures.keys():
cv_dict[idiv][idx][icluster][imeasure] = cluster_measures[imeasure](norm_data[tst_id],
predicted_labels)
info_cluster_dict = {
'bregman_divergence' : [],
'n_cluster' : [],
'silhouette_score' : [],
'davies_bouldin_score' : [],
'calinski_harabasz_score' : [],
}
for idiv in cv_dict.keys():
for ifold in cv_dict[idiv].keys():
for icluster in cv_dict[idiv][ifold].keys():
info_cluster_dict['bregman_divergence'].append(idiv)
info_cluster_dict['n_cluster'].append(icluster)
for jmeasure in cluster_measures.keys():
info_cluster_dict[jmeasure].append(cv_dict[idiv][ifold][icluster][jmeasure])
import pandas as pd
clus_df = pd.DataFrame(info_cluster_dict)
my_measure = list(cluster_measures.keys())
clus_df.head()
cv_table = clus_df.groupby(['bregman_divergence', 'n_cluster'])[my_measure].agg(['mean', 'std'])
cv_table
cv_table.round(2)
```
* The best-performing divergences were the Euclidean and the exponential;
* Itakura-Saito obtained the worst results across all indices;
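For reference, the divergences compared above can be written out directly; a minimal NumPy sketch of three of them (squared Euclidean, generalized KL, and Itakura-Saito), assuming strictly positive inputs as enforced by the `MinMaxScaler(feature_range=(epsilon, 1))` step:

```python
import numpy as np

def squared_euclidean(x, c):
    # Bregman divergence of phi(x) = ||x||^2.
    return np.sum((x - c) ** 2)

def generalized_kl(x, c):
    # Bregman divergence of the negative-entropy generator; requires x, c > 0.
    return np.sum(x * np.log(x / c) - x + c)

def itakura_saito(x, c):
    # Bregman divergence of -log(x); heavily penalizes c much smaller than x.
    return np.sum(x / c - np.log(x / c) - 1)

x = np.array([0.5, 0.2])
c = np.array([0.4, 0.3])
```

All three vanish when `x == c` and are non-negative otherwise, which is what makes each of them a valid distortion measure for Bregman k-means.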
```
cv_table.round(2).to_excel('../data_files/clusterization_table.xlsx')
scaler = MinMaxScaler(feature_range=(epsilon, 1))
norm_data = scaler.fit_transform(data_)
icluster = 3
for idiv in divs:
kmeans = base_kmeans(n_clusters=icluster)
kmeans.fit(norm_data, n_iter=50, tol=1e-3, breg_div=idiv)
plot_div_evo(kmeans, breg_div=idiv, tag='%s_%i_cluster_operation' %(idiv, icluster))
plot_voronoi2D_diagram(kmeans, X=norm_data, classes=y, divergence=idiv,
tag='%s_%i_cluster_operation' %(idiv, icluster))
```
| github_jupyter |
##### Copyright 2020 Google
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Precomputed analysis
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/experiments/qaoa/precomputed_analysis"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/qaoa/precomputed_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/qaoa/precomputed_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/qaoa/precomputed_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Use precomputed optimal angles to measure the expected value of $\langle C \rangle$ across a variety of problem types, sizes, $p$-depth, and random instances.
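Concretely, for an Ising-style cost $C = \sum_{\langle i,j\rangle} w_{ij} z_i z_j$ with $z_i = \pm 1$, the estimate of $\langle C \rangle$ is just the mean cost over sampled bitstrings. A minimal NumPy sketch on a hypothetical 3-node problem (illustrative only, not the ReCirq pipeline; sign conventions may differ):

```python
import numpy as np

def energy_estimate(bitstrings, edges):
    """Mean Ising cost sum_{(i,j,w)} w * z_i * z_j over samples (bit 0 -> +1, bit 1 -> -1)."""
    z = 1 - 2 * np.asarray(bitstrings)
    return float(np.mean(sum(w * z[:, i] * z[:, j] for i, j, w in edges)))

# Hypothetical triangle graph with unit weights and four measured bitstrings.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
samples = [[0, 0, 0], [0, 1, 0], [0, 1, 0], [1, 0, 1]]
print(energy_estimate(samples, edges))  # prints 0.0
```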
## Setup
Install the ReCirq package:
```
try:
import recirq
except ImportError:
!pip install git+https://github.com/quantumlib/ReCirq
```
Now import Cirq, ReCirq and the module dependencies:
```
import recirq
import cirq
import numpy as np
import pandas as pd
```
## Load the raw data
Go through each record, load in supporting objects, flatten everything into records, and put into a massive dataframe.
```
from recirq.qaoa.experiments.precomputed_execution_tasks import \
DEFAULT_BASE_DIR, DEFAULT_PROBLEM_GENERATION_BASE_DIR, DEFAULT_PRECOMPUTATION_BASE_DIR
records = []
for record in recirq.iterload_records(dataset_id="2020-03-tutorial", base_dir=DEFAULT_BASE_DIR):
dc_task = record['task']
apre_task = dc_task.precomputation_task
pgen_task = apre_task.generation_task
problem = recirq.load(pgen_task, base_dir=DEFAULT_PROBLEM_GENERATION_BASE_DIR)['problem']
record['problem'] = problem.graph
record['problem_type'] = problem.__class__.__name__
record['optimum'] = recirq.load(apre_task, base_dir=DEFAULT_PRECOMPUTATION_BASE_DIR)['optimum']
record['bitstrings'] = record['bitstrings'].bits
recirq.flatten_dataclass_into_record(record, 'task')
recirq.flatten_dataclass_into_record(record, 'precomputation_task')
recirq.flatten_dataclass_into_record(record, 'generation_task')
recirq.flatten_dataclass_into_record(record, 'optimum')
records.append(record)
df_raw = pd.DataFrame(records)
df_raw['timestamp'] = pd.to_datetime(df_raw['timestamp'])
df_raw.head()
```
## Narrow down to relevant data
Drop unnecessary metadata and use bitstrings to compute the expected value of the energy. In general, it's better to save the raw data and lots of metadata so we can use it if it becomes necessary in the future.
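The permutation built below in `compute_energy_w_err` maps each requested qubit back to its column in the measured bitstrings; the idiom is just an index lookup between two orderings. A small self-contained illustration (hypothetical qubit labels):

```python
# Hypothetical qubit labels: the order they were requested in, and the order
# they ended up in after circuit routing.
qubits = ['q0', 'q1', 'q2', 'q3']
final_qubits = ['q2', 'q0', 'q3', 'q1']

# For each requested qubit, find its column in the measured bitstrings.
permutation = [final_qubits.index(q) for q in qubits]
print(permutation)  # prints [1, 3, 0, 2]
```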
```
from recirq.qaoa.simulation import hamiltonian_objectives, hamiltonian_objective_avg_and_err
import cirq.google as cg
def compute_energy_w_err(row):
permutation = []
for i, q in enumerate(row['qubits']):
fi = row['final_qubits'].index(q)
permutation.append(fi)
energy, err = hamiltonian_objective_avg_and_err(row['bitstrings'], row['problem'], permutation)
return pd.Series([energy, err], index=['energy', 'err'])
# Start cleaning up the raw data
df = df_raw.copy()
# Don't need these columns for present analysis
df = df.drop(['gammas', 'betas', 'circuit', 'violation_indices',
'precomputation_task.dataset_id',
'generation_task.dataset_id',
'generation_task.device_name'], axis=1)
# p is specified twice (from a parameter and from optimum)
assert (df['optimum.p'] == df['p']).all()
df = df.drop('optimum.p', axis=1)
# Compute energies
df = df.join(df.apply(compute_energy_w_err, axis=1))
df = df.drop(['bitstrings', 'qubits', 'final_qubits', 'problem'], axis=1)
# Normalize
df['energy_ratio'] = df['energy'] / df['min_c']
df['err_ratio'] = df['err'] * np.abs(1/df['min_c'])
df['f_val_ratio'] = df['f_val'] / df['min_c']
df
```
## Plots
```
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.set_style('ticks')
plt.rc('axes', labelsize=16, titlesize=16)
plt.rc('xtick', labelsize=14)
plt.rc('ytick', labelsize=14)
plt.rc('legend', fontsize=14, title_fontsize=16)
# theme colors
QBLUE = '#1967d2'
QRED = '#ea4335ff'
QGOLD = '#fbbc05ff'
QGREEN = '#34a853ff'
QGOLD2 = '#ffca28'
QBLUE2 = '#1e88e5'
C = r'\langle C \rangle'
CMIN = r'C_\mathrm{min}'
COVERCMIN = f'${C}/{CMIN}$'
def percentile(n):
def percentile_(x):
return np.nanpercentile(x, n)
percentile_.__name__ = 'percentile_%s' % n
return percentile_
```
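The `percentile(n)` helper defined above is a closure factory: pandas' `groupby(...).agg` names result columns after each function's `__name__`, so setting a distinct name lets several percentiles coexist in one aggregation. A usage sketch on toy data (assuming pandas is available):

```python
import numpy as np
import pandas as pd

def percentile(n):
    def percentile_(x):
        return np.nanpercentile(x, n)
    percentile_.__name__ = 'percentile_%s' % n
    return percentile_

toy = pd.DataFrame({'g': ['a', 'a', 'b', 'b'], 'v': [1.0, 3.0, 10.0, 30.0]})
out = toy.groupby('g')['v'].agg(['mean', percentile(25), percentile(75)])
# Columns: 'mean', 'percentile_25', 'percentile_75'
```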
### Raw swarm plots of all data
```
import numpy as np
from matplotlib import pyplot as plt
pretty_problem = {
'HardwareGridProblem': 'Hardware Grid',
'SKProblem': 'SK Model',
'ThreeRegularProblem': '3-Regular MaxCut'
}
for problem_type in ['HardwareGridProblem', 'SKProblem', 'ThreeRegularProblem']:
    df1 = df
    df1 = df1[df1['problem_type'] == problem_type]
    for p in sorted(df1['p'].unique()):
        dfb = df1
        dfb = dfb[dfb['p'] == p]
        dfb = dfb.sort_values(by='n_qubits')
        plt.subplots(figsize=(7, 5))
        n_instances = dfb.groupby('n_qubits').count()['energy_ratio'].unique()
        if len(n_instances) == 1:
            n_instances = n_instances[0]
            label = f'{n_instances}'
        else:
            label = f'{min(n_instances)} - {max(n_instances)}'
        # sns.boxplot(x=dfb['n_qubits'], y=dfb['energy_ratio'], color=QBLUE, saturation=1)
        # sns.boxplot(x=dfb['n_qubits'], y=dfb['f_val_ratio'], color=QGREEN, saturation=1)
        sns.swarmplot(x=dfb['n_qubits'], y=dfb['energy_ratio'], color=QBLUE)
        sns.swarmplot(x=dfb['n_qubits'], y=dfb['f_val_ratio'], color=QGREEN)
        plt.axhline(1, color='grey', ls='-')
        plt.axhline(0, color='grey', ls='-')
        plt.title(f'{pretty_problem[problem_type]}, {label} instances, p={p}')
        plt.xlabel('# Qubits')
        plt.ylabel(COVERCMIN)
        plt.tight_layout()
        plt.show()
```
### Compare SK and hardware grid vs. n
```
pretty_problem = {
'HardwareGridProblem': 'Hardware Grid',
'SKProblem': 'SK Model',
'ThreeRegularProblem': '3-Regular MaxCut'
}
df1 = df
df1 = df1[
((df1['problem_type'] == 'SKProblem') & (df1['p'] == 3))
| ((df1['problem_type'] == 'HardwareGridProblem') & (df1['p'] == 3))
]
df1 = df1.sort_values(by='n_qubits')
MINQ = 3
df1 = df1[df1['n_qubits'] >= MINQ]
plt.subplots(figsize=(8, 6))
plt.xlim((8, 23))
# SK
dfb = df1
dfb = dfb[dfb['problem_type'] == 'SKProblem']
sns.swarmplot(x=dfb['n_qubits'], y=dfb['energy_ratio'], s=5, linewidth=0.5, edgecolor='k', color=QRED)
sns.swarmplot(x=dfb['n_qubits'], y=dfb['f_val_ratio'], s=5, linewidth=0.5, edgecolor='k', color=QRED,
              marker='s')
dfg = dfb.groupby('n_qubits').mean().reset_index()
# --------
# Hardware
dfb = df1
dfb = dfb[dfb['problem_type'] == 'HardwareGridProblem']
sns.swarmplot(x=dfb['n_qubits'], y=dfb['energy_ratio'], s=5, linewidth=0.5, edgecolor='k', color=QBLUE)
sns.swarmplot(x=dfb['n_qubits'], y=dfb['f_val_ratio'], s=5, linewidth=0.5, edgecolor='k', color=QBLUE,
              marker='s')
dfg = dfb.groupby('n_qubits').mean().reset_index()
# -------
plt.axhline(1, color='grey', ls='-')
plt.axhline(0, color='grey', ls='-')
plt.xlabel('# Qubits')
plt.ylabel(COVERCMIN)
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
from matplotlib.legend_handler import HandlerTuple
lelements = [
Line2D([0], [0], color=QBLUE, marker='o', ms=7, ls='', ),
Line2D([0], [0], color=QRED, marker='o', ms=7, ls='', ),
Line2D([0], [0], color='k', marker='s', ms=7, ls='', markerfacecolor='none'),
Line2D([0], [0], color='k', marker='o', ms=7, ls='', markerfacecolor='none'),
]
plt.legend(lelements, ['Hardware Grid', 'SK Model', 'Noiseless', 'Experiment'], loc='best',
           title='p = 3',
           handler_map={tuple: HandlerTuple(ndivide=None)}, framealpha=1.0)
plt.tight_layout()
plt.show()
```
### Hardware grid vs. p
```
dfb = df
dfb = dfb[dfb['problem_type'] == 'HardwareGridProblem']
dfb = dfb[['p', 'instance_i', 'n_qubits', 'energy_ratio', 'f_val_ratio']]
P_LIMIT = max(dfb['p'])
def max_over_p(group):
    i = group['energy_ratio'].idxmax()
    return group.loc[i][['energy_ratio', 'p']]

def count_p(group):
    new = {}
    for i, c in enumerate(np.bincount(group['p'], minlength=P_LIMIT + 1)):
        if i == 0:
            continue
        new[f'p{i}'] = c
    return pd.Series(new)
dfgy = dfb.groupby(['n_qubits', 'instance_i']).apply(max_over_p).reset_index()
dfgz = dfgy.groupby(['n_qubits']).apply(count_p).reset_index()
# In the paper, we restrict to n > 10
# dfgz = dfgz[dfgz['n_qubits'] > 10]
dfgz = dfgz.set_index('n_qubits').sum(axis=0)
dfgz /= (dfgz.sum())
dfgz
dfb = df
dfb = dfb[dfb['problem_type'] == 'HardwareGridProblem']
dfb = dfb[['p', 'instance_i', 'n_qubits', 'energy_ratio', 'f_val_ratio']]
# In the paper, we restrict to n > 10
# dfb = dfb[dfb['n_qubits'] > 10]
dfg = dfb.groupby('p').agg(['median', percentile(25), percentile(75), 'mean', 'std']).reset_index()
plt.subplots(figsize=(5.5,4))
plt.errorbar(x=dfg['p'], y=dfg['f_val_ratio', 'mean'],
yerr=(dfg['f_val_ratio', 'std'],
dfg['f_val_ratio', 'std']),
fmt='o-',
capsize=7,
color=QGREEN,
label='Noiseless'
)
plt.errorbar(x=dfg['p'], y=dfg['energy_ratio', 'mean'],
yerr=(dfg['energy_ratio', 'std'],
dfg['energy_ratio', 'std']),
fmt='o-',
capsize=7,
color=QBLUE,
label='Experiment'
)
plt.xlabel('p')
plt.ylabel('Mean ' + COVERCMIN)
plt.ylim((0, 1))
plt.text(0.05, 0.9, r'Hardware Grid', fontsize=16, transform=plt.gca().transAxes, ha='left', va='bottom')
plt.legend(loc='center right')
ax2 = plt.gca().twinx() # instantiate a second axes that shares the same x-axis
dfgz_p = [int(s[1:]) for s in dfgz.index]
dfgz_y = dfgz.values
ax2.bar(dfgz_p, dfgz_y, color=QBLUE, width=0.9, lw=1, ec='k')
ax2.tick_params(axis='y')
ax2.set_ylim((0, 2))
ax2.set_yticks([0, 0.25, 0.50])
ax2.set_yticklabels(['0%', None, '50%'])
ax2.set_ylabel('Fraction best' + ' ' * 41, fontsize=14)
plt.tight_layout()
```
<center>
<img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# **Data Wrangling Lab**
Estimated time needed: **45 to 60** minutes
In this assignment you will be performing data wrangling.
## Objectives
In this lab you will perform the following:
* Identify duplicate values in the dataset.
* Remove duplicate values from the dataset.
* Identify missing values in the dataset.
* Impute the missing values in the dataset.
* Normalize data in the dataset.
<hr>
## Hands on Lab
Import pandas module.
```
import pandas as pd
```
Load the dataset into a dataframe.
```
df = pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m1_survey_data.csv")
```
## Finding duplicates
In this section you will identify duplicate values in the dataset.
Find how many duplicate rows exist in the dataframe.
```
# your code goes here
len(df[df.duplicated()])
```
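On a toy frame, you can see the convention `duplicated()` uses: it flags every occurrence after the first, so the count above is the number of redundant rows, not the number of distinct repeated rows.

```
import pandas as pd

# Toy frame: the row (1, 'x') appears twice.
toy = pd.DataFrame({'a': [1, 1, 2, 2, 3],
                    'b': ['x', 'x', 'y', 'z', 'x']})
n_dupes = len(toy[toy.duplicated()])   # flags occurrences after the first
deduped = toy.drop_duplicates()        # keeps the first copy of each row
print(n_dupes, len(deduped))
```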
## Removing duplicates
Remove the duplicate rows from the dataframe.
```
# your code goes here
df = df.drop_duplicates()
```
Verify if duplicates were actually dropped.
```
# your code goes here
len(df[df.duplicated()])
```
## Finding Missing values
Find the missing values for all columns.
```
# your code goes here
df.isnull().sum().sort_values(ascending=False)
```
Find out how many rows are missing in the column 'WorkLoc'.
```
# your code goes here
df["WorkLoc"].isnull().sum()
```
## Imputing missing values
Find the value counts for the column WorkLoc.
```
# your code goes here
df["WorkLoc"].value_counts()
```
Identify the value that is most frequent (majority) in the WorkLoc column.
```
# Make a note of the majority value here, for future reference
df["WorkLoc"].value_counts().idxmax()
```
Impute (replace) all the empty rows in the column WorkLoc with the value that you have identified as majority.
```
# your code goes here
df["WorkLoc"] = df["WorkLoc"].fillna("Office")
```
After imputation there should ideally not be any empty rows in the WorkLoc column.
Verify if imputing was successful.
```
# your code goes here
df["WorkLoc"].isnull().sum()
```
## Normalizing data
There are two columns in the dataset that talk about compensation.
One is "CompFreq". This column shows how often a developer is paid (Yearly, Monthly, Weekly).
The other is "CompTotal". This column talks about how much the developer is paid per Year, Month, or Week depending upon his/her "CompFreq".
This makes it difficult to compare the total compensation of the developers.
In this section you will create a new column called 'NormalizedAnnualCompensation' which contains the 'Annual Compensation' irrespective of the 'CompFreq'.
Once this column is ready, it makes comparison of salaries easy.
<hr>
List out the various categories in the column 'CompFreq'
```
# your code goes here
df["CompFreq"].unique()
```
Create a new column named 'NormalizedAnnualCompensation'. Use the hint given below if needed.
Double click to see the **Hint**.
<!--
Use the below logic to arrive at the values for the column NormalizedAnnualCompensation.
If the CompFreq is Yearly then use the existing value in CompTotal
If the CompFreq is Monthly then multiply the value in CompTotal by 12 (months in a year)
If the CompFreq is Weekly then multiply the value in CompTotal by 52 (weeks in a year)
-->
```
# your code goes here
# your code goes here
# Map each payment frequency to an annual multiplier, leaving CompFreq intact
multiplier = df["CompFreq"].map({"Yearly": 1, "Monthly": 12, "Weekly": 52})
df["NormalizedAnnualCompensation"] = df["CompTotal"] * multiplier
df
```
## Authors
Ramesh Sannareddy
### Other Contributors
Rav Ahuja
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ----------------- | ---------------------------------- |
| 2020-10-17 | 0.1 | Ramesh Sannareddy | Created initial version of the lab |
Copyright © 2020 IBM Corporation. This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDA0321ENSkillsNetwork21426264-2021-01-01\&cm_mmc=Email_Newsletter-\_-Developer_Ed%2BTech-\_-WW_WW-\_-SkillsNetwork-Courses-IBM-DA0321EN-SkillsNetwork-21426264\&cm_mmca1=000026UJ\&cm_mmca2=10006555\&cm_mmca3=M12345678\&cvosrc=email.Newsletter.M12345678\&cvo_campaign=000026UJ).
# Hail workshop
This notebook will introduce the following concepts:
- Using Jupyter notebooks effectively
- Loading genetic data into Hail
- General-purpose data exploration functionality
- Plotting functionality
- Quality control of sequencing data
- Running a Genome-Wide Association Study (GWAS)
- Rare variant burden tests
## Hail on Jupyter
From https://jupyter.org:
"The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more."
In the last year, the Jupyter development team [released Jupyter Lab](https://blog.jupyter.org/jupyterlab-is-ready-for-users-5a6f039b8906), an integrated environment for data, code, and visualizations. If you've used R Studio, this is the closest thing that works in Python (and many other languages!).
### Why notebooks?
Part of what we think is so exciting about Hail is that it has coincided with a larger shift in the data science community.
Three years ago, most computational biologists at Broad analyzed genetic data using command-line tools, and took advantage of research compute clusters by explicitly using scheduling frameworks like LSF or Sun Grid Engine.
Now, they have the option to use Hail in interactive Python notebooks backed by thousands of cores on public compute clouds like [Google Cloud](https://cloud.google.com/), [Amazon Web Services](https://aws.amazon.com/), or [Microsoft Azure](https://azure.microsoft.com/).
# Using Jupyter
### Running cells
Evaluate cells using `SHIFT + ENTER`. Select the next cell and run it.
```
print('Hello, world')
```
### Modes
Jupyter has two modes, a **navigation mode** and an **editor mode**.
#### Navigation mode:
- <font color="blue"><strong>BLUE</strong></font> cell borders
- `UP` / `DOWN` move between cells
- `ENTER` while a cell is selected will move to **editing mode**.
- Many letters are keyboard shortcuts! This is a common trap.
#### Editor mode:
- <font color="green"><strong>GREEN</strong></font> cell borders
- `UP` / `DOWN`/ move within cells before moving between cells.
- `ESC` will return to **navigation mode**.
- `SHIFT + ENTER` will evaluate a cell and return to **navigation mode**.
### Cell types
There are several types of cells in Jupyter notebooks. The two you will see here are **Markdown** (text) and **Code**.
```
# This is a code cell
my_variable = 5
```
**This is a markdown cell**, so even if something looks like code (as below), it won't get executed!
    my_variable += 1
```
print(my_variable)
```
### Common gotcha: a code cell turns into markdown
This can happen if you are in **navigation mode** and hit the keyboard shortcut `m` while selecting a code cell.
You can either navigate to `Cell > Cell Type > Code` through the top menu, or use the keyboard shortcut `y` to turn it back to code.
### Tips and tricks
Keyboard shortcuts:
- `SHIFT + ENTER` to evaluate a cell
- `ESC` to return to navigation mode
- `y` to turn a markdown cell into code
- `m` to turn a code cell into markdown
- `a` to add a new cell **above** the currently selected cell
- `b` to add a new cell **below** the currently selected cell
- `d, d` (repeated) to delete the currently selected cell
- `TAB` to activate code completion
To try this out, create a new cell below this one using `b`, and print `my_variable` by starting with `print(my` and pressing `TAB`!
### Common gotcha: the state of your code seems wrong
Jupyter makes it easy to get yourself into trouble by executing cells out-of-order, or multiple times.
For example, if I declare `x`:
```
x = 5
```
Then have a cell that reads:
```
x += 1
```
And finally:
```
print(x)
```
If you execute these cells in order and once, I'll see the notebook print `6`. However, there is **nothing stopping you** from executing the middle cell ten times, printing `16`!
### Solution
If you get yourself into trouble in this way, the solution is to clear the kernel (the Python process) and start again from the top.
First, `Kernel > Restart & Clear Output > (accept dialog)`.
Second, `Cell > Run all above`.
# Set up our Python environment
In addition to Hail, we import a few methods from the [bokeh](https://bokeh.pydata.org/en/latest/) plotting library. We'll see examples soon!
```
import hail as hl
from bokeh.io import output_notebook, show
```
Now we initialize Hail and set up Bokeh to display inline in the notebook.
```
hl.init()
output_notebook()
```
# Download public 1000 Genomes data
The workshop materials are designed to work on a small (~20MB) downsampled chunk of the public 1000 Genomes dataset.
You can run these same functions on your computer or on the cloud!
```
hl.utils.get_1kg('data/')
```
It is possible to call command-line utilities from Jupyter by prefixing a line with a `!`:
```
! ls -1 data/
```
# Part 1: Explore genetic data with Hail
### Import data from VCF
The [Variant Call Format (VCF)](https://en.wikipedia.org/wiki/Variant_Call_Format) is a common file format for representing genetic data collected on multiple individuals (samples).
Hail's [import_vcf](https://hail.is/docs/0.2/methods/impex.html#hail.methods.import_vcf) function can read this format.
However, VCF is a text format that is easy for humans but very bad for computers. The first thing we do is `write` to a Hail native file format, which is much faster!
```
hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True)
```
### Read 1KG into Hail
We represent genetic data as a Hail [MatrixTable](https://hail.is/docs/0.2/overview/matrix_table.html), and name our variable `mt` to indicate this.
```
mt = hl.read_matrix_table('data/1kg.mt')
```
### What is a `MatrixTable`?
Let's describe it!
The `describe` method prints the **schema**, that is, the fields in the dataset and their types.
You can see:
- **numeric** types:
- integers (`int32`, `int64`), e.g. `5`
- floating point numbers (`float32`, `float64`), e.g. `5.5` or `3e-8`
- **strings** (`str`), e.g. `"Foo"`
- **boolean** values (`bool`) e.g. `True`
- **collections**:
- arrays (`array`), e.g. `[1,1,2,3]`
- sets (`set`), e.g. `{1,3}`
- dictionaries (`dict`), e.g. `{'Foo': 5, 'Bar': 10}`
- **genetic data types**:
- loci (`locus`), e.g. `[GRCh37] 1:10000` or `[GRCh38] chr1:10024`
- genotype calls (`call`), e.g. `0/2` or `1|0`
```
mt.describe()
```
#### `count`
`MatrixTable.count` returns a tuple with the number of rows (variants) and number of columns (samples).
```
mt.count()
```
#### `show`
There is no `mt.show()` method, but you can show individual fields like the sample ID (`s`), or the locus (`locus`).
```
mt.s.show(5)
mt.locus.show(5)
```
### <font color="brightred"><strong>Exercise: </strong></font> show other fields
You can see the names of fields above. `show()` the first few values for a few of them, making sure to include at least one **row field** and **at least one entry field**. Capitalization is important.
To print fields inside the `info` structure, you must add another dot, e.g. `mt.info.AN`.
What do you notice being printed alongside some of the fields?
### Hail has functions built for genetics
For example, `hl.summarize_variants` prints useful statistics about the genetic variants in the dataset.
```
hl.summarize_variants(mt)
```
### Most of Hail's functionality is totally general-purpose!
Functions like `summarize_variants` are built out of Hail's general-purpose data manipulation functionality. We can use Hail to ask arbitrary questions about the data:
```
mt.aggregate_rows(hl.agg.count_where(mt.alleles == ['A', 'T']))
```
Or if we had travel data:
```
data.aggregate(
hl.agg.count_where(data.departure_city == 'Boston')
)
```
The `counter` aggregator makes it possible to see distributions of categorical data, like alleles:
```
snp_counts = mt.aggregate_rows(
hl.array(hl.agg.counter(mt.alleles)))
snp_counts
```
By sorting the result in Python, we can recover an interesting bit of biology...
```
sorted(snp_counts,
key=lambda x: x[1])
```
### <font color="brightred"><strong>Question: </strong></font> What is interesting about this distribution?
### <font color="brightred"><strong>Question: </strong></font> Why do the counts come in pairs?
# Part 2: Annotation and quality control
## Integrate sample information
We're building toward a genome-wide association test in part 3, but we don't just need genetic data to do a GWAS -- we also need phenotype data! Luckily, our `hl.utils.get_1kg` function also downloaded some simulated phenotype data.
This is a text file:
```
! head data/1kg_annotations.txt
```
We can import it as a [Hail Table](https://hail.is/docs/0.2/overview/table.html) with [hl.import_table](https://hail.is/docs/0.2/methods/impex.html?highlight=import_table#hail.methods.import_table).
We call it "sa" for "sample annotations".
```
sa = hl.import_table('data/1kg_annotations.txt',
impute=True,
key='Sample')
```
While we can see the names and types of fields in the logging messages, we can also `describe` and `show` this table:
```
sa.describe()
sa.show()
```
## Add sample metadata into our 1KG `MatrixTable`
It's short and easy:
```
mt = mt.annotate_cols(pheno = sa[mt.s])
```
### What's going on here?
Understanding what's going on here is a bit more difficult. To understand, we need to understand a few pieces:
#### 1. `annotate` methods
In Hail, `annotate` methods refer to **adding new fields**.
- `MatrixTable`'s `annotate_cols` adds new column fields.
- `MatrixTable`'s `annotate_rows` adds new row fields.
- `MatrixTable`'s `annotate_entries` adds new entry fields.
- `Table`'s `annotate` adds new row fields.
In the above cell, we are adding a new column field called "pheno". This field should be the values in our table `sa` associated with the sample ID `s` in our `MatrixTable` - that is, this is performing a **join**.
Python uses square brackets to look up values in dictionaries:
    d = {'foo': 5, 'bar': 10}
    d['foo']
You should think of this in much the same way - for each column of `mt`, we are looking up the fields in `sa` using the sample ID `s`.
```
mt.describe()
```
### <font color="brightred"><strong>Exercise: </strong></font> Query some of these column fields using `mt.aggregate_cols`.
Some of the aggregators we used earlier:
- `hl.agg.counter`
- `hl.agg.stats`
- `hl.agg.count_where`
## Sample QC
We'll start with examples of sample QC.
Hail has the function [hl.sample_qc](https://hail.is/docs/0.2/methods/genetics.html#hail.methods.sample_qc) to compute a list of useful statistics about samples from sequencing data.
**Click the link** above to see the documentation, which lists the fields and their descriptions.
```
mt = hl.sample_qc(mt)
mt.sample_qc.describe()
p = hl.plot.scatter(x=mt.sample_qc.r_het_hom_var,
y=mt.sample_qc.call_rate)
show(p)
```
### <font color="brightred"><strong>Exercise: </strong></font> Plot some other fields!
Modify the cell above. Remember `hl.plot.histogram` as well!
If you want to start getting fancy, you can plot more complicated expressions -- the ratio between two fields, for instance.
### Filter columns using generated QC statistics
```
mt = mt.filter_cols(mt.sample_qc.dp_stats.mean >= 4)
mt = mt.filter_cols(mt.sample_qc.call_rate >= 0.97)
```
## Genotype QC
We explored GQ above, and analysts often set thresholds for GQ to filter entries (genotypes). Another useful metric is **allele read balance**.
This value is defined by:
$\quad AB = \dfrac{N_{alt}}{N_{ref} + N_{alt}}$
Where $N_{ref}$ is the number of reference reads and $N_{alt}$ is the number of alternate reads.
We want the allele balance to be consistent with the hard genotype call: near 0 for homozygous-reference calls, around 0.5 for heterozygous calls, and near 1 for homozygous-alternate calls. The filter below keeps only genotypes that satisfy these conditions.
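The allele-balance formula can be sanity-checked in plain Python (no Hail needed), using `AD = [reference reads, alternate reads]` as in the cell below:

```
# AD is [number of reference reads, number of alternate reads].
def allele_balance(ad):
    n_ref, n_alt = ad
    return n_alt / (n_ref + n_alt)

print(allele_balance([18, 2]))   # 0.1: consistent with a hom-ref call
print(allele_balance([10, 10]))  # 0.5: consistent with a het call
```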
```
# call rate before filtering
mt.aggregate_entries(hl.agg.fraction(hl.is_defined(mt.GT)))
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = (
hl.case()
.when(mt.GT.is_hom_ref(), ab <= 0.1)
.when(mt.GT.is_het(), (ab >= 0.25) & (ab <= 0.75))
.default(ab >= 0.9) # hom-var
)
mt = mt.filter_entries(filter_condition_ab)
# call rate after filtering
mt.aggregate_entries(hl.agg.fraction(hl.is_defined(mt.GT)))
```
## Variant QC
Hail has the function [hl.variant_qc](https://hail.is/docs/0.2/methods/genetics.html#hail.methods.variant_qc) to compute a list of useful statistics about **variants** from sequencing data.
Once again, **Click the link** above to see the documentation!
```
mt = hl.variant_qc(mt)
mt.variant_qc.describe()
mt.variant_qc.AF.show()
```
### Remove rare sites:
```
mt = mt.filter_rows(hl.min(mt.variant_qc.AF) > 1e-6)
```
### Remove sites far from [Hardy-Weinberg equilbrium](https://en.wikipedia.org/wiki/Hardy%E2%80%93Weinberg_principle):
```
mt = mt.filter_rows(mt.variant_qc.p_value_hwe > 0.005)
# final variant and sample count
mt.count()
```
# Part 3: GWAS!
A GWAS is an independent association test performed per variant of a genetic dataset. We use the same phenotype and covariates, but test the genotypes for each variant separately.
In Hail, the method we use is [hl.linear_regression_rows](https://hail.is/docs/0.2/methods/stats.html#hail.methods.linear_regression_rows).
We use the phenotype `CaffeineConsumption` as our dependent variable, the number of alternate alleles as our independent variable, and no covariates besides an intercept term (that's the `1.0`).
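The "one independent regression per variant" idea can be sketched with plain NumPy least squares on toy data (Hail's implementation is distributed and also reports standard errors and p-values, which this sketch omits):

```
import numpy as np

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(4, 50))  # 4 variants x 50 samples (0/1/2 alt alleles)
phenotype = rng.normal(size=50)

betas = []
for g in genotypes:  # one independent regression per variant
    X = np.column_stack([np.ones(50), g])     # intercept term plus genotype
    coef, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
    betas.append(coef[1])                     # the per-variant effect estimate
print(betas)
```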
```
gwas = hl.linear_regression_rows(y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0])
gwas.describe()
```
Two of the plots that analysts generally produce are a [Manhattan plot](https://en.wikipedia.org/wiki/Manhattan_plot) and a [Q-Q plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot).
We'll start with the Manhattan plot:
```
p = hl.plot.manhattan(gwas.p_value)
show(p)
p = hl.plot.qq(gwas.p_value)
show(p)
```
## Confounded!
The Q-Q plot indicates **extreme** inflation of p-values.
If you've done a GWAS before, you've probably included a few other covariates -- age, sex, and principal components.
Principal components are a measure of genetic ancestry, and can be used to control for [population stratification](https://en.wikipedia.org/wiki/Population_stratification).
We can compute principal components with Hail:
```
pca_eigenvalues, pca_scores, pca_loadings = hl.hwe_normalized_pca(mt.GT, compute_loadings=True)
```
The **eigenvalues** reflect the amount of variance explained by each principal component:
```
pca_eigenvalues
```
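For intuition, the analogous quantity for an ordinary PCA can be computed with NumPy on random data: the eigenvalues of the covariance matrix, normalized, give the fraction of variance each component explains.

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X -= X.mean(axis=0)
# Eigenvalues of the covariance matrix, largest first
eigvals = np.linalg.eigvalsh(np.cov(X.T))[::-1]
frac_variance = eigvals / eigvals.sum()
print(frac_variance)  # fraction of variance explained per component
```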
The **scores** are the principal components themselves, computed per sample.
```
pca_scores.describe()
pca_scores.scores[0].show()
```
The **loadings** are the contributions to each component for each variant.
```
pca_loadings.describe()
```
We can **annotate** the principal components back onto `mt`:
```
mt = mt.annotate_cols(pca = pca_scores[mt.s])
```
## Principal components measure ancestry
```
p = hl.plot.scatter(mt.pca.scores[0],
mt.pca.scores[1],
label=mt.pheno.SuperPopulation)
show(p)
```
### <font color="brightred"><strong>Question: </strong></font> Does your plot match your neighbors'?
If not, how is it different?
## Control confounders and run another GWAS
```
gwas = hl.linear_regression_rows(
y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0, mt.pheno.isFemale, mt.pca.scores[0], mt.pca.scores[1], mt.pca.scores[2]])
p = hl.plot.qq(gwas.p_value)
show(p)
p = hl.plot.manhattan(gwas.p_value)
show(p)
```
# Part 4: Burden tests
GWAS is a great tool for finding associations between **common variants** and disease, but a GWAS can't hope to find associations between rare variants and disease. Even if we have sequencing data for 1,000,000 people, we won't have the statistical power to link a mutation found in only a few people to any disease.
But rare variation has lots of information - especially because statistical genetic theory dictates that rarer variants have, on average, stronger effects on disease per allele.
One possible strategy is to **group together rare variants with similar predicted consequence**. For example, we can group all variants that are predicted to knock out the function of each gene and test the variants for each gene as a group.
We will be running a burden test on our common variant dataset to demonstrate the technical side, but we shouldn't hope to find anything here -- especially because we've only got 10,000 variants!
### Import gene data
We start by importing gene names and coordinates.
```
gene_ht = hl.import_table('data/ensembl_gene_annotations.txt', impute=True)
gene_ht.show()
gene_ht.count()
```
### Create an interval key
```
gene_ht = gene_ht.transmute(interval = hl.locus_interval(gene_ht['Chromosome'],
gene_ht['Gene start'],
gene_ht['Gene end'],
reference_genome='GRCh37'))
gene_ht = gene_ht.key_by('interval')
```
### Annotate variants using these intervals
```
mt = mt.annotate_rows(gene_info = gene_ht[mt.locus])
mt.gene_info.show()
```
### Aggregate genotypes per gene
There is no `hl.burden_test` function -- instead, a burden test is the composition of two modular pieces of Hail functionality:
- `group_rows_by / aggregate`
- `hl.linear_regression_rows`.
While this might be a few more lines of code to write than `hl.burden_test`, it means that you can flexibly specify the genotype aggregation however you like. Using other tools, you may have a few ways to aggregate, but if you want to do something different you are out of luck!
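A pure-pandas analogue of the gene-by-sample aggregation, on made-up calls, may make the shape of the computation clearer (the column names here are illustrative, not Hail's):

```
import pandas as pd

calls = pd.DataFrame({
    'gene':   ['G1', 'G1', 'G2', 'G2'],
    'sample': ['s1', 's1', 's1', 's2'],
    'n_alt':  [1, 0, 2, 0],
})
# Per gene and sample, count variants carrying at least one alternate allele
burden = (calls.assign(carrier=calls['n_alt'] > 0)
               .groupby(['gene', 'sample'])['carrier'].sum())
print(burden)
```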
```
burden_mt = (
mt
.group_rows_by(gene = mt.gene_info['Gene name'])
.aggregate(n_variants = hl.agg.count_where(mt.GT.n_alt_alleles() > 0))
)
burden_mt.describe()
```
### What is `burden_mt`?
It is a **gene-by-sample** matrix (compare to `mt`, a **variant-by-sample** matrix).
It has one row field, the `gene`.
It has one entry field, `n_variants`.
It has all the column fields from `mt`.
### Run linear regression per gene
This should look familiar!
```
burden_results = hl.linear_regression_rows(
y=burden_mt.pheno.CaffeineConsumption,
x=burden_mt.n_variants,
covariates=[1.0,
burden_mt.pheno.isFemale,
burden_mt.pca.scores[0],
burden_mt.pca.scores[1],
burden_mt.pca.scores[2]])
```
### Sorry, no `hl.plot.manhattan` for genes!
Instead, we can sort by p-value and print:
```
burden_results.order_by(burden_results.p_value).show()
```
### <font color="brightred"><strong>Exercise: </strong></font> Where along the genome can we find the top gene?
# Part 5: Whirlwind tour
You've seen just a very small fraction of the functionality in Hail. Here are a few examples of things that a large library makes easy.
### Find related individuals using IBD (identity by descent)
```
ht = hl.identity_by_descent(mt).cache()
ht.describe()
ht.filter(ht.ibd.PI_HAT > 0.20).show()
```
### Infer sex from X-chromosome data
```
ht = hl.impute_sex(mt.GT).cache()
ht.show()
```
### Simulate genetic data
```
sim_mt = hl.balding_nichols_model(n_populations=3,
n_samples=1000,
n_variants=1000)
# simulate variant effects using spike-and-slab model
spike_prob = 0.2
sim_mt = sim_mt.annotate_rows(beta = hl.rand_bool(spike_prob) * hl.rand_norm(0, 1))
# compute risk scores from betas
sim_mt = sim_mt.annotate_cols(risk = hl.agg.sum(sim_mt.beta * sim_mt.GT.n_alt_alleles()) / sim_mt.count_rows())
show(hl.plot.histogram(sim_mt.risk))
```
# The case for modularity
Most of the "black-box" methods we've used above (`impute_sex`, `variant_qc`, `sample_qc`, etc) are actually implemented on top of Hail's Python interface using `Table` and `MatrixTable` operations, expressions, aggregations, and linear algebra!
# Getting started with DoWhy: A simple example
This is a quick introduction to the DoWhy causal inference library.
We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable.
First, let us load all required packages.
```
import numpy as np
import pandas as pd
import dowhy
from dowhy import CausalModel
import dowhy.datasets
# Avoid printing dataconversion warnings from sklearn
import warnings
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
```
Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome.
Beta is the true causal effect.
```
data = dowhy.datasets.linear_dataset(beta=10,
num_common_causes=5,
num_instruments = 2,
num_effect_modifiers=1,
num_samples=20000,
treatment_is_binary=True,
num_discrete_common_causes=1)
df = data["df"]
print(df.head())
print(data["dot_graph"])
print("\n")
print(data["gml_graph"])
```
Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input.
## Interface 1 (recommended): Input causal graph
We now input a causal graph in the GML graph format (recommended). You can also use the DOT format.
To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html#) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy.
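The renaming recipe above can be sketched as a small helper (`dagitty_to_dot` is a hypothetical name, and the sketch assumes a simple one-edge-per-line DAGitty export):

```
def dagitty_to_dot(s):
    """Convert a simple DAGitty export ('dag { ... }') to DOT format."""
    lines = [ln.strip() for ln in s.strip().splitlines() if ln.strip()]
    # Rename `dag` to `digraph` ...
    lines[0] = lines[0].replace('dag', 'digraph', 1)
    # ... terminate each statement with a semicolon, and join onto one line.
    body = [lines[0]] + [ln if ln.endswith(('{', '}', ';')) else ln + ';'
                         for ln in lines[1:]]
    return ' '.join(body)

example = 'dag {\nX -> Y\nZ -> X\nZ -> Y\n}'
print(dagitty_to_dot(example))  # digraph { X -> Y; Z -> X; Z -> Y; }
```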
```
# With graph
model=CausalModel(
data = df,
treatment=data["treatment_name"],
outcome=data["outcome_name"],
graph=data["gml_graph"]
)
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
```
The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify
the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect.
**DoWhy philosophy: Keep identification and estimation separate**
Identification can be achieved without access to the data, accessing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.
It is important to understand that these are orthogonal steps.
* Identification
```
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
print(identified_estimand)
```
Note the parameter flag *proceed\_when\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored.
* Estimation
```
causal_estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.propensity_score_stratification")
print(causal_estimate)
print("Causal Estimate is " + str(causal_estimate.value))
```
You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`.
```
# Causal effect on the control group (ATC)
causal_estimate_atc = model.estimate_effect(identified_estimand,
                                            method_name="backdoor.propensity_score_stratification",
                                            target_units="atc")
print(causal_estimate_atc)
print("Causal Estimate is " + str(causal_estimate_atc.value))
```
## Interface 2: Specify common causes and instruments
```
# Without graph
model = CausalModel(
    data=df,
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    common_causes=data["common_causes_names"],
    effect_modifiers=data["effect_modifier_names"])
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
```
We get the same causal graph. Identification and estimation are now done as before.
```
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
```
* Estimation
```
estimate = model.estimate_effect(identified_estimand,
                                 method_name="backdoor.propensity_score_stratification")
print(estimate)
print("Causal Estimate is " + str(estimate.value))
```
## Refuting the estimate
Let us now look at ways of refuting the estimate obtained.
### Adding a random common cause variable
```
res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause")
print(res_random)
```
### Adding an unobserved common cause variable
```
res_unobserved = model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause",
                                       confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear",
                                       effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02)
print(res_unobserved)
```
### Replacing treatment with a random (placebo) variable
```
res_placebo = model.refute_estimate(identified_estimand, estimate,
                                    method_name="placebo_treatment_refuter", placebo_type="permute")
print(res_placebo)
```
### Removing a random subset of the data
```
res_subset = model.refute_estimate(identified_estimand, estimate,
                                   method_name="data_subset_refuter", subset_fraction=0.9)
print(res_subset)
```
As you can see, the propensity score stratification estimator is reasonably robust to refutations.
For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below.
```
res_subset = model.refute_estimate(identified_estimand, estimate,
                                   method_name="data_subset_refuter", subset_fraction=0.9, random_seed=1)
print(res_subset)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## Mamiraua Dataset Plot
```
lag = 512 # one of [32, 64, 128, 256, 512]
base = pd.read_pickle('../pkl_datasets/mamiraua_dataset_ACF_' + str(lag) + '.gzip')
cotas = pd.read_csv('./boundary_files/Cotas_HxC_bins_' + str(int(lag)) + '.csv')
noise = pd.read_csv('./coloredNoises/coloredNoises_' + str(int(lag)) + '.csv')
base.head()
```
## Data analysis by months
```
july = ['20160711', '20160712', '20160713', '20160714', '20160715']
september = ['20160901', '20160902', '20160903', '20160904', '20160905', '20160906', '20160907', '20160908']
df_july = base[base['date'].isin(july)]
df_september = base[base['date'].isin(september)]
plt.figure(figsize=(24,10))
plt.rc('font', size=22)
plt.rc('axes', titlesize=22)
plt.subplot(1,2,1)
plt.plot(cotas['Entropy'],cotas['Complexity'], '--k', label = 'HxC boundaries')
plt.plot(noise['Entropy'],noise['Complexity'], '--b', label = 'Colored noises')
plt.xlim([0, 1])
plt.ylim([0, np.max(cotas['Complexity'])+0.01])
plt.ylabel('Complexity [C]')
plt.xlabel('Entropy [H]')
plt.plot(np.mean(df_july['H']), np.mean(df_july['C']), '.k', label = 'Centroid')
plt.legend(loc = 'upper left', frameon=False)
plt.scatter(df_july['H'], df_july['C'], marker='.', s=15, c=df_july['C'],
norm=plt.Normalize(vmax=np.max(df_july['C']), vmin=np.min(df_july['C'])-0.1),
cmap = 'Greens') # seismic # viridis # plasma # jet # PuBu # YlOrRd # Blues
plt.errorbar(np.mean(df_july['H']), np.mean(df_july['C']), xerr=np.std(df_july['H']), yerr=np.std(df_july['C']), color = 'k',fmt='o')
plt.title('July')
plt.rc('font', size=22)
plt.rc('axes', titlesize=22)
plt.subplot(1,2,2)
plt.plot(cotas['Entropy'],cotas['Complexity'], '--k', label = 'HxC boundaries')
plt.plot(noise['Entropy'],noise['Complexity'], '--b', label = 'Colored noises')
plt.xlim([0, 1])
plt.ylim([0, np.max(cotas['Complexity'])+0.01])
plt.ylabel('Complexity [C]')
plt.xlabel('Entropy [H]')
plt.plot(np.mean(df_september['H']), np.mean(df_september['C']), '.k', label = 'Centroid')
plt.legend(loc = 'upper left', frameon=False)
plt.scatter(df_september['H'], df_september['C'], marker='.', s=15, c=df_september['C'],
norm=plt.Normalize(vmax=np.max(df_september['C']), vmin=np.min(df_september['C'])-0.1),
cmap = 'Greens') # seismic # viridis # plasma # jet # PuBu # YlOrRd # Blues
plt.errorbar(np.mean(df_september['H']), np.mean(df_september['C']),
xerr=np.std(df_september['H']), yerr=np.std(df_september['C']),
color = 'k',fmt='o', label = 'centroid')
plt.title('September')
plt.rc('font', size=22)
plt.rc('axes', titlesize=22)
plt.show()
```
This notebook shows you how to visualize the changes in ozone and particulate matter from different runs of CCTM. Note that you must first run the `combine` program distributed with CMAQ for the files here to exist. The need for postprocessing of CCTM outputs is explained in [this section](https://github.com/USEPA/CMAQ/blob/main/DOCS/Users_Guide/CMAQ_UG_ch08_analysis_tools.md#82-aggregating-and-transforming-model-species) of the CMAQ User's Guide.
```
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import cartopy.io.shapereader as shpreader
from matplotlib import cm
from matplotlib import ticker
from matplotlib import colors
from cmaqpy.runcmaq import CMAQModel
from cmaqpy import plots
import monet as m
import monetio as mio
import optwrf.plots as owplt
import wrf as wrfpy
def convert_tz_xr(df, input_tz='UTC', output_tz='US/Eastern'):
    tidx_in = df.time.to_index().tz_localize(tz=input_tz)
    df.coords['time'] = tidx_in.tz_convert(output_tz).tz_localize(None)
    return df
# Specify the start/end times
start_datetime = 'August 06, 2016' # first day that you want run
end_datetime = 'August 14, 2016' # last day you want run
# Define the coordinate name (must match that in GRIDDESC)
coord_name = 'LAM_40N97W'
# Create a CMAQModel object
base_sim = CMAQModel(start_datetime, end_datetime, '2016Base_4OTC2', coord_name, '4OTC2', setup_yaml='dirpaths_2016Base_4OTC2.yml', verbose=True)
ren_sim = CMAQModel(start_datetime, end_datetime, '2016_4OTC2', coord_name, '4OTC2', setup_yaml='dirpaths_2016_4OTC2.yml', verbose=True)
conc_base = f'{base_sim.POST}/COMBINE_ACONC_{base_sim.cctm_runid}_201608.nc'
conc_ren = f'{ren_sim.POST}/COMBINE_ACONC_{ren_sim.cctm_runid}_201608.nc'
c_base = mio.cmaq.open_dataset(fname=conc_base)
c_ren = mio.cmaq.open_dataset(fname=conc_ren)
c_base = convert_tz_xr(c_base)
c_ren = convert_tz_xr(c_ren)
def get_proj(ds):
    """
    Extracts the CMAQ projection information from the proj4_srs attribute.

    :param ds:
    :return:
    """
    proj_params = ds.proj4_srs
    proj_params = proj_params.replace(' ', '')
    proj_params = proj_params.split('+')
    proj = proj_params[1].split('=')[1]
    truelat1 = float(proj_params[2].split('=')[1])
    truelat2 = float(proj_params[3].split('=')[1])
    central_latitude = float(proj_params[4].split('=')[1])
    central_longitude = float(proj_params[5].split('=')[1])
    if proj == 'lcc':
        cartopy_crs = ccrs.LambertConformal(central_longitude=central_longitude,
                                            central_latitude=central_latitude,
                                            standard_parallels=[truelat1, truelat2])
        return cartopy_crs
    else:
        raise ValueError('Your projection is not the expected Lambert Conformal.')
def get_domain_boundary(ds, cartopy_crs):
    """
    Finds the boundary of the WRF domain.

    :param ds:
    :param cartopy_crs:
    :return:
    """
    # Rename the lat-lon coordinates to get wrf-python to recognize them
    variables = {'latitude': 'XLAT',
                 'longitude': 'XLONG'}
    try:
        ds = xr.Dataset.rename(ds, variables)
    except ValueError:
        print(f'Variables {variables} cannot be renamed, '
              f'those on the left are not in this dataset.')
    # Manually convert the boundaries of the WRF domain into Plate Carree to set the limits.
    # Get the raw map bounds using a wrf-python utility
    raw_bounds = wrfpy.util.geo_bounds(ds)
    # Get the projected bounds, telling cartopy that the input coordinates are lat/lon (Plate Carree)
    projected_bounds = cartopy_crs.transform_points(ccrs.PlateCarree(),
                                                    np.array([raw_bounds.bottom_left.lon, raw_bounds.top_right.lon]),
                                                    np.array([raw_bounds.bottom_left.lat, raw_bounds.top_right.lat]))
    return projected_bounds
def conc_map(plot_var, cmap=cm.get_cmap('bwr'), ax=None, cartopy_crs=None, proj_bounds=None,
             vmin=-1, vmax=1, cbar_ticks=[], cbar_label='Concentration'):
    """
    Creates a filled colormap across the full domain in the native (Lambert
    Conformal) map projection.
    """
    if ax is None:
        # Create a figure
        fig = plt.figure(figsize=(8, 8))
        # Set the GeoAxes to the projection used by WRF
        ax = fig.add_subplot(1, 1, 1, projection=cartopy_crs)
    # Normalize the values, so that the colorbar plots correctly
    norm = colors.Normalize(vmin=vmin, vmax=vmax)
    # Create the pcolormesh
    cn = ax.pcolormesh(wrfpy.to_np(plot_var.longitude), wrfpy.to_np(plot_var.latitude), wrfpy.to_np(plot_var),
                       transform=ccrs.PlateCarree(),
                       cmap=cmap,
                       norm=norm,
                       )
    if proj_bounds is not None:
        # Format the projected bounds so they can be used in the xlim and ylim attributes
        proj_xbounds = [proj_bounds[0, 0], proj_bounds[1, 0]]
        proj_ybounds = [proj_bounds[0, 1], proj_bounds[1, 1]]
        # Finally, set the x and y limits
        ax.set_xlim(proj_xbounds)
        ax.set_ylim(proj_ybounds)
    # Download and add the states, coastlines, and lakes
    shapename = 'admin_1_states_provinces_lakes'
    states_shp = shpreader.natural_earth(resolution='10m',
                                         category='cultural',
                                         name=shapename)
    # Add features to the map
    ax.add_geometries(
        shpreader.Reader(states_shp).geometries(),
        ccrs.PlateCarree(),
        facecolor='none',
        linewidth=.5,
        edgecolor="black"
    )
    # ax.add_feature(cfeature.LAKES)
    # ax.add_feature(cfeature.OCEAN)
    # Add the color bar
    cbar = plt.colorbar(cn,
                        ax=ax,
                        ticks=cbar_ticks,
                        label=cbar_label,
                        pad=0.05
                        )
pm25_mean_diff = (c_ren.PM25_TOT - c_base.PM25_TOT).mean(dim='time').squeeze()
pm25_pct_diff = (c_ren.PM25_TOT - c_base.PM25_TOT) / c_base.PM25_TOT
pm25_mean_pct_diff = pm25_pct_diff.mean(dim='time').squeeze()
cartopy_crs = get_proj(c_ren)
proj_bounds = get_domain_boundary(c_ren, cartopy_crs)
conc_map(pm25_mean_diff, cmap=cm.get_cmap('bwr'), ax=None, cartopy_crs=cartopy_crs, proj_bounds=proj_bounds,
vmin=-0.01, vmax=0.01, cbar_ticks=[-0.01, -0.005, 0, 0.005, 0.01], cbar_label='PM$_{2.5}$ Difference ($\mu g/m^{3}$)')
# If you just want to do a quick map visualization
c_base.PM25_TOT.sel(time='2016-08-07 23').monet.quick_map(robust=True)
pm25_mean = c_ren.PM25_TOT.mean(dim='time')
pm25_mean_diff = (c_ren.PM25_TOT - c_base.PM25_TOT).mean(dim='time')
# pm25_pct_diff = (c1.PM25_TOT - c.PM25_TOT) / c.PM25_TOT
# pm25_mean_pct_diff = pm25_pct_diff.mean(dim='time')
plots.conc_compare(pm25_mean, pm25_mean_diff, extent = [-83, -70, 37, 46],
vmin1=0, vmax1=10, vmin2=-1, vmax2=1, cmap1=cm.get_cmap('YlOrBr'), cmap2=cm.get_cmap('bwr'),
cbar_label1='PM$_{2.5}$ ($\mu g/m^{3}$)', cbar_label2='PM$_{2.5}$ Difference ($\mu g/m^{3}$)',
figsize=(7,3.5), savefig=True, figpath1='../cmaqpy/data/plots/PM2.5.png',
figpath2='../cmaqpy/data/plots/PM2.5_diff.png')
o3_mean = c_ren.O3.mean(dim='time')
o3_mean_diff = (c_ren.O3 - c_base.O3).mean(dim='time')
# o3_pct_diff = (c1.O3 - c.O3) / c.O3
# o3_mean_pct_diff = o3_pct_diff.mean(dim='time')
plots.conc_compare(o3_mean, o3_mean_diff, extent = [-83, -70, 37, 46],
vmin1=15, vmax1=45, vmin2=-1, vmax2=1, cmap1=cm.get_cmap('cividis'), cmap2=cm.get_cmap('bwr'),
cbar_label1='O$_{3}$ ($ppbV$)', cbar_label2='O$_{3}$ Difference ($ppbV$)',
figsize=(7,3.5), savefig=True, figpath1='../cmaqpy/data/plots/O3.png',
figpath2='../cmaqpy/data/plots/O3_diff.png')
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.metrics import pairwise_distances
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn import cluster
from sklearn import preprocessing
```
## Overall principle of iterated feature selection
1. Perform k-means on each of the features individually for some k.
2. For each cluster, measure some clustering performance metric like the Dunn index or silhouette.
3. Take the feature which gives you the best performance and add it to Sf.
4. Perform k-means on Sf plus each of the remaining features individually.
5. Take the feature which gives you the best performance and add it to Sf.
6. If you have reached the desired number of features, stop; otherwise go back to step 4.
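The steps above can be sketched compactly with scikit-learn, mirroring the notebook's use of k-means inertia as the score (note that inertia is scale-sensitive, so a metric like silhouette would compare features more fairly; `n_init` and `random_state` are choices added here for reproducibility):

```python
import numpy as np
from sklearn import cluster, datasets

X = datasets.load_iris().data
k, n_select = 3, 3
selected, remaining = [], list(range(X.shape[1]))

for _ in range(n_select):
    best_feat, best_score = None, np.inf
    for f in remaining:
        # Cluster on the features chosen so far plus one candidate feature
        km = cluster.KMeans(n_clusters=k, n_init=10, random_state=0)
        km.fit(X[:, selected + [f]])
        if km.inertia_ < best_score:  # lower inertia = tighter clusters
            best_feat, best_score = f, km.inertia_
    selected.append(best_feat)
    remaining.remove(best_feat)

print(selected)  # indices of the three chosen features
```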
```
dataset = datasets.load_iris()
X = dataset.data
y = dataset.target
n, m = X.shape[0], X.shape[1]
print(n, m)
X
y
pd.value_counts(pd.Series(y))
```
### Feature selection - code version 1
#### Organize Iris dataset into a dataframe in order to better imitate our project data format
1. Perform k-means on each of the features individually for some k.
2. For each cluster, measure some clustering performance metric like the Dunn index or silhouette.
3. Take the feature which gives you the best performance and add it to Sf.
4. Perform k-means on Sf plus each of the remaining features individually.
5. Take the feature which gives you the best performance and add it to Sf.
6. If you have reached the desired number of features, stop; otherwise go back to step 4.
```
dataset = datasets.load_iris()
X = dataset.data
y = dataset.target
dir(dataset)
iris = pd.DataFrame(X)
iris.columns = dataset.feature_names
iris
n, m = iris.shape[0], iris.shape[1]
print("number of data points:", n, ". number of variables:", m)
#Let's first use all the features to perform K-Means clustering
model_test = cluster.KMeans(n_clusters=3)
model_test.fit(iris)
pred_y=model_test.labels_
print("True class labels", "\n", pd.value_counts(pd.Series(y)))
print("Clustered class labels:", "\n", pd.value_counts(pd.Series(pred_y)))
# let's assume there are 3 clusters
num_of_cluster = 3
num_of_iter = 3
model = cluster.KMeans(n_clusters=num_of_cluster)
score = np.zeros([num_of_iter, m])  # the sum of squared distances of samples to their closest cluster center
exclude_columns = []  # indices of the best-performing features selected so far; appended to after every iteration
include_columns = [i for i in range(np.shape(score)[1]) if i not in exclude_columns]  # rest of the features
# Let's assume we're going to select 3 features out of the 4, so we iterate 3 times
for iteration in range(3):
    # In the first iteration, we test a clustering model on each individual variable
    if iteration == 0:
        print("Now processing iteration %d" % iteration, "\n")
        for i in range(m):
            model.fit(iris.iloc[:, [i]])  # iloc: select the column by position, not by label
            pred_y = model.labels_
            print("cluster labels based on variable %s:" % iris.columns[i], "\n", pd.value_counts(pd.Series(pred_y)))
            score[iteration][i] = model.inertia_
            print("the sum of squared distances of samples to their closest cluster center based on variable %s"
                  % iris.columns[i], "is:", score[iteration][i])
        score = score[:, include_columns]
        selected_feature_index = np.argmin(score[iteration], axis=0)
        selected_feature_score = np.amin(score[iteration], axis=0)
        selected_feature = iris.iloc[:, [selected_feature_index]]
        exclude_columns.append(selected_feature_index)
        print("Conclusion: cluster based on variable %s" % iris.columns[selected_feature_index], "gives the best performance", "\n")
    # In the following iterations, we add each of the remaining features to the selected features and re-cluster
    else:
        print("Now processing iteration %d" % iteration, "\n")
        for i in range(m):
            if i not in exclude_columns:
                # Combine the features selected in earlier iterations with each remaining individual feature
                data = pd.concat([selected_feature, iris.iloc[:, [i]]], axis=1)
                model.fit(data)
                pred_y = model.labels_
                print("cluster labels based on variables:", data.columns, "\n", pd.value_counts(pd.Series(pred_y)))
                score[iteration][i] = model.inertia_
                print("the sum of squared distances of samples to their closest cluster center based on variables:",
                      data.columns, "is:", score[iteration][i])
        include_columns = [i for i in range(np.shape(score)[1]) if i not in exclude_columns]
        selected_feature_score = np.amin(score[:, include_columns][iteration], axis=0)
        selected_feature_index = np.where(score[iteration] == selected_feature_score)[0][0]
        selected_feature = pd.concat([selected_feature, iris.iloc[:, [selected_feature_index]]], axis=1)
        exclude_columns.append(selected_feature_index)
        print("Conclusion: cluster based on variables %s" % iris.columns[exclude_columns], "gives the best performance", "\n")
print("Selected features are %s" % iris.columns[exclude_columns])
```
#### That completes three iterations of feature selection, which selected the three best-performing features
### Feature selection - code version 2
* Perform k-means on each of the features individually for some k.
* For each cluster measure some clustering performance metric like the Dunn's index or silhouette.
* Take the feature which gives you the best performance and add it to Sf
```
# Build one clustering model per feature, without exec(): keep each single-feature
# column as a 2-D array and record the model's inertia as its score
score = [0] * m
for i in range(m):
    print("Now processing model %d" % (i + 1), ", which uses feature # %d only" % (i + 1))
    model_i = cluster.KMeans(n_clusters=3)
    Xi = X[:, [i]]  # keep a 2-D shape (n, 1) for fit()
    model_i.fit(Xi)
    pred_y = model_i.labels_
    print("Now let's compare the true class label to the class labels obtained by clustering")
    print(pd.value_counts(pd.Series(pred_y)))
    score[i] = model_i.inertia_
    print("For model %d:" % (i + 1), "the sum of squared distances of samples to their closest cluster center is %s" % score[i])
selected_feature = score.index(min(score)) + 1
print("The # %d feature" % selected_feature, "gives the best performance")
```
# Ensemble methods (meta-algorithms): overview
* Concept: a way of combining other algorithms into a single learner.
* Two families of ensemble methods:
1. Voting (bagging, i.e. bootstrap aggregating): builds classifiers on random resamples of the data.
2. Re-learning (boosting): a weighted sum over all of the classifiers.
## What is the difference between bagging and boosting?
1. Bagging is a technique very similar to boosting, and the multiple classifiers it uses are all of the same type (same amount of data and the same features).
2. In bagging, **different classifiers** (obtained by 1. randomizing the data and 2. randomizing the features) are trained and the most frequent classification result wins; boosting obtains new classifiers by **adjusting the weight of the data that existing classifiers misclassified**, yielding the best result so far.
3. In bagging the classifiers carry equal weight; in boosting the classifiers are combined by a weighted sum, so the weights are unequal: each weight reflects how successful its classifier was in the previous round of iteration.
## Popular versions of bagging and boosting
* The most popular bagging method today is the random forest.
Choosing a boyfriend: a woman asks a few close friends for their advice and picks the candidate with the highest combined score as her boyfriend.
(The friends are equals, and every piece of advice is taken into account.)
* The most popular boosting method today is AdaBoost.
Pursuing a girlfriend: three men pursue the same woman. The first fails (and passes on what he learned: her name and family situation), the second fails (and passes on her hobbies and personality), and the third succeeds.
(Focus on the failed attempts and correct for the mistaken information.)
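Both families are available off-the-shelf in scikit-learn. As a point of reference before the from-scratch random forest below, a hedged sketch comparing them on the Iris data (hyperparameters are library defaults, not tuned):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
for name, clf in [("bagging", BaggingClassifier(random_state=1)),
                  ("boosting (AdaBoost)", AdaBoostClassifier(random_state=1)),
                  ("random forest", RandomForestClassifier(random_state=1))]:
    # 5-fold cross-validated accuracy for each ensemble family
    scores = cross_val_score(clf, X, y, cv=5)
    print("%-20s mean accuracy: %.3f" % (name, scores.mean()))
```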
# Random forest
>Concept: a model that trains and predicts with many decision trees.

To make the decision trees in a random forest differ from one another, we try to maximize the diversity of the system. When building a random forest, we should pay attention to:
1. randomness in the data
2. randomness in the features
>Randomness in the data
We build sub-datasets by sampling with replacement, grow a sub-tree on each sub-dataset, and let every sub-tree output a result; tallying the sub-trees' outputs and taking the majority class gives the random forest's output.
>Randomness in the features
An ordinary decision tree picks the optimal split feature with an algorithm such as ID3 or CART, whereas a random forest's sub-trees draw their candidate features at random and then pick the optimal feature from within that random subset.
### Preparing the data: loading the CSV file
```
from random import seed, randrange, random
def load_dataset(filename):
    dataset = []
    fr = open(filename)
    for line in fr.readlines():
        if not line:
            continue
        cur_line = line.strip().split(',')
        line_arr = []
        for str_f in cur_line:
            try:
                line_arr.append(float(str_f))  # numeric features (isdigit() would reject decimals)
            except ValueError:
                line_arr.append(str_f)  # the class label stays a string
        dataset.append(line_arr)
    return dataset
load_dataset('sonar.all-data.csv')
```
* Randomizing the training data
```
def subsample(dataset, ratio):  # create a random subsample of the dataset
    """subsample(draw a random training sample from the dataset)
    Args:
        dataset  the training dataset
        ratio    fraction of the training dataset to sample
    Returns:
        sample   the randomly sampled training examples
    """
    sample = list()
    # Sample the training examples in proportion to ratio.
    # round() returns the float x rounded to the nearest integer.
    n_sample = round(len(dataset) * ratio)
    while len(sample) < n_sample:
        # Sampling with replacement: some examples are drawn repeatedly and appear
        # several times in the training set while others never appear at all; this
        # is bootstrap sampling, and it guarantees that each tree's training set
        # is different
        index = randrange(len(dataset))
        sample.append(dataset[index])
    return sample
```
* Randomizing the features
```
def test_split(index, value, dataset):
    left, right = list(), list()
    for row in dataset:
        if row[index] < value:
            left.append(row)
        else:
            right.append(row)
    return left, right

def gini_index(groups, class_values):
    gini = 0.0
    for class_value in class_values:  # class_values = [0, 1]
        for group in groups:  # groups = (left, right)
            size = len(group)
            if size == 0:
                continue
            proportion = [row[-1] for row in group].count(class_value) / float(size)
            # The cost of the split: the more accurate the classification, the smaller the gini
            gini += (proportion * (1.0 - proportion))
    return gini

def get_split(dataset, n_features):
    class_values = list(set(row[-1] for row in dataset))  # class_values = [0, 1]
    b_index, b_value, b_score, b_groups = 999, 999, 999, None
    features = list()
    while len(features) < n_features:
        # Add n_features feature indices to features (n_features is about the square
        # root of the number of features), drawn at random from the dataset
        index = randrange(len(dataset[0]) - 1)
        if index not in features:
            features.append(index)
    # Pick the best feature among only these n_features candidates; since not all
    # features are scanned, each decision tree comes out different
    for index in features:
        for row in dataset:
            # groups = (left, right); row[index] tries every value of this feature
            # as a candidate split value, looking for the best split feature and value
            groups = test_split(index, row[index], dataset)
            gini = gini_index(groups, class_values)
            # The closer the two sides are in size, the less separated the data is
            # and the larger the gini coefficient
            if gini < b_score:
                # Keep the best split feature b_index, split value b_value, and
                # resulting partition b_groups
                b_index, b_value, b_score, b_groups = index, row[index], gini, groups
    # print(b_score)
    return {'index': b_index, 'value': b_value, 'groups': b_groups}
```
* Random resampling of the data, used for cross-validation
```
def cross_validation_split(dataset, n_folds):
    """cross_validation_split(resample the dataset into n_folds parts; rows may be
    drawn repeatedly across folds)
    Args:
        dataset  the original dataset
        n_folds  the number of folds to split dataset into
    Returns:
        dataset_split  a list holding the n_folds resampled folds
    """
    dataset_split = list()
    dataset_copy = list(dataset)  # copy dataset so the original is not modified
    fold_size = len(dataset) / n_folds
    for i in range(n_folds):
        fold = list()  # reset fold on every pass so folds are not appended twice
        # while, not if: if would only be evaluated once, while loops until the
        # condition fails
        while len(fold) < fold_size:
            # Sampling with replacement (bootstrap sampling) guarantees that each
            # tree's training set is different
            index = randrange(len(dataset_copy))
            # pop() removes an element from the list (by default the last one) and
            # returns its value, which would remove the drawn row from dataset_copy
            # fold.append(dataset_copy.pop(index))  # without replacement
            fold.append(dataset_copy[index])  # with replacement
        dataset_split.append(fold)
    # A list of n_folds datasets split from dataset, for use in cross-validation
    return dataset_split
```
## Building the random forest
```
def to_terminal(group):
    outcomes = [row[-1] for row in group]
    # With a non-empty key argument, max() uses the key function as its comparison criterion
    return max(set(outcomes), key=outcomes.count)  # output the most common label in group

def split(node, max_depth, min_size, n_features, depth):
    # e.g. max_depth = 10, min_size = 1, n_features = int(sqrt(len(dataset[0]) - 1))
    left, right = node['groups']
    del(node['groups'])
    # check for a no split
    if not left or not right:
        node['left'] = node['right'] = to_terminal(left + right)
        return
    # check for max depth
    if depth >= max_depth:
        # max_depth = 10 allows at most ten levels of recursion; if classification
        # has not finished by then, take the majority label so it ends early,
        # which guards against overfitting
        node['left'], node['right'] = to_terminal(left), to_terminal(right)
        return
    # process left child
    if len(left) <= min_size:
        node['left'] = to_terminal(left)
    else:
        # node['left'] is a dict of the form {'index': b_index, 'value': b_value,
        # 'groups': b_groups}, so node is a nested dict
        node['left'] = get_split(left, n_features)
        # Recurse; depth + 1 tracks the recursion depth
        split(node['left'], max_depth, min_size, n_features, depth + 1)
    # process right child
    if len(right) <= min_size:
        node['right'] = to_terminal(right)
    else:
        node['right'] = get_split(right, n_features)
        split(node['right'], max_depth, min_size, n_features, depth + 1)

def build_tree(train, max_depth, min_size, n_features):
    """build_tree(build one decision tree)
    Args:
        train       the training dataset
        max_depth   the tree must not be too deep, or it will overfit
        min_size    the size of a leaf node
        n_features  the number of features to draw
    Returns:
        root        the decision tree
    """
    # Find the best column (feature) and the associated split information
    root = get_split(train, n_features)
    # Recurse on the left and right partitions; once the best feature has been used,
    # reusing it later is pointless. For example: after splitting on gender
    # (male/female), splitting the male branch on gender again means nothing
    split(root, max_depth, min_size, n_features, 1)
    return root

def predict(node, row):  # predict the classification for one row
    if row[node['index']] < node['value']:
        # isinstance is a Python built-in that checks whether an object is of a given type
        if isinstance(node['left'], dict):
            return predict(node['left'], row)
        else:
            return node['left']
    else:
        if isinstance(node['right'], dict):
            return predict(node['right'], row)
        else:
            return node['right']

def bagging_predict(trees, row):
    """bagging_predict(bagging prediction)
    Args:
        trees  the collection of decision trees
        row    one row of the test dataset
    Returns:
        the most frequent prediction among the forest's decision trees
    """
    # Predict row with every tree in trees, then use a simple majority vote to
    # decide which class the row belongs to
    predictions = [predict(tree, row) for tree in trees]
    return max(set(predictions), key=predictions.count)

def random_forest(train, test, max_depth, min_size, sample_size, n_trees, n_features):
    """random_forest(train the forest and predict on the test set)
    Args:
        train        the training dataset
        test         the test dataset
        max_depth    the trees must not be too deep, or they will overfit
        min_size     the size of a leaf node
        sample_size  fraction of the training dataset to sample per tree
        n_trees      the number of decision trees
        n_features   the number of features to draw
    Returns:
        predictions  the bagging prediction for every row of the test set
    """
    trees = list()
    # n_trees is the number of decision trees
    for i in range(n_trees):
        # Randomly sampled training examples; random sampling guarantees that each
        # tree's training set is different
        sample = subsample(train, sample_size)
        # Build one decision tree
        tree = build_tree(sample, max_depth, min_size, n_features)
        trees.append(tree)
    # The bagging prediction for every row of the test set
    predictions = [bagging_predict(trees, row) for row in test]
    return predictions
```
## Testing the algorithm
```
# Calculate accuracy percentage
def accuracy_metric(actual, predicted):  # compare actual and predicted labels
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predicted[i]:
            correct += 1
    return correct / float(len(actual)) * 100.0

# Evaluate the algorithm's performance and return the model scores
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
    """evaluate_algorithm(evaluate the algorithm's performance and return its scores)
    Args:
        dataset    the original dataset
        algorithm  the algorithm to use
        n_folds    the number of folds
        *args      any further parameters
    Returns:
        scores     the model scores
    """
    # Resample the dataset into n_folds parts; rows may be drawn repeatedly
    folds = cross_validation_split(dataset, n_folds)
    scores = list()
    # On each pass take one fold from folds as the test set and the rest as the
    # training set; iterating over all of folds implements cross-validation
    for fold in folds:
        train_set = list(folds)
        train_set.remove(fold)
        # Flatten the list of folds into one train_set list, similar to UNION ALL:
        """
        In [20]: l1=[[1, 2, 'a'], [11, 22, 'b']]
        In [21]: l2=[[3, 4, 'c'], [33, 44, 'd']]
        In [22]: l=[]
        In [23]: l.append(l1)
        In [24]: l.append(l2)
        In [25]: l
        Out[25]: [[[1, 2, 'a'], [11, 22, 'b']], [[3, 4, 'c'], [33, 44, 'd']]]
        In [26]: sum(l, [])
        Out[26]: [[1, 2, 'a'], [11, 22, 'b'], [3, 4, 'c'], [33, 44, 'd']]
        """
        train_set = sum(train_set, [])
        test_set = list()
        # fold is the test set extracted from the original dataset
        for row in fold:
            row_copy = list(row)
            row_copy[-1] = None
            test_set.append(row_copy)
        predicted = algorithm(train_set, test_set, *args)
        actual = [row[-1] for row in fold]
        # Compute the accuracy of the random forest's predictions
        accuracy = accuracy_metric(actual, predicted)
        scores.append(accuracy)
    return scores

if __name__ == '__main__':
    # Load the data
    dataset = load_dataset('sonar.all-data.csv')
    # print(dataset)
    n_folds = 5        # split the data into 5 folds for cross-validation
    max_depth = 20     # tune this yourself; trees must not be too deep or they overfit
    min_size = 1       # minimum number of elements in a leaf node of a decision tree
    sample_size = 1.0  # fraction of the examples used to build each decision tree
    # n_features = int((len(dataset[0])-1))
    n_features = 15    # tune this yourself; trades accuracy against diversity
    for n_trees in [1, 10, 20]:  # in theory, the more trees the better
        scores = evaluate_algorithm(dataset, random_forest, n_folds, max_depth, min_size, sample_size, n_trees, n_features)
        # Seed so that every run of this file produces the same random number
        seed(1)
        print('random=', random())
        print('Trees: %d' % n_trees)
        print('Scores: %s' % scores)
        print('Mean Accuracy: %.3f%%' % (sum(scores) / float(len(scores))))
```
# Laboratory 06 - Temperature Measurement
## MAE 3120, Spring 2020
## Grading Rubric
Procedures, Results, Plots, Tables - 50%
Discussion Questions - 40%
Neatness - 10%
## Introduction and Background
Thermistors are temperature sensors whose resistance changes as the temperature changes. They are very accurate and can detect small temperature changes. Thermistors find uses in many applications including automotive to measure ambient or coolant fluid temperatures. The thermistor that will be used for this lab is a negative temperature coefficient (NTC) thermistor since as the temperature increases, the resistance value decreases and vice versa. For this thermistor, the nominal resistance of 10 kΩ corresponds to a temperature of 25°C. Any temperature above 25°C produces a resistance smaller than 10 kΩ and temperatures below produce resistances greater than 10 kΩ. The thermistor data sheet has been given to you, and it gives information about the expected resistances at various temperatures.
Typical NTC thermistors are non-linear and resemble an exponentially decaying function. For very small temperature changes the change in resistance can be approximated linearly with the change in resistance using a correlation coefficient. However, over larger temperature ranges the Steinhart-Hart equation, given below, is used where R is the resistance of the thermistor at the current temperature in Ω, T is the current temperature in Kelvin, and a, b, and c are constants determined experimentally.
$$\frac{1}{T} = a + b \ln R + c (\ln R)^3$$
An alternative form of the equation can also be used which does not require solving for the experimental constants. Instead, a $\beta$ value parameter is given in the data sheet for the thermistor to use with the equation below. Where $R_0$ is the resistance of the thermistor at reference temperature $T_0$ in Ω, $\beta$ is the parameter given in the Data Sheet in Kelvin, and $T_0$ is the reference temperature in Kelvin (25°C = 298.15 K).
$$R = R_0 e^{\large \beta \left(\frac{1}{T} - \frac{1}{T_0}\right)}$$
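Both forms are easy to check numerically. A sketch using the Part III coefficients, with a placeholder β of 3950 K standing in for the data-sheet value:

```python
import math

# Steinhart-Hart: 1/T = a + b*ln(R) + c*(ln R)^3, with the coefficients from Part III
def steinhart_hart_temp(R, a=1.305e-3, b=2.143e-4, c=9.709e-8):
    """Temperature in kelvin for a thermistor resistance R in ohms."""
    lnR = math.log(R)
    return 1.0 / (a + b * lnR + c * lnR ** 3)

# Beta form: R = R0 * exp(beta * (1/T - 1/T0)), inverted to give T from R
def beta_temp(R, beta=3950.0, R0=10e3, T0=298.15):
    """beta = 3950 K is a placeholder; read the real value off the data sheet."""
    return 1.0 / (1.0 / T0 + math.log(R / R0) / beta)

print(steinhart_hart_temp(10e3) - 273.15)  # close to 25 °C at the nominal 10 kΩ
print(beta_temp(10e3) - 273.15)            # 25 °C by construction (up to rounding)
```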
## Equipment
- Computer<br><p></p>
- Digital multimeter (DMM)<br><p></p>
- Hardware: National Instrument CompactDAQ cDAQ-9174, NI-9201 C Series Voltage Input Module <br><p></p>
- Breadboard <br><p></p>
- Power Supply<br><p></p>
- Resistors: 2 x 10 kΩ<br><p></p>
- 2 thermistors<br><p></p>
- Ice bath<br><p></p>
- Various BNC and banana cords and breadboard jumper wires as needed
## Procedure
For this lab there are questions to be answered on your lab report for each part. Make sure you pay attention to what the procedure requires you to do.
### Part I - Configuration
To do measurements on a Wheatstone bridge it is necessary to use a DAQ device with a differential input – the measured signal is $V_o = V_o^+-V_o^-$. For this, we will use the NI-9211 module (specifications in Appendix B). In addition to being able to measure thermocouple inputs, the NI-9211 module can measure differential inputs.
1. Open the ***DAQ*** Jupyter Notebook located in the *Labs* folder and configure the `acquire` function with the additional parameters listed in *Steps 2-4*. <br><p></p>
- Set the `daq_name` to the appropriate value for the location of the *NI 9211* card (most likely `'cDAQ1Mod4'`). <br><p></p>
- Set the `mod_type` to `'ai'`.<br><p></p>
### Part II - Voltage Divider
<img src="img/VoltageDivider.png">
With thermistors, you should be using the Wheatstone bridge for the measurements; however, to simplify the circuitry, we will only use a voltage divider for acquiring the signal. Build two voltage dividers each made of a thermistor ($R_2$) and a 10 kΩ resistor ($R_1$), as pictured in the image above.
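The divider obeys $V_o = V_s R_2 / (R_1 + R_2)$, and inverting it recovers the thermistor resistance from a measured voltage. A quick sketch with the lab's nominal values (a 5 V supply magnitude is assumed):

```python
# Voltage divider output: Vo = Vs * R2 / (R1 + R2), with R2 the thermistor
Vs = 5.0   # excitation voltage magnitude (assumed)
R1 = 10e3  # fixed 10 kΩ resistor

def divider_vout(R2):
    return Vs * R2 / (R1 + R2)

def divider_r2(Vo):
    # Invert the divider to recover the thermistor resistance from a measured Vo
    return R1 * Vo / (Vs - Vo)

print(divider_vout(10e3))  # → 2.5 V when the thermistor sits at its nominal 10 kΩ
print(divider_r2(2.5))     # → 10000.0
```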
### Part III - Thermistor Data Sheet Calculations
You will do this part at home for the lab report.
1. Using the data from the data sheet choose 3 points and calculate the constants a, b, and c from the Steinhart-Hart equation. Verify the values that you calculated for a, b, and c by picking two points (not the ones used in determining a, b, and c) and calculate the corresponding temperature.<br><p></p>
2. Verify that the $\beta$ parameter value given for this thermistor on the data sheet satisfies the second equation given in the intro, by solving for R at temperatures of 0, 50, and 100°C.<br><p></p>
3. Plot the two curves on the same graph. Is there a difference in the accuracy of the two methods? If so, which one is more accurate?
For the following parts use the following values of the coefficients:
$a = 1.305\times10^{-3}$, $b = 2.143\times10^{-4}$, $c = 9.709\times10^{-8}$
### Part IV - Temperature Measurement
Now you can start with the measurements. Make sure to save your data to file appropriately by configuring the `acquire` function. You will need the data to plot them for your report.
1. Select the proper acquisition rate and number of samples. Discuss your choice in the lab report. If needed, modify the plot update rate using the `time_sep` parameter of the `acquire` function. By default, the live plot updates every `1` second. It cannot update at any faster rate. This value does not need to be changed unless you are acquiring data for a long period of time and/or at a high sampling rate (> 15 seconds). If you are acquiring data for more than 1 minute, this value should be changed to a number around `30`<br><p></p>
- Turn on the power supply to excite the voltage divider. Set it to -5 V.<br><p></p>
- You will now place your thermistor (using some tape) near the exhaust of the computer fan.<br><p></p>
- If everything is plugged in correctly and your function is configured properly, you should notice a slow increase in the output voltage. If you do not see the voltage (temperature) curve on the measurement displays, change the limits from +/-10 V to larger values, or turn on auto-scale Y (if your settings are correct, you should expect the temperature to be around 20°C). To change the limits of the waveform, double-click on the values and enter the new ones.<br><p></p>
- Remember you can always troubleshoot your system with the oscilloscope.<br><p></p>
- Record how much time it takes for the thermistor to reach the temperature of the computer. <br><p></p>
- Remove the thermistor from the back of the computer and place it on the desk. Record how long it takes for the temperature to reach steady state.
### Part V - Time Response Measurement
Let's focus more on the time response measurement of the probe. Again, make sure to save your data to file so you can estimate the time constant during post-processing.
- Select the proper acquisition rate and number of samples. Discuss your choice in the lab report.<br><p></p>
- You will now place your thermistor in an ice bath of temperature 0°C. <br><p></p>
- Record how much time it takes for the thermistor to reach the temperature of the ice bath; do not forget to save the data to file. Refer to your notes on first-order time response to determine the time constant $\tau$.<br><p></p>
- Remove the thermistor from the bath and place it on the desk.<br><p></p>
- Record again how much time it takes to come back to room temperature and save the data to file.<br><p></p>
- Use Python to estimate how long it takes for the temperature of the thermistor to reach steady state.
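One way to do that estimate in Python is a log-linear fit of the first-order response $T(t) = T_\infty + (T_0 - T_\infty)e^{-t/\tau}$. The following is a sketch on synthetic data; in practice, load the time and temperature arrays from your saved acquisition file:

```python
import numpy as np

def estimate_tau(t, temp):
    """Estimate the first-order time constant tau from a step response.

    Fits ln|T - T_inf| against t. The steady-state value T_inf is taken as
    the mean of the last few samples, and samples already at steady state
    are dropped before the fit.
    """
    t = np.asarray(t, dtype=float)
    temp = np.asarray(temp, dtype=float)
    t_inf = temp[-max(3, len(temp) // 20):].mean()
    y = temp - t_inf
    mask = np.abs(y) > 0.02 * abs(y[0])  # keep samples not yet at steady state
    slope, _ = np.polyfit(t[mask], np.log(np.abs(y[mask])), 1)
    return -1.0 / slope

# Synthetic cooling curve: tau = 4 s, from 90 degC down to 20 degC.
t = np.linspace(0, 40, 400)
temp = 20 + 70 * np.exp(-t / 4.0)
print(estimate_tau(t, temp))  # close to 4.0
```

With $\tau$ in hand, steady state is conventionally taken as 4 to 5 time constants after the step.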
# Discussion Questions
Answer the questions asked in the procedure, and state the answers clearly in your lab report.
1. If you were to acquire two signals (on two independent AI channels) with your current DAQ module, would the two signals be acquired simultaneously?
A. If not, how are they acquired, and what electronic component in the DAQ system allows it to do that?
B. For a sampling rate $f$, what is the time interval between the samples?
C. Note: for most practical applications one can consider the data to have been acquired simultaneously, but you should be aware of this phenomenon.<br><p></p>
2. In Part III-4-c:
A. Is the resistor value within its tolerance?
B. Using uncertainty analysis, what would be the uncertainty on the temperature if the resistor value was not precisely measured?<br><p></p>
3. How does the rise time you measured in *Part IV* compare to that of an RTD? A thermocouple?
# Appendices
## Appendix A - NI cDAQ-9174
<img src="img/cDAQ-9174.png" width=240 align="left"><br><br><br><br><br><br><br><br>
[Online Manual](https://www.ni.com/documentation/en/compactdaq-chassis/latest/cdaq-9174/overview/)
[User Manual](https://www.ni.com/pdf/manuals/372838e.pdf)
[Specification Sheet](https://www.ni.com/pdf/manuals/374045a.pdf)
## Appendix B - NI 9201
<img src="img/NI-9201.png" width=150 align="left"><br><br><br><br><br><br><br><br><br>
[HTML Manual](https://www.ni.com/en-us/support/model.ni-9201.html/)
```
%matplotlib inline
import pandas as pd
from IPython.core.display import HTML
css = open('style-table.css').read() + open('style-notebook.css').read()
HTML('<style>{}</style>'.format(css))
titles = pd.read_csv('data/titles.csv', index_col=None)
titles.head()
cast = pd.read_csv('data/cast.csv', index_col=None)
cast.head()
# What are the ten most common movie names of all time?
titles.title.value_counts().head(10)
# Which three years of the 1930s saw the most films released?
t = titles
t = t[t.year // 10 == 193]
t.year.value_counts().head(3)
# Plot the number of films that have been released each decade
# over the history of cinema.
t = titles
(t.year // 10 * 10).value_counts().sort_index().plot(kind='bar')
# Plot the number of "Hamlet" films made each decade.
t = titles
t = t[t.title == 'Hamlet']
(t.year // 10 * 10).value_counts().sort_index().plot(kind='bar')
# Plot the number of "Rustler" characters
# in each decade of the history of film.
c = cast
c = c[c.character == 'Rustler']
(c.year // 10 * 10).value_counts().sort_index().plot(kind='bar')
# Plot the number of "Hamlet" characters each decade.
c = cast
c = c[c.character == 'Hamlet']
(c.year // 10 * 10).value_counts().sort_index().plot(kind='bar')
# What are the 11 most common character names in movie history?
cast.character.value_counts().head(11)
# Who are the 10 people most often credited as "Herself" in film history?
c = cast
c[c.character == 'Herself'].name.value_counts().head(10)
# Who are the 10 people most often credited as "Himself" in film history?
c = cast
c[c.character == 'Himself'].name.value_counts().head(10)
# Which actors or actresses appeared in the most movies in the year 1945?
cast[cast.year == 1945].name.value_counts().head(10)
# Which actors or actresses appeared in the most movies in the year 1985?
cast[cast.year == 1985].name.value_counts().head(10)
# Plot how many roles Mammootty has played in each year of his career.
cast[cast.name == 'Mammootty'].year.value_counts().sort_index().plot()
# What are the 10 most frequent roles that start with the phrase "Patron in"?
c = cast
c[c.character.str.startswith('Patron in ')].character.value_counts().head(10)
# What are the 10 most frequent roles that start with the word "Science"?
c = cast
c[c.character.str.startswith('Science')].character.value_counts().head(10)
# Plot the n-values of the roles that Judi Dench has played over her career.
c = cast
c = c[c.name == 'Judi Dench'].sort_values('year')
c = c[c.n.notnull()]
c.plot(x='year', y='n', kind='scatter')
# Plot the n-values of Cary Grant's roles through his career.
c = cast
c = c[c.name == 'Cary Grant'].sort_values('year')
c = c[c.n.notnull()]
c.plot(x='year', y='n', kind='scatter')
# Plot the n-value of the roles that Sidney Poitier has acted
# over the years.
c = cast
c = c[c.name == 'Sidney Poitier'].sort_values('year')
c = c[c.n.notnull()]
c.plot(x='year', y='n', kind='scatter')
# How many leading (n=1) roles were available to actors,
# and how many to actresses, in the 1950s?
c = cast
c = c[c.year // 10 == 195]
c = c[c.n == 1]
c.type.value_counts()
# How many supporting (n=2) roles were available to actors,
# and how many to actresses, in the 1950s?
c = cast
c = c[c.year // 10 == 195]
c = c[c.n == 2]
c.type.value_counts()
```
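A side note on the decade-bucketing idiom used repeatedly above: integer-dividing a year by 10 and multiplying back rounds it down to its decade. A minimal standalone check:

```python
import pandas as pd

years = pd.Series([1931, 1939, 1940, 1985, 1999, 2003])
decades = years // 10 * 10
print(decades.tolist())  # [1930, 1930, 1940, 1980, 1990, 2000]
```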
```
!pip uninstall sagemaker -y && pip install sagemaker
%%time
import pickle, gzip, urllib.request, json
import numpy as np
# Load the dataset
urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz")
with gzip.open('mnist.pkl.gz', 'rb') as f:
train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
print(train_set[0].shape)
%matplotlib inline
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=10, figsize=(10, 10))
for i in range(0, 10):
img = train_set[0][i]
label = train_set[1][i]
img_reshape = img.reshape((28,28))
ax = axes[i]
imgplot = ax.imshow(img_reshape, cmap='gray')
ax.axis("off")
ax.set_title(label)
plt.show()
%%time
import os
import boto3
import re
import copy
import time
import io
import struct
from time import gmtime, strftime
from sagemaker import get_execution_role
role = get_execution_role()
region = boto3.Session().region_name
bucket='sagemaker-200816' # Replace with your s3 bucket name
prefix = 'sagemaker/xgboost-mnist' # Used as part of the path in the bucket where you store data
s3_client = boto3.client("s3")
def convert_data():
data_partitions = [('train', train_set), ('validation', valid_set), ('test', test_set)]
for data_partition_name, data_partition in data_partitions:
print(f"{data_partition_name}: {data_partition[0].shape} {data_partition[1].shape}")
labels = [t.tolist() for t in data_partition[1]]
features = [t.tolist() for t in data_partition[0]]
if data_partition_name != 'test':
# examples: [[y_label, labels...], ...]
examples = np.insert(features, 0, labels, axis=1)
else:
examples = features
np.savetxt('data.csv', examples, delimiter=',')
key = f"{prefix}/{data_partition_name}/examples"
        url = f"s3://{bucket}/{prefix}/{data_partition_name}"
s3_client.upload_file(Filename="data.csv", Bucket=bucket, Key=key)
print(f"Done writing to {url}")
convert_data()
import sagemaker
from sagemaker import image_uris
container = image_uris.retrieve('xgboost', region=region, version="latest")
train_data_url = f"s3://{bucket}/{prefix}/train"
validation_data_url = f"s3://{bucket}/{prefix}/validation"
s3_output_location = f"s3://{bucket}/{prefix}/xgboost_model_sdk"
print(train_data_url)
xgb_model = sagemaker.estimator.Estimator(container,
role,
instance_count=1,
instance_type='ml.m4.xlarge',
volume_size = 5,
output_path=s3_output_location,
sagemaker_session=sagemaker.Session())
xgb_model.set_hyperparameters(max_depth = 5,
eta = .2,
gamma = 4,
min_child_weight = 6,
silent = 0,
objective = "multi:softmax",
num_class = 10,
num_round = 10)
train_channel = sagemaker.inputs.TrainingInput(train_data_url, content_type='text/csv')
valid_channel = sagemaker.inputs.TrainingInput(validation_data_url, content_type='text/csv')
data_channels = {'train': train_channel, 'validation': valid_channel}
xgb_model.fit(inputs=data_channels, logs=True)
xgb_predictor = xgb_model.deploy(initial_instance_count=1,
serializer=sagemaker.serializers.CSVSerializer(),
instance_type='ml.t2.medium')
test_key = f"{prefix}/test/examples"
s3_client.download_file(Bucket=bucket, Key=test_key, Filename="test_data")
%matplotlib inline
fig, axes = plt.subplots(nrows=1, ncols=10, figsize=(10, 10))
for i in range(0, 10):
img = test_set[0][i]
label = test_set[1][i]
img_reshape = img.reshape((28,28))
ax = axes[i]
imgplot = ax.imshow(img_reshape, cmap='gray')
ax.axis("off")
ax.set_title(label)
plt.show()
with open('test_data', 'r') as f:
for j in range(0,10):
single_test = f.readline()
result = xgb_predictor.predict(single_test)
print(int(float(result.decode())), end=" ")
# The location of the test dataset
batch_input = f"s3://{bucket}/{prefix}/test/examples"
# The location to store the results of the batch transform job
batch_output = f"s3://{bucket}/{prefix}/batch-inference"
transformer = xgb_model.transformer(instance_count=1, instance_type='ml.m4.xlarge', output_path=batch_output)
transformer.transform(data=batch_input, data_type='S3Prefix', content_type='text/csv', split_type='Line')
transformer.wait()
test_key = f"{prefix}/batch-inference/examples.out"
s3_client.download_file(Bucket=bucket, Key=test_key, Filename="batch_results")
with open('batch_results') as f:
results = f.readlines()
for j in range (0, 50):
print(int(float(results[j])), end=" ")
import os
import io
import boto3
import json
import csv
ENDPOINT_NAME = "xgboost-2020-08-16-06-47-01-579"
runtime = boto3.client('runtime.sagemaker')
with open('test_data', 'r') as f:
for j in range(0,50):
payload = f.readline()
response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
ContentType='text/csv',
Body=payload)
result = json.loads(response["Body"].read().decode())
print(int(float(result)), end=" ")
```
# Downloading Dependencies and Configuring your Environment
In this step we will download tools, clone sources from GitHub, and configure your environment. These tools will be used in the remainder of this tutorial. Some steps download packages to your host computer, while others download to your PYNQ board.
The host commands are shown as Bash shell commands. On a Linux host, these can be run from a Bash Terminal. From Windows, you must download [Cygwin](https://www.cygwin.com).
# Host Dependencies
You will need to install the following programs on your host computer:
1. Git [(Windows & Linux Tutorial)](https://www.atlassian.com/git/tutorials/install-git)
2. Xilinx Vivado WebPack 2017.1 [(Windows & Linux Tutorial - See Chapter 3)](https://www.xilinx.com/support/documentation/sw_manuals/xilinx2017_1/ug973-vivado-release-notes-install-license.pdf)
3. Cygwin (with Git, Meld, GNU Make, python, and openSSH packages) [Tutorial](http://www.mcclean-cooper.com/valentino/cygwin_install/)
## Host Repositories
Once these dependencies have been installed you need to run the following commands on your host computer. These will clone the [PYNQ-HLS repository](https://github.com/drichmond/PYNQ-HLS). We have cloned these directories to `/home/xilinx` on our host machine.
On Linux, these commands can be run in the terminal. On Windows you will need to run these commands from the Cygwin Terminal.
Run the following command to clone the git repository containing these notebooks and their corresponding source files to your host machine:
git clone https://github.com/drichmond/PYNQ-HLS ~/PYNQ-HLS
# PYNQ Dependencies
First, check that your PYNQ board has an internet connection by running the following cell. You should see 10 responses from xilinx.com. If you do not see 10 responses, check your PYNQ board's internet connection and try again.
```
!ping xilinx.com -c 10
```
If your board has an internet connection, you need to download the PYNQ-HLS repository to your PYNQ board. Run the following cell to download it to your PYNQ board.
```
!git clone https://github.com/drichmond/PYNQ-HLS /home/xilinx/PYNQ-HLS
!chown -R xilinx:xilinx /home/xilinx/PYNQ-HLS
```
Alternatively, you can copy the repository you cloned in [Host Repositories](1-Downloading-Dependencies.ipynb#Host-Repositories) to your PYNQ board using [SAMBA](http://pynq.readthedocs.io/en/v2.0/getting_started.html#accessing-files-on-the-board) or SCP.
You should place the copy of the PYNQ-HLS repository in `/home/xilinx`.
Verify that the repository has been cloned and that the repository has the correct permissions:
```
!ls -l /home/xilinx/PYNQ-HLS
```
That's it! You are now ready to move on to the next step **[Creating a Vivado HLS Core](2-Creating-A-Vivado-HLS-Core.ipynb)**
```
%matplotlib inline
import pickle
import datetime
import time
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
data_path = "/workspace/dataset/"
train_file = "train_sample.csv"
dev_file = "dev_sample.csv"
test_file = "test_sample.csv"
train_df = pd.read_csv(data_path + train_file)
train_df.columns
```
## make tag dict
```
exception_word_list = ["참조", "없음", "표기", "별도", "상세", "자체", "불명", "미입력", "공개불가"]
exception_tag_list = ["기타", "", "."]
def check_valid(keyword):
if keyword is np.nan:
return False
else:
if any(exception in keyword for exception in exception_word_list):
return False
elif keyword in exception_tag_list:
return False
else:
return True
def get_trimed_tag(brand, maker):
brand_valid = check_valid(brand)
maker_valid = check_valid(maker)
if brand_valid:
return brand
else:
if maker_valid:
return maker
return "-1"
train_df['tag'] = train_df.apply(lambda x: get_trimed_tag(x['brand'], x['maker']), axis=1)
train_df['tag'].value_counts()[train_df['tag'].value_counts() > 30][:10]
valid_tag_list = train_df['tag'].value_counts()[train_df['tag'].value_counts() > 30].index
print(len(valid_tag_list)) # number of brands that have at least 30 items
print(train_df[train_df['tag'].isin(valid_tag_list)].shape) # number of rows covered by those brands
print(train_df[train_df['tag'] == ""].shape) # number of rows whose tag is an outlier
# Rows whose brand has at least 30 items, excluding outlier brands
print((7308567 - 3855743) / train_df.shape[0]) # about 42% carry brand information that is likely to be meaningful
print((train_df.shape[0] - 7308567) / train_df.shape[0]) # about 10% have a brand with fewer than 30 items
train_df[train_df['tag'].isin(valid_tag_list)][['brand', 'maker', 'tag']].sample(100).head(50)
valid_tag_dict = {}
idx = 1
for tag in valid_tag_list:
valid_tag_dict[tag] = idx
idx = idx + 1
len(valid_tag_dict)
# valid_tag_dict
valid_tag_dict['-1']
# save
with open('valid_tag_dict.pickle', 'wb') as f:
pickle.dump(valid_tag_dict, f, pickle.HIGHEST_PROTOCOL)
del train_df['brand']
del train_df['maker']
del train_df['tag']
```
## make price dict
```
pd.options.display.float_format = '{:.6f}'.format
train_df[(train_df['price']!=-1) & (train_df['price'] < 1000000)]['price'].describe()
quantile_1 = train_df[train_df['price']!=-1]['price'].quantile(0.3)
quantile_2 = train_df[train_df['price']!=-1]['price'].quantile(0.7)
print("quantile 1", str(train_df[train_df['price']!=-1]['price'].quantile(0.3)))
print("quantile 2", str(train_df[train_df['price']!=-1]['price'].quantile(0.7)))
def get_price_level(price):
    # Note: `price == np.nan` is always False, so NaN must be checked with pd.isna().
    if price == -1 or pd.isna(price):
        return "2"
    elif price < quantile_1:
        return "1"
    elif price > quantile_2:
        return "3"
    else:
        return "2"
train_df['price_level'] = train_df['price'].apply(lambda x: get_price_level(x))
train_df['price_level'].value_counts()
vc = train_df['price_level'].value_counts()
val_sum = vc.sum()
print(vc.index[0], "ratio :", str(vc.values[0] / val_sum))
print(vc.index[1], "ratio :", str(vc.values[1] / val_sum))
print(vc.index[2], "ratio :", str(vc.values[2] / val_sum))
price_quantile_dict = {"quantile_1": quantile_1, "quantile_2": quantile_2}
# save
with open('price_quantile_dict.pickle', 'wb') as f:
pickle.dump(price_quantile_dict, f, pickle.HIGHEST_PROTOCOL)
```
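The same three-level bucketing can also be done without a row-wise `apply`. A vectorised sketch (the cut-points below are hypothetical stand-ins for the quantiles computed above):

```python
import numpy as np
import pandas as pd

# Hypothetical cut-points standing in for quantile_1 / quantile_2 above.
quantile_1, quantile_2 = 5000.0, 30000.0

def bucket_prices(prices):
    """Vectorised get_price_level: -1 and NaN both map to the middle level."""
    s = pd.Series(prices, dtype=float)
    valid = s.notna() & (s != -1)
    levels = np.select([valid & (s < quantile_1), valid & (s > quantile_2)],
                       ["1", "3"], default="2")
    return levels.tolist()

print(bucket_prices([-1, 1000.0, 10000.0, 50000.0, float("nan")]))
# ['2', '1', '2', '3', '2']
```

`np.select` evaluates all rows at once, which is noticeably faster than `apply` on a dataframe of this size.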
## make aging min-max scaler dict for each div(train, dev, test)
```
del train_df['price_level']
del train_df['price']
time_aging_dict = {'train': {'min': 0,
'max': 0,
'stand_unix_time': 0},
'dev': {'min': 0,
'max': 0,
'stand_unix_time': 0},
'test': {'min': 0,
'max': 0,
'stand_unix_time': 0}}
train_df['year_mon_day'] = train_df['updttm'].apply(lambda x: int(str(x[2:10])))
train_recent_updttm_day = train_df['year_mon_day'].max()
unix_time = time.mktime(datetime.datetime.strptime(str(train_recent_updttm_day), "%Y%m%d").timetuple())
time_aging_dict['train']['stand_unix_time'] = unix_time + 86400
dev_df = pd.read_csv(data_path + dev_file)
dev_df['year_mon_day'] = dev_df['updttm'].apply(lambda x: int(str(x[2:10])))
dev_recent_updttm_day = dev_df['year_mon_day'].max()
unix_time = time.mktime(datetime.datetime.strptime(str(dev_recent_updttm_day), "%Y%m%d").timetuple())
time_aging_dict['dev']['stand_unix_time'] = unix_time + 86400
test_df = pd.read_csv(data_path + test_file)
test_df['year_mon_day'] = test_df['updttm'].apply(lambda x: int(str(x[2:10])))
test_recent_updttm_day = test_df['year_mon_day'].max()
unix_time = time.mktime(datetime.datetime.strptime(str(test_recent_updttm_day), "%Y%m%d").timetuple())
time_aging_dict['test']['stand_unix_time'] = unix_time + 86400
time_aging_dict
def get_unix_time_aging(stand_unix_time, time_str):
date_str = time_str[2:10]
unix_time = time.mktime(datetime.datetime.strptime(date_str, "%Y%m%d").timetuple())
return (stand_unix_time - unix_time)
div_time_aging = time_aging_dict['train']['stand_unix_time']
train_df['unix_time_aging'] = train_df['updttm'].apply(lambda x: get_unix_time_aging(div_time_aging, x))
div_time_aging = time_aging_dict['dev']['stand_unix_time']
dev_df['unix_time_aging'] = dev_df['updttm'].apply(lambda x: get_unix_time_aging(div_time_aging, x))
div_time_aging = time_aging_dict['test']['stand_unix_time']
test_df['unix_time_aging'] = test_df['updttm'].apply(lambda x: get_unix_time_aging(div_time_aging, x))
time_aging_dict['train']['min'] = train_df['unix_time_aging'].min()
time_aging_dict['train']['max'] = train_df['unix_time_aging'].max()
time_aging_dict['dev']['min'] = dev_df['unix_time_aging'].min()
time_aging_dict['dev']['max'] = dev_df['unix_time_aging'].max()
time_aging_dict['test']['min'] = test_df['unix_time_aging'].min()
time_aging_dict['test']['max'] = test_df['unix_time_aging'].max()
print(time_aging_dict)
train_min = time_aging_dict['train']['min']
train_max = time_aging_dict['train']['max']
train_df['unix_time_norm'] = (train_df['unix_time_aging'] - train_min) / (train_max - train_min)
# save
with open('time_aging_dict.pickle', 'wb') as f:
pickle.dump(time_aging_dict, f, pickle.HIGHEST_PROTOCOL)
```
## aging feature visualization
#### train distribution
```
plt.axis('off')
train_df.groupby('bcateid')['unix_time_norm'].mean().plot.bar()
plt.axis('off')
train_df.groupby('mcateid')['unix_time_norm'].mean().plot.bar()
plt.axis('off')
train_df.groupby('scateid')['unix_time_norm'].mean().plot.bar()
plt.axis('off')
train_df.groupby('dcateid')['unix_time_norm'].mean().plot.bar()
```
```
import os
hostname = os.popen("hostname").read().split("\n")[0]
if(hostname != "reckoner1429-Predator-PH315-52" and hostname != "jarvis"):
    from google.colab import drive
drive.mount('/content/gdrive')
! chmod 755 "/content/gdrive/My Drive/collab-var.sh"
! "/content/gdrive/My Drive/collab-var.sh"
%cd "/content/gdrive/My Drive/github/video-emotion-recognition"
import librosa
import librosa.display
import numpy as np
import utils.data_util as data_util
import utils.preprocess_util as preprocess_util
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense, Flatten, Concatenate
from tensorflow.keras import Model
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.models import load_model
import tensorflow.keras as keras
import time
import utils.config as config
from utils.hyparam_util import load_fusion_hyparam
import sklearn.metrics as skm
import seaborn as sn
# if(config.CURRENT_DATASET == 'SAVEE'):
# dataset = preprocess_util.SAVEE()
# elif(config.CURRENT_DATASET == 'RAVDESS'):
# dataset = preprocess_util.RAVDESS()
dataset = preprocess_util.SAVEE()
SEED = 0
X_train_audio, X_test_audio, Y_train_audio, Y_test_audio = dataset.load_audio_filenames(SEED, 0.2)
X_train_face, X_test_face, Y_train_face, Y_test_face = dataset.load_visual_filenames(SEED, 0.2)
print(X_test_audio)
print(Y_test_audio)
#change the iteration and model name here
iteration = "test"
model_name = 'xception'
path = os.path.join(dataset.MODEL_SAVE_DIR, "iteration-"+iteration)
new_model = tf.keras.models.load_model(path +'/saved_models/'+model_name+'-8-8-8-'+iteration+'.h5')
#new_model.summary()
INPUT_WIDTH = 224
INPUT_HEIGHT = 224
hyparams = load_fusion_hyparam(iteration)
BATCH_SIZE = hyparams['batch_size']
N_CLASSES = len(dataset.emotion_classes)
epochs = hyparams['epochs']
# ITERATION = hyparams['iteration']
X_val_gen = data_util.MultimodalDataGenerator(X_test_face, X_test_audio, Y_test_face, BATCH_SIZE, INPUT_WIDTH, INPUT_HEIGHT)
pred = new_model.predict(X_val_gen)
#confusion matrix
print(type(pred))
print(pred.shape)
print(Y_test_face.shape)
EMOTION_CLASSES = dataset.emotion_classes
true_values = []
pred_values = []
for i in range(len(pred)):
true_values.append(EMOTION_CLASSES[np.argmax(Y_test_face[i])])
pred_values.append(EMOTION_CLASSES[np.argmax(pred[i])])
print(true_values)
print(pred_values)
cm = skm.confusion_matrix(true_values, pred_values, labels = EMOTION_CLASSES)
# cm = cm/np.sum(cm, axis=1) * 100
print(cm)
row_sums = np.sum(cm, axis=1)  # avoid shadowing the built-in sum()
print(row_sums)
cm = (np.divide(cm.T, row_sums).T) * 100
print(cm)
cmap = 'Greens'
svm = sn.heatmap(cm, cmap=cmap, annot=True,
fmt = '.1f', cbar_kws={'label':'Percentage(%)'},
xticklabels=dataset.EMOTION_LABELS, yticklabels=dataset.EMOTION_LABELS)
plt.show()
fig = svm.get_figure()
plot_save_dir = os.path.join(dataset.DATASET_BASE_DIR, 'plots',
'iteration-'+str(iteration), new_model.name)
if(not(os.path.exists(plot_save_dir))):
os.makedirs(plot_save_dir)
fig.savefig(os.path.join(plot_save_dir, new_model.name +'-'+ cmap+'-'+'cm.png'), bbox_inches='tight', dpi=300)
#Histories
path = dataset.MODEL_SAVE_DIR
path_fusion = path + '/iteration-'+iteration+'/history'
path_ftm = path + '/ftm-0/history'
# print(path_ftm)
# print(path_fusion)
model_histories = {}
for model_history in os.listdir(path_fusion):
model_history_path = path_fusion + '/' + model_history
if(os.path.isfile(model_history_path)):
model_histories[model_history.split('.')[0]] = np.load(model_history_path, allow_pickle = True)
print(model_history)
#print("=======================================================================================")
for model_history in os.listdir(path_ftm):
model_history_path = path_ftm + '/' + model_history
if(os.path.isfile(model_history_path)):
model_histories[model_history.split('.')[0]] = np.load(model_history_path, allow_pickle = True)
print(model_history)
#print("=======================================================================================")
#change model name here to check history
key_fusion = model_name + '-8-8-8-'+iteration+'-history'
key_face = "ftm-" + model_name + "-face-8-history"
key_audio = "ftm-" + model_name + "-audio-8-history"
print("================================== Visual model =====================================================")
print(model_histories[key_face].item()['val_categorical_accuracy'][199])
print("==================================== Audio Model ===================================================")
print(model_histories[key_audio].item()['val_categorical_accuracy'][199])
print("===================================== Fusion Model ==================================================")
print(model_histories[key_fusion].item()['val_categorical_accuracy'][199])
```
---
title: "Data Science Design Pattern for Student Drop Out"
author: "Microsoft"
output:
rmarkdown::html_vignette:
toc: true
vignette: >
%\VignetteIndexEntry{Vignette Title}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```
knitr::opts_chunk$set(fig.width = 6,
fig.height = 4,
fig.align='center',
dev = "png")
```
# Introduction
Welcome to the Data Science Design Pattern for Student Drop Out. This pattern provides a starting point for the data scientist exploring a new dataset. By no means is it the end point of the data science journey. The pattern is under regular revision and improvement and is provided as is.
We now introduce a generic pattern for building multiple binary classification models using R.
# Pre-configuration
We load the R packages required for modelling.
```
########################################################################
# R SETUP
# Load required packages from local library into R.
library(magrittr) # Data pipelines: %>% %T>% %<>%.
library(stringi) # String operator: %s+%.
library(rattle) # Evaluate using riskchart().
library(mlr) # Dependency of unbalanced.
library(unbalanced) # Resampling using ubSMOTE.
library(rpart) # Model: decision tree.
library(rpart.plot) # Draw fancyRpartPlot().
library(randomForest) # Model: random forest.
library(ada) # Model: ada boosting.
library(party) # Model: ctree and cforest.
library(e1071) # Model: support vector machine.
library(nnet) # Model: neural network.
library(Matrix) # Construct a Matrix of a class that inherits from Matrix.
library(caret) # Tune model hyper-parameters.
library(xgboost) # Model: extreme gradiant boosting.
library(Ckmeans.1d.dp)# Plot feature importance using xgb.plot.importance.
library(DiagrammeR) # Plot xgboost tree using xgb.plot.tree.
library(ROCR) # Use prediction() for evaluation.
library(pROC) # Use auc() for evaluation.
library(ggplot2) # Visually evaluate performance.
```
# Step 4.4: Re-load Dataset
In the Data template we loaded the studentDropIndia dataset, processed it, and saved it to file. Here we re-load the dataset and review its contents. In addition, we define some support functions for evaluation.
```
########################################################################
# DATA INGESTION
# Identify the dataset.
dsname <- "studentDropIndia"
# We define some support functions that we often find useful.
evaluateModel <- function(data, observed, predicted)
{
# Calculate the confusion matrix
confusion <- table(data[[observed]], data[[predicted]], dnn=c("Observed", "Predicted"))
confusion %>% print()
# Calculate the performance metrics
tp <- confusion[rownames(confusion) == 1, colnames(confusion) == 1]
fn <- confusion[rownames(confusion) == 1, colnames(confusion) == 0]
fp <- confusion[rownames(confusion) == 0, colnames(confusion) == 1]
tn <- confusion[rownames(confusion) == 0, colnames(confusion) == 0]
accuracy <- (tp + tn) / (tp + fn + fp + tn)
precision <- tp / (tp + fp)
recall <- tp / (tp + fn)
fscore <- 2 * (precision * recall) / (precision + recall)
# Construct the vector of performance metrics
metrics <- c("Accuracy" = accuracy,
"Precision" = precision,
"Recall" = recall,
"F-Score" = fscore)
# Return the vector of performance metrics
return(metrics)
}
rocChart <- function(pr, target)
{
# Calculate the true positive and the false positive rates.
rates <- pr %>%
prediction(target) %>%
performance("tpr", "fpr")
  # Calculate the AUC.
auc <- pr %>%
prediction(target) %>%
performance("auc") %>%
attr("y.values") %>%
extract2(1)
# Construct the plot.
pl <- data.frame(tpr=attr(rates, "y.values")[[1]],
fpr=attr(rates, "x.values")[[1]]) %>%
ggplot(aes(fpr, tpr)) +
geom_line() +
annotate("text", x=0.875, y=0.125, vjust=0,
label=paste("AUC =", round(100*auc, 2)),
family="xkcd") +
xlab("False Positive Rate (1-Specificity)") +
ylab("True Positive Rate (Sensitivity)")
# Return the plot object.
return(pl)
}
# Identify the dataset to load.
fpath <- "data"
dsdate <- "_" %s+% "20161215"
# Filename of the saved dataset.
dsrdata <-
file.path(fpath, dsname %s+% dsdate %s+% ".RData") %T>%
print()
# Load the R objects from file and list them.
load(dsrdata) %>% print()
# Review the metadata.
dsname
dspath
dsdate
nobs
vars
target
id
ignore
omit
```
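For reference, the metrics computed by `evaluateModel()` above are the standard confusion-matrix definitions:

$$
\text{Accuracy} = \frac{TP + TN}{TP + FN + FP + TN}, \qquad
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F = \frac{2\,\text{Precision}\cdot\text{Recall}}{\text{Precision} + \text{Recall}}
$$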
# Step 4.5: Prepare - Formula to Describe the Goal
We continue on from the Data module where we had Steps 1, 2, and 3 and the beginnings of Step 4 of a data mining process.
The next step is to describe the model to be built by way of writing a formula to capture our intent. The formula describes the model to be built as being constructed to predict the target variable based on the other (suitable) variables available in the dataset. The notation used to express this is to name the target (continue_drop), followed by a tilde (~) followed by a period (.) to represent all other variables (these variables will be listed in vars in our case).
```
########################################################################
# PREPARE FOR MODELLING
# Formula for modelling.
form <- ds[vars] %>% formula() %T>% print()
```
A common methodology for model building is to randomly partition the available data into a training dataset and a testing dataset. We sometimes also introduce a third dataset, called the validation dataset, used during the building of the model, but for now we will use just the two.
First we (optionally) initiate the random number sequence with a randomly selected seed, and report what the seed is so that we could repeat the experiments presented here if required. For consistency in this module we use a particular seed of 123.
Next we partition the dataset into two subsets. The first is a 70% random sample for building the model (the training dataset) and the second is the remainder, used to evaluate the performance of the model (the testing dataset).
```
# Initialise random numbers for repeatable results.
seed <- 123
set.seed(seed)
# Partition the full dataset into two.
train <-
sample(nobs, 0.70*nobs) %T>%
{length(.) %>% print()}
head(train)
test <-
seq_len(nobs) %>%
setdiff(train) %T>%
{length(.) %>% print()}
head(test)
```
# Step 5: Resampling - Rebalancing the Proportion of Minority over Majority (Optional)
Since the minority class (students dropping out) makes up only around 5% of the whole dataset, we apply SMOTE to the training dataset using the function ubSMOTE from the R package "unbalanced". This yields a dropping-out proportion of about 23% in the training data. Using the post-SMOTE training dataset as the modelling input can greatly improve model performance, especially for algorithms that do not handle unbalanced data well.
```
# Rebalance the training dataset.
traindata <- as.data.frame(ds[train, inputs])
traintarget <- as.factor(as.numeric(as.data.frame(ds[train, target])[[1]])-1)
smote <- ubSMOTE(X=traindata, Y=traintarget,
perc.over=200, perc.under=500,
k=3, verbose=TRUE)
trainsmote <- cbind(smote$X, smote$Y)
names(trainsmote)[names(trainsmote) == "smote$Y"] <- "continue_drop"
traindata <- trainsmote
# Check the dropping-out proportion
table(traindata$continue_drop)/nrow(traindata)
```
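The 23% figure quoted above follows directly from the ubSMOTE parameters. A quick sketch of the arithmetic (in Python, for illustration; the counts are hypothetical, and we assume ubSMOTE's documented behaviour: perc.over=200 creates two synthetic minority samples per original one, and perc.under=500 keeps five majority samples per synthetic sample):

```python
# Hypothetical counts with ~5% minority, as in the original training data.
minority = 500    # original drop-out (minority) samples
majority = 9500   # original continuing (majority) samples

perc_over = 200   # 200% -> 2 synthetic minority samples per original one
perc_under = 500  # 500% -> 5 majority samples kept per synthetic sample

synthetic = minority * perc_over // 100         # 1000 new minority samples
minority_after = minority + synthetic           # 1500
majority_after = synthetic * perc_under // 100  # 5000 majority samples kept

proportion = minority_after / (minority_after + majority_after)
print(round(proportion, 3))  # 0.231, i.e. the ~23% quoted above
```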
# Step 6.1: Build - Decision Tree Model
Commonly used classification model builders include rpart() decision trees, randomForest() random forests, ada() stochastic boosting, etc. Here we build an rpart() decision tree as a baseline model. Note that, for the purpose of demonstration, the models from here on are all built on the original (unbalanced) training dataset.
```
# Train model: rpart
ctrl <- rpart.control(maxdepth=3)
system.time(m.rp <- rpart(form, ds[train, vars], control=ctrl))
m.rp
# Record the type of the model for later use.
mtype <- "rpart"
```
We can also draw the model.
```
fancyRpartPlot(m.rp)
```
# Step 6.2: Evaluate - Decision Tree Model
As we have noted though, performing any evaluation on the training dataset provides a biased estimate of the actual performance. We must instead evaluate the performance of our models on a previously unseen dataset (at least unseen by the algorithm building the model).
So we now evaluate the model performance on the testing dataset.
```
# Score model
predictions <- predict(m.rp, ds[test, vars], type="prob")
threshold <- 0.5
rpart_probability <- predictions[, 2]
rpart_prediction <- ifelse(rpart_probability > threshold, 1, 0)
pred <- cbind(ds[test, vars], rpart_prediction, rpart_probability)
head(pred)
# Evaluate model
pred$continue_drop <- as.numeric(pred$continue_drop)-1
metrics.rp <- evaluateModel(data=pred,
observed="continue_drop",
predicted="rpart_prediction")
metrics.rp
rocChart(pr=pred$rpart_probability, target=pred$continue_drop)
```
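evaluateModel() and rocChart() are helper functions used throughout this module. The core of the threshold-based evaluation they perform can be sketched in a few lines (Python, for illustration; the metric names below are the standard ones, not necessarily exactly those reported by evaluateModel()):

```python
def threshold_and_evaluate(probabilities, observed, threshold=0.5):
    """Convert class probabilities to 0/1 predictions and compute basic metrics."""
    predicted = [1 if p > threshold else 0 for p in probabilities]
    tp = sum(1 for p, o in zip(predicted, observed) if p == 1 and o == 1)
    tn = sum(1 for p, o in zip(predicted, observed) if p == 0 and o == 0)
    fp = sum(1 for p, o in zip(predicted, observed) if p == 1 and o == 0)
    fn = sum(1 for p, o in zip(predicted, observed) if p == 0 and o == 1)
    n = len(observed)
    return {
        "accuracy": (tp + tn) / n,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

metrics = threshold_and_evaluate([0.9, 0.2, 0.7, 0.4], [1, 0, 0, 1])
print(metrics)  # {'accuracy': 0.5, 'precision': 0.5, 'recall': 0.5}
```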
# Step 6.3: Compare - Multiple Models using Experiment
We can repeat the modelling multiple times, randomly selecting a different training dataset each time, to estimate the expected performance and the variation in that performance. The helper function experi() assists with this. It is available at http://onepager.togaware.com/experi.R, and some of its code is shown below.
```
# Show the function experi()
experi <- function(form, ds, dsname, target, modeller, details="",
n=100, control=NULL,
keep=FALSE, # Keep the last model built.
prob="prob",
class="class",
log="experi.log")
{
suppressPackageStartupMessages(require(pROC))
user <- Sys.getenv("LOGNAME")
node <- Sys.info()[["nodename"]]
wsrpart.model <- modeller=="wsrpart"
numclass <- length(levels(ds[,target]))
start.time <- proc.time()
seeds <- cors <- strs <- aucs <- accs <- NULL
for (i in seq_len(n))
{
loop.time <- proc.time()
seeds <- c(seeds, seed <- sample(1:1000000, 1))
set.seed(seed)
....
result[-c(1:7)] <- round(result[-c(1:7)], 2)
row.names(result) <- NULL
if (keep)
{
if (numclass==2)
{
attr(result, "pr") <- pr
attr(result, "test") <- test
}
attr(result, "model") <- model
}
}
return(result)
}
```
Let's run the experiments using the algorithms rpart (Therneau and Atkinson, 2014), randomForest (Breiman et al., 2012),
ada (Culp et al., 2012), and ctree() from party (Hothorn et al., 2013). In this way we can conveniently build those
models and compare their performance.
```
# # Source experi.R
#
# source("http://onepager.togaware.com/experi.R")
#
# # Set the times of loops
#
# n <- 10
#
# # Run experiments
#
# ex.rp <- experi(form, ds[vars], dsname, target, "rpart", "1", n=n, keep=TRUE)
# ex.rf <- experi(form, ds[vars], dsname, target, "randomForest", "500", n=n, keep=TRUE, control=list(na.action=na.omit))
# ex.ad <- experi(form, ds[vars], dsname, target, "ada", "50", n=n, keep=TRUE)
# ex.ct <- experi(form, ds[vars], dsname, target, "ctree", "1", n=n, keep=TRUE)
#
# # Compare results
#
# results <- rbind(ex.rp, ex.rf, ex.ad, ex.ct)
# rownames(results) <- results$modeller
# results$modeller <- NULL
# results
```
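The comparison table produced by experi() ultimately summarises per-run metrics across the n repeated splits; the aggregation itself is just a mean and standard deviation (Python sketch with made-up AUC values):

```python
from statistics import mean, stdev

# Hypothetical per-run AUCs from n = 10 repeated train/test splits.
aucs = [0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.84, 0.80, 0.81, 0.79]
print(f"AUC: {mean(aucs):.3f} +/- {stdev(aucs):.3f}")  # AUC: 0.807 +/- 0.019
```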
# Step 7.1: Other Models - Support Vector Machine Model
Beyond the commonly used binary classification models above, we can also try some more advanced models, for instance svm() support vector machines, nnet() neural networks, xgboost() extreme gradient boosting, etc. We first build an svm() support vector machine model.
```
# Tune hyper-parameters
system.time({
m.svm.cv <- tune.svm(form,
data=ds[train, vars],
gamma=2^(-1:1),
cost=2^(2:4),
type="C-classification",
probability=TRUE,
scale=FALSE)
})
print(m.svm.cv$best.performance)
# Train model: svm
system.time({
m.svm <- svm(form,
data=ds[train, vars],
#gamma=0.1,
#cost=0.1,
gamma=m.svm.cv$best.parameters[1],
cost=m.svm.cv$best.parameters[2],
type="C-classification",
probability = TRUE,
scale = FALSE)
})
# Check the model information
m.svm
```
Then we score the model on the testing dataset and evaluate its performance.
```
# Score model
predictions <- predict(m.svm, ds[test, vars], probability=TRUE)
threshold <- 0.5
svm_probability <- attr(predictions, 'probabilities')[, 2]
svm_prediction <- ifelse(svm_probability > threshold, 1, 0)
pred <- cbind(ds[test, vars], svm_prediction, svm_probability)
head(pred)
# Evaluate model
pred$continue_drop <- as.numeric(pred$continue_drop)-1
metrics.svm <- evaluateModel(data=pred,
observed="continue_drop",
predicted="svm_prediction")
metrics.svm
rocChart(pr=pred$svm_probability, target=pred$continue_drop)
```
# Step 7.2: Other Models - Neural Network Model
Next we build a nnet(), neural network model.
```
# Tune hyper-parameters
system.time({
m.nnet.cv <- tune.nnet(form,
data=ds[train, vars],
size=c(2, 4, 6, 8, 10),
decay=5*10^(-5:-1),
rang=0.1,
maxit=200)
})
print(m.nnet.cv$best.performance)
# Train model: nnet
system.time({
m.nnet <- nnet(formula=form,
data=ds[train, vars],
#size=10,
#decay=5e-4,
size=as.numeric(m.nnet.cv$best.parameters[1]),
decay=as.numeric(m.nnet.cv$best.parameters[2]),
rang=0.1,
maxit=200)
})
# Check the model information
m.nnet
```
Then we score the model on the testing dataset and evaluate its performance.
```
# Score model
predictions <- predict(m.nnet, ds[test, vars], type="raw")
threshold <- 0.5
nnet_probability <- predictions
nnet_prediction <- ifelse(nnet_probability > threshold, 1, 0)
pred <- cbind(ds[test, vars], nnet_prediction, nnet_probability)
head(pred)
# Evaluate model
pred$continue_drop <- as.numeric(pred$continue_drop)-1
metrics.nnet <- evaluateModel(data=pred,
observed="continue_drop",
predicted="nnet_prediction")
metrics.nnet
rocChart(pr=pred$nnet_probability, target=pred$continue_drop)
```
# Step 7.3: Other Models - Extreme Gradient Boosting Model
Finally, we build an xgboost() extreme gradient boosting model as a special example, since it performs well on unbalanced data. In our case, the drop-out proportion is around 5% in the original training dataset, which we use here as input to demonstrate the power of xgboost() in dealing with unbalanced data.
```
# Re-structure the training data set
traindata <- ds[train, inputs]
traindata[, c(1:ncol(traindata))] <- sapply(traindata[, c(1:ncol(traindata))], as.numeric)
ntrain <- as.matrix(traindata[ , c(1:ncol(traindata))])
dtrain <- list()
dtrain$data <- Matrix(ntrain, sparse=TRUE)
dtrain$label <- as.numeric(as.data.frame(ds[train, target])[[1]]) - 1
dtrain %>% str()
# Tune hyper-parameters
cv.ctrl <- trainControl(method="cv", # specify resampling method to be cross-validation
number=5, # set the number of folds to be 5
verboseIter=FALSE, # set to FALSE for not printing a training log
returnData=FALSE, # set to FALSE for not saving the data
returnResamp="all", # save losses across all models
classProbs=TRUE, # set to TRUE for class probabilities be computed for classification models
summaryFunction=twoClassSummary, # specify a function AUC to compute performance metrics across resamples
allowParallel=TRUE) # use a parallel backend if it is available
grid.xgb <- expand.grid(nrounds=2,
max_depth=2^(1:5),
eta=1*10^(-4:0),
min_child_weight=1,
colsample_bytree=1,
subsample=1,
gamma=0)
set.seed(45)
m.xgb.cv <- train(x=ntrain,
y=as.data.frame(ds[train, target])[[1]],
method="xgbTree",
trControl=cv.ctrl,
tuneGrid=grid.xgb,
verbose=TRUE,
metric="ROC",
nthread =2)
m.xgb.cv
# Train model: xgboost
system.time({
m.xgb <- xgboost(data=dtrain$data,
label=dtrain$label,
nround=m.xgb.cv$bestTune[[1]],
max.depth=m.xgb.cv$bestTune[[2]],
eta=m.xgb.cv$bestTune[[3]],
min_child_weight=1,
colsample_bytree=1,
subsample=1,
gamma=0,
nthread=2,
objective="binary:logistic")
})
m.xgb
# Calculate feature importance
importance <- xgb.importance(feature_names=dtrain$data@Dimnames[[2]],
model=m.xgb)
print(importance)
# Visualize feature importance
xgb.plot.importance(importance)
# Plot a boosted tree model
xgb.plot.tree(dtrain$data@Dimnames[[2]], model=m.xgb)
```
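Another common xgboost lever for class imbalance, not used above but worth knowing, is the scale_pos_weight parameter, conventionally set to the ratio of negative to positive samples. With the roughly 5% drop-out rate in our training data the arithmetic is simple (Python sketch):

```python
# Assuming roughly 5% positives (drop-outs), as in the original training data.
positives = 0.05
negatives = 1 - positives
scale_pos_weight = round(negatives / positives)
print(scale_pos_weight)  # 19: each positive counts ~19x in the loss
```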
Now we score the model on the testing dataset and evaluate its performance.
```
# Re-structure the testing data set
testdata <- ds[test, inputs]
testdata[, c(1:ncol(testdata))] <- sapply(testdata[, c(1:ncol(testdata))], as.numeric)
ntest <- as.matrix(testdata[, c(1:ncol(testdata))])
dtest <- list()
dtest$data <- Matrix(ntest, sparse=TRUE)
dtest$label <- as.numeric(as.data.frame(ds[test, target])[[1]]) - 1
dtest %>% str()
# Score model
predictions <- predict(m.xgb, dtest$data)
threshold <- 0.5
xgboost_probability <- predictions
xgboost_prediction <- ifelse(xgboost_probability > threshold, 1, 0)
pred <- cbind(testdata, dtest$label, xgboost_prediction, xgboost_probability)
names(pred)[names(pred) == "dtest$label"] <- target
head(pred)
# Evaluate model
metrics.xgb <- evaluateModel(data=pred,
observed="continue_drop",
predicted="xgboost_prediction")
metrics.xgb
rocChart(pr=pred$xgboost_probability, target=pred$continue_drop)
```
# Step 8: Finish Up - Save Model
We save the model, together with the dataset and other variables, into a binary R file. Here we use the xgboost model as an example.
```
model <- m.xgb
mtype <- 'xgboost'
pr <- xgboost_probability
cl <- xgboost_prediction
dname <- "models"
if (! file.exists(dname)) dir.create(dname)
time.stamp <- format(Sys.time(), "%Y%m%d_%H%M%S")
fstem <- paste(dsname, mtype, time.stamp, sep="_")
(fname <- file.path(dname, sprintf("%s.RData", fstem)))
save(ds, dsname, vars, target, ignore,
form, nobs, seed, train, test, model, mtype, pr, cl,
file=fname)
```
We can then load this later and replicate the process.
```
(load(fname))
```
Note that by using generic variable names we can load different model files and perform common operations on them without changing the names within a script. However, do note that each time we load such a saved model file we overwrite any other variables of the same name.
```
import sys
sys.path.append("/mnt/home/TF_NEW/tf-transformers/src/")
# Install tf-transformers from github
import datasets
import json
import glob
import tensorflow as tf
import numpy as np
from tf_transformers.data import TFWriter, TFReader, TFProcessor
from tf_transformers.models import AlbertModel
from tf_transformers.tasks import Classification_Model
from tf_transformers.core import optimization, SimpleTrainer
from tf_transformers.losses import cross_entropy_loss
from transformers import AlbertTokenizer
```
### Load Tokenizer
```
# Load HuggingFace Tokenizer
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
```
### Load MNLI dataset from Huggingface datasets
```
examples = datasets.load_from_disk("/mnt/home/PRE_MODELS/HuggingFace_models/datasets/glue/wnli/")
train_examples = examples["train"]
for item in train_examples:
print(item)
break
max_seq_length=128
def parse_train():
result = {}
for f in train_examples:
input_ids_s1 = [tokenizer.cls_token] + tokenizer.tokenize(f['sentence1'])[: max_seq_length-2] + [tokenizer.sep_token] # -2 to add CLS and SEP
input_ids_s1 = tokenizer.convert_tokens_to_ids(input_ids_s1)
input_type_ids_s1 = [0] * len(input_ids_s1) # 0 for s1
input_ids_s2 = tokenizer.tokenize(f['sentence2'])[: max_seq_length-1] + [tokenizer.sep_token] # -1 to add SEP
input_ids_s2 = tokenizer.convert_tokens_to_ids(input_ids_s2)
input_type_ids_s2 = [1] * len(input_ids_s2)
input_ids = input_ids_s1 + input_ids_s2
input_type_ids = input_type_ids_s1 + input_type_ids_s2
input_mask = [1] * len(input_ids) # 1 for every real (non-padding) token
result = {}
result['input_ids'] = input_ids
result['input_mask'] = input_mask
result['input_type_ids'] = input_type_ids
result['labels'] = f['label']
yield result
# Let's write the records using TFWriter
# Use TFProcessor instead for smaller data
schema = {
"input_ids": ("var_len", "int"),
"input_mask": ("var_len", "int"),
"input_type_ids": ("var_len", "int"),
"labels": ("var_len", "int"),
}
tfrecord_train_dir = '../../OFFICIAL_TFRECORDS/glue/alberta/wnli/train'
tfrecord_filename = 'wnli'
tfwriter = TFWriter(schema=schema,
file_name=tfrecord_filename,
model_dir=tfrecord_train_dir,
tag='train',
overwrite=True
)
tfwriter.process(parse_fn=parse_train())
```
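The truncation and segment-id logic inside parse_train() can be checked with a toy whitespace tokenizer (the special-token strings below mimic the ALBERT conventions; the real tokenization uses the AlbertTokenizer loaded earlier):

```python
max_len = 12  # toy limit standing in for max_seq_length

def encode_pair(s1, s2):
    # Whitespace "tokenizer" standing in for the HuggingFace tokenizer.
    tokens_s1 = ["[CLS]"] + s1.split()[: max_len - 2] + ["[SEP]"]
    tokens_s2 = s2.split()[: max_len - 1] + ["[SEP]"]
    tokens = tokens_s1 + tokens_s2
    type_ids = [0] * len(tokens_s1) + [1] * len(tokens_s2)  # 0 = s1, 1 = s2
    mask = [1] * len(tokens)  # 1 for every real (non-padding) token
    return tokens, type_ids, mask

tokens, type_ids, mask = encode_pair("the cat sat", "on the mat")
print(tokens)    # ['[CLS]', 'the', 'cat', 'sat', '[SEP]', 'on', 'the', 'mat', '[SEP]']
print(type_ids)  # [0, 0, 0, 0, 0, 1, 1, 1, 1]
```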
### Read TFRecords using TFReader
```
# Read Data
schema = json.load(open("{}/schema.json".format(tfrecord_train_dir)))
all_files = glob.glob("{}/*.tfrecord".format(tfrecord_train_dir))
tf_reader = TFReader(schema=schema,
tfrecord_files=all_files)
x_keys = ['input_ids', 'input_type_ids', 'input_mask']
y_keys = ['labels']
batch_size = 32
train_dataset = tf_reader.read_record(auto_batch=True,
keys=x_keys,
batch_size=batch_size,
x_keys = x_keys,
y_keys = y_keys,
shuffle=True,
drop_remainder=True
)
for (batch_inputs, batch_labels) in train_dataset.take(1):
print(batch_inputs, batch_labels)
```
### Load Albert V2 Model
```
# Lets load Albert Model
model_layer, model, config = AlbertModel(model_name='albert_base_v2',
is_training=True,
use_dropout=False
)
model.load_checkpoint("/mnt/home/PRE_MODELS/LegacyAI_models/checkpoints/albert-base-v2/")
# model_layer -> Legacylayer inherited from tf.keras.Layer
# model -> legacyModel inherited from tf.keras.Model
```
### Load Classification Model
```
classification_layer = Classification_Model(model=model,
num_classes=2,
use_all_layers=True,
is_training=True)
classification_model = classification_layer.get_model()
# Delete to save up memory
del model
del model_layer
del classification_layer
```
### Define Loss
The loss function is simple:
* labels: 1D (batch_size) # class indices
* logits: 2D (batch_size x num_classes)
**Joint loss** - We minimize the loss over each hidden layer.
```
def loss_fn(labels, logits):
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=tf.squeeze(labels, axis=1)))
return loss
def joint_loss(y_true_dict, y_pred_dict):
layer_loss = []
for class_logits in y_pred_dict['class_logits']:
loss = loss_fn(y_true_dict['labels'], class_logits)
layer_loss.append(loss)
return tf.reduce_mean(layer_loss)
```
### Define Optimizer
```
train_data_size = 800
learning_rate = 2e-5
steps_per_epoch = int(train_data_size / batch_size)
EPOCHS = 4
num_train_steps = steps_per_epoch * EPOCHS
warmup_steps = int(0.1 * num_train_steps)
# creates an optimizer with learning rate schedule
optimizer_type = 'adamw'
optimizer, learning_rate_fn = optimization.create_optimizer(learning_rate,
steps_per_epoch * EPOCHS,
warmup_steps,
optimizer_type)
```
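The optimizer above pairs AdamW with a warmup schedule. A minimal sketch of the warmup-then-linear-decay shape such schedules typically follow (the exact tf-transformers schedule may differ in details):

```python
def lr_at_step(step, base_lr=2e-5, num_train_steps=100, warmup_steps=10):
    """Linear warmup to base_lr, then linear decay down to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = num_train_steps - step
    return base_lr * max(remaining, 0) / (num_train_steps - warmup_steps)

print(round(lr_at_step(5), 8))    # 1e-05, halfway through warmup
print(round(lr_at_step(10), 8))   # 2e-05, the peak learning rate
print(lr_at_step(100))            # 0.0, end of training
```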
### Train Using Keras :-)
- ```compile2``` allows you to use model outputs as well as batched dataset outputs directly in the loss function, without any further complexity.
Note: For ```compile2```, loss_fn must be None and custom_loss_fn must be active. Metrics are not supported for the time being.
```
# # Compile
keras_loss_fn = {'class_logits': joint_loss}
classification_model.compile2(optimizer=optimizer,
loss=None,
custom_loss=keras_loss_fn)
# Change steps per epoch to large value/ ignore it completely to train
# on full dataset
history = classification_model.fit(train_dataset, epochs=2, steps_per_epoch=10)
```
### Train using SimpleTrainer (part of tf-transformers)
```
history = SimpleTrainer(model = classification_model,
optimizer = optimizer,
loss_fn = joint_loss,
dataset = train_dataset.repeat(EPOCHS+1), # This is important
epochs = EPOCHS,
num_train_examples = train_data_size,
batch_size = batch_size,
steps_per_call=10,
gradient_accumulation_steps=None)
```
### Save Models
You can save models as checkpoints using the ```.save_checkpoint``` method, which is a part of all ```LegacyModels```.
```
model_save_dir = "../../OFFICIAL_MODELS/glue/wnli/albert"
classification_model.save_checkpoint(model_save_dir)
```
### Parse validation data
We use ```TFProcessor``` to create the validation data, because the dev set is small.
```
dev_examples = examples['validation']
def parse_dev():
result = {}
for f in dev_examples:
input_ids_s1 = [tokenizer.cls_token] + tokenizer.tokenize(f['sentence1'])[: max_seq_length-2] + [tokenizer.sep_token] # -2 to add CLS and SEP
input_ids_s1 = tokenizer.convert_tokens_to_ids(input_ids_s1)
input_type_ids_s1 = [0] * len(input_ids_s1) # 0 for s1
input_ids_s2 = tokenizer.tokenize(f['sentence2'])[: max_seq_length-1] + [tokenizer.sep_token] # -1 to add SEP
input_ids_s2 = tokenizer.convert_tokens_to_ids(input_ids_s2)
input_type_ids_s2 = [1] * len(input_ids_s2)
input_ids = input_ids_s1 + input_ids_s2
input_type_ids = input_type_ids_s1 + input_type_ids_s2
input_mask = [1] * len(input_ids) # 1 for every real (non-padding) token
result = {}
result['input_ids'] = input_ids
result['input_mask'] = input_mask
result['input_type_ids'] = input_type_ids
result['labels'] = f['label']
yield result
tf_processor = TFProcessor()
dev_dataset = tf_processor.process(parse_fn=parse_dev())
x_keys = ['input_ids', 'input_type_ids', 'input_mask']
y_keys = ['labels']
dev_dataset = tf_processor.auto_batch(dev_dataset, shuffle=False, x_keys=x_keys, y_keys=y_keys, batch_size=32, drop_remainder=False)
```
### Evaluate dev dataset - Accuracy
```
num_hidden_layers = 12
predictions_per_layer = {i:[] for i in range(num_hidden_layers)}
original_labels = []
for (batch_inputs, batch_labels) in dev_dataset:
model_outputs = classification_model(batch_inputs)['class_logits']
for i in range(num_hidden_layers):
predictions_per_layer[i].append(tf.argmax(model_outputs[i], axis=1).numpy())
original_labels.append(batch_labels['labels'].numpy())
from sklearn.metrics import accuracy_score, f1_score
eval_metrics = {}
for i in range(num_hidden_layers):
acc = accuracy_score(np.hstack(predictions_per_layer[i]), np.hstack(original_labels))
eval_metrics[i] = acc
print(i, eval_metrics[i])
with open('eval_wnli.json', 'w') as f:
json.dump(eval_metrics, f)
```
# What are factor models?
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
---
(Linear) factor models try to express a rate of return of an asset $i$ in terms of some <i>factors</i> as
$$R_i = a_i + b_{i1} F_1 + b_{i2} F_2 + \ldots + b_{iK} F_K + \epsilon_i$$
where the variables $F_j$ are called factors and the coefficients $b_{ij}$ are called factor sensitivities. The error term $\epsilon_i$ has mean zero and represents the asset-specific random fluctuation, which can be diversified away by combining diverse assets into a portfolio. Notice that the factors are not indexed by $i$. This reflects the fact that factor models are intended to be applicable to different return streams, and are not asset-specific. Therefore we can use one model to evaluate or predict the returns on different assets and portfolios.
What are the factors that we use in the model? One possibility is the return on the market: a model with only this one factor is what we use for beta hedging. Other potential factors include inflation, changes in industrial production, and an asset's price-to-book ratio. We discuss the different kinds of factors in more detail below, but the overarching principle is that, as with regression models we have considered in the past, we should have an economic reason to believe that the factors we choose might explain returns. We also want factors that explain the returns of many assets.
To get the model for a portfolio, we simply take the weighted average of the models for the constituent assets. If our portfolio is $P = \sum_{i=1}^n w_i x_i$, then our portfolio model is
$$ R_p = \sum_{i=1}^n w_i R_i = \sum_{i=1}^n w_i a_i + \left(\sum_{i=1}^n w_i b_{i1} \right) F_1 + \left(\sum_{i=1}^n w_i b_{i2} \right) F_2 + \ldots + \left(\sum_{i=1}^n w_i b_{iK} \right) F_K + \sum_{i=1}^n w_i \epsilon_i $$
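As a concrete sketch of the weighted-average relation above, with two assets and two factors (all numbers illustrative):

```python
# Two assets, two factors (illustrative numbers).
w = [0.6, 0.4]                 # portfolio weights w_i
a = [0.01, 0.02]               # asset intercepts a_i
b = [[1.2, 0.3],               # asset 1 sensitivities (b_11, b_12)
     [0.8, 0.7]]               # asset 2 sensitivities (b_21, b_22)

# Portfolio intercept and sensitivities are weight-averaged asset values.
a_p = sum(wi * ai for wi, ai in zip(w, a))
b_p = [sum(w[i] * b[i][j] for i in range(2)) for j in range(2)]
print(round(a_p, 4), [round(x, 2) for x in b_p])  # 0.014 [1.04, 0.46]
```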
For factor analysis to be valid, we need the error term $\epsilon_i$ to be uncorrelated across assets $i$ (so that it does in fact represent asset-specific risk), and uncorrelated with the factors $F_1,\ldots, F_k$ (if it is correlated with some factor $F_j$, then we have chosen $b_{ij}$ incorrectly, which has forced the error term to compensate).
# Types of factor models
Factor models can be categorized into three main groups, depending on the types of factors they use. There are also mixed factor models, which combine factors from different groups.
## Macroeconomic factor models
In this model, the variable $a_i$ is the expected return, and the factors are <i>surprises</i> (actual value minus expected value) in macroeconomic variables. That is, we have some prediction $a_i$ for the return which depends on our predictions of variables like inflation, and surprises in these variables will cause the actual returns to deviate from the expectation. So, the actual return is broken down into the expected return, unexpected return resulting from unexpected changes in the factors, and an error term. If we take the expected value of both sides, we get $a_i$, since by definition $F_j$ and $\epsilon_i$ have expected value 0:
\begin{align}
E(R_i) &= E(a_i + b_{i1} F_1 + b_{i2} F_2 + \ldots + b_{iK} F_K + \epsilon_i) \\
&= E(a_i) + E(b_{i1} F_1) + E(b_{i2} F_2) + \ldots + E(b_{iK} F_K) + E(\epsilon_i) \\
&= a_i + b_{i1} E(F_1) + b_{i2} E(F_2) + \ldots + b_{iK} E(F_K) + 0 \\
&= a_i + 0 + 0 + \ldots + 0 + 0 = a_i
\end{align}
We use a regression to calculate the coefficients $b_{ij}$ (one regression for each asset $i$, using historical data). The meanings of these factor sensitivities are the same as in any linear regression: we predict that every unit of surprise in variable $j$, which is an increase of $F_j$ by one unit, will cause an increase of $b_{ij}$ in the returns of stock $i$, assuming that the other factors are held constant.
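The per-asset regression can be sketched with ordinary least squares on simulated data (all numbers below are made up for illustration): simulate factor surprises with mean zero, generate returns from known sensitivities, and recover those sensitivities.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500                                  # historical observations
F = rng.normal(size=(T, 2))              # two factor surprises, mean ~0
true_a, true_b = 0.01, np.array([1.5, -0.5])
R = true_a + F @ true_b + 0.01 * rng.normal(size=T)   # asset returns + noise

X = np.column_stack([np.ones(T), F])     # prepend an intercept column
coef, *_ = np.linalg.lstsq(X, R, rcond=None)
print(np.round(coef, 2))                 # recovers roughly [0.01, 1.5, -0.5]
```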
## Fundamental factor models
Fundamentals are data having to do with the asset issuer, like the sector, size, and expenses of the company. In the fundamental factor model, we use the same equation as before, but interpret the terms differently. The factors $F_j$ represent the returns associated with some fundamental characteristics.
## Statistical factor models
This type of model computes the factors that best describe returns using statistical methods, without specifying what the factors represent. The two main methods for obtaining the factors are principal component analysis (resulting in factors that are the portfolios best explaining historical return variances) and factor analysis (factors are portfolios that best explain historical return covariances). After we compute the portfolios we want to use as factors, we can estimate the loadings with linear regression.
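A minimal statistical-factor sketch: simulate returns driven by one hidden common factor, then recover it (up to sign) as the first principal component via an SVD of the demeaned return matrix (all numbers simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_assets = 300, 5
market = rng.normal(size=T)                          # one hidden common factor
loadings = rng.uniform(0.5, 1.5, size=n_assets)
returns = np.outer(market, loadings) + 0.1 * rng.normal(size=(T, n_assets))

# PCA via SVD of the demeaned return matrix.
demeaned = returns - returns.mean(axis=0)
_, s, vt = np.linalg.svd(demeaned, full_matrices=False)
explained = s**2 / np.sum(s**2)
factor = demeaned @ vt[0]                            # first statistical factor

print(explained[0] > 0.9)   # True: PC1 captures most of the variance
corr = abs(np.corrcoef(factor, market)[0, 1])
print(corr > 0.95)          # True: PC1 tracks the hidden factor (up to sign)
```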
# Currently used models
There are many factor models in use in academia and industry today, whose factors were chosen based on extensive research. Among these are the BIRR model (which is macroeconomic) and the Fama-French three-factor model (which is a mixed model, using market cap, P/B ratio, and market returns).
# Numpy and Pandas Performance Comparison
[Goutham Balaraman](http://gouthamanbalaraman.com)
Pandas and Numpy are two packages that are core to a lot of data analysis. In this post I will compare the performance of numpy and pandas.
tl;dr:
- `numpy` consumes less memory compared to `pandas`
- `numpy` generally performs better than `pandas` for 50K rows or less
- `pandas` generally performs better than `numpy` for 500K rows or more
- for 50K to 500K rows, it is a toss up between `pandas` and `numpy` depending on the kind of operation
```
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("seaborn-pastel")
%matplotlib inline
import seaborn as sns  # seaborn.apionly was removed in newer seaborn; the plain import suffices
import numpy as np
from timeit import timeit
import sys
iris = sns.load_dataset('iris')
data = pd.concat([iris]*100000)
data_rec = data.to_records()
print (len(data), len(data_rec))
```
Here I have loaded the `iris` dataset and replicated it so as to have 15MM rows of data. The space requirement for 15MM rows of data in a `pandas dataframe` is more than twice that of a `numpy recarray`.
```
MB = 1024*1024
print("Pandas %d MB " % (sys.getsizeof(data)/MB))
print("Numpy %d MB " % (sys.getsizeof(data_rec)/MB))
```
A snippet of the data shown below.
```
data.head()
# <!-- collapse=True -->
def perf(inp, statement, grid=None):
length = len(inp)
gap = int(length/5)
#grid = np.array([int(x) for x in np.logspace(np.log10(gap), np.log10(length+1) , 5)])
if grid is None:
grid = np.array([10000, 100000, 1000000, 5000000, 10000000])
num = 100
time = []
data = {'pd': pd, 'np': np}
for i in grid:
if isinstance(inp, pd.DataFrame):
sel = inp.iloc[:i]
data['data'] = sel
else:
sel = inp[:i]
data['data_rec'] = sel
t = timeit(stmt=statement, globals=data, number=num)
time.append(t/num)
return grid, np.array(time)
def bench(pd_inp, pd_stmt, np_inp, np_stmt, title="", grid=None):
g,v1 = perf(pd_inp, pd_stmt, grid)
g,v2 = perf(np_inp, np_stmt, grid)
fig, ax = plt.subplots()
ax.loglog()
ax.plot(g, v1, label="pandas",marker="o", lw=2)
ax.plot(g, v2, label="numpy", marker="v", lw=2)
ax.set_xticks(g)
plt.legend(loc=2)
plt.xlabel("Number of Records")
plt.ylabel("Time (s)")
plt.grid(True)
plt.xlim(min(g)/2,max(g)*2)
plt.title(title)
```
In this post, performance metrics for a few different categories are compared between `numpy` and `pandas`:
- operations on a column of data, such as mean or applying a vectorised function
- operations on a filtered column of data
- vector operations on a column or filtered column
## Operations on a Column
Here are some performance metrics for operations on one column of data. The operations involved include fetching a view and a reduction such as `mean`, a vectorised `log`, or a string-based `unique` operation. All of these are `O(n)` calculations. The mean calculation is orders of magnitude faster in `numpy` compared to `pandas` for array sizes of 100K or less. For sizes larger than 100K, `pandas` maintains a lead over `numpy`.
```
bench(data, "data.loc[:, 'sepal_length'].mean()",
data_rec, "np.mean(data_rec.sepal_length)",
title="Mean on Unfiltered Column")
```
Below, the vectorized `log` operation is faster in `numpy` for sizes less than 100K but pandas costs about the same for sizes larger than 100K.
```
bench(data, "np.log(data.loc[:, 'sepal_length'])",
data_rec, "np.log(data_rec.sepal_length)",
title="Vectorised log on Unfiltered Column")
```
The one differentiating aspect about the test below is that the column `species` is of string type. The operation demonstrated is a `unique` calculation. We observe that the `unique` calculation is roughly an order of magnitude faster in pandas for sizes larger than 1K rows.
```
bench(data, "data.loc[:,'species'].unique()",
data_rec, "np.unique(data_rec.species)",
grid=np.array([100, 1000, 10000, 100000, 1000000]),
title="Unique on Unfiltered String Column")
```
## Operations on a Filtered Column
Below we perform the same tests as above, except that the column is not a full view, but is instead a filtered view. The filters are simple filters with an arithmetic bool comparison for the first two and a string comparison for the third below.
Below, `mean` is calculated for a filtered column `sepal_length`. Here the performance of `pandas` is better for row sizes larger than 10K. In the `mean` on an unfiltered column shown above, `pandas` performed better only for 1MM rows or more. Just adding a selection operation has shifted the performance chart in favor of `pandas` for even smaller numbers of records.
```
bench(data, "data.loc[(data.sepal_width>3) & \
(data.petal_length<1.5), 'sepal_length'].mean()",
data_rec, "np.mean(data_rec[(data_rec.sepal_width>3) & \
(data_rec.petal_length<1.5)].sepal_length)",
grid=np.array([1000, 10000, 100000, 1000000]),
title="Mean on Filtered Column")
```
For the vectorised `log` operation on an unfiltered column shown above, `numpy` performed better than `pandas` for fewer than 100K records, while the two were comparable for sizes larger than 100K. But the moment you introduce a filter on a column, `pandas` starts to show an edge over `numpy` for more than 10K records.
```
bench(data, "np.log(data.loc[(data.sepal_width>3) & \
(data.petal_length<1.5), 'sepal_length'])",
data_rec, "np.log(data_rec[(data_rec.sepal_width>3) & \
(data_rec.petal_length<1.5)].sepal_length)",
grid=np.array([1000, 10000, 100000, 1000000]),
title="Vectorised log on Filtered Column")
```
Here is another example of a `mean` reduction on a column but with a string filter. We see a similar behavior where `numpy` performs significantly better at small sizes and `pandas` takes a gentle lead for larger number of records.
```
bench(data, "data[data.species=='setosa'].sepal_length.mean()",
data_rec, "np.mean(data_rec[data_rec.species=='setosa'].sepal_length)",
grid=np.array([1000, 10000, 100000, 1000000]),
title="Mean on (String) Filtered Column")
```
## Vectorized Operation on a Column
In this last section, we do vectorised arithmetic using multiple columns. This involves creating a view and vectorised math on these views. Even when there is no filter, `pandas` has a slight edge over `numpy` for large number of records. For smaller than 100K records, `numpy` performs significantly better.
```
bench(data, "data.petal_length * data.sepal_length + \
data.petal_width * data.sepal_width",
data_rec, "data_rec.petal_length*data_rec.sepal_length + \
data_rec.petal_width * data_rec.sepal_width",
title="Vectorised Math on Unfiltered Columns")
```
In the following figure, the filter involves a vectorised arithmetic operation, and the `mean` reduction is computed on the filtered column. The presence of a filter makes `pandas` significantly faster for sizes larger than 100K, while `numpy` maintains a lead for fewer than 10K records.
```
bench(data, "data.loc[data.sepal_width * data.petal_length > \
data.sepal_length, 'sepal_length'].mean()",
data_rec, "np.mean(data_rec[data_rec.sepal_width * data_rec.petal_length \
> data_rec.sepal_length].sepal_length)",
title="Vectorised Math in Filtering Columns",
grid=np.array([100, 1000, 10000, 100000, 1000000]))
```
## Conclusion
`Pandas` is often used in an interactive environment such as Jupyter notebooks. In such a case, any performance loss from `pandas` will be insignificant. But if you have smaller `pandas` dataframes (fewer than 50K records) in a production environment, then it is worth considering `numpy` recarrays.
- `numpy` consumes (roughly 1/3) less memory compared to `pandas`
- `numpy` generally performs better than `pandas` for 50K rows or less
- `pandas` generally performs better than `numpy` for 500K rows or more
- for 50K to 500K rows, it is a toss up between `pandas` and `numpy` depending on the kind of operation