<style>div.container { width: 100% }</style>
<img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="assets/PyViz_logo_wm_line.png" />
<div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 04. Working with tabular data</h2></div>
As we have already discovered, HoloViews elements are simple wrappers around your data that provide a semantically meaningful representation. The real power of HoloViews becomes most evident when working with larger, multi-dimensional datasets, whether they are tabular like in a database or CSV file, or gridded like large datasets of images.
Tabular data (also called columnar data) is one of the most common, general, and versatile data formats, corresponding to how data is laid out in a spreadsheet. There are many different ways to put data into a tabular format, but for interactive analysis having [**tidy data**](http://www.jeannicholashould.com/tidy-data-in-python.html) provides flexibility and simplicity. Here we will show how to make your data tidy as a first step, but see [hvPlot](http://hvplot.pyviz.org) for convenient ways to work with non-tidy data directly.
In this tutorial, the information you have learned in the previous sections will really pay off. We will discover how to facet data and use different element types to explore and visualize a real dataset, using many of the same libraries introduced earlier along with some statistical methods from SciPy:
<div style="margin: 10px">
<a href="http://holoviews.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/holoviews.png"/></a>
<a href="http://bokeh.pydata.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/bokeh.png"/></a>
<a href="http://pandas.pydata.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:140px" src="./assets/pandas.png"/></a>
<a href="http://numpy.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/numpy.png"/></a>
<a href="https://docs.scipy.org/doc/scipy/reference"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="./assets/scipy.png"/></a>
</div>
```
import numpy as np
import scipy.stats as ss
import pandas as pd
import holoviews as hv
hv.extension('bokeh')
%opts Curve Scatter Bars [tools=['hover']]
```
## What is tabular, tidy data?
```
macro_df = pd.read_csv('../data/macro.csv')
macro_df.head()
```
For tidy data, the **columns** of the table represent **variables** or **dimensions** and the **rows** represent **observations**.
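For instance, a minimal synthetic example of a tidy table (illustrative values only, not the macro-economic data loaded above) looks like this:

```python
import pandas as pd

# A tiny tidy table: one observation per row, one variable per column
tidy = pd.DataFrame({
    'country': ['Austria', 'Austria', 'Belgium', 'Belgium'],
    'year': [1980, 1981, 1980, 1981],
    'unem': [1.9, 2.5, 8.8, 10.1],
})
print(tidy)
```

Here 'country' and 'year' identify each observation, while 'unem' is a measured value.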
The opposite of tidy data is so-called **wide** data. To see what wide data looks like, we can use the pandas ``pivot_table`` method:
```
wide = macro_df.pivot_table('unem', 'year', 'country')
wide.head(5)
```
In this wide format, each column represents the unemployment figures for one country, and each row a particular year. A wide table can represent data very concisely in some cases, but it is difficult to work with in practice, because it does not make dimensions easily accessible for plotting or analysis. To go from wide to tidy data you can use the ``pd.melt`` function:
```
melted = pd.melt(wide.reset_index(), id_vars='year', value_name='unemployment')
melted.head()
```
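To make the wide/tidy relationship concrete, here is a round trip on a small synthetic table (the names and values are illustrative only, independent of the macro data):

```python
import pandas as pd

# Synthetic wide table: one column of unemployment figures per country
wide = pd.DataFrame({'year': [1980, 1981],
                     'Austria': [1.9, 2.5],
                     'Belgium': [8.8, 10.1]})

# Wide -> tidy: every (year, country) pair becomes its own row
tidy = pd.melt(wide, id_vars='year', var_name='country', value_name='unemployment')

# Tidy -> wide: pivoting recovers the original layout
back = tidy.pivot_table('unemployment', 'year', 'country').reset_index()
print(tidy)
```

Melting multiplies the rows (one per year-country pair), and pivoting collapses them back, so no information is lost in either direction.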
## Declaring Dimensions on a Dataset
A HoloViews `Dataset` is similar to a HoloViews Element, but without any specific metadata that lets it visualize itself. A Dataset is useful for specifying a set of `Dimension`s that apply to the data, which will later be inherited by any visualizable elements you create from the Dataset.
A HoloViews Dimension is the same concept as a **dependent** or **independent** variable in mathematics. In HoloViews such variables are called value dimensions and key dimensions (respectively). In `macro_df`, the ``'country'`` and ``'year'`` are the independent variables, so when we create a Dataset we will declare those as key dimensions. The remaining columns will automatically be inferred to be value dimensions:
```
macro = hv.Dataset(macro_df, ['country', 'year'])
macro
```
One of the first things we'll want to do with our Dimensions is to give them more sensible labels using ``redim.label``:
```
macro = macro.redim.label(growth='GDP Growth', unem='Unemployment', year='Year', country='Country')
```
Notice how HoloViews differs from using a plotting library directly in this case -- here we can annotate the data *once* to capture our knowledge about what those columns represent, and those annotations will then feed directly into any plots later derived from this data. In a plotting program, you would normally need to supply such metadata every single time you make a new plot, which is tedious, error prone, and discourages easy exploration. In the rest of this tutorial we'll see how we can explore and reveal this annotated data.
## Groupby
The great thing about a tidy tabular Dataset is that we can easily group the data by a particular variable, allowing us to plot or analyze each subset separately. Let's say for instance that we want to break the macro-economic data down by 'year'. Using the groupby method we can easily split the Dataset into subsets by year:
```
print(macro.groupby('Year'))
```
The resulting object is a [``HoloMap``](http://holoviews.org/reference/containers/bokeh/HoloMap.html), indexed by year. A HoloMap (like its dynamically generated equivalent [``DynamicMap``](http://holoviews.org/reference/containers/bokeh/DynamicMap.html)) is a potentially many-dimensional indexable container for Elements and Datasets, allowing them to be explored easily.
However, we cannot visualize this particular HoloMap, because a Dataset has no visual representation. We haven't yet told HoloViews whether the various dependent variables here are continuous, discrete, binned, or any of the other properties the data could have that would allow a specific Element to be chosen.
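Conceptually, the HoloMap produced by ``groupby`` behaves like a dictionary mapping each key-dimension value to the subset of observations for that key. A plain-pandas sketch of the same idea (synthetic values, not the real HoloMap class):

```python
import pandas as pd

df = pd.DataFrame({'year': [1980, 1980, 1981],
                   'country': ['Austria', 'Belgium', 'Austria'],
                   'unem': [1.9, 8.8, 2.5]})

# groupby yields a mapping from each year to the observations for that year,
# much as macro.groupby('Year') yields one Dataset per year
by_year = {year: group for year, group in df.groupby('year')}
print(sorted(by_year))
```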
## Mapping dimensions to elements
Luckily, as soon as you choose what sort of Element you want a given column to be, you can make this data visualizable using the convenient ``.to`` method, which allows us to group the dataset and map dimensions to elements in a single step.
The ``.to`` method of a Dataset takes up to four main arguments:
1. The element you want to convert to
2. The key dimensions (i.e., independent variables) to display
3. The dependent variables to display, if any
4. The dimensions to group by, if any
As a first simple example let's go through such a declaration:
1. We will use a ``Curve``, to declare that the variables are continuous
2. Our independent variable will be the 'year'
3. Our dependent variable will be 'unem'
4. We will ``groupby`` the 'country'.
```
curves = macro.to(hv.Curve, 'year', 'unem', groupby='country')
print(curves)
curves
```
If you look at the printed output you will see that instead of a simple ``Curve`` we got a ``HoloMap`` of ``Curve`` elements, one per country. Each Curve is now visualizable, but we haven't told HoloViews what to do with the ``Country`` dimension, so to make the entire structure visualizable, HoloViews creates a widget where you can select which country you want to view. Additional key dimensions in the groupby would result in additional widgets here.
Alternatively, we could group by the year and view the unemployment rate by country as Bars instead. If we simply want to group by all remaining key dimensions (in this case just the year) we can leave out the groupby argument:
```
%%opts Bars [width=600 xrotation=45]
bars = macro.sort('country').to(hv.Bars, 'country', 'unem')
bars
# Exercise: Create a hv.HeatMap using ``macro.to``, declaring kdims 'year' and 'country', and vdims 'growth'
# You'll need to declare ``width`` and/or ``xrotation`` plot options for HeatMap to make the plot readable
# You can also add ``tools=['hover']`` to get more info on each cell
```
## Displaying distributions
Often we want to summarize the distribution of values, e.g. to reveal the distribution of GDP growth for each OECD country across all measurements. This means we want to ignore the 'year' dimension in our dataset, letting it be summarized instead. To stop HoloViews from grouping by the extra variable, we pass an empty list to the groupby argument. In this case we can easily declare the ``BoxWhisker`` directly, but omitting a key dimension from the ``groupby`` can be useful in cases when there are more dimensions:
```
%%opts BoxWhisker [width=800 xrotation=30] (box_fill_color=Palette('Category20'))
macro.to(hv.BoxWhisker, 'country', 'growth', groupby=[])
# Is equivalent to:
hv.BoxWhisker(macro, kdims=['country'], vdims=['growth'])
# Exercise: Display the distribution of GDP growth by year using the BoxWhisker element
```
## Faceting dimensions
Once the data has been grouped into a ``HoloMap`` as we did above, we can use the ``.grid``, ``.layout`` and ``.overlay`` methods to lay the groups out on the page rather than flipping through them with a set of widgets.
#### NdOverlay
```
%%opts Scatter [width=800 height=400 size_index='growth'] (color=Palette('Category20') size=5) NdOverlay [legend_position='left']
ndoverlay = macro.to(hv.Scatter, 'year', ['unem', 'growth']).overlay()
print(ndoverlay)
ndoverlay.relabel('OECD Unemployment 1960 - 1990')
```
#### GridSpace
```
%%opts GridSpace [shared_yaxis=True]
subset = macro.select(country=['Austria', 'Belgium', 'Netherlands', 'West Germany'])
grid = subset.to(hv.Bars, 'year', 'unem').grid()
print(grid)
grid
```
To understand what is actually going on here, let's rewrite this example in a slightly different way. Instead of using the convenient ``.to`` or ``.groupby`` methods, we can express the same thing by explicitly iterating over the countries we want to look at, selecting the subset of the data for each country using ``.select``, and then passing the resulting elements to the container we want.
In the example above that means we ``select`` by 'country' on the macro ``Dataset``, pass the selection to ``Bars`` elements, and declare the key and value dimension to display. We then pass the dictionary of ``Bars`` elements to the ``GridSpace`` container and declare the kdim of the container as 'Country':
```
countries = ['Austria', 'Belgium', 'Netherlands', 'West Germany']
hv.GridSpace({country: hv.Bars(macro.select(country=country), 'year', 'unem') for country in countries},
kdims=['Country'])
```
As you can see, ``.to`` is much simpler and less error-prone in practice.
#### NdLayout
```
%%opts Curve [width=200 height=200]
ndlayout = subset.to(hv.Curve, 'year', 'unem').layout()
print(ndlayout)
ndlayout
# Exercise: Recreate the plot above using hv.NdLayout and macro.select, just as we did for the GridSpace above
```
## Aggregating
Another common operation is computing aggregates (summary transformations that collapse some dimensions of the data down to scalar values like a mean or a variance). We can compute and visualize these easily using the ``aggregate`` method. The aggregate method lets you declare the dimension(s) to aggregate by and a function to aggregate with (with an optional secondary function to compute the spread if desired). Once we have computed the aggregate we can simply pass it to the [``Curve``](http://holoviews.org/reference/elements/bokeh/Curve.html) and [``ErrorBars``](http://holoviews.org/reference/elements/bokeh/ErrorBars.html):
```
%%opts Curve [width=600]
agg = macro.reindex(vdims=['growth']).aggregate('year', function=np.mean, spreadfn=np.std)
hv.Curve(agg) * hv.ErrorBars(agg)
# Exercise: Display aggregate GDP growth by country, building it up in a series of steps
# Step 1. First, aggregate the data by country rather than by year, using
# np.mean and ss.sem as the function and spreadfn, respectively, then
# make a `Bars` element from the resulting ``agg``
# Step 2: You should now have a bars plot, but with no error bars. Now add ErrorBars as above.
# Hint: You'll want to make the plot wider and use an xrotation to see the labels clearly
```
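Conceptually, the ``aggregate`` step is similar to a pandas groupby/agg, collapsing one dimension with a summary function and a spread function. A rough sketch with synthetic numbers (not the HoloViews API itself):

```python
import pandas as pd

# Synthetic growth figures: two countries observed in each of two years
df = pd.DataFrame({'year': [1980, 1980, 1981, 1981],
                   'growth': [1.0, 3.0, 2.0, 4.0]})

# Collapse the country axis: mean as the aggregate, std as the spread
agg = df.groupby('year')['growth'].agg(['mean', 'std']).reset_index()
print(agg)
```

The 'mean' column is what the Curve would display, while the 'std' column supplies the extent of the ErrorBars.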
## Onward
* Go through the Tabular Data [getting started](http://holoviews.org/getting_started/Tabular_Datasets.html) and [user guide](http://holoviews.org/user_guide/Tabular_Datasets.html).
* Learn about slicing, indexing and sampling in the [Indexing and Selecting Data](http://holoviews.org/user_guide/Indexing_and_Selecting_Data.html) user guide.
The [next section](./05_Working_with_Gridded_Data.ipynb) shows a similar approach, but for working with gridded data, in multidimensional array formats.
```
import os
import math
import matplotlib.pyplot as plt
from mpl_toolkits.axisartist.axislines import AxesZero
import matplotlib.gridspec as gridspec
from matplotlib import cm, transforms
import matplotlib.ticker as mtick
from mpl_axes_aligner import align
import numpy as np
np.seterr(divide='ignore')
from scipy.stats import poisson, norm, lognorm
from scipy import optimize as opti
import pandas as pd
from tqdm import tqdm
from scipy import special
from scipy.stats import norm
from scipy.stats import multivariate_normal
from scipy.signal import savgol_filter
import h5py
import torch
from torchviz import make_dot
import wf_func as wff
np.random.seed(0)
Mu = 4
Tau = 20
Sigma = 5
file = '4.0-20-5'
with h5py.File('waveform/' + file + '.h5', 'r', libver='latest', swmr=True) as ipt:
    ent = ipt['Readout/Waveform'][:]
    tru = ipt['SimTriggerInfo/PEList'][:]
    gmu = ipt['SimTriggerInfo/PEList'].attrs['gmu']
    gsigma = ipt['SimTriggerInfo/PEList'].attrs['gsigma']
    t0truth = ipt['SimTruth/T'][:]
def normcombine(x, m, s, a):
    return a[0] * norm.pdf((x - m[0]) / s[0]) + a[1] * norm.pdf((x - m[1]) / s[1])
def normcombine2d(x, m, s, a, rho):
    return a[0, 0] * multivariate_normal.pdf(x, mean=[m[0, 0], m[1, 0]], cov=matrix(s[0, 0], s[1, 0], rho[0, 0])) + a[0, 1] * multivariate_normal.pdf(x, mean=[m[0, 0], m[1, 1]], cov=matrix(s[0, 0], s[1, 1], rho[0, 1])) + a[1, 0] * multivariate_normal.pdf(x, mean=[m[0, 1], m[1, 0]], cov=matrix(s[0, 1], s[1, 0], rho[1, 0])) + a[1, 1] * multivariate_normal.pdf(x, mean=[m[0, 1], m[1, 1]], cov=matrix(s[0, 1], s[1, 1], rho[1, 1]))
def matrix(sx, sy, rho):
    return np.array([[sx ** 2, rho * sx * sy], [rho * sx * sy, sy ** 2]])
def chargehist(t):
    c = norm.pdf(t, loc=gmu, scale=gsigma)
    # q1 = 150.8
    # sigma = 37.59
    # w = 2.433e-5
    # alpha = 0.01335
    # mu = 2.851e-5
    # c = np.exp(-mu)*(w*alpha*np.exp(-alpha*t))
    # c = c + mu*np.exp(-mu)*(
    #     (1-w)/(sigma*np.sqrt(2*np.pi))*np.exp(-(t-q1)**2/(2*sigma**2))+
    #     w*(alpha/2*np.exp(-alpha*(t-q1-alpha/2*sigma**2))*(1+special.erf(t-q1-alpha*sigma**2)/(np.sqrt(2)*sigma))))
    return c
Thres = wff.Thres
std = 1.
spe_pre = wff.read_model('spe.h5', 1)
p = spe_pre[0]['parameters']
window = wff.window
t_auto = np.arange(window).reshape(window, 1) - np.arange(window).reshape(1, window)
mnecpu = wff.spe((t_auto + np.abs(t_auto)) / 2, p[0], p[1], p[2])
fig = plt.figure(figsize=(8, 6))
t = np.arange(-4 * 5, 5 * 20, 0.1)
# gs = gridspec.GridSpec(1, 1, figure=fig, left=0.15, right=0.95, top=0.95, bottom=0.15, wspace=0.4, hspace=0.5)
# ax = fig.add_subplot(gs[0, 0])
ax = fig.add_axes((.125, .12, .775, .77))
ax.plot(t, wff.convolve_exp_norm(t, 20, 0), label=r'$(20,0)$', color='g')
ax.plot(t, wff.convolve_exp_norm(t, 0, 5), label=r'$(0,5)$', color='r')
ax.plot(t, wff.convolve_exp_norm(t, 20, 5), label=r'$(20,5)$', color='b')
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.grid()
ax.set_xlim(xmin=-4 * int(5))
ax.set_ylabel(r'$\mathrm{PDF}$')
ax.legend(title=r'$(\tau_l, \sigma_l)/\si{ns}$', loc='upper right')
ax.set_ylim(0, ax.get_ylim()[1] * 1.05)
# ax.annotate(r'$t_{0}$', xy=(0, 0), xytext=(5, 0.01), arrowprops=dict(facecolor='k', shrink=0.1, width=0.1, headwidth=2))
fig.savefig('Note/figures/profile.pgf')
fig.savefig('Note/figures/profile.pdf')
plt.close()
ax.get_position()
fig = plt.figure(figsize=(8, 4))
# fig.tight_layout()
gs = gridspec.GridSpec(1, 1, figure=fig, left=0.05, right=0.97, top=0.97, bottom=0.1, wspace=0.3, hspace=0.3)
ax = fig.add_subplot(gs[0, 0])
ax.spines['left'].set_position(('data', 0))
ax.spines['bottom'].set_position(('data', 0))
ax.plot(1, 0, '>k', transform=ax.get_yaxis_transform(), clip_on=False)
ax.plot(0, 1, '^k', transform=ax.get_xaxis_transform(), clip_on=False)
t = np.linspace(0, 6, 201)
ax.plot(t, lognorm.pdf(t, loc=0, s=0.3), color='darkorange')
ax.plot(t, lognorm.pdf(t, loc=3, s=0.3), color='darkblue')
ax.fill_between(t, 0, lognorm.pdf(t, loc=0, s=0.3), color='darkorange', alpha=0.5)
ax.fill_between(t, 0, lognorm.pdf(t, loc=3, s=0.3), color='darkblue', alpha=0.5)
ax.set_xlim(0, 6)
ax.set_ylim(bottom=1e-3)
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel(r'$\mathrm{Time}$')
ax.set_ylabel(r'$\mathrm{Voltage}$')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.annotate('', xy=(1.5, 1), xytext=(3.5, 1), arrowprops=dict(arrowstyle='<->'))
ax.text(x=2.2, y=1.1, s=r'$\sim D_w$')
ax.text(x=0.7, y=0.3, s=r'$\sim \mathrm{RSS}$')
ax.text(x=3.7, y=0.3, s=r'$\sim \mathrm{RSS}$')
fig.savefig('Note/figures/tab.pgf')
fig.savefig('Note/figures/tab.pdf')
fig.clf()
plt.close(fig)
i = 2
cid = ent[i]['ChannelID']
eid = ent[i]['TriggerNo']
truth = np.sort(tru[(tru['TriggerNo'] == eid) & (tru['PMTId'] == cid)], kind='stable', order=['TriggerNo', 'PMTId', 'HitPosInWindow'])
wave = ent[i]['Waveform'].astype(float) * spe_pre[cid]['epulse']
df = pd.DataFrame(truth)
df = df.rename(columns={'HitPosInWindow':'HitTime'})
charge = df['Charge'].copy()
hittime = df['HitTime'].copy()
df = df.astype({'Charge': 'float32'})
df = df.astype({'TriggerNo' : 'str', 'PMTId' : 'str', 'HitTime' : 'str', 'Charge': 'str'})
df['HitTime'] = ['{:.02f}'.format(s) for s in hittime]
df['Charge'] = ['{:.02f}'.format(s) for s in charge]
df
ind = np.argwhere(wave > spe_pre[cid]['std'] * 5).flatten()
xmin = ((ind.min() - spe_pre[cid]['mar_l']) // 20 - 1) * 20
xmax = max(((ind.max() + spe_pre[cid]['mar_r']) // 20 + 1) * 20, xmin + 200)
TRIALS = 8000
n = 1
b_t0 = [0., 600.]
# initialization
A, y, tlist, t0_t, t0_delta, cha, left_wave, right_wave = wff.initial_params(wave[::wff.nshannon], spe_pre[cid], Tau, Sigma, gmu, Thres['lucyddm'], p, is_t0=True, is_delta=False, n=n)
# assert len(np.unique(np.diff(tlist))) == 1
s_cha = np.cumsum(cha)
# moving average filter of size 2*n+1
cha = np.pad(s_cha[2*n+1:], (n+1, n), 'edge') - np.pad(s_cha[:-(2*n+1)], (n+1, n), 'edge')
cha += 1e-8 # for completeness of the random walk.
p_cha = cha / np.sum(cha)
mu_t = abs(y.sum() / gmu)
# Eq. (9) where the columns of A are taken to be unit-norm.
mus = np.sqrt(np.diag(np.matmul(A.T, A)))
assert np.std(mus) < 1e-4, 'mus must be equal'
mus = mus[0]
A = A / mus
p1 = mu_t * wff.convolve_exp_norm(tlist - t0_t, Tau, Sigma) / n + 1e-8
sig2w = spe_pre[cid]['std'] ** 2
sig2s = (gsigma * mus / gmu) ** 2
nu_star, T_star, c_star, es_history, NPE_evo = wff.metropolis_fbmp(y, A, sig2w, sig2s, mus, p1, p_cha, mu_t)
ilp_cha = np.log(cha.sum()) - np.log(cha)
guess = ilp_cha[es_history['loc'].astype(int)]
es_history['loc'] = np.interp(es_history['loc'], xp=np.arange(0.5, len(tlist)), fp=tlist)
ans = opti.fmin_l_bfgs_b(lambda x: -np.sum(wff.log_convolve_exp_norm(es_history['loc'] - x, Tau, Sigma)), x0=[t0_t], approx_grad=True, bounds=[b_t0], maxfun=500000)
t00 = ans[0].item() if ans[-1]['warnflag'] == 0 else t0_t
def fit():
    mu = mu_t
    b_mu = [max(1e-8, mu - 5 * np.sqrt(mu)), mu + 5 * np.sqrt(mu)]
    def agg_NPE(t0):
        log_f = wff.log_convolve_exp_norm(es_history['loc'] - t0, Tau, Sigma) + guess
        return wff.jit_agg_NPE(es_history['step'], log_f, TRIALS)
    def t_t0(t0):
        nonlocal mu
        NPE, f_agg = agg_NPE(t0)
        ans = opti.fmin_l_bfgs_b(lambda μ: μ - special.logsumexp(NPE * np.log(μ / mu) + f_agg), x0=[mu], approx_grad=True, bounds=[b_mu], maxfun=500000)
        mu = ans[0].item()
        return ans[1]
    ans = opti.fmin_l_bfgs_b(t_t0, x0=[t00], approx_grad=True, bounds=[b_t0], maxfun=500000)
    t0 = ans[0].item()
    return mu, t0
mu, t0 = fit()
j = 0
xmmse_most = np.zeros(len(tlist))
while np.all(xmmse_most <= 0):
    maxindex = nu_star.argsort()[::-1][j]
    zx = y - np.dot(A, mus * c_star[maxindex])
    Phi_s = wff.Phi(y, A, c_star[maxindex], mus, sig2s, sig2w)
    invPhi = np.linalg.inv(Phi_s)
    xmmse_most = mus * c_star[maxindex] + np.matmul(np.diagflat(sig2s * c_star[maxindex]), np.matmul(A.T, np.matmul(invPhi, zx)))
    j += 1  # advance to the next most probable configuration; j += 0 would loop forever
pet = np.repeat(tlist[xmmse_most > 0], c_star[maxindex][xmmse_most > 0])
cha = np.repeat(xmmse_most[xmmse_most > 0] / mus / c_star[maxindex][xmmse_most > 0], c_star[maxindex][xmmse_most > 0])
pet, pwe = wff.clip(pet, cha, 0.0)
pwe = pwe
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
ax = fig.add_axes((.125, .12, .775, .77))
ax2 = ax.twinx()
ax2.vlines(pet, 0, pwe, color='r', label='Charge', linewidth=0.5)
ax.plot(wave, label='Waveform')
ax.hlines(5 * spe_pre[cid]['std'], 0, window, color='g', label='Threshold')
ax.set_xlim(xmin, xmax)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.yaxis.get_major_formatter().set_powerlimits((0, 1))
ax2.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax2.legend(lines + lines2, labels + labels2)
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
ax2.set_ylabel(r'$\mathrm{Charge}$')
align.yaxes(ax, 0, ax2, 0)
wave_ylim = ax.get_ylim()
fig.savefig('Note/figures/fbmp.pgf')
fig.savefig('Note/figures/fbmp.pdf')
fig.clf()
plt.close(fig)
wff.demo(pet, pwe, truth, spe_pre[cid], window, wave, cid, p)
print((t0 - t0truth[i]['T0']).item())
# fig = plt.figure(figsize=(8, 6))
# # fig.tight_layout()
# ax = fig.add_axes((.125, .12, .775, .77))
# ax.plot(wave, label='Waveform')
# ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
# ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
# ax.set_xlim(0, len(wave))
# wave_ylim = ax.get_ylim()
# ax.set_ylim(wave_ylim[0] * 1.05, wave_ylim[1] * 1.05)
# ax.legend()
# fig.savefig('Note/figures/wave.pgf')
# fig.savefig('Note/figures/wave.pdf')
# fig.savefig('Note/figures/wave.png')
# fig.clf()
# plt.close(fig)
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
ax = fig.add_axes((.125, .12, .775, .77))
ax2 = ax.twinx()
ax2.vlines(truth['HitPosInWindow'], 0, truth['Charge'] / gmu, color='r', label='Charge', linewidth=1.0)
ax2.set_ylabel(r'$\mathrm{Charge}$')
ax2.set_ylim(wave_ylim[0] / 30, wave_ylim[1] / 30)
ax.plot(wave, label='Waveform')
# ax.set_xlim(xmin, xmax)
ax.set_xlim(0, len(wave) / 2)
ax.set_ylim(wave_ylim[0] * 0.7, wave_ylim[1] * 0.7)
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
align.yaxes(ax, 0, ax2, 0)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.yaxis.get_major_formatter().set_powerlimits((0, 1))
ax2.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax2.legend(lines + lines2, labels + labels2)
fig.savefig('Note/figures/wave.pgf')
fig.savefig('Note/figures/wave.pdf')
fig.savefig('Note/figures/wave.png')
fig.clf()
plt.close(fig)
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
# gs = gridspec.GridSpec(1, 1, figure=fig, left=0.15, right=0.85, top=0.95, bottom=0.15, wspace=0.4, hspace=0.5)
# ax = fig.add_subplot(gs[0, 0])
ax = fig.add_axes((.125, .12, .775, .77))
t = np.arange(0, 100, 0.1)
ax.plot(t, wff.spe(t, p[0], p[1], p[2]), color='b', label='Single PE response')
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.grid()
ax.set_xlim(0, 80)
ax.set_ylim(wave_ylim[0] * 0.7, wave_ylim[1] * 0.7)
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
ax.legend()
fig.savefig('Note/figures/spe.pgf')
fig.savefig('Note/figures/spe.pdf')
fig.savefig('Note/figures/spe.png')
plt.close()
# fig = plt.figure(figsize=(8, 6))
# # fig.tight_layout()
# ax = fig.add_axes((.125, .12, .775, .77))
# ax.vlines(truth['HitPosInWindow'], 0, truth['Charge'] / gmu, color='r', label='Charge')
# ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
# ax.set_ylabel(r'$\mathrm{Charge}$')
# ax.set_xlim(0, len(wave))
# ax.set_ylim(wave_ylim[0] / 20, wave_ylim[1] / 20)
# ax.axhline(y=0, color='k', linestyle='dashed', alpha=0.5)
# ax.legend()
# fig.savefig('Note/figures/charge.pgf')
# fig.savefig('Note/figures/charge.pdf')
# fig.savefig('Note/figures/charge.png')
# fig.clf()
# plt.close(fig)
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
# gs = gridspec.GridSpec(1, 1, figure=fig, left=0.15, right=0.85, top=0.95, bottom=0.15, wspace=0.4, hspace=0.5)
# ax = fig.add_subplot(gs[0, 0])
ax = fig.add_axes((.125, .12, .775, .77))
ax2 = ax.twinx()
ax2.vlines(truth['HitPosInWindow'], 0, truth['Charge'] / gmu, color='r', label='Charge')
ax.plot(wave, label='Waveform')
ax.hlines(2, 0, window, color='g', label='Threshold')
ax.set_xlim(xmin, xmax)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.yaxis.get_major_formatter().set_powerlimits((0, 1))
ax2.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax2.legend(lines + lines2, labels + labels2)
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
ax2.set_ylabel(r'$\mathrm{Charge}$')
ax.set_ylim(bottom=-5)
ax2.set_ylim(bottom=-5 / gmu)
align.yaxes(ax, 0, ax2, 0)
fig.savefig('Note/figures/goal.pgf')
fig.savefig('Note/figures/goal.pdf')
fig.clf()
plt.close(fig)
print(wave.sum())
print(truth['Charge'][truth['Charge'] > 0].sum()) # made by noise
t = np.load('result/takara/char/Channel00/cnn_testing_record_2021-07-30_17:15:10.npz')['arr_0']
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
ax = fig.add_axes((.125, .12, .775, .77))
ax.plot(np.arange(1, len(t)+1), t, label=r'$D_w$', color='C1')
ax.set_xlabel(r'$\mathrm{epoch}$')
ax.set_ylabel(r'$\mathrm{Wasserstein\ Distance}/\si{ns}$')
ax.legend()
ax.grid()
fig.savefig('Note/figures/epoch.pgf')
fig.savefig('Note/figures/epoch.pdf')
fig.clf()
plt.close(fig)
pet, pwe = wff.threshold(wave, spe_pre[cid])
pet, pwe = wff.clip(pet, pwe, Thres['threshold'])
output = np.zeros(window)
output[pet] = pwe
alpha = opti.fmin_l_bfgs_b(lambda alpha: wff.rss_alpha(alpha, output, wave, mnecpu), x0=[0.01], approx_grad=True, bounds=[[1e-20, np.inf]], maxfun=50000)[0]
pwe = pwe * alpha
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
ax = fig.add_axes((.125, .12, .775, .77))
ax2 = ax.twinx()
ax2.vlines(pet, 0, pwe, color='r', label='Charge', linewidth=0.5)
ax.plot(wave, label='Waveform')
ax.hlines(5 * spe_pre[cid]['std'], 0, window, color='g', label='Threshold')
ax.set_xlim(xmin, xmax)
ax2.annotate('', xy=(pet.mean(), pwe.max()*1.1), xytext=(pet.mean()+pet.ptp(), pwe.max()*1.1), arrowprops=dict(facecolor='k', shrink=0.01, width=2, headwidth=4))
ax2.set_ylim(top=pwe.max()*1.2)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.yaxis.get_major_formatter().set_powerlimits((0, 1))
ax2.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax2.legend(lines + lines2, labels + labels2)
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
ax2.set_ylabel(r'$\mathrm{Charge}$')
align.yaxes(ax, 0, ax2, 0)
fig.savefig('Note/figures/threshold.pgf')
fig.savefig('Note/figures/threshold.pdf')
fig.clf()
plt.close(fig)
wff.demo(pet, pwe, truth, spe_pre[cid], window, wave, cid, p, fold='/tmp')
t0 = wff.likelihoodt0(pet, char=pwe * gmu, gmu=gmu, Tau=Tau, Sigma=Sigma, mode='charge')[0]
print((t0 - t0truth[i]['T0']).item())
pet, pwe = wff.findpeak(wave, spe_pre[cid])
pet, pwe = wff.clip(pet, pwe, Thres['findpeak'])
output = np.zeros(window)
output[pet] = pwe
alpha = opti.fmin_l_bfgs_b(lambda alpha: wff.rss_alpha(alpha, output, wave, mnecpu), x0=[0.01], approx_grad=True, bounds=[[1e-20, np.inf]], maxfun=50000)[0]
pwe = pwe * alpha
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
ax = fig.add_axes((.125, .12, .775, .77))
ax2 = ax.twinx()
ax2.vlines(pet, 0, pwe, color='r', label='Charge', linewidth=1.5)
ax.plot(wave, label='Waveform')
ax.hlines(5 * spe_pre[cid]['std'], 0, window, color='g', label='Threshold')
ax.set_xlim(xmin, xmax)
loc = pet + spe_pre[cid]['peak_c']
loc = loc[loc < window]
amp = wave[loc]
for j in range(len(loc)):
    ax.annotate('', xy=(loc[j], amp[j]+5), xytext=(loc[j], amp[j]+15), arrowprops=dict(facecolor='k', shrink=0.01, width=0.5, headwidth=2))
ax2.annotate('', xy=(pet.mean(), pwe.max()*1.1), xytext=(pet.mean()+pet.ptp(), pwe.max()*1.1), arrowprops=dict(facecolor='k', shrink=0.01, width=2, headwidth=4))
ax2.set_ylim(top=pwe.max()*1.2)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.yaxis.get_major_formatter().set_powerlimits((0, 1))
ax2.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax2.legend(lines + lines2, labels + labels2)
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
ax2.set_ylabel(r'$\mathrm{Charge}$')
align.yaxes(ax, 0, ax2, 0)
fig.savefig('Note/figures/findpeak.pgf')
fig.savefig('Note/figures/findpeak.pdf')
fig.clf()
plt.close(fig)
wff.demo(pet, pwe, truth, spe_pre[cid], window, wave, cid, p, fold='/tmp')
t0 = wff.likelihoodt0(pet, char=pwe * gmu, gmu=gmu, Tau=Tau, Sigma=Sigma, mode='charge')[0]
print((t0 - t0truth[i]['T0']).item())
pet, pwe = wff.waveformfft(wave, spe_pre[cid])
pet, pwe = wff.clip(pet, pwe, Thres['fftrans'])
output = np.zeros(window)
output[pet] = pwe
alpha = opti.fmin_l_bfgs_b(lambda alpha: wff.rss_alpha(alpha, output, wave, mnecpu), x0=[0.01], approx_grad=True, bounds=[[1e-20, np.inf]], maxfun=50000)[0]
pwe = pwe * alpha
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
ax = fig.add_axes((.125, .12, .775, .77))
ax2 = ax.twinx()
ax2.vlines(pet, 0, pwe, color='r', label='Charge', linewidth=0.5)
ax.plot(wave, label='Waveform')
ax.hlines(5 * spe_pre[cid]['std'], 0, window, color='g', label='Threshold')
ax.set_xlim(xmin, xmax)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.yaxis.get_major_formatter().set_powerlimits((0, 1))
ax2.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax2.legend(lines + lines2, labels + labels2)
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
ax2.set_ylabel(r'$\mathrm{Charge}$')
align.yaxes(ax, 0, ax2, 0)
fig.savefig('Note/figures/fftrans.pgf')
fig.savefig('Note/figures/fftrans.pdf')
fig.clf()
plt.close(fig)
wff.demo(pet, pwe, truth, spe_pre[cid], window, wave, cid, p, fold='/tmp')
t0 = wff.likelihoodt0(pet, char=pwe * gmu, gmu=gmu, Tau=Tau, Sigma=Sigma, mode='charge')[0]
print((t0 - t0truth[i]['T0']).item())
pet, pwe = wff.lucyddm(wave, spe_pre[cid]['spe'])
pet, pwe = wff.clip(pet, pwe, Thres['lucyddm'])
output = np.zeros(window)
output[pet] = pwe
alpha = opti.fmin_l_bfgs_b(lambda alpha: wff.rss_alpha(alpha, output, wave, mnecpu), x0=[0.01], approx_grad=True, bounds=[[1e-20, np.inf]], maxfun=50000)[0]
pwe = pwe * alpha
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
ax = fig.add_axes((.125, .12, .775, .77))
ax2 = ax.twinx()
ax2.vlines(pet, 0, pwe, color='r', label='Charge', linewidth=0.5)
ax.plot(wave, label='Waveform')
ax.hlines(5 * spe_pre[cid]['std'], 0, window, color='g', label='Threshold')
ax.set_xlim(xmin, xmax)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.yaxis.get_major_formatter().set_powerlimits((0, 1))
ax2.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax2.legend(lines + lines2, labels + labels2)
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
ax2.set_ylabel(r'$\mathrm{Charge}$')
align.yaxes(ax, 0, ax2, 0)
fig.savefig('Note/figures/lucyddm.pgf')
fig.savefig('Note/figures/lucyddm.pdf')
fig.clf()
plt.close(fig)
wff.demo(pet, pwe, truth, spe_pre[cid], window, wave, cid, p, fold='/tmp')
t0 = wff.likelihoodt0(pet, char=pwe * gmu, gmu=gmu, Tau=Tau, Sigma=Sigma, mode='charge')[0]
print((t0 - t0truth[i]['T0']).item())
with h5py.File('result/takara/char/' + file + '.h5', 'r', libver='latest', swmr=True) as ipt:
photoelec = ipt['photoelectron'][:]
s = photoelec[(photoelec['TriggerNo'] == eid) & (photoelec['ChannelID'] == cid)]
pet = s['HitPosInWindow']
pwe = s['Charge']
pwe = pwe / gmu
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
ax = fig.add_axes((.125, .12, .775, .77))
ax2 = ax.twinx()
ax2.vlines(pet, 0, pwe, color='r', label='Charge', linewidth=0.5)
ax.plot(wave, label='Waveform')
# ax.hlines(5 * spe_pre[cid]['std'], 0, window, color='g', label='Threshold')
ax.set_xlim(xmin, xmax)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.yaxis.get_major_formatter().set_powerlimits((0, 1))
ax2.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax2.legend(lines + lines2, labels + labels2)
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
ax2.set_ylabel(r'$\mathrm{Charge}$')
align.yaxes(ax, 0, ax2, 0)
fig.savefig('Note/figures/takara.pgf')
fig.savefig('Note/figures/takara.pdf')
fig.clf()
plt.close(fig)
wff.demo(pet, pwe, truth, spe_pre[cid], window, wave, cid, p, fold='/tmp')
t0 = wff.likelihoodt0(pet, char=pwe * gmu, gmu=gmu, Tau=Tau, Sigma=Sigma, mode='charge')[0]
print((t0 - t0truth[i]['T0']).item())
with h5py.File('result/mcmc/char/' + file + '.h5', 'r', libver='latest', swmr=True) as ipt:
photoelec = ipt['photoelectron'][:]
s = photoelec[(photoelec['TriggerNo'] == eid) & (photoelec['ChannelID'] == cid)]
pet = s['HitPosInWindow']
pwe = s['Charge']
pwe = pwe / gmu
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
ax = fig.add_axes((.125, .12, .775, .77))
ax2 = ax.twinx()
ax2.vlines(pet, 0, pwe, color='r', label='Charge', linewidth=0.5)
ax.plot(wave, label='Waveform')
ax.hlines(5 * spe_pre[cid]['std'], 0, window, color='g', label='Threshold')
ax.set_xlim(xmin, xmax)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.yaxis.get_major_formatter().set_powerlimits((0, 1))
ax2.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax2.legend(lines + lines2, labels + labels2)
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
ax2.set_ylabel(r'$\mathrm{Charge}$')
align.yaxes(ax, 0, ax2, 0)
fig.savefig('Note/figures/mcmc.pgf')
fig.savefig('Note/figures/mcmc.pdf')
fig.clf()
plt.close(fig)
wff.demo(pet, pwe, truth, spe_pre[cid], window, wave, cid, p, fold='/tmp')
t0 = wff.likelihoodt0(pet, char=pwe * gmu, gmu=gmu, Tau=Tau, Sigma=Sigma, mode='charge')[0]
print((t0 - t0truth[i]['T0']).item())
# pet, pwe = wff.xiaopeip(wave, spe_pre[cid], eta=0)
pet, pwe = wff.xiaopeip(wave, spe_pre[cid], Tau, Sigma, Thres['lucyddm'], p, eta=0)
pet, pwe = wff.clip(pet, pwe, Thres['xiaopeip'])
fig = plt.figure(figsize=(8, 6))
# fig.tight_layout()
ax = fig.add_axes((.125, .12, .775, .77))
ax2 = ax.twinx()
ax2.vlines(pet, 0, pwe, color='r', label='Charge', linewidth=0.5)
ax.plot(wave, label='Waveform')
# ax.hlines(5 * spe_pre[cid]['std'], 0, window, color='g', label='Threshold')
ax.set_xlim(xmin, xmax)
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.yaxis.get_major_formatter().set_powerlimits((0, 1))
ax2.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax2.legend(lines + lines2, labels + labels2)
ax.set_xlabel(r'$\mathrm{t}/\si{ns}$')
ax.set_ylabel(r'$\mathrm{Voltage}/\si{mV}$')
ax2.set_ylabel(r'$\mathrm{Charge}$')
align.yaxes(ax, 0, ax2, 0)
fig.savefig('Note/figures/xiaopeip.pgf')
fig.savefig('Note/figures/xiaopeip.pdf')
fig.clf()
plt.close(fig)
wff.demo(pet, pwe, truth, spe_pre[cid], window, wave, cid, p, fold='/tmp')
t0 = wff.likelihoodt0(pet, char=pwe * gmu, gmu=gmu, Tau=Tau, Sigma=Sigma, mode='charge')[0]
print((t0 - t0truth[i]['T0']).item())
```
```
methods = ['lucyddm', 'xiaopeip', 'takara', 'fbmp', 'mcmc']
for m in methods:
with h5py.File('result/' + m + '/dist/' + file + '.h5', 'r', libver='latest', swmr=True) as distfile:
dt = distfile['Record'][:]
N = np.percentile(dt['wdist'], 95)
M = 500
penum = np.unique(dt['NPE'])
l = min(50, penum.max())
wdist_stats = np.full((l, 6), np.nan)
edist_stats = np.full((l, 6), np.nan)
for i in range(l):
vali = dt['NPE'] == i+1
if np.sum(vali) == 0:
continue
dtwpi = dt['wdist'][vali]
dtepi = dt['RSS'][vali]
wdist_stats[i, 0] = np.median(dtwpi)
wdist_stats[i, 1] = np.median(np.abs(dtwpi - np.median(dtwpi)))
wdist_stats[i, 2] = np.mean(dtwpi)
wdist_stats[i, 3] = np.std(dtwpi)
wdist_stats[i, 4] = np.percentile(dtwpi, 5)
wdist_stats[i, 5] = np.percentile(dtwpi, 95)
edist_stats[i, 0] = np.median(dtepi)
edist_stats[i, 1] = np.median(np.abs(dtepi - np.median(dtepi)))
edist_stats[i, 2] = np.mean(dtepi)
edist_stats[i, 3] = np.std(dtepi)
edist_stats[i, 4] = np.percentile(dtepi, 5)
edist_stats[i, 5] = np.percentile(dtepi, 95)
L = len(dt)
data = dt['wdist']
fig = plt.figure(figsize=(8, 6))
ax1 = fig.add_axes((.125, .12, .6, .77))
boxdict = ax1.boxplot(np.array([dt['wdist'][dt['NPE'] == i+1] for i in range(l)], dtype=object), sym='', patch_artist=True)
ax1.set_xticks(np.arange(1, 16, 2))
ax1.set_xticklabels(np.arange(1, 16, 2).astype(str))
ax1.plot(np.arange(1, l + 1), wdist_stats[:, 0], label=r'$D_w$')
ax1.set_xlim(0, l + 1)
ax1.set_ylim(0, max([boxdict['whiskers'][2 * i + 1].get_xydata()[1, 1] for i in range(l)]) * 1.05)
ax1.set_xlabel(r'$N_{\mathrm{PE}}$')
ax1.set_ylabel(r'$\mathrm{Wasserstein\ Distance}/\si{ns}$')
ax1.legend()
ax2 = fig.add_axes((.725, .12, .175, .77))
ax2.hist(data, bins=np.arange(0, data.max(), np.percentile(data, 98) / 40), density=1, orientation='horizontal')
ax2.set_xlabel(r'$\mathrm{arb.\ unit}$')
ax2.set_xlim(0, ax2.get_xlim()[1] * 1.05)
ax2.set_xticks([])
ax2.set_yticks([])
ax2.set_ylim(ax1.get_ylim())
fig.savefig('Note/figures/' + m + 'chargestats.pgf')
fig.savefig('Note/figures/' + m + 'chargestats.pdf')
plt.close(fig)
t = np.arange(0, 1000, 0.1) / gmu
pdf = np.zeros_like(t)
tlist = np.arange(-50, 200)
for mu in tqdm(Mu * wff.convolve_exp_norm(tlist, Tau, Sigma)):
for i in range(1, 15):
pdf += mu * poisson.pmf(i, mu) * norm.pdf(t, loc=1, scale=gsigma / gmu / np.sqrt(i))
methods = ['lucyddm', 'xiaopeip', 'takara', 'fbmp']
colors = {'truth':'k', 'lucyddm':'y', 'xiaopeip':'c', 'takara':'C0', 'fbmp':'r'}
fig = plt.figure(figsize=(10, 6))
fig.tight_layout()
ax = fig.add_axes((.1, .12, .85, .80))
t = np.arange(0, 1000, 0.1) / gmu
# ax.plot(t, norm.pdf(t, loc=1, scale=gsigma / gmu) / (1 - norm.cdf(0, loc=1, scale=gsigma / gmu)), color=colors['truth'], alpha=0.2)
ax.plot(t, pdf / pdf.sum() / np.diff(t)[0], label='$\mathrm{ChargePDF}$', color=colors['truth'])
# th = 160 * 5 * 1e-4
th = 10 / gmu
labels = {'truth':'\mathrm{Truth}', 'lucyddm':'\mathrm{LucyDDM}', 'xiaopeip':'\mathrm{Fitting}', 'takara':'\mathrm{CNN}', 'fbmp':'\mathrm{FSMP}', 'fbmpwave':'\mathrm{FSMP}'}
for m in methods:
ch = h5py.File('result/' + m + '/char/' + file + '.h5', 'r', libver='latest', swmr=True)
cha = ch['photoelectron']['Charge'] / gmu
ax.hist(cha[cha > th], bins=np.linspace(th, 400 / gmu, 101), label='$'+labels[m]+'$', histtype='step', density=True, color=colors[m], linewidth=2.)
ax.set_xlim(10 / gmu, 310 / gmu)
ax.set_yticks([])
# ax.yaxis.get_major_formatter().set_powerlimits((0, 1))
# ax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax.legend()
# ax.set_xlabel('$\mathrm{Charge}/\si{mV\cdot ns}$')
ax.set_xlabel('$\mathrm{Charge}$')
ax.set_ylabel(r'$\mathrm{Normalized\ Count}$')
plt.savefig('Note/figures/recchargehist.png')
plt.savefig('Note/figures/recchargehist.pdf')
plt.savefig('Note/figures/recchargehist.pgf')
plt.show()
t = np.arange(0, 1000, 0.1) / gmu
pdf = np.zeros_like(t)
b = 0.5
tlist = np.arange(-50, 200, b)
for mu in tqdm(25 * wff.convolve_exp_norm(tlist, Tau, Sigma) * b):
for i in range(1, 15):
pdf += mu * poisson.pmf(i, mu) * norm.pdf(t, loc=1, scale=gsigma / gmu / np.sqrt(i))
methods = ['lucyddm', 'xiaopeip', 'takara', 'fbmp']
colors = {'truth':'k', 'lucyddm':'y', 'xiaopeip':'c', 'takara':'C0', 'fbmp':'r'}
fig = plt.figure(figsize=(10, 6))
fig.tight_layout()
ax = fig.add_axes((.1, .12, .85, .80))
ax.plot(t, norm.pdf(t, loc=1, scale=gsigma / gmu) / (1 - norm.cdf(0, loc=1, scale=gsigma / gmu)), color=colors['truth'], alpha=0.2)
ax.plot(t, pdf / pdf.sum() / np.diff(t)[0], label='$\mathrm{ChargePDF}$', color=colors['truth'])
# th = 160 * 5 * 1e-4
th = 10 / gmu
labels = {'truth':'\mathrm{Truth}', 'lucyddm':'\mathrm{LucyDDM}', 'xiaopeip':'\mathrm{Fitting}', 'takara':'\mathrm{CNN}', 'fbmp':'\mathrm{FSMP}', 'fbmpwave':'\mathrm{FSMP}'}
for m in methods:
ch = h5py.File('result/' + m + '/char/15.0-20-5' + '.h5', 'r', libver='latest', swmr=True)
cha = ch['photoelectron']['Charge'] / gmu
ax.hist(cha[cha > th], bins=np.linspace(th, 400 / gmu, 101), label='$'+labels[m]+'$', histtype='step', density=True, color=colors[m], linewidth=2.)
ax.set_xlim(10 / gmu, 310 / gmu)
ax.set_yticks([])
# ax.yaxis.get_major_formatter().set_powerlimits((0, 1))
# ax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1f'))
ax.legend()
# ax.set_xlabel('$\mathrm{Charge}/\si{mV\cdot ns}$')
ax.set_xlabel('$\mathrm{Charge}$')
ax.set_ylabel(r'$\mathrm{Normalized\ Count}$')
plt.savefig('Note/figures/recchargehist25.png')
# plt.savefig('Note/figures/recchargehist.pdf')
# plt.savefig('Note/figures/recchargehist.pgf')
plt.show()
```
```
# default_exp utils
```
# Utils
> General-purpose utility functions
```
#hide
from nbdev.showdoc import *
#export
import os
from typing import Dict, Any
#export
def dest(destination_dataset_table, prefix_dataset='tmp', return_dataset_only=False, return_table_only=False) -> str:
"""If `AIRFLOW_ENV != PROD` then write results to `prefix_dataset` instead.
:param destination_dataset_table: destination to write results to.
:return: destination_dataset_table: destination to write results to with prefix added if needed.
"""
AIRFLOW_ENV = os.environ.get("AIRFLOW_ENV", "UNK")
if AIRFLOW_ENV != 'PROD':
destination_dataset_table_list = destination_dataset_table.replace(':', '.').split('.')
destination_project = destination_dataset_table_list[0]
destination_dataset = prefix_dataset
destination_table = f'{destination_dataset_table_list[1]}_{destination_dataset_table_list[2]}'
destination_dataset_table = f'{destination_project}.{destination_dataset}.{destination_table}'
destination_parts = destination_dataset_table.split('.')
if return_dataset_only:
return destination_parts[1]
elif return_table_only:
return destination_parts[2]
else:
return destination_dataset_table
def dest_dict(destination_dataset_table, prefix_dataset='tmp') -> Dict[str, str]:
"""Wrapper for `dest()` but to return as dict.
"""
destination_dataset_table = dest(destination_dataset_table, prefix_dataset)
destination_parts = destination_dataset_table.split('.')
return {
"projectId": destination_parts[0],
"datasetId": destination_parts[1],
"tableId": destination_parts[2]
}
def sched(schedule: Any) -> Any:
"""If AIRFLOW_ENV != PROD then schedule should be `@once`.
:param schedule: schedule for prod.
:return: schedule: `schedule` if prod else `@once`.
"""
AIRFLOW_ENV = os.environ.get("AIRFLOW_ENV", "UNK")
if AIRFLOW_ENV == 'PROD':
return schedule
else:
return '@once'
#tests
#hide
os.environ.pop('AIRFLOW_ENV', None)
assert sched('foo') == '@once'
os.environ['AIRFLOW_ENV'] = 'PROD'
assert sched('foo') == 'foo'
assert dest_dict('p.d.t', prefix_dataset='') == {'projectId': 'p', 'datasetId': 'd', 'tableId': 't'}
os.environ.pop('AIRFLOW_ENV', None)
assert dest('p.d.t') == 'p.tmp.d_t'
assert dest('p.d.t', return_dataset_only=True) == 'tmp'
assert dest('p.d.t', return_table_only=True) == 'd_t'
```
# Machine Learning Engineer Nanodegree
## Unsupervised Learning
## Project: Creating Customer Segments
Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited typically by double-clicking the cell to enter edit mode.
## Getting Started
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in *monetary units*) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Wholesale+customers). For the purposes of this project, the features `'Channel'` and `'Region'` will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
```
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print("Wholesale customers dataset has {} samples with {} features each.".format(*data.shape))
except:
print("Dataset could not be loaded. Is the dataset missing?")
```
## Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: **'Fresh'**, **'Milk'**, **'Grocery'**, **'Frozen'**, **'Detergents_Paper'**, and **'Delicatessen'**. Consider what each category represents in terms of products you could purchase.
```
# Display a description of the dataset
display(data.describe())
data.head()
data.info()
```
### Implementation: Selecting Samples
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add **three** indices of your choice to the `indices` list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
```
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [181, 93, 85]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print("Chosen samples of wholesale customers dataset:")
display(samples)
data.rank(pct=True).iloc[indices]
import seaborn as sns
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10,5))
sns.heatmap((100 * data.rank(pct=True)).iloc[indices], vmin=1, vmax = 100, annot=True, ax=ax)
```
### Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
*What kind of establishment (customer) could each of the three samples you've chosen represent?*
**Hint:** Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying *"McDonalds"* when describing a sample customer as a restaurant.
**Answer:**
From the heat map, it can be seen that:
* Customer #1 has the maximum spending on Fresh in its category, and also has high spending across all the categories (80%+). This is most probably a supermarket.
* Customer #2 has high spending on Frozen, 11 times the mean of the Frozen category and the maximum in its category. It seems to be primarily an ice cream parlour (it also has moderate spending on Delicatessen, so it seems to keep perishable items).
* Customer #3 has the highest spending on Grocery (11 times the mean) and Detergents_Paper (14 times the mean). It seems to be primarily a grocery store, also stocking miscellaneous items like milk, detergents and paper.
### Implementation: Feature Relevance
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
- Assign `new_data` a copy of the data by removing a feature of your choice using the `DataFrame.drop` function.
- Use `sklearn.model_selection.train_test_split` to split the dataset into training and testing sets.
- Use the removed feature as your target label. Set a `test_size` of `0.25` and set a `random_state`.
- Import a decision tree regressor, set a `random_state`, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's `score` function.
```
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
from sklearn.model_selection import train_test_split
from sklearn import tree
import operator
def dt_predict(spending_head):
new_data = data.copy()
target_label = new_data.loc[:, [spending_head]].copy()
new_data.drop([spending_head], axis = 1, inplace = True)
new_data = new_data.values
target_label = target_label.values
# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = train_test_split(new_data, target_label, test_size=0.25, random_state=1)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = tree.DecisionTreeRegressor()
regressor.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
#print("Score for predicting '{0}' is {1}".format(spending_head, score))
return score
prediction_ranking = {}
prediction_ranking['Fresh'] = dt_predict('Fresh')
prediction_ranking['Milk'] = dt_predict('Milk')
prediction_ranking['Grocery'] = dt_predict('Grocery')
prediction_ranking['Frozen'] = dt_predict('Frozen')
prediction_ranking['Detergents_Paper'] = dt_predict('Detergents_Paper')
prediction_ranking['Delicatessen'] = dt_predict('Delicatessen')
print("\nRanking (asc)")
print("===============================")
sorted_prediction_ranking = sorted(prediction_ranking.items(), key=operator.itemgetter(1))
for item, score in sorted_prediction_ranking:
print(score,item)
```
### Question 2
*Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?*
**Hint:** The coefficient of determination, `R^2`, is scored between 0 and 1, with 1 being a perfect fit. A negative `R^2` implies the model fails to fit the data.
**Answer:**
*Which feature did you attempt to predict? What was the reported prediction score?*
I tried all six features. The 'Grocery' feature could be predicted with an R² score of about 0.84.
*Is this feature necessary for identifying customers' spending habits?*
I would argue that __yes, it is necessary__. Though 0.84 is the highest score amongst all the features, it is not high enough to ignore this feature: the Grocery spending is not fully explained by the other features.
Also, the prediction is not deterministic:
a. I experimented with the `random_state` parameter of the train/test split. When set to 0, I got a score of 0.73; when set to 42, 0.68; and when set to 1, both Grocery and Detergents_Paper get a score of 0.82. So the score depends on the train/test split.
b. Every time the cell is re-run, I get different score values, and 'Detergents_Paper' comes quite close to Grocery.
We don't have enough data to conclude that Grocery is predictable.
__The score is not consistent and is highly sensitive to the train/test split.__
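The split sensitivity noted above can be reduced by averaging the score over several folds rather than relying on a single train/test split. A minimal sketch on synthetic data (the feature names and coefficients here are made up for illustration, not taken from the customers dataset):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 400
# hypothetical stand-ins for two spending features; 'grocery' depends on 'detergents'
detergents = rng.lognormal(size=n)
grocery = 2.0 * detergents + 0.1 * rng.normal(size=n)
X = np.column_stack([detergents, rng.lognormal(size=n)])  # predictors

# average R^2 over 5 folds instead of a single random split
scores = cross_val_score(DecisionTreeRegressor(random_state=0), X, grocery, cv=5, scoring='r2')
print(scores.mean(), scores.std())
```

The mean over folds is a more stable estimate of predictability, and the standard deviation quantifies exactly the split sensitivity observed above.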
### Visualize Feature Distributions
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
```
# Produce a scatter matrix for each pair of features in the data
pd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
```
### Question 3
*Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?*
**Hint:** Is the data normally distributed? Where do most of the data points lie?
**Answer:**
*Are there any pairs of features which exhibit some degree of correlation?*
Indeed: Grocery, Milk and Detergents_Paper show a high degree of correlation between them. As can be seen from the plot between Grocery and Detergents_Paper, the points are quite cohesive along the diagonal, which explains their high score. Milk comes a distant third, with the dots scattered more.
*Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict?*
It confirms them. These three have been the top three in the ranking of scores in the previous exercise, though of course Grocery and Detergents_Paper are close together, with Milk a distant third.
*How is the data for those features distributed?*
The distribution is not normal. The distributions suggest that many customers spend little on these items, and fewer customers spend more (a right-skewed shape). Among all these features, 'Fresh' has the highest spread (variance).
## Data Preprocessing
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often a critical step in assuring that results you obtain from your analysis are significant and meaningful.
### Implementation: Feature Scaling
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most [often appropriate](http://econbrowser.com/archives/2014/02/use-of-logarithms-in-economics) to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a [Box-Cox test](http://scipy.github.io/devdocs/generated/scipy.stats.boxcox.html), which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
In the code block below, you will need to implement the following:
- Assign a copy of the data to `log_data` after applying logarithmic scaling. Use the `np.log` function for this.
- Assign a copy of the sample data to `log_samples` after applying logarithmic scaling. Again, use `np.log`.
```
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = pd.DataFrame(log_data.loc[indices], columns = log_data.keys()).reset_index(drop = True)
# Produce a scatter matrix for each pair of newly-transformed features
pd.plotting.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# Exploration
X = data.loc[:, ['Detergents_Paper']].values
import matplotlib.pyplot as plt
X_log = np.log(X)
plt.hist(X_log, bins=20)
plt.show()
```
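The passage above also mentions the Box-Cox transformation as an alternative to the natural logarithm. A sketch on synthetic right-skewed data (the distribution parameters here are illustrative, not taken from the customers dataset):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=3.0, sigma=1.0, size=1000)  # right-skewed, like the spending data

log_scaled = np.log(skewed)                  # the simple natural-log scaling used above
boxcox_scaled, lmbda = stats.boxcox(skewed)  # Box-Cox searches for the power that minimizes skew

print(stats.skew(skewed))         # large positive skew
print(stats.skew(log_scaled))     # near zero
print(stats.skew(boxcox_scaled))  # near zero; for log-normal data lmbda comes out near 0
```

For data that is close to log-normal, as here, the fitted Box-Cox exponent is close to zero and the two scalings are nearly equivalent, which is why the simpler natural logarithm suffices for this project.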
### Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
```
# Display the log-transformed sample data
display(log_samples)
```
### Implementation: Outlier Detection
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use [Tukey's Method for identifying outliers](http://datapigtechnologies.com/blog/index.php/highlighting-outliers-in-your-data-with-the-tukey-method/): An *outlier step* is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
In the code block below, you will need to implement the following:
- Assign the value of the 25th percentile for the given feature to `Q1`. Use `np.percentile` for this.
- Assign the value of the 75th percentile for the given feature to `Q3`. Again, use `np.percentile`.
- Assign the calculation of an outlier step for the given feature to `step`.
- Optionally remove data points from the dataset by adding indices to the `outliers` list.
**NOTE:** If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable `good_data`.
```
index_counts = {}
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature], 25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature], 75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = 1.5 * (Q3 - Q1)
# Display the outliers
print("Data points considered outliers for the feature '{}':".format(feature))
outliers = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]
for item in outliers.index.values:
if item in index_counts.keys():
index_counts[item] += 1
else:
index_counts[item] = 1
display(outliers)
the_outliers = []
print('Data points considered outliers for more than one feature:')
for item in index_counts.keys():
if index_counts[item] > 1:
print(item)
the_outliers.append(item)
# OPTIONAL: Select the indices for data points you wish to remove
outliers = the_outliers
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
```
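As a side note, the per-feature loop and dictionary counting above can also be expressed with vectorized pandas operations. A sketch on synthetic data (the column names `a`, `b`, `c` are placeholders):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.lognormal(size=(100, 3)), columns=['a', 'b', 'c'])
log_df = np.log(df)

# Tukey's rule per feature: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = log_df.quantile(0.25), log_df.quantile(0.75)
step = 1.5 * (q3 - q1)
flags = (log_df < q1 - step) | (log_df > q3 + step)  # boolean DataFrame of outlier flags

per_point = flags.sum(axis=1)                        # number of features flagged per row
multi_outliers = per_point[per_point > 1].index.tolist()
print(multi_outliers)
```

The Series/DataFrame broadcasting aligns the quantiles column-wise, so the whole multi-feature outlier search reduces to one boolean frame and a row sum.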
### Question 4
*Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the `outliers` list to be removed, explain why.*
**Answer:**
*Are there any data points considered outliers for more than one feature based on the definition above? *
Yes, there were 5 such datapoints: 128, 154, 65, 66, 75
*Should these data points be removed from the dataset? *
Yes, they should be removed, otherwise they will mislead the clustering model. For example, the K-means algorithm depends heavily on the distances from data points to the centroid; any outlier shifts the centre of gravity towards itself, leading to a misrepresentation of the cluster.
*If any data points were added to the outliers list to be removed, explain why.*
I added the outliers identified above to the outliers list. This is because I use K-means clustering below and don't want the outliers to affect the clustering.
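A quick numeric illustration of the centroid-shift argument, using a plain mean as a stand-in for a K-means centroid (the numbers are arbitrary):

```python
import numpy as np

cluster = np.array([1.0, 2.0, 3.0, 2.0, 1.5])  # a tight cluster of points
with_outlier = np.append(cluster, 50.0)        # the same cluster plus one extreme value

print(cluster.mean())       # 1.9
print(with_outlier.mean())  # about 9.92: the "centroid" is dragged toward the outlier
```

A single extreme point moves the centre far outside the cluster it is supposed to represent, which is exactly why the five multi-feature outliers are dropped before clustering.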
## Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
### Implementation: PCA
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the `good_data` to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the *explained variance ratio* of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
- Import `sklearn.decomposition.PCA` and assign the results of fitting PCA in six dimensions with `good_data` to `pca`.
- Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
```
from sklearn.decomposition import PCA
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=6)
pca.fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
```
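Once a PCA is fitted, the cumulative explained variance can be read off directly with `np.cumsum`. A self-contained sketch on synthetic correlated data (this data merely stands in for `good_data`):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
# six columns, four of them correlated in pairs, standing in for good_data
X = np.hstack([base, base + 0.1 * rng.normal(size=(200, 2)), rng.normal(size=(200, 2))])

pca = PCA(n_components=6).fit(X)
cum_var = np.cumsum(pca.explained_variance_ratio_)
# smallest number of components explaining at least 90% of the variance
n_components = int(np.searchsorted(cum_var, 0.90)) + 1
print(cum_var)
print(n_components)
```

Because `explained_variance_ratio_` is sorted in decreasing order, the cumulative sum is exactly the quantity summed by hand in the answer below, and `searchsorted` picks out the dimensionality needed for a chosen variance threshold.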
### Question 5
*How much variance in the data is explained* ***in total*** *by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.*
**Hint:** A positive increase in a specific dimension corresponds with an *increase* of the *positive-weighted* features and a *decrease* of the *negative-weighted* features. The rate of increase or decrease is based on the individual feature weights.
**Answer:**
The total variation explained by first and second component is: 0.4424 + 0.2766 = 0.719
The total variation explained by first four components is: 0.4424 + 0.2766 + 0.1162 + 0.0962 = 0.9314
Thus, the first four components could explain more than 90% of variation.
Breaking down individually,
* Component 1: The dominant feature in this component is Detergents_Paper (weight about 0.75). Milk and Grocery come in a close second (around 0.45). This agrees with our earlier observation that Detergents_Paper, Milk and Grocery are correlated.
* Component 2: The dominant feature in this component is Fresh (0.7), followed by Deli and Frozen (0.5). This component suggests that this group of items is often sold together.
* Component 3: The dominant features in this component are Fresh (0.7) and Deli (-0.7), inversely weighted. This component differentiates between exclusively fresh outlets and exclusively deli outlets.
* Component 4: The dominant feature in this component is Frozen (0.8), with Deli (-0.5) coming second, inversely weighted. This component differentiates between frozen-food outlets/ice cream parlours and delis.
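As a sanity check on the arithmetic above: the per-component ratios live in `pca.explained_variance_ratio_`, and `np.cumsum` gives the running total. A minimal sketch on synthetic stand-in data (the data below is illustrative, not the project's `good_data`):

```
import numpy as np
from sklearn.decomposition import PCA

# Synthetic 6-feature data standing in for good_data (hypothetical values)
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 6)) @ rng.normal(size=(6, 6))

pca = PCA(n_components=6).fit(X)
ratios = pca.explained_variance_ratio_   # per-component explained variance
cumulative = np.cumsum(ratios)           # running total across components

print("first two components explain:  %.4f" % cumulative[1])
print("first four components explain: %.4f" % cumulative[3])
```

When all components are kept, the ratios sum to 1, so the last cumulative entry should be 1.0 up to floating-point error.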
### Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
```
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
```
### Implementation: Dimensionality Reduction
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the *cumulative explained variance ratio* is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
In the code block below, you will need to implement the following:
- Assign the results of fitting PCA in two dimensions with `good_data` to `pca`.
- Apply a PCA transformation of `good_data` using `pca.transform`, and assign the results to `reduced_data`.
- Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
```
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2)
pca.fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
```
### Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remain unchanged when compared to a PCA transformation in six dimensions.
```
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
```
## Visualizing a Biplot
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case `Dimension 1` and `Dimension 2`). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
```
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
```
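For intuition, the arrows in a biplot are just the feature loadings — columns of `pca.components_`, commonly scaled by the square root of the explained variance (conventions vary between biplot implementations). A minimal sketch computing those arrow coordinates on synthetic data, without the `vs` helper; the feature names are placeholders:

```
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 4))
feature_names = ["Fresh", "Milk", "Grocery", "Frozen"]  # illustrative labels

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)   # the scatter points of the biplot

# One common convention: scale loadings by sqrt(explained variance)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

for name, (dx, dy) in zip(feature_names, loadings):
    print("%s arrow: (%.3f, %.3f)" % (name, dx, dy))
```

Each printed pair is the tip of one red arrow in the biplot; a long arrow along an axis marks a feature strongly correlated with that component.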
### Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on `'Milk'`, `'Grocery'` and `'Detergents_Paper'`, but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?
## Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
### Question 6
*What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?*
**Answer:**
* K-Means uses hard assignment to place each data point in a cluster. It starts with random cluster centers (the number of clusters is chosen by the user). Each iteration does two things: a) each data point is assigned to the nearest cluster center; b) the cluster centers are recalculated as the center of mass of the data points assigned to them. K-Means is faster to train than GMM, copes better with higher-dimensional data, and is easy to interpret. However, its hard assignment can lead to wrong groupings and may not work well with certain cluster shapes.
* GMM works with probabilities. Data points are not assigned directly; rather, the probability of each data point belonging to each cluster is calculated. This is soft assignment. It works well with non-linear geometric distributions. However, it is difficult to initialize for high-dimensional data, slower (many parameters must be fitted to the data), and harder to interpret.
For our dataset, I will go with K-Means, since there seems to be a clear pattern of spending involved and hence clear demarcations between clusters.
Reference:
[1] https://www.quora.com/What-is-the-difference-between-K-means-and-the-mixture-model-of-Gaussian
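The trade-offs above can be sketched empirically. The snippet below — using synthetic blobs rather than the customer data, with scikit-learn's `GaussianMixture` standing in for GMM clustering — fits both algorithms and compares mean silhouette scores; all data and parameters are illustrative:

```
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=42)

# Hard assignment: each point belongs to exactly one cluster
km_labels = KMeans(n_clusters=3, random_state=42, n_init=10).fit_predict(X)

# Soft assignment: GMM yields per-cluster probabilities; argmax for a hard label
gmm = GaussianMixture(n_components=3, random_state=42).fit(X)
gmm_probs = gmm.predict_proba(X)      # shape (300, 3); each row sums to 1
gmm_labels = gmm_probs.argmax(axis=1)

print("KMeans silhouette:", silhouette_score(X, km_labels))
print("GMM silhouette:   ", silhouette_score(X, gmm_labels))
```

On well-separated blobs the two scores are typically close; the difference shows up on elongated or overlapping clusters, where GMM's covariance modelling helps.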
### Implementation: Creating Clusters
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known *a priori*, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's *silhouette coefficient*. The [silhouette coefficient](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the *mean* silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
- Fit a clustering algorithm to the `reduced_data` and assign it to `clusterer`.
- Predict the cluster for each data point in `reduced_data` using `clusterer.predict` and assign them to `preds`.
- Find the cluster centers using the algorithm's respective attribute and assign them to `centers`.
- Predict the cluster for each sample data point in `pca_samples` and assign them to `sample_preds`.
- Import `sklearn.metrics.silhouette_score` and calculate the silhouette score of `reduced_data` against `preds`.
- Assign the silhouette score to `score` and print the result.
```
# TODO: Apply your clustering algorithm of choice to the reduced data
from sklearn.metrics import silhouette_score
from sklearn.cluster import KMeans
def calculate_clusters(num_clusters):
clusterer = KMeans(n_clusters=num_clusters, random_state=42)
clusterer.fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.cluster_centers_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, clusterer.labels_, metric='euclidean')
print(num_clusters, score)
return preds, centers, sample_preds
for i in range(9):
calculate_clusters(i + 2)
# Final cluster size of 2
preds, centers, sample_preds = calculate_clusters(2)
```
### Question 7
*Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?*
**Answer:**
Silhouette scores for cluster counts from 2 to 10 were tried, with the best score obtained for 2 clusters (0.41).
### Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
```
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
```
### Implementation: Data Recovery
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to *the average customer of that segment*. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
- Apply the inverse transform to `centers` using `pca.inverse_transform` and assign the new centers to `log_centers`.
- Apply the inverse function of `np.log` to `log_centers` using `np.exp` and assign the true centers to `true_centers`.
```
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
import seaborn as sns
sns.heatmap((true_centers-data.mean())/data.std(ddof=1), annot=True, cbar=False, square=True)
```
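The inverse chain applied above — the PCA projection undone by `pca.inverse_transform`, then the log scaling undone by `np.exp` — can be verified on made-up positive data: with all components kept the round trip recovers the original values exactly, and a cluster-center-like point can be mapped back the same way. Everything below is a hypothetical stand-in for the project's variables:

```
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical positive spending values, like the wholesale data
rng = np.random.RandomState(1)
spending = rng.lognormal(mean=8, sigma=1, size=(50, 4))

log_data = np.log(spending)              # same scaling step as the project
pca = PCA(n_components=4).fit(log_data)  # keep ALL components -> exact round trip
projected = pca.transform(log_data)

# Undo both transforms in reverse order
recovered = np.exp(pca.inverse_transform(projected))

# A "cluster center" analogue: mean in PCA space, mapped back to spending units
center = projected.mean(axis=0, keepdims=True)
true_center = np.exp(pca.inverse_transform(center))
print("recovered center spending:", np.round(true_center, 2))
```

With only two components kept (as in the project), the recovery would instead be an approximation whose error depends on the variance discarded.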
### Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. *What set of establishments could each of the customer segments represent?*
**Hint:** A customer who is assigned to `'Cluster X'` should best identify with the establishments represented by the feature set of `'Segment X'`.
**Answer:**
The normalized cluster expenditures are given by the heat map. Interpreting these numbers:
* Segment 0: Spending in this segment is dominated by Fresh and Frozen, followed by roughly equal but lower spending on Milk, Grocery and Detergents_Paper. This looks like the profile of an outlet selling perishable items.
* Segment 1: Spending in this segment is dominated by Grocery, Milk and Detergents_Paper. This definitely characterizes a grocer selling retail goods.
### Question 9
*For each sample point, which customer segment from* ***Question 8*** *best represents it? Are the predictions for each sample point consistent with this?*
Run the code block below to find which cluster each sample point is predicted to be.
```
# Display the predictions
for i, pred in enumerate(sample_preds):
    print("Sample point", i, "predicted to be in Cluster", pred)
```
**Answer:**
This was my earlier prediction (copy pasted from above)
* Customer #1 has the maximum spending on Fresh in its category, but also spends heavily across all categories (80%+). This is most probably a supermarket.
* Customer #2 has high spending on Frozen — 11 times the category mean and the maximum in its category. It seems to be primarily an ice cream parlour (with moderate spending on Deli, so it appears to keep perishable items).
* Customer #3 has the highest spending on Grocery (11 times the mean) and Detergents_Paper (14 times the mean). It seems to be primarily a grocery store, also stocking items like milk, detergents and paper.
Post clustering, this is what the prediction was:
* Customer #1: Grocer.
* Customer #2: Sells perishable items.
* Customer #3: Grocer.
Customers #2 and #3 are consistent with my earlier guess; Customer #1 is not.
## Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the ***customer segments***, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which *segment* that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the ***customer segments*** to a hidden variable present in the data, to see whether the clustering identified certain relationships.
### Question 10
Companies will often run [A/B tests](https://en.wikipedia.org/wiki/A/B_testing) when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. *How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?*
**Hint:** Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
**Answer:**
From the cluster analysis, we found that segment 0 deals with perishable items. Hence one can argue that changing the delivery service from 5 to 3 days a week will be difficult for this segment: these items cannot be stored for more than a day or two without degradation.
For segment 1 — grocery, milk and detergents — the change might be doable without any decrease in customer satisfaction, since milk can be refrigerated for 2-3 days.
Hence, going by our analysis, our hunch is that segment 1 customers will react positively to the change in service, but not segment 0 customers.
To test this out, we can do a couple of things:
* Initially carry out a survey to gauge customers' willingness to accept the change
* Carry out A/B testing with a smaller representative sample of customers
### Question 11
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a ***customer segment*** it best identifies with (depending on the clustering algorithm applied), we can consider *'customer segment'* as an **engineered feature** for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a ***customer segment*** to determine the most appropriate delivery service.
*How can the wholesale distributor label the new customers using only their estimated product spending and the* ***customer segment*** *data?*
**Hint:** A supervised learner could be used to train on the original customers. What would be the target variable?
**Answer:**
We can treat this as a classification problem where we have to classify a new customer into segment 0 or 1.
We can use any of several classification algorithms, such as SVM, KNN, or decision trees/random forests.
The features are the spending estimates and the target is the segment.
The model can be trained on the existing customer data.
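The steps above can be sketched end to end on synthetic data: cluster the existing customers, use the resulting segment labels as the target for a supervised learner, then classify new customers from their spending alone. The classifier choice and all data below are illustrative:

```
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Existing customers (stand-in for the PCA-reduced spending data)
blob_centers = [[-5, -5], [5, 5]]
X_existing, _ = make_blobs(n_samples=400, centers=blob_centers, random_state=0)

# Step 1: derive the engineered 'customer segment' feature via clustering
segments = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(X_existing)

# Step 2: train a supervised learner with the segment as the target variable
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_existing, segments)

# Step 3: label ten new customers from their (estimated) spending alone
X_new, _ = make_blobs(n_samples=10, centers=blob_centers, random_state=1)
new_segments = clf.predict(X_new)
print("new customer segments:", new_segments)
```

The same pattern applies with the project's variables: fit on `reduced_data` with `preds` as the target, then predict on the PCA-projected spending estimates of the new customers.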
### Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the `'Channel'` and `'Region'` features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the `'Channel'` feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either `'HoReCa'` (Hotel/Restaurant/Cafe) or `'Retail'` in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
```
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
```
### Question 12
*How well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers? Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution? Would you consider these classifications as consistent with your previous definition of the customer segments?*
**Answer:**
Data points 1 and 2 are consistent with this new classification between HoReCa and Retailer. Data point 1 indeed represents the segment dealing with perishable items, which is the HoReCa case, and data point 2 was our grocer, which here is the retailer.
Data point 0 has been misclassified here as HoReCa, though earlier we had classified it as a grocer.
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
## Movie Review Sentiment Analysis - The Lion King (2019) EDA and Visualization
## Agenda
- Data Extraction (Web scraping)
- Visualization
- Regular Expression for special character removal
- Removal of accented characters and expanding contractions
- Tokenisation
- Stop Word Removal
- Stemming and Lemmatization
- TF-IDF Matrix
- Clustering
- SVD using scikit-learn
#### Import Libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import requests
import time
import re
import random
random.seed(123)
#ignore warnings
import warnings
warnings.filterwarnings('ignore')
```
Data Extraction / Web Scraping
------
#### NOTE: Time to execute below chunk = 25 min
#### Load from local file 'data_0.csv'
```
data=pd.read_csv('data_0.csv')
data.head()
# headers = {
# 'Referer': 'https://www.rottentomatoes.com/m/the_lion_king_2019/reviews?type=user',
# 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36',
# 'X-Requested-With': 'XMLHttpRequest',
# }
# url = 'https://www.rottentomatoes.com/napi/movie/9057c2cf-7cab-317f-876f-e50b245ca76e/reviews/user'
# s = requests.Session() # A Requests session. Provides cookie persistence, connection-pooling, and configuration.
# start_time = time.time()
# data = pd.DataFrame() #initializing empty dataframe, payload parameter values
# end = ''
# start = ''
# for i in list(range(1,301)):
# payload = {
# 'direction': 'next',
# 'endCursor': end,
# 'startCursor': start,
# }
# r = s.get(url, headers=headers, params=payload) # GET Call. Sends a GET request. Returns :class:`Response` object.
# data= data.append(r.json()['reviews'], ignore_index=True) #append Review info to dataframe
# end=r.json()['pageInfo']['endCursor'] # update 'start' key for new page
# time.sleep(5)
# print('Web scrap completed in %s s' % (str(time.time()-start_time)))
# Store to Data local to local file
# data.to_csv('data.csv', index=False)
```
Data Pre-processing and EDA
---------
```
data.shape
data.dtypes
data.describe()
data.describe(exclude='float64')
import ast
data['userId'] = data['user'].apply(lambda x: ast.literal_eval(x)['userId']) # string literal evaluation --> Data structure (dictionary)
data.describe(exclude='float64')
# Setting 'userId' as Index
data.set_index('userId', inplace=True)
data.head()
data.drop(['displayName','displayImageUrl','hasProfanity','hasSpoilers','rating','isSuperReviewer',
'isVerified','timeFromCreation','updateDate','user',],
axis=1,inplace=True)
data.columns
data.head()
```
### Feature Engineering
1. Convert *score* --> **sentiment**:\
i. if score > 3, **sentiment** = 0 [POSITIVE Review]\
ii. if score <= 3, **sentiment** = 1 [NEGATIVE Review]\
2. Organize *createDate* --> **date**, **time**
**NOTE**: our Target Level for prediction is Negative Review.
```
# Predict sentiment label from score: 0 = positive (score > 3), 1 = negative (score <= 3)
data['sentiment'] = data['score'].apply(lambda x: 0 if x>3 else 1)
data.describe()
print('Count of Review Sentiments:\n')
print(data['sentiment'].value_counts(),'\n')
print('Frequency % of Review Sentiments:\n')
print(data['sentiment'].value_counts(normalize=True)*100) # <30% of All reviews are Negative
data.drop('score', axis=1,inplace=True)
data.head()
from datetime import datetime as dt
data['createDate'] = pd.to_datetime(data['createDate'], infer_datetime_format=True)
data['date'] = data['createDate'].dt.date # Date of review
data['time'] = data['createDate'].dt.time # Time of review
data['weekday'] = data['createDate'].dt.weekday #weekday number : Monday=0, Sunday=6
data['weekday'].value_counts()
data.dtypes
data.describe(exclude='int64')
data.drop('createDate', axis=1,inplace=True) #Drop original Datetime variable
data.dtypes
# Reviews begin on 1st August 2019, collected till 18th August 2019
x = data['date'].value_counts().sort_index().index.values
y = data['date'].value_counts().sort_index().values
print(pd.DataFrame({'Date':x,'Review Count': y}).head(),'\n')
print(pd.DataFrame({'Date':x,'Review Count': y}).tail(),)
with sns.axes_style('white'):
sns.set(rc={'figure.figsize':(10,5)})
plt.plot(list(range(1,19)),y, color='blue')
plt.xlabel('Day since Release', fontsize=15)
plt.ylabel('Review Count', fontsize=15)
xtick_location = list(range(1,19))
xtick_labels = list(range(1,19))
plt.xticks(ticks=xtick_location, labels=xtick_labels, rotation=0, fontsize=12, horizontalalignment='center', alpha=.9)
plt.yticks(fontsize=12,)
plt.title("Reviews Added Daily", fontsize=18)
plt.grid(axis='both', alpha=.3)
plt.show()
# Plot shows a rise in reviews during the first week, followed by a decline after Day 7
with sns.axes_style('white'):
sns.set(rc={'figure.figsize':(10,5)})
g = sns.countplot(x='weekday', data=data, palette='Set1', saturation=0.7)
v_list = [str(round(i,1))+' %' for i in (data['weekday'].value_counts(normalize=True)*100).sort_index()]
for v, p in zip(v_list, g.patches): # Annotate the point 'xy' with Frequency%
g.annotate(v, (p.get_x() + p.get_width() / 2., p.get_height()), ha='center', va='center',\
xytext = (0, 10), textcoords = 'offset points')
xtick_location = list(range(0,7))
xtick_labels = ['Mon','Tue','Wed','Thu','Fri','Sat','Sun']
plt.xticks(ticks=xtick_location, labels=xtick_labels, rotation=0, fontsize=12, horizontalalignment='center', alpha=.9)
plt.xlabel('Weekday(s)',fontsize=15)
plt.ylabel('Movie Review(s)',fontsize=15)
plt.title('Frequency % of Reviews by Weekday', fontsize=18)
plt.show()
# Shows a dip in reviews (viewership) on Tuesdays and Wednesdays
# Trend of Positive and Negative Reviews grouped by Weekday(s)
with sns.axes_style('white'):
sns.set(rc={'figure.figsize':(10,5)})
g = sns.countplot("weekday", data=data, hue='sentiment',hue_order=[1,0], palette='Set1')
v_list = [str(int(i))+' %' for i in data.groupby('weekday')['sentiment'].value_counts(normalize=True)*100]
v_1 = [y for x,y in enumerate(v_list) if x%2==0] # Frequency% of Positive Review(s)
v_2 = [y for x,y in enumerate(v_list) if x%2!=0] # Frequency% of Negative Review(s)
v_list = v_2+v_1
for v, p in zip(v_list, g.patches): # Annotate the point 'xy' with Frequency%
g.annotate(v, (p.get_x() + p.get_width() / 2.,
p.get_height()), ha='center', va='center', xytext = (0, 10), textcoords = 'offset points')
xtick_location = list(range(0,7))
xtick_labels = ['Mon','Tue','Wed','Thu','Fri','Sat','Sun']
plt.xticks(ticks=xtick_location, labels=xtick_labels, rotation=0, fontsize=12, horizontalalignment='center', alpha=.9)
plt.xlabel('Weekday(s)',fontsize=15)
plt.ylabel('Movie Review(s)',fontsize=15)
plt.title('Frequency % of Reviews by Weekday, Sentiment', fontsize=18)
g.legend(['1-Negative', '0-Positive'], loc=0)
```
#### Calendar Heat Map of Count of Negative Review(s) over 18 days of Available data
```
# Preparing Dateframe for Calendar Heat Map
neg_review=data[data['sentiment']>0].groupby('date')['sentiment'].value_counts() # Count of Negative Reviews
data_cmap = pd.DataFrame(data={'neg_review':neg_review})
data_cmap.index = data_cmap.index.droplevel(1) # Drop Second Index level
index=list(data_cmap.index.values)
index = [np.datetime64(i) for i in index] # converting timestamp to np.datetime64()
data_cmap.index = index
data_cmap.head() # Date indexed Dataframe
# !pip install calmap
import calmap
# Plot
plt.figure(figsize=(16,10), dpi= 80)
calmap.calendarplot(data=data_cmap['2019']['neg_review'] ,fig_kws={'figsize': (16,10)},\
yearlabel_kws={'color':'black', 'fontsize':18})
plt.title('Calendar Heat Map of Negative Reviews (last 18 days)', fontsize=18)
plt.xlabel('Month', fontsize=15)
plt.show()
```
**NOTE:** The first 7 days had the largest contribution of negative reviews.
```
data.dtypes
# Dropping 'date', 'time' info
data.drop(['date','time','weekday'], axis=1, inplace=True)
data.head()
```
Text Preprocessing
--------
```
data.dtypes
# !pip install -U spacy
# !python -m spacy download en_core_web_sm
import spacy
nlp = spacy.load("en_core_web_sm")
import re
import random
random.seed(123)
#Duplicating the original text extracted before proceeding with preprocessing steps
original_data = data.copy()
print(data.keys())
print(original_data.keys())
```
### LowerCase all text
```
data['review'] = [text.strip().lower() for text in data['review']] # remove Trailing/Leading whitespaces
data['review'][:10]
# eg of 'review' to preprocess
data['review'][100]
```
### Removal/Replacement of: Contractions, Accented Characters, Symbols/Markdown Characters
**Contraction-Expansion Map:**
```
CONTRACTION_MAP = {
"ain't": "is not", "aren't": "are not", "can't": "cannot", "can't've": "cannot have", "'cause": "because", "could've": "could have",
"couldn't": "could not", "couldn't've": "could not have", "didn't": "did not", "doesn't": "does not", "don't": "do not", "hadn't": "had not",
"hadn't've": "had not have", "hasn't": "has not", "haven't": "have not", "he'd": "he would", "he'd've": "he would have",
"he'll": "he will", "he'll've": "he will have", "he's": "he is","how'd": "how did","how'd'y": "how do you","how'll": "how will","how's": "how is",
"I'd": "I would","I'd've": "I would have","I'll": "I will","I'll've": "I will have","I'm": "I am","I've": "I have","i'd": "i would",
"i'd've": "i would have","i'll": "i will","i'll've": "i will have","i'm": "i am","i've": "i have","isn't": "is not","it'd": "it would",
"it'd've": "it would have","it'll": "it will","it'll've": "it will have","it's": "it is","let's": "let us","ma'am": "madam","mayn't": "may not",
"might've": "might have","mightn't": "might not","mightn't've": "might not have","must've": "must have","mustn't": "must not",
"mustn't've": "must not have","needn't": "need not","needn't've": "need not have","o'clock": "of the clock","oughtn't": "ought not",
"oughtn't've": "ought not have","shan't": "shall not","sha'n't": "shall not","shan't've": "shall not have","she'd": "she would",
"she'd've": "she would have","she'll": "she will","she'll've": "she will have","she's": "she is","should've": "should have","shouldn't": "should not",
"shouldn't've": "should not have","so've": "so have","so's": "so as","that'd": "that would","that'd've": "that would have","that's": "that is",
"there'd": "there would","there'd've": "there would have","there's": "there is","they'd": "they would","they'd've": "they would have",
"they'll": "they will","they'll've": "they will have","they're": "they are","they've": "they have","to've": "to have","wasn't": "was not",
"we'd": "we would","we'd've": "we would have","we'll": "we will","we'll've": "we will have","we're": "we are","we've": "we have","weren't": "were not",
"what'll": "what will","what'll've": "what will have","what're": "what are","what's": "what is","what've": "what have","when's": "when is",
"when've": "when have","where'd": "where did","where's": "where is","where've": "where have","who'll": "who will","who'll've": "who will have",
"who's": "who is","who've": "who have","why's": "why is","why've": "why have","will've": "will have","won't": "will not","won't've": "will not have",
"would've": "would have","wouldn't": "would not","wouldn't've": "would not have","y'all": "you all","y'all'd": "you all would",
"y'all'd've": "you all would have","y'all're": "you all are","y'all've": "you all have","you'd": "you would","you'd've": "you would have",
"you'll": "you will","you'll've": "you will have","you're": "you are","you've": "you have"
}
# from contractions import CONTRACTION_MAP
import unicodedata
def expand_contractions(text, contraction_mapping=CONTRACTION_MAP):
# Create 're' object by re.compile(pattern, repl, string)
contractions_pattern = re.compile('({})'.format('|'.join(contraction_mapping.keys())),
flags=re.IGNORECASE|re.DOTALL)
# re.IGNORECASE : Make the search case-insensitive
# re.DOTALL: Make the '.' special character match any character at all, including a newline
# To Expand the Contracted Words
def expand_match(contraction):
match = contraction.group(0)
expanded_contraction = contraction_mapping.get(match)\
if contraction_mapping.get(match)\
else contraction_mapping.get(match.lower())
return expanded_contraction # match, 'replaced by -->',expanded_contraction
# string substitution: regex.sub(replacement, subject)
expanded_text = contractions_pattern.sub(expand_match, text)
expanded_text = re.sub(pattern="'", repl="", string=expanded_text) # Remove apostrophe
return expanded_text # Returns expanded text
# Removes accented characters and emojis too
def remove_accented_chars(text):
text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8', 'ignore')
return text
def scrub_words(text):
#Replace \xa0 (non-breaking space) characters in text
text = re.sub('\xa0', ' ', text)
#Replace non ascii, non-Words and Digits/Numerals
text = re.sub("(\\W|\\d)",' ',text) # \W: non-alphanumeric character, \d: decimal digit
#Replace newline characters and the following text until a space
text = re.sub('\n(\w*?)[\s]', '', text) # \n: newline char, \w: any alphanumeric character,
# *: matches zero or more occurrences, ?: matches Zero or One occurrence of the pattern left to it.
# (a|b|c)xz: group sub-patterns to match, [abc]: set of characters to match, \s: whitespace
# |: is used for alternation a|b
#Remove html markup
text = re.sub("<.*?>", ' ', text)
return text
# Test: expand_contractions()
txt = "They aren't sick, you shouldn't worry!"
print(expand_contractions(txt),'\n')
# Test: remove_accented_chars()
txt = 'Demain, dès l’aube, à l’heure où blanchit la campagne, Je partirai. J’irai par la forêt, j’irai par la montagne.'
print('Non-Accented Text:',remove_accented_chars(txt),'\n')
# Test: scrub_words()
txt = "Love, Love, \n\n\t, Love this movie!!😍😍😍❤️❤️❤️,&*(@)$&Lion King is the best#(@#$)"
print('Scrubbed Text:',scrub_words(txt))
print('Average Review length:', np.mean([len(i) for i in data['review']]))
```
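For reference, here is a minimal, self-contained sketch of the contraction-expansion idea with a toy two-entry mapping (the notebook's full `contraction_mapping` is assumed to be defined earlier):

```python
import re

# Toy subset standing in for the notebook's full contraction_mapping
toy_mapping = {"aren't": "are not", "shouldn't": "should not"}

def expand_contractions_demo(text, mapping=toy_mapping):
    pattern = re.compile('({})'.format('|'.join(map(re.escape, mapping))),
                         flags=re.IGNORECASE | re.DOTALL)
    def expand_match(m):
        match = m.group(0)
        # Fall back to the lower-cased key so "Aren't" still resolves
        return mapping.get(match) or mapping.get(match.lower())
    return pattern.sub(expand_match, text)

print(expand_contractions_demo("They aren't sick, you shouldn't worry!"))
# → They are not sick, you should not worry!
```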
#### Invoking the functions defined above
```
# data['review']= [expand_contractions(re.sub('’', "'", text)) for text in data['review']]
data['review'] = data['review'].apply(lambda x: expand_contractions(re.sub('’', "'", x)))
# Apply remove_accented_chars()
data['review'] = data['review'].apply(lambda x: remove_accented_chars(re.sub('’', "'", x)))
# Apply scrub_words()
data['review'] = data['review'].apply(lambda x: scrub_words(re.sub('’', "'", x)))
```
#### Checking the integrity of the data after initial preprocessing steps
```
print(len(data['review']))
print(len(original_data['review']),'\n')
print('Original Text:', original_data['review'][2784])
print("-"*20)
print('Processed Text:',data['review'][2784])
```
### Adding new column "word_count" which specifies the number of tokens in each document
```
data['word_count'] = data['review'].apply(lambda x: len(x.split(' '))) # tokenize words separated by single space
data[['review','word_count']].iloc[1000:1005,:]
print('Mean Review Length:',data['word_count'].mean())
print('Minimum Review Length:',data['word_count'].min())
print('Max Review Length:',data['word_count'].max())
```
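Note that `split(' ')` counts the empty strings produced by consecutive spaces, so `word_count` can overestimate slightly; `split()` with no argument collapses runs of whitespace:

```python
text = 'love  this movie'     # note the double space
print(len(text.split(' ')))   # → 4 (includes an empty token)
print(len(text.split()))      # → 3
```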
### Lemmatization, Stemming, Tokenization and Stopwords.
```
## load spacy's English stopwords as variable called 'stopwords'
stopwords = spacy.lang.en.stop_words.STOP_WORDS
print('Number of stop words: %d' % len(stopwords))
print('First ten stop words: %s' % list(stopwords)[:10])
# stopwords.remove('no')
# stopwords.remove('not')
len(stopwords) # stopwords is a set()
## Adding Custom stopwords to the spacy stopword list
for w in stopwords:
nlp.vocab[w].is_stop = True
## Use NLTK for stemming.
## load nltk's SnowballStemmer as variable 'stemmer'
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer("english")
# Here I define a tokenizer and stemmer which returns the list of stems (excluding stop words) in the text it is passed
def tokenize_and_stem(doc, remove_stopwords = True):
# first tokenize by sentence, then by word to ensure that punctuation is caught as its own token
if remove_stopwords:
tokens = [word.text for word in doc if not word.is_stop]
else:
tokens = [word.text for word in doc]
#print(tokens[:5])
filtered_tokens = []
# filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation)
for token in tokens:
if re.search('[a-zA-Z]', token):
filtered_tokens.append(token)
#print("ended re.search")
stems = [stemmer.stem(t) for t in filtered_tokens]
#print("returning stems")
return stems
def tokenize_and_lemmatize(doc, remove_stopwords = True):
# spaCy lower-cases each word and reduces inflected forms (past tense,
# gerunds, other tenses) to the base form. Pronouns are normalized to '-PRON-'.
if remove_stopwords:
tokens = [word for word in doc if not word.is_stop]
else:
tokens = [word for word in doc]
#print("Completed tokenization")
filtered_tokens = []
# filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation)
for token in tokens:
if re.search('[a-zA-Z]', token.text):
filtered_tokens.append(token)
#print("ended re.search")
lemma = [t.lemma_ for t in filtered_tokens]
#print("returning lemms")
return lemma
def tokenize_only(doc, remove_stopwords = True):
# first tokenize by sentence, then by word to ensure that punctuation is caught as its own token
if remove_stopwords:
tokens = [word.text for word in doc if not word.is_stop]
else:
tokens = [word.text for word in doc]
filtered_tokens = []
# filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation)
for token in tokens:
if re.search('[a-zA-Z]', token):
filtered_tokens.append(token)
return filtered_tokens
```
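The letter-filtering step shared by all three functions above can be exercised on its own with plain strings (no spaCy needed):

```python
import re

def filter_alpha_tokens(tokens):
    """Keep only tokens containing at least one letter, as in the functions above."""
    return [t for t in tokens if re.search('[a-zA-Z]', t)]

print(filter_alpha_tokens(['movie', '42', '!!', "it's", '3d']))
# → ['movie', "it's", '3d']
```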
We create four separate lists:
1. Clean Review Lemmatized (w/o stopwords)
2. Clean Review Stemmed (w/o stop words)
3. Review Lemmatized (w stopwords)
4. Review Stemmed (w stopwords)
**NOTE:** Time to execute the chunk below ≈ 503.5 s (9 min)
## Load from local file 'data_txt_preprocessed.csv'
```
data = pd.read_csv('data_txt_preprocessed.csv', index_col='userId')
# data.head()
import ast
# string literal evaluation --> data structure
data['clean_review_stemmed'] = data['clean_review_stemmed'].apply(lambda x: ast.literal_eval(x))
data['clean_review_lemmatized'] = data['clean_review_lemmatized'].apply(lambda x: ast.literal_eval(x))
data['clean_review_tokenized'] = data['clean_review_tokenized'].apply(lambda x: ast.literal_eval(x))
data['review_stemmed'] = data['review_stemmed'].apply(lambda x: ast.literal_eval(x))
data['review_lemmatized'] = data['review_lemmatized'].apply(lambda x: ast.literal_eval(x))
data['review_tokenized'] = data['review_tokenized'].apply(lambda x: ast.literal_eval(x))
data.iloc[1000:1005,:]
```
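Why `ast.literal_eval`: writing a list-valued column to CSV stringifies each list, so reading it back yields strings like `"['love', 'movie']"`; `literal_eval` safely recovers the original Python structure:

```python
import ast

cell = "['love', 'movie', 'great']"   # what a list column looks like after a CSV round-trip
tokens = ast.literal_eval(cell)
print(type(tokens).__name__, tokens)  # → list ['love', 'movie', 'great']
```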
```
# Vocab List w.o stopwords
clean_vocab_lemmatized = []
clean_vocab_stemmed = []
clean_vocab_tokenized = []
# Vocab List w stopwords
all_vocab_lemmatized = []
all_vocab_tokenized = []
for i,j,k in zip(data['clean_review_lemmatized'],data['clean_review_stemmed'], data['clean_review_tokenized']):
clean_vocab_lemmatized.extend(i)
clean_vocab_stemmed.extend(j)
clean_vocab_tokenized.extend(k)
for i,j in zip(data['review_lemmatized'], data['review_tokenized']):
all_vocab_lemmatized.extend(i)
all_vocab_tokenized.extend(j)
print(len(clean_vocab_lemmatized))
print(len(clean_vocab_stemmed))
print(len(clean_vocab_tokenized),'\n')
print(len(all_vocab_lemmatized))
print(len(all_vocab_tokenized))
print(data['review'][1000],'\n')
print(data['clean_review_lemmatized'][1000],'\n')
print(data['clean_review_stemmed'][1000],'\n')
print(data['review_stemmed'][1000],'\n')
print(data['review_lemmatized'][1000],'\n')
```
Text Data Visualization
----------
```
# Creating Dataframe for tokens in Review's Vocabulary
all_vocab_frame = pd.DataFrame({'words': all_vocab_tokenized}, index = all_vocab_lemmatized)
print ('There are ' + str(all_vocab_frame.shape[0]) + ' words in all_vocab_frame')
clean_vocab_frame = pd.DataFrame({'words': clean_vocab_tokenized}, index = clean_vocab_lemmatized)
print ('There are ' + str(clean_vocab_frame.shape[0]) + ' words in clean_vocab_frame')
```
### Plotting Most frequent words before and after stopword removal
```
values, counts = np.unique(clean_vocab_frame, return_counts=True)
all_values, all_counts = np.unique(all_vocab_frame, return_counts=True)
sorted_indices = np.argsort(-counts)
print(sorted_indices)
all_sorted_indices = np.argsort(-all_counts)
print(all_sorted_indices)
values = values[sorted_indices]
counts = counts[sorted_indices]
all_values = all_values[all_sorted_indices]
all_counts = all_counts[all_sorted_indices]
font = {'family' : 'DejaVu Sans',
'weight' : 'bold'}
plt.rc('font', **font)
plt.figure(figsize=(15,10))
# Frequency plot of words w/o stopwords
plt.subplot(1,2,1)
plt.barh(values[:15], counts[:15], color='blue')
plt.gca().invert_yaxis()
plt.yticks(fontsize=15)
plt.title('Word Frequency: w/o Stopwords', fontsize=20)
# Frequency plot of words with stopwords
plt.subplot(1,2,2)
plt.barh(all_values[:15], all_counts[:15], color='mediumspringgreen')
plt.gca().invert_yaxis()
plt.yticks(fontsize=15)
plt.title('Word Frequency: w. Stopwords', fontsize=20)
plt.show()
```
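The `np.unique(..., return_counts=True)` plus negative-`argsort` pattern used above is equivalent to `collections.Counter.most_common`, which may be easier to read for plain token lists:

```python
from collections import Counter

tokens = ['movie', 'love', 'movie', 'great', 'movie', 'love']
top = Counter(tokens).most_common(2)
print(top)  # → [('movie', 3), ('love', 2)]
```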
#### Observations from the Frequency Plots
1) The most frequent words in the two graphs are quite different.\
2) The words in graph 1 (without stopwords) better describe the themes within the reviews
### Wordcloud of Review words (Lemmatized)
```
# Word Cloud string
clean_review_wordcloud=[]
for i in data['clean_review_lemmatized']:
clean_review_wordcloud+=i
clean_string = " ".join(clean_review_wordcloud)
# !pip install wordcloud
from wordcloud import WordCloud
wordcloud = WordCloud(max_font_size=100, width = 600,height=300,max_words=50, background_color="white").generate(clean_string)
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(30,50))
plt.imshow(wordcloud)
plt.axis(False)
plt.show()
```
### Word Frequency by Sentiment Groups
```
data.head(2)
# group by sentiment
grouped_text = data.groupby('sentiment')['clean_review_tokenized']
# Fetch entire tokenized text for specific group
from itertools import chain
frequent_words_sentiment_df = pd.DataFrame(columns=["values", "counts", "sentiment"]) # a list keeps column order deterministic
for num in range(2): # 2 Sentiment levels
values, counts = np.unique(list(chain.from_iterable(grouped_text.get_group(num))), return_counts=True)
# Create single List of Tokenized Reviews; lazily evaluates by taking a single iterable argument at a time
sorted_indices = np.argsort(-counts) # returns indices of sorted 'counts' in reversed order
frequent_words_sentiment_df = frequent_words_sentiment_df.append({"values":values[sorted_indices], "counts":counts[sorted_indices], "sentiment": num}, ignore_index=True)
# Append word values in decreasing count order grouped by sentiment
frequent_words_sentiment_df.head() # words sorted by counts order
font = {'family' : 'DejaVu Sans', 'weight' : 'bold', 'size': 15}
plt.rc('font', **font)
plt.figure(figsize=(18,10))
plt.subplot(1,2,1)
plt.barh(frequent_words_sentiment_df.loc[1,'values'][:15], frequent_words_sentiment_df.loc[1,'counts'][:15], color='royalblue')
plt.gca().invert_yaxis()
plt.yticks(fontsize=15)
plt.title('Words Frequency: Sentiment 1', fontsize='20')
plt.subplot(1,2,2)
plt.barh(frequent_words_sentiment_df.loc[0,'values'][:15], frequent_words_sentiment_df.loc[0,'counts'][:15], color='blue')
plt.gca().invert_yaxis()
plt.title('Words Frequency: Sentiment 0', fontsize='20')
plt.yticks(fontsize=15)
plt.show()
```
#### Observations:
1. Generic words common to both sentiment groups cloud the differences between the classes (top words common to both: 'movie', 'original', 'like', etc.).
2. Difference in counts of occurrences of key words:
```
print('Count of "good" in Sentiment-1:',frequent_words_sentiment_df.loc[1,'counts'][np.where(frequent_words_sentiment_df.loc[1,'values']=='good')][0])
print('Count of "good" in Sentiment-0:',frequent_words_sentiment_df.loc[0,'counts'][np.where(frequent_words_sentiment_df.loc[0,'values']=='good')][0],'\n')
print('Count of "no" in Sentiment-1:',frequent_words_sentiment_df.loc[1,'counts'][np.where(frequent_words_sentiment_df.loc[1,'values']=='no')][0])
print('Count of "no" in Sentiment-0:',frequent_words_sentiment_df.loc[0,'counts'][np.where(frequent_words_sentiment_df.loc[0,'values']=='no')][0],'\n')
print('Count of "bad" in Sentiment-1:',frequent_words_sentiment_df.loc[1,'counts'][np.where(frequent_words_sentiment_df.loc[1,'values']=='bad')][0])
print('Count of "bad" in Sentiment-0:',frequent_words_sentiment_df.loc[0,'counts'][np.where(frequent_words_sentiment_df.loc[0,'values']=='bad')][0],'\n')
print('Count of "boring" in Sentiment-1:',frequent_words_sentiment_df.loc[1,'counts'][np.where(frequent_words_sentiment_df.loc[1,'values']=='boring')][0])
print('Count of "boring" in Sentiment-0:',frequent_words_sentiment_df.loc[0,'counts'][np.where(frequent_words_sentiment_df.loc[0,'values']=='boring')][0],'\n')
print('Count of "lacked" in Sentiment-1:',frequent_words_sentiment_df.loc[1,'counts'][np.where(frequent_words_sentiment_df.loc[1,'values']=='lacked')][0])
print('Count of "lacked" in Sentiment-0:',frequent_words_sentiment_df.loc[0,'counts'][np.where(frequent_words_sentiment_df.loc[0,'values']=='lacked')][0],'\n')
```
### Word Frequency of Pure Negative, Pure Positive tokens of the Sentiment groups
- To better understand the divergence of the sentiments established between the two groups, we must remove the intersecting tokens present in both classes.
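The "pure" vocabularies below are plain set differences; in isolation (with hypothetical top-word sets for the two groups):

```python
# Hypothetical top-word sets for the two sentiment groups
sent1_vocab = {'bad', 'boring', 'movie', 'good'}   # sentiment = 1
sent0_vocab = {'great', 'love', 'movie', 'good'}   # sentiment = 0
pure_negative = sent1_vocab - sent0_vocab
pure_positive = sent0_vocab - sent1_vocab
print(sorted(pure_negative), sorted(pure_positive))
# → ['bad', 'boring'] ['great', 'love']
```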
```
# Review Words in sentiment=1 not in sentiment=0
neg_tokens = list(set(frequent_words_sentiment_df.loc[1,'values'])-set(frequent_words_sentiment_df.loc[0,'values']))
# 1136 Pure Negative Words found
neg_index = np.array([list(frequent_words_sentiment_df.loc[1,'values']).index(i) for i in neg_tokens]) # index location
neg_counts = frequent_words_sentiment_df.loc[1,'counts'][neg_index] # counts of words
neg_tokens = np.array(neg_tokens)
# Sort Tokens by Descending Count order
index = np.argsort(-neg_counts)
neg_counts = neg_counts[index]
neg_tokens = neg_tokens[index]
# Review Words in sentiment=0 not in sentiment=1
pos_tokens = list(set(frequent_words_sentiment_df.loc[0,'values'])-set(frequent_words_sentiment_df.loc[1,'values']))
# 1136 Pure positive Words found
pos_index = np.array([list(frequent_words_sentiment_df.loc[0,'values']).index(i) for i in pos_tokens]) # index location
pos_counts = frequent_words_sentiment_df.loc[0,'counts'][pos_index] # counts of words
pos_tokens = np.array(pos_tokens)
# Sort Tokens by Descending Count order
index = np.argsort(-pos_counts)
pos_counts = pos_counts[index]
pos_tokens = pos_tokens[index]
font = {'family' : 'DejaVu Sans', 'weight' : 'bold'}
plt.rc('font', **font)
plt.figure(figsize=(30,18))
plt.subplot(1,2,1)
plt.barh(neg_tokens[:15], neg_counts[:15], color='tomato')
plt.gca().invert_yaxis()
plt.yticks(fontsize=25)
plt.xticks(fontsize=20)
plt.title('Pure Negative Word Frequency: Sentiment 1', fontsize='30')
plt.subplot(1,2,2)
plt.barh(pos_tokens[:15], pos_counts[:15], color='royalblue')
plt.gca().invert_yaxis()
plt.yticks(fontsize=25)
plt.xticks(fontsize=20)
plt.title('Pure Positive Word Frequency: Sentiment 0', fontsize='30')
plt.show()
```
#### Observations:
1. Note how the top words, nouns and adjectives in each class <u>correlate</u> with the **negative and positive sentiments** expressed by the reviewers.
# TF-IDF
# TF-IDF Explanation
- TF: Term Frequency, which measures how frequently a term occurs in a document. The term frequency is often divided by the document length (aka. the total number of terms in the document) as a way of <u>normalization</u>:
**TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document)**
- IDF: Inverse Document Frequency, which measures how important a term is. <u>Frequent terms are weighed down, while rare terms are scaled up</u>, by computing the following:
**IDF(t) = log_e(Total number of documents / Number of documents with term t in it)**
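The formulas above, worked on a toy corpus (note that scikit-learn's `TfidfVectorizer` uses a smoothed IDF and L2 normalization by default, so its numbers will differ from this plain textbook formula):

```python
import math

docs = [['good', 'movie'], ['bad', 'movie'], ['good', 'good', 'movie']]

def tf(term, doc):
    return doc.count(term) / len(doc)

def idf(term, docs):
    n_containing = sum(term in d for d in docs)
    return math.log(len(docs) / n_containing)

# 'good' appears 2/3 of the time in doc 2 and in 2 of the 3 documents
print(round(tf('good', docs[2]) * idf('good', docs), 3))  # → 0.27
# 'movie' appears in every document, so its IDF (and TF-IDF) is 0
print(idf('movie', docs))  # → 0.0
```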
```
## The tf-idf vectorizer needs sentences, not tokens, so we join the tokens back into strings
data['clean_review_stemmed'] = [' '.join(text) for text in data['clean_review_stemmed']]
data['clean_review_lemmatized'] = [' '.join(text) for text in data['clean_review_lemmatized']]
data['clean_review_lemmatized'][0]
```
### Creating the `tfidf_matrix`
```
from sklearn.feature_extraction.text import TfidfVectorizer
#define vectorizer parameters
# max_df : cutoff to exclude highly populated words in all doc eg: stopwords
# min_df : cutoff to exclude highly rare words in all doc eg: rarewords, no semantic value across corpus
# ngram_range : type of ngrams to include (min_ngram, max_ngram) (default=(1, 1))
# max_features : features dimension cutoff
tfidf_vectorizer = TfidfVectorizer(max_df=0.9, max_features=1500, #(0.05, 0.001)
min_df=0.2,
use_idf=True, ngram_range=(1,1))
tfidf_matrix = tfidf_vectorizer.fit_transform(data['clean_review_lemmatized'])
print(tfidf_matrix.shape)
# Terms: Main latent themes of the Text
# vocabulary_: Main latent Features of the Text
# tfidf_vectorizer.vocabulary_
terms = tfidf_vectorizer.get_feature_names()
terms
tfidf_matrix.todense() # todense() : Return a dense matrix representation of matrix.
```
Unsupervised Learning
-------
### 1. K-means Clustering
### Fitting the elbow curve to identify right number of clusters/topics
```
from sklearn import metrics
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist #cluster distance
import joblib
Sum_of_squared_distances = []
K = range(1,6)
for k in K:
kmeanModel = KMeans(n_clusters=k, random_state=123)
kmeanModel.fit(tfidf_matrix)
Sum_of_squared_distances.append(kmeanModel.inertia_)
Sum_of_squared_distances
# Plot the elbow
# Distortion, on the y-axis, corresponds to our cost function:
# the sum of squared difference between each data point and the centroid, i.e., the cluster centre.
# As K increases the corresponding distortion value will tend to zero,
# because you end up having just one data point per cluster. With only one data point in per cluster,
# the centroid is the data point itself, so the distortion will be equal to zero.
font = {'family' : 'DejaVu Sans',
'weight' : 'bold',
'size' : 10}
plt.rc('font', **font)
plt.plot(K, Sum_of_squared_distances, 'bx-')
plt.xlabel('k')
plt.ylabel('Sum_of_squared_distances')
plt.title('Elbow Method For Optimal k')
plt.show()
# Based on the elbow curve, we choose 4 clusters
num_clusters = 4
km = KMeans(n_clusters=num_clusters)
km.fit(tfidf_matrix)
#km.labels_
clusters = km.labels_.tolist()
#km.cluster_centers
centers = km.cluster_centers_
print(f"the cluster centers are {centers}")
```
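For reference, `kmeanModel.inertia_` is the within-cluster sum of squared distances to the assigned centroids; on a toy set it can be reproduced by hand:

```python
import numpy as np

points = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
labels = np.array([0, 0, 1])                   # cluster assignment per point
centers = np.array([[0.0, 0.5], [10.0, 10.0]])
inertia = ((points - centers[labels]) ** 2).sum()
print(inertia)  # → 0.5
```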
### Getting the top words from each cluster
```
print(km.cluster_centers_)
print(km.cluster_centers_.shape)
# Sort Index of original list
km.cluster_centers_.argsort()
## Reversing the list so that index of max element is in 0th index
km.cluster_centers_.argsort()[:,::-1]
print("Top terms per cluster:")
# sort cluster centers by proximity to centroid and pick the top terms per cluster
order_centroids = km.cluster_centers_.argsort()[:, ::-1] # Reverse the ndarray column order, returns same 'n' col array
for i in range(num_clusters):
print()
print("Top words in Cluster-%d :" % i, end='')
print()
for ind in order_centroids[i, :3]: # top 3 terms; change 3 for more words per cluster
print('%s' % terms[ind].split(' '), end=',')
data['cluster_group'] = clusters
# data.pop('clean_text', None)
data.head()
data.keys()
cluster_df = pd.DataFrame(data)
cluster_df['cluster_group'].value_counts()
```
#### Fetching the most frequent words among each cluster
Step 1) Tokenize the entire text <br>
Step 2) Group the tokenized text by cluster id (output is list of lists: [[],[],[]])<br>
Step 3) Unlist the array of lists for each cluster group using chain function from itertools
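Step 3's flattening with `chain.from_iterable`, in isolation:

```python
from itertools import chain

grouped = [['movie', 'love'], ['great'], ['movie']]
flat = list(chain.from_iterable(grouped))
print(flat)  # → ['movie', 'love', 'great', 'movie']
```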
```
cluster_df.groupby('sentiment')['cluster_group'].value_counts()
##Step 1
cluster_df['clean_review_tokenized'] = [text.split(' ') for text in cluster_df['clean_review_lemmatized']]
##Step 2: Create pandas SeriesGroupBy object
## Fetch entire tokenized text for specific group
grouped_text = cluster_df.groupby('cluster_group')['clean_review_tokenized']
from itertools import chain
frequent_words_df = pd.DataFrame(columns=["values", "counts", "cluster_id"]) # a list keeps column order deterministic
for num in range(num_clusters):
values, counts = np.unique(list(chain.from_iterable(grouped_text.get_group(num))), return_counts=True)
# eg: returns an 1D array of unique words from tokenized reviews
# chain() constructor taking a single iterable argument that evaluates lazily;
sorted_indices = np.argsort(-counts) # returns indices of sorted list in reversed order
# Create Cluster df of values(word list) sorted by counts
frequent_words_df = frequent_words_df.append({"values":values[sorted_indices], "counts":counts[sorted_indices], "cluster_id": num}, ignore_index=True)
frequent_words_df.head() # words sorted by counts order
```
### Plotting Top Words in Clusters 0, 1, 2, 3
```
font = {'family' : 'DejaVu Sans',
'weight' : 'bold',
'size' : 35}
plt.rc('font', **font)
fig = plt.figure(figsize=(15,20))
plt.subplot(2,2,1)
plt.barh(frequent_words_df.loc[0,'values'][:8], frequent_words_df.loc[0,'counts'][:8])
plt.gca().invert_yaxis()
plt.yticks(fontsize=15)
plt.title('Words Frequency: Cluster 0', fontsize=20)
plt.subplot(2,2,2)
plt.barh(frequent_words_df.loc[1,'values'][:8], frequent_words_df.loc[1,'counts'][:8])
plt.gca().invert_yaxis()
plt.yticks(fontsize=15)
plt.title('Words Frequency: Cluster 1', fontsize=20)
plt.subplot(2,2,3)
plt.barh(frequent_words_df.loc[2,'values'][:8], frequent_words_df.loc[2,'counts'][:8])
plt.gca().invert_yaxis()
plt.yticks(fontsize=15)
plt.title('Words Frequency: Cluster 2', fontsize=20)
plt.subplot(2,2,4)
plt.barh(frequent_words_df.loc[3,'values'][:8], frequent_words_df.loc[3,'counts'][:8])
plt.gca().invert_yaxis()
plt.yticks(fontsize=15)
plt.title('Words Frequency: Cluster 3', fontsize=20)
plt.show()
```
#### Observations:
Words/themes populated in the clusters' reviews describe:
- Cluster 0: 'love' for the movie, again consisting of potentially shorter reviews
- Cluster 1: positive adjectives like 'great', 'like', etc.
- Cluster 2: 'movie', also consisting of possibly shorter reviews that describe the movie in one sentence
- Cluster 3: the originality of the movie/remake of the 2019 Lion King version
### 2. Truncated SVD (Latent Semantic Analysis - LSA) using scikit-learn
Topic Modelling by Matrix Decomposition
Upon truncated SVD processing, we obtain two matrices:
1. U ∈ ℝ^(m ⨉ t) emerges as our Document-specific Topic allocation matrix : m-document vector, t-topic
2. V ∈ ℝ^(n ⨉ t) becomes our Topic-specific Term allocation matrix : n-term vector, t-topic
<u>In both U and V, the columns correspond to one of our t topics. </u>
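A sketch of the decomposition with plain NumPy on a toy 4-document × 3-term matrix (TruncatedSVD does essentially this on the sparse tf-idf matrix, keeping only the top t singular vectors):

```python
import numpy as np

# Toy 4-document x 3-term matrix standing in for tfidf_matrix
X = np.array([[1.0, 0.0, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.9, 0.0]])
U, s, Vt = np.linalg.svd(X, full_matrices=False)
t = 2                                  # number of latent topics to keep
doc_topic = U[:, :t] * s[:t]           # m x t document-topic matrix
term_topic = Vt[:t].T                  # n x t term-topic matrix
print(doc_topic.shape, term_topic.shape)  # → (4, 2) (3, 2)
```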
#### Import Libraries
```
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(max_df=0.9, max_features=1500, #(0.05, 0.001)
min_df=0.2,
use_idf=True, ngram_range=(1,1))
tfidf_matrix = tfidf_vectorizer.fit_transform(data['clean_review_lemmatized'])
print(tfidf_matrix.shape)
from sklearn.decomposition import TruncatedSVD
# Importing tfidf vectorized documents
print(tfidf_matrix.shape)
tfidf_matrix.todense()
```
### Creating the `svd_matrix` from the `tfidf_matrix`
```
# Select No. of Latent Themes to extract from text
n_components = 2
svd_model = TruncatedSVD(n_components=n_components, algorithm='randomized',n_iter=20,random_state=143)
svd_matrix = svd_model.fit(tfidf_matrix)
svd_matrix
# explained_variance_ratio_
print(f"Explained Variance Ratio : {svd_matrix.explained_variance_ratio_}")
print(f"Total Explained Variance : {round(svd_matrix.explained_variance_ratio_.sum() * 100, 2)} %")
# singular_values_ : explains the Top 2 Latent Topics found in Text
print(f"The singular values are {svd_matrix.singular_values_}")
```
i.e.,
C-1 explains 30% of variation\
C-2 explains 38% of variation
### Picking the few most important words in each topic
The components of `svd_model` are our topics, and we can access them via `svd_model.components_`.<br>
Let's print the few most important words in each topic and see how our model has done.
```
# Components describe the Theme of Text (represented by Singular Values, Singular Vectors)
# Theme = 2,
svd_model.components_
# Term vs Topic Strength
for i, comp in enumerate(svd_model.components_):
print(f"The component is {comp} and shape is {comp.shape}")
terms_comp = zip(terms, comp)
sorted_terms = sorted(terms_comp, key= lambda x:x[1], reverse=True)[:6]
print("Topic "+str(i)+": ")
for t in sorted_terms:
print(f"{t[0]} -- {t[1]}")
print(" ")
```
### Tagging each document with a topic
### Creating the `doc_topic_matrix`
`doc_topic_matrix` is the resultant SVD Output
```
# 2 Singular Values, 2 Components (Eigenvalues, Eigenvectors - Strength of Variation)
# Documents - 3000, Topic - 2
doc_topic_matrix = svd_matrix.transform(tfidf_matrix)
print(doc_topic_matrix,'\n')
svd_categories = np.argmax(doc_topic_matrix, axis=1) # Returns the indices of the maximum values along an axis.
print(doc_topic_matrix.shape,'\n')
print(svd_categories)
data['SVD_group'] = svd_categories
pd.DataFrame(data).head(6)
print(data.groupby('sentiment')['SVD_group'].value_counts())
```
| github_jupyter |
## Distortion Correction
This notebook explains how to correct distortion and generate an undistorted image.
There are two main steps to this process: use chessboard images to obtain image points and object points, and then use the OpenCV functions `cv2.calibrateCamera()` and `cv2.undistort()` to compute the calibration and undistortion.
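For intuition, `cv2.undistort` inverts a lens distortion model whose dominant radial terms look like x' = x·(1 + k1·r² + k2·r⁴). A sketch of the forward model in plain NumPy (the k1, k2 values here are illustrative, not real calibration output):

```python
import numpy as np

def radial_distort(xy, k1, k2):
    """Forward radial distortion: x' = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = (xy ** 2).sum(axis=-1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2 ** 2)

pts = np.array([[0.5, 0.0], [0.0, 0.0]])   # normalized image coordinates
distorted = radial_distort(pts, k1=-0.2, k2=0.0)
# barrel distortion (k1 < 0) pulls the off-center point inward; the center is unchanged
```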
### Generating Imagepoints and ObjectPoints
The cell below generates image points and object points as we did in the earlier notebook.
```
#Import required libraries
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import patches
import glob
import cv2
import os
# These arrays will be used to store object points and image points from all input images.
objpts = [] # 3d points in real world space
imgpts = [] # 2d points in image plane
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(8,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Making a list of calibration images from input images
files = os.listdir('camera_cal')
# looking through the list and searching for chessboard corners
for fname in files:
if fname.startswith('calibration'):
img = mpimg.imread('camera_cal/'+fname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Finding the chessboard corners using inbuilt opencv feature findChessboardCorners
ret, corners = cv2.findChessboardCorners(gray, (9,6), None)
# If found, we add object points and image points to the arrays objpts and imgpts
if ret == True:
imgpts.append(corners)
objpts.append(objp)
# Draw and display the corners using opencv
img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
# Save the image in a premade folder
plt.imsave('./output_images/ChessboardCorners/'+fname, img)
# Display the last image with Chessboard corners drawn
plt.imshow(img)
import pickle
# Lets define a simple function to generate undistorted images
def cal_undistort(img, objpoints,imgpoints):
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints,imgpoints,gray.shape[::-1],None, None)
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump(dist_pickle, open("camera_cal/dist_pickle.p","wb"))
undist = cv2.undistort(img,mtx,dist, None, mtx)
return undist
# lets read in the image
img = cv2.imread('camera_cal/calibration1.jpg')
#We will use objpoints and imgpoints from above
undistorted_img = cal_undistort(img,objpts,imgpts)
fig,ax = plt.subplots(1,2, figsize = (15,30))
fig.tight_layout()
ax[0].imshow(img)
ax[0].set_title('Original', fontsize= 20, color = 'g')
ax[1].imshow(undistorted_img)
ax[1].set_title('Undistorted', fontsize= 20, color = 'r')
```
```
# default_exp utils
```
# Utils
> Collection of useful functions.
```
#hide
from nbdev.showdoc import *
#export
import os
import numpy as np
from typing import Iterable, TypeVar, Generator
from plum import dispatch
from pathlib import Path
from functools import reduce
function = type(lambda: ())
T = TypeVar('T')
```
## Basics
```
#export
def identity(x: T) -> T:
"""Identity function."""
return x
#export
def simplify(x):
"""Return an object of an iterable if it is lonely."""
@dispatch
def _simplify(x):
if callable(x):
try:
return x()
except TypeError:
pass
return x
@dispatch
def _simplify(i: Iterable): return next(i.__iter__()) if len(i) == 1 else i
return _simplify(x)
```
The simplify function is used to de-nest an iterable with a single element in it, as for instance [1], while leaving everything else constant. It can also exchange a function for its default argument.
```
simplify({1})
simplify(simplify)(lambda x='lul': 2*x)
#export
def listify(x, *args):
"""Convert `x` to a `list`."""
if args:
x = (x,) + args
if x is None:
result = []
elif isinstance(x, list): result = x
elif isinstance(x, str) or hasattr(x, "__array__") or hasattr(x, "iloc"):
result = [x]
elif isinstance(x, (Iterable, Generator)):
result = list(x)
else:
result = [x]
return result
```
What's very convenient is that it leaves lists invariant (it doesn't nest them into a new list).
```
listify([1, 2])
listify(1, 2, 3)
#export
def setify(x, *args):
"""Convert `x` to a `set`."""
return set(listify(x, *args))
setify(1, 2, 3)
#export
def tuplify(x, *args):
"""Convert `x` to a `tuple`."""
return tuple(listify(x, *args))
tuplify(1)
#export
def merge_tfms(*tfms):
"""Merge dictionaries by stacking common keys into a list."""
def _merge_tfms(tf1, tf2):
return {
k: simplify(listify(setify(listify(tf1.get(k)) + listify(tf2.get(k)))))
for k in {**tf1, **tf2}
}
return reduce(_merge_tfms, tfms, dict())
merge_tfms(
{'animals': ['cats', 'dog'], 'colors': 'blue'},
{'animals': 'cats', 'colors': 'red', 'OS': 'i use arch btw'}
)
#export
def compose(*functions):
"""Compose an arbitrary number of functions."""
def _compose(fn1, fn2):
return lambda x: fn1(fn2(x))
return reduce(_compose, functions, identity)
#export
def pipe(*functions):
"""Pipe an arbitrary number of functions."""
return compose(*functions[::-1])
#export
def flow(data, *functions):
"""Flow `data` through a list of functions."""
return pipe(*functions)(data)
```
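A quick check of the ordering convention (restating the definitions so the snippet stands alone): `compose` applies right-to-left, `pipe` left-to-right:

```python
from functools import reduce

identity = lambda x: x

def compose(*functions):
    """Right-to-left composition, as defined above."""
    return reduce(lambda f, g: (lambda x: f(g(x))), functions, identity)

def pipe(*functions):
    """Left-to-right composition."""
    return compose(*functions[::-1])

inc = lambda x: x + 1
dbl = lambda x: x * 2
print(compose(inc, dbl)(3))  # inc(dbl(3)) → 7
print(pipe(inc, dbl)(3))     # dbl(inc(3)) → 8
```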
## File manipulation helper
```
#export
def get_files(path, extensions=None, recurse=False, folders=None, followlinks=True):
"""Get all those file names."""
path = Path(path)
folders = listify(folders)
extensions = setify(extensions)
extensions = {e.lower() for e in extensions}
def simple_getter(p, fs, extensions=None):
p = Path(p)
res = [
p / f
for f in fs
if not f.startswith(".")
and ((not extensions) or f'.{f.split(".")[-1].lower()}' in extensions)
]
return res
if recurse:
result = []
for i, (p, d, f) in enumerate(os.walk(path, followlinks=followlinks)):
if len(folders) != 0 and i == 0:
d[:] = [o for o in d if o in folders]
else:
d[:] = [o for o in d if not o.startswith(".")]
if len(folders) != 0 and i == 0 and "." not in folders:
continue
result += simple_getter(p, f, extensions)
else:
f = [o.name for o in os.scandir(path) if o.is_file()]
result = simple_getter(path, f, extensions)
return list(map(str, result))
# export
from fastcore.all import *
@patch
def decompress(self: Path, dest='.'):
pass
#export
@patch
def compress(self: Path, dest='.', keep_copy=True):
pass
#export
def save_array(array, fname, suffix):
"""Save an array with the given name and suffix."""
if not suffix.startswith("."):
suffix = "." + suffix
fname = Path(fname)
return np.save(fname.with_suffix(suffix), array)  # np.save takes (file, arr)
def save_dataset(data):
raise NotImplementedError
```
# Module 7.2 | Apply Data Storytelling to Lending Club loan data
Guiding Principles for EDA/ Visualizations
1. Graphical Integrity
2. Keep it simple
3. Use the right display
4. Use color strategically
5. Tell a story with Data
Guiding Principles for Effective Storytelling
1. Audience (Know Your Audience)
2. Engaging & Memorable
3. Answer concise questions
4. Carefully designed story (Beginning, Middle, End)
5. Moves audience (call to action)
Guiding Principles for Effective Presentations
1. Clarity of Message (1 big idea)
2. Clarity of Slides
3. Clarity of Delivery
IMAC
(Inferential Goal,
Model,
Algorithms,
Conclusion & Checking)
All models are wrong, some are useful.
Data Source -- https://www.kaggle.com/wendykan/lending-club-loan-data
```
%autosave 60
#import necessary modules
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.max_columns', None)
import nltk
import collections as co
from wordcloud import WordCloud, STOPWORDS
%matplotlib inline
#read loans.csv as a dataframe
loans_df = pd.read_csv('loan.csv',low_memory=False, engine='c')
loans_df.info()
loans_df.head()
loans_df.describe()
```
## Q1. What is the distribution of loans by loan amount?
```
sns.set_style("ticks")
fig, axs = plt.subplots(2,1,figsize=(20,20))
sns.distplot(loans_df.loan_amnt, ax=axs[0], hist=True, kde=True, bins=40)
axs[0].set(xlabel='Loan Amount',
ylabel='% Distribution',title='Density Plot of Loan Amount')
sns.violinplot(loans_df.loan_amnt, ax=axs[1], color='0.6')
axs[1].set(xlabel='Loan Amount',
ylabel='Distribution',title='Violin Plot of Loan Amount')
sns.despine()
plt.show()
loans_df['loan_status'].unique()
#define a function to classify loan status into one of the following bins ('Fully Paid', 'Default', 'Current')
def loan_status_bin(text):
    if text in ('Fully Paid', 'Does not meet the credit policy. Status:Fully Paid'):
        return 'Fully Paid'
    elif text in ('Current', 'Issued'):
        return 'Current'
    elif text in ('Charged Off', 'Default', 'Does not meet the credit policy. Status:Charged Off'):
        return 'Default'
    elif text in ('Late (16-30 days)', 'Late (31-120 days)', 'In Grace Period'):
        return 'Late'
    else:
        return 'UNKNOWN STATUS'
#create a new attribute 'loan_status_bin' in the dataframe
loans_df['loan_status_bin']=loans_df['loan_status'].apply(loan_status_bin)
loans_df['loan_status_bin'].unique()
```
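To sanity-check the binning before applying it to the full frame, the function can be exercised on a handful of sample statuses. The function is restated here so the snippet runs standalone; `'Mystery'` is a made-up status used only to hit the fallback branch:

```python
def loan_status_bin(text):
    # Collapse Lending Club's raw statuses into four coarse bins.
    if text in ('Fully Paid', 'Does not meet the credit policy. Status:Fully Paid'):
        return 'Fully Paid'
    elif text in ('Current', 'Issued'):
        return 'Current'
    elif text in ('Charged Off', 'Default', 'Does not meet the credit policy. Status:Charged Off'):
        return 'Default'
    elif text in ('Late (16-30 days)', 'Late (31-120 days)', 'In Grace Period'):
        return 'Late'
    else:
        return 'UNKNOWN STATUS'

sample = ['Fully Paid', 'Issued', 'Charged Off', 'In Grace Period', 'Mystery']
print([loan_status_bin(s) for s in sample])
# -> ['Fully Paid', 'Current', 'Default', 'Late', 'UNKNOWN STATUS']
```

The same function is what gets passed to `loans_df['loan_status'].apply(...)` above.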
## Q2. What is the distribution of loans by loan status represented as a pie plot, and a violin plot?
```
sns.set_style("ticks")
fig, axs = plt.subplots(1,2,figsize=(18,8))
loans_df.groupby('loan_status_bin').size().plot(kind='pie', ax=axs[0]);
axs[0].set(title='Pie Plot of Loan Status bin')
sns.violinplot(x=loans_df['term'], y=loans_df['loan_amnt'], hue=loans_df['loan_status_bin'], ax=axs[1])
axs[1].set(xlabel='Loan Status bin',
ylabel='Loan Amount',title='Violin Plot of Loan Term, Loan Status and Loan Amount')
axs[1].legend(loc=4)
sns.despine()
plt.show()
```
## Q3. Why are people borrowing money?
```
plt.rcParams['figure.figsize'] = (16,16)
# join all loan titles into one string for the word cloud (skip missing titles)
string_wc = " ".join(loans_df['title'].dropna().astype(str))
wordcloud = WordCloud(stopwords=STOPWORDS, background_color='white', max_words=200, width=800, height=400).generate(string_wc)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
```
## Q4. Who is borrowing money?
```
# Candidate borrower attributes to explore: int_rate, emp_length, home_ownership, addr_state, dti, term
```
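One way to start answering this question is to tabulate loans over borrower attributes such as `home_ownership` and `emp_length`. A sketch with a tiny made-up frame standing in for `loans_df` (the values are invented; the column names follow the Lending Club schema):

```python
import pandas as pd

# Hypothetical mini-sample standing in for loans_df
mini = pd.DataFrame({
    'home_ownership': ['RENT', 'MORTGAGE', 'RENT', 'OWN'],
    'emp_length': ['10+ years', '2 years', '10+ years', '< 1 year'],
    'loan_amnt': [5000, 12000, 8000, 3000],
})

# Who borrows: counts per ownership category, and typical amounts per category
print(mini['home_ownership'].value_counts())
print(mini.groupby('home_ownership')['loan_amnt'].median())
```

The same `value_counts`/`groupby` pattern applied to the full `loans_df` gives the borrower profile.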
| github_jupyter |
### CoreBx_island_v6 - Try to process entire N. Core Banks
Interpolate the North Core Banks DEMs onto rotated 1-m grid and save each as a .nc file.
Versioning jumped from v2 to v5, trying to be consistent with versions in processing notebooks.
New in v5
* Files are switched to the "merged DEMs" that Jin-Si made, so rapid iteration can occur.
* Box is re-adjusted to accommodate the whole island. The resulting array is huge, but manageable.
New in v2
* Now 4D maps, two made during a visit to Santa Cruz and two ftp'd from Andy
* Apr. 9 - changed to _v3 for Sep map
* Now does the interpolation without the loop
* Apr. 21 - moved origin to SE to accommodate curvature in NE end of island. Add 400 m to size of array.
* Watch file names, esp. underline (or not) after "1m_DEM"
New in v6
* Added maps through Sep 28 2020
TODO: The alongshore/cross-shore names are switched.
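The core of the gridding step is a transform from local (rotated) box coordinates into UTM, handled by `box2UTMh` in `CoreBx_funcs` (not shown in this notebook). A minimal stand-in for that transform, assuming it is a plain rotate-then-translate (an assumption; the real function may differ in conventions):

```python
import numpy as np

def box2utm_sketch(x, y, e0, n0, theta_deg):
    # Rotate local (x, y) counter-clockwise by theta and translate
    # to the UTM origin (e0, n0).
    t = np.radians(theta_deg)
    xu = e0 + x * np.cos(t) - y * np.sin(t)
    yu = n0 + x * np.sin(t) + y * np.cos(t)
    return xu, yu

# A point 100 m along the box x-axis, box rotated 42 deg CCW,
# with an origin like the one used below (values illustrative)
xu, yu = box2utm_sketch(100.0, 0.0, 378500.0, 3856350.0, 42.0)
print(xu, yu)
```

This is the operation `make_grid` applies to every grid node before interpolation.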
```
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
# from dask.distributed import LocalCluster
from scipy import interpolate, signal
%matplotlib inline
# define all of the functions by runnng this python file
%run -i CoreBx_funcs
def make_grid(name=None,e0=None,n0=None,xlen=None,ylen=None,dxdy=None,theta=None):
    nx = int((1./dxdy)*xlen)
    ny = int((1./dxdy)*ylen)
    xcoords = np.linspace(0.5*dxdy,xlen-0.5*dxdy,nx)
    ycoords = np.linspace(0.5*dxdy,ylen-0.5*dxdy,ny)
    # these will be the coordinates in rotated space
    xrot, yrot = np.meshgrid(xcoords, ycoords, sparse=False, indexing='xy')
    print('Shape of xrot, yrot: ',np.shape(xrot),np.shape(yrot))
    shp = np.shape(xrot)
    xu, yu = box2UTMh(xrot.flatten(), yrot.flatten(), e0, n0, theta)
    xu = np.reshape(xu,shp)
    yu = np.reshape(yu,shp)
    # write the UTM coords of the corners to an ASCII file
    corners = np.asarray([[xu[0][0],yu[0][0]],
                          [xu[0][-1],yu[0][-1]],
                          [xu[-1][-1],yu[-1][-1]],
                          [xu[-1][0],yu[-1][0]],
                          [xu[0][0],yu[0][0]]])
    print('corners x, corners y\n',corners[0,:],corners[1,:])
    print(corners)
    fn = name+'.csv'
    np.savetxt(fn, corners, delimiter=",")
    return xu, yu, xrot, yrot, xcoords, ycoords
# April 9, 2020: Replaced "2019-09-12-13_1m_DEM_4D_crop.tif",\
# with _v3 and re-ran on my desktop
# May 4 - Changed to use Jin-Si's merged dems
fdir = "C:/crs/proj/2019_DorianOBX/Santa_Cruz_Products/merged_dems/"
#fdir = "D:/crs/proj/2019_DorianOBX/Santa_Cruz_Products/clipped_dems/"
fnames = (
    "C:/crs/proj/2019_DorianOBX/Santa_Cruz_Products/merged_dems/2019-08-30_1m_DEM_4D_crop2_m.tif",
    "C:/crs/proj/2019_DorianOBX/Santa_Cruz_Products/merged_dems/2019-09-12-13_1m_DEM_4D_v3_m.tif",
    "C:/crs/proj/2019_DorianOBX/SfM_OBX_results/dems/NCB_20191011_DEM_1m_lidarMerge_NAD83_2011_UTM18N_NAVD88_crs.tif",
    "C:/crs/proj/2019_DorianOBX/Santa_Cruz_Products/merged_dems/2019-11-26_1m_DEM_4D_crop_m.tif",
    "C:/crs/proj/2019_DorianOBX/SfM_OBX_results/dems/NCB_20200208-09_DEM_1m_4D_NAD83_2011_UTM18N_NAVD88_crs.tif",
    # "C:/crs/proj/2019_DorianOBX/SfM_OBX_results/dems/NCB_20200508-09_DEM_1m_4D_NAD83_2011_UTM18N_NAVD88_crs.tif",
    # "C:/crs/proj/2019_DorianOBX/SfM_OBX_results/dems/NCB_20200802_DEM_1m_4D_NAD83_2011_UTM18N_NAVD88_cog.tif",
    # "C:/crs/proj/2019_DorianOBX/SfM_OBX_results/dems/NCB_20200805-09_DEM_1m_4D_NAD83_2011_UTM18N_NAVD88_cog.tif",
    # "C:/crs/proj/2019_DorianOBX/SfM_OBX_results/dems/NCB_20200928_DEM_1m_4D_NAD83_2011_UTM18N_NAVD88_cog.tif",
)
titles = ([
    "Aug 30 2019 pre-Dorian",
    "Sep 12-13 2019 post-Dorian",
    "Oct 11 2019 lidar merge",
    "Nov 26 2019 post-Nor'easter",
    # "Feb 8-9 2020",
    # "May 8-9 2020",
    # "Aug 2 2020 pre-Isaias",
    # "Aug 5-9 2020 post-Isaias",
    # "Sep 28 2020 post-Teddy"
])
nf = len(fnames)
fill_fnames = ('EBK_201909_YesLidar_Comb_Extent_m.tif')
fill_titles = ('Sep_fill')
# optional median-filter smoothing of original maps
smooth = False
# kernel size...this should be an odd number >= dxdy/0.1
ksize = 3
# Make an array of dicts, where analysis region is defined by:
# name
# e0 - UTM Easting of origin [m]
# n0 - UTM Northing of origin [m]
# xlen - Length of alongshore axis [m]
# ylen - Length of cross-shore axis [m]
# dxdy - grid size (must be isotropic right now) [m]
# theta - rotation CCW from x-axis [deg]
r = {'name':"ncorebx","e0": 378500.,"n0": 3856350.,"xlen": 36000.,"ylen": 1100.,"dxdy": 1.,"theta": 42.}
# move the origin 400 m SE
xo,yo = xycoord(400.,42.+90)
print(xo,yo)
r['e0']=r['e0']+xo
r['n0']=r['n0']+yo
# add 400 m to ylen
r['ylen']=r['ylen']+400.
# that was called ncorebx_v4
# move that origin 460 m sw
xo,yo = xycoord(460., 42.+180.)
r['e0']=r['e0']+xo
r['n0']=r['n0']+yo
# add 650 m to ylen
r['xlen']=r['xlen']+650.
r['name']='ncorebx_v6'
print(r)
print(r['name'])
xu,yu,xrot,yrot,xcoords,ycoords = make_grid(**r)
ny,nx = np.shape(xu)
print(ny,nx)
%%time
dslist=[]
for i, fn in enumerate(fnames):
    iswarned = False
    print(i, fn)
    # open the tif with XArray as a DataArray
    da = xr.open_rasterio(fn)
    print(np.shape(np.flipud(da['y'].values)), np.shape(da['x'].values), np.shape(np.flipud(da.values)))
    x = da['x'].values
    y = np.flipud(da['y'].values)
    # da.values carries a singleton band dimension; squeeze removes it.
    # Make sure to squeeze before flipping.
    z = np.flipud(np.squeeze(da.values))
    print(np.shape(x),np.shape(y),np.shape(z))
    if smooth:
        # smooth with 2D running median
        zs = signal.medfilt2d(z, kernel_size=ksize)
    else:
        zs = z
    f = interpolate.RegularGridInterpolator((y, x), zs, method='linear')
    # Array for interpolated elevations
    zi = np.NaN*np.ones((ny,nx))
    # fast path: works when all of the target points fall inside the source grid
    try:
        zi = f((yu,xu))
    # slow path: iterate through all of the points, skipping ones that are outside
    except:
        if not iswarned:
            print("Warning: using slow iteration.")
            iswarned = True
        for ij in np.ndindex(zi.shape):
            try:
                zi[ij] = f((yu[ij],xu[ij]))
            except:
                zi[ij] = np.NaN
    da = xr.DataArray(zi,dims=['Alongshore','Cross-shore'],coords={'Alongshore': ycoords, 'Cross-shore': xcoords})
    da = da.chunk()
    dslist.append(da)
dsa = xr.concat(dslist, dim='map')
fn = r['name']+'.nc'
dsa.to_netcdf(fn)
%%time
# Read in the fill map and make netcdf files
fn = fdir+fill_fnames
print(fn)
# open the tif with XArray as a DataArray
daf = xr.open_rasterio(fn)
print( np.shape(np.flipud(daf['y'].values)), np.shape(daf['x'].values), np.shape( np.flipud(daf.values)) )
x = daf['x'].values
y = np.flipud(daf['y'].values)
# Not sure how da.values got a singleton dimension, but squeeze gets rid of it.
# However, make sure to squeeze before flipping
z = np.flipud(np.squeeze(daf.values))
print(np.shape(x),np.shape(y),np.shape(z))
f = interpolate.RegularGridInterpolator( (y, x), z, method='linear')
# Array for interpolated elevations
zi=np.NaN*np.ones((ny,nx))
# this is a slow iteration through all of the points, but allows us to skip ones that are outside
# for ij in np.ndindex(zi.shape):
# try:
# zi[ij]=f((yu[ij],xu[ij]))
# except:
# zi[ij]=np.NaN
# this is the fast technique.
zi=f((yu,xu))
da = xr.DataArray(zi,dims=['Alongshore','Cross-shore'],coords={'Alongshore': ycoords, 'Cross-shore':xcoords })
da = da.chunk()
fno = r['name']+'_Sep_fill.nc'
da.to_netcdf(fno)
```
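The fast-path/slow-path pattern used above — try the vectorized interpolation, and fall back to a per-point loop that fills out-of-bounds targets with NaN — can be illustrated without rasters. Here a toy "interpolator" that raises outside its domain stands in for a bounds-checking `RegularGridInterpolator`:

```python
import numpy as np

def f(pt):
    # Toy interpolant defined only on 0 <= y, x <= 1; raises outside,
    # like RegularGridInterpolator with bounds_error=True.
    y, x = pt
    if np.any(y < 0) or np.any(y > 1) or np.any(x < 0) or np.any(x > 1):
        raise ValueError("point outside domain")
    return y + x

yu = np.array([[0.2, 0.5], [0.8, 1.5]])   # one target point is out of bounds
xu = np.array([[0.1, 0.4], [0.6, 0.9]])
zi = np.full(yu.shape, np.nan)
try:
    zi = f((yu, xu))                       # fast path: all points at once
except ValueError:
    for ij in np.ndindex(zi.shape):        # slow path: point by point
        try:
            zi[ij] = f((yu[ij], xu[ij]))
        except ValueError:
            zi[ij] = np.nan                # out-of-bounds targets stay NaN
print(zi)
```

Only the in-bounds targets get values; the out-of-bounds corner remains NaN, exactly as in the DEM loop.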
| github_jupyter |
# Notebook setup
```
import nltk
from nltk import sent_tokenize, word_tokenize
import os
import string
import re
from nltk.sentiment.vader import SentimentIntensityAnalyzer
nltk.download('punkt')
nltk.download('vader_lexicon');
# Disable output to reduce execution time.
output = False
outPath = "../training_set/ocr_output/"
for (dirpath, dirnames, filenames) in os.walk(outPath):
    break
if '.DS_Store' in filenames:
    filenames.remove('.DS_Store')
```
# Features
## Useful functions
```
def readFile(filename):
    f = open(outPath+filename, 'r', encoding="cp1252")  # for Mac?
    rawText = f.read()
    text = rawText.replace("\n\n", "%EOL%").replace("\n"," ").replace("%EOL%","\n")
    return text

def removePunctuation(text):
    return text.translate(str.maketrans('', '', string.punctuation))

def findWithKeywords(text, anyKeywords=[], allKeywords=[], excludedKeywords=[]):
    text = text.replace("\n\n", "%EOL%").replace("\n"," ").replace("%EOL%","\n")
    sentences = sent_tokenize(text)
    matched = []
    for sentence in sentences:
        if len(anyKeywords) > 0 and not any(keyword in sentence.lower() for keyword in anyKeywords):
            continue
        if len(allKeywords) > 0 and not all(keyword in sentence.lower() for keyword in allKeywords):
            continue
        if not any(keyword in sentence.lower() for keyword in excludedKeywords):
            matched.append(sentence)
    return "\n\n".join(matched)

def findWithKeywordsSentenceWindow(text, anyKeywords=[], allKeywords=[], excludedKeywords=[], windowSize=1):
    text = text.replace("\n\n", "%EOL%").replace("\n"," ").replace("%EOL%","\n")
    sentences = sent_tokenize(text)
    matched = []
    for index in range(0, len(sentences) - windowSize):
        # join a window of windowSize + 1 consecutive sentences
        sentence = '\n\n'.join(sentences[index:index + windowSize + 1])
        if len(anyKeywords) > 0 and not any(keyword in sentence.lower() for keyword in anyKeywords):
            continue
        if len(allKeywords) > 0 and not all(keyword in sentence.lower() for keyword in allKeywords):
            continue
        if not any(keyword in sentence.lower() for keyword in excludedKeywords):
            matched.append(sentence)
    return "\n\n".join(matched)

def findSentencesWithAnyKeywords(text, keywords, excludedKeywords=[]):
    return findWithKeywords(text, anyKeywords=keywords, excludedKeywords=excludedKeywords)

def findSentencesWithAllKeywords(text, keywords, excludedKeywords=[]):
    return findWithKeywords(text, allKeywords=keywords, excludedKeywords=excludedKeywords)

def findDirectorNumberText(text):
    return findSentencesWithAllKeywords(text, ["number of directors"], ["chair", "vacancy", "vacancies", "quorum"])

def findFirstNumberAfterWord(text, paramWord=""):
    numWords = [
        "zero", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen", "twenty"]
    listWords = word_tokenize(text)
    for word in listWords[listWords.index(paramWord):]:
        word = removePunctuation(word)
        if word in numWords:
            return str(numWords.index(word))
        if word.isdigit():
            return word
    return ""
```
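A quick check of the filtering logic behind `findWithKeywords`, sketched here with a naive period-based splitter instead of NLTK's `sent_tokenize` so the snippet needs no model downloads (real sentence splitting is messier; the sample text is invented):

```python
def find_with_keywords_sketch(text, any_keywords=(), excluded_keywords=()):
    # Naive sentence split on periods; stand-in for sent_tokenize.
    sentences = [s.strip() for s in text.split('.') if s.strip()]
    matched = []
    for sentence in sentences:
        low = sentence.lower()
        # keep sentences containing at least one wanted keyword...
        if any_keywords and not any(k in low for k in any_keywords):
            continue
        # ...and none of the excluded ones
        if any(k in low for k in excluded_keywords):
            continue
        matched.append(sentence)
    return matched

text = ("The company may borrow money. The chair fills any vacancy. "
        "Directors may incur indebtedness")
print(find_with_keywords_sketch(text, any_keywords=['borrow', 'indebtedness'],
                                excluded_keywords=['vacancy']))
# -> ['The company may borrow money', 'Directors may incur indebtedness']
```

The real function applies the same any/all/excluded tests to NLTK-tokenized sentences.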
## Is the company empowered to borrow?
```
def findCanBorrowText(text):
    return (
        findSentencesWithAnyKeywords(text, ["any business", "issue debt", "indebtedness"])
        + " "
        + findWithKeywords(text, anyKeywords=["borrow", "raise"], allKeywords=["money"])
    )

def canBorrow(text):
    canBorrowText = findCanBorrowText(text)
    if canBorrowText.strip() == "":
        return "no"
    return getSentiment(canBorrowText)

def getSentiment(text):
    if text.strip() == "":
        return ""
    sentimentAnalyzer = SentimentIntensityAnalyzer()
    scores = sentimentAnalyzer.polarity_scores(text)
    aggregated_score = scores["compound"]
    return "yes" if aggregated_score > 0 else "no"

for filename in filenames:
    text = readFile(filename)
    if output:
        print(filename)
        print(canBorrow(text))
        print("\n")
```
## What is the size of the board of directors? Minimum and maximum.
```
def findMinDirectors(fullText):
    directorText = findDirectorNumberText(fullText)
    if "no minimum" in directorText:
        return "noMin"
    if "minimum" in directorText:
        return findFirstNumberAfterWord(directorText, "minimum")
    if "less" in directorText:  # for cases of "not less than" and "shall not be less than"
        return findFirstNumberAfterWord(directorText, "less")
    return "1"

def findMaxDirectors(fullText):
    directorText = findDirectorNumberText(fullText)
    if "no maximum" in directorText:
        return "noMax"
    if "maximum" in directorText:
        return findFirstNumberAfterWord(directorText, "maximum")
    if "more" in directorText:  # for cases of "not more than" and "shall not be more than"
        return findFirstNumberAfterWord(directorText, "more")
    return "noMax"  # fall back to noMax when no pattern matches

for filename in filenames:
    text = readFile(filename)
    if output:
        print(filename)
        print(findDirectorNumberText(text))
        print(findMinDirectors(text))
        print(findMaxDirectors(text))
        print("\n")
```
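The number-extraction helper can be sanity-checked on typical clauses — sketched here with whitespace tokenization in place of NLTK's `word_tokenize`, and the number-word list shortened (the clauses are invented examples):

```python
import string

NUM_WORDS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
             "eight", "nine", "ten"]

def first_number_after(text, param_word):
    # Scan forward from param_word for the first spelled-out or digit number.
    words = text.lower().split()
    try:
        start = words.index(param_word)
    except ValueError:
        return ""
    for word in words[start:]:
        word = word.translate(str.maketrans('', '', string.punctuation))
        if word in NUM_WORDS:
            return str(NUM_WORDS.index(word))
        if word.isdigit():
            return word
    return ""

print(first_number_after("the number of directors shall not be less than three.", "less"))
print(first_number_after("a maximum of 12 directors", "maximum"))
```

This mirrors how `findMinDirectors`/`findMaxDirectors` pull a bound out of the matched sentence.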
## Are the directors empowered to borrow?
```
def findDirectorsCanBorrowText(text):
    return (
        findWithKeywords(text, anyKeywords=["borrow", "debt", "incur", "indebtedness"], allKeywords=["directors may"])
        + " "
        + findWithKeywords(text, anyKeywords=["borrow", "debt", "incur", "indebtedness"], allKeywords=["directors can"])
    )

def findBoardCanBorrowText(text):
    return findWithKeywords(text, anyKeywords=["borrow", "debt", "incur", "indebtedness"], allKeywords=["the board may"])

def canDirectorsBorrow(text):
    directorsText = findDirectorsCanBorrowText(text)
    if directorsText.strip() != "":
        return getSentiment(directorsText)
    boardText = findBoardCanBorrowText(text)
    if boardText.strip() != "":
        return "no"
    return "yes"
```
## Is a resolution of directors required to borrow?
```
def resolutionNeeded(text):
    directorsText = findDirectorsCanBorrowText(text)
    # canDirectorsBorrow returns the strings "yes"/"no", so compare explicitly
    # (both strings are truthy, so a bare `if` would always take the first branch)
    if canDirectorsBorrow(text) == "yes" and "resolution" in directorsText.lower():
        return "yes"
    return "no"

for filename in filenames:
    text = readFile(filename)
    if output:
        print(filename)
        print(findDirectorsCanBorrowText(text))
        print(canDirectorsBorrow(text))
        print(resolutionNeeded(text))
        print("\n")
```
## What is the quorum for such a resolution?
```
def findQuorumText(text, keywords=["quorum", "number"]):
    return findWithKeywordsSentenceWindow(text, allKeywords=keywords, anyKeywords=["directors", "shareholders"], windowSize=2)

def findQuorum(fullText):
    quorumText = findQuorumText(fullText)
    if quorumText.strip() == "":
        quorumText = findQuorumText(fullText, keywords=["quorum", "meeting"])
    match = re.search(r'not less than (.*?) of the', quorumText)
    if match:
        # replace hyphens/dashes with spaces (the two-string form of
        # str.maketrans requires equal-length from/to strings)
        return match.group(1).translate(str.maketrans('-—', '  '))
    return "2"

for filename in filenames:
    text = readFile(filename)
    if output:
        print(filename)
        print(findQuorumText(text))
        print("quorum : " + findQuorum(text))
        print("\n")
```
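The regex at the heart of `findQuorum` can be exercised on a made-up clause. Note that the two-argument form of `str.maketrans` requires equal-length from/to strings, so mapping both `-` and `—` to a space needs two spaces on the right-hand side:

```python
import re

quorum_text = ("A quorum for the transaction of business shall be "
               "not less than one-third of the directors.")
match = re.search(r'not less than (.*?) of the', quorum_text)
# Default to "2" when no quorum phrase is found, as in findQuorum above
quorum = match.group(1).translate(str.maketrans('-—', '  ')) if match else "2"
print(quorum)  # -> one third
```

The non-greedy `(.*?)` stops at the first `of the`, capturing just the quantity phrase.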
| github_jupyter |
# Generate Region of Interests (ROI) labeled arrays for simple shapes
This example notebook explains the use of the analysis module "skbeam/core/roi" https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/roi.py
```
import skbeam.core.roi as roi
import skbeam.core.correlation as corr
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.ticker import MaxNLocator
from matplotlib.colors import LogNorm
import xray_vision.mpl_plotting as mpl_plot
```
### Easily switch between interactive and static matplotlib plots
```
interactive_mode = False
import matplotlib as mpl
if interactive_mode:
    %matplotlib notebook
else:
    %matplotlib inline
backend = mpl.get_backend()
cmap='viridis'
```
## Draw annular (ring-shaped) regions of interest
```
center = (100., 100.) # center of the rings
# Image shape which is used to determine the maximum extent of output pixel coordinates
img_shape = (200, 205)
first_q = 10.0 # inner radius of the inner-most ring
delta_q = 5.0 #ring thickness
num_rings = 7 # number of Q rings
# step or spacing, spacing between rings
one_step_q = 5.0 # one spacing between rings
step_q = [2.5, 3.0, 5.8] # different spacing between rings
```
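For intuition, `roi.ring_edges` with a constant spacing essentially lays out `[inner, outer]` radii by stepping `width + spacing` per ring — a pure-NumPy sketch of that same-spacing case (an illustration, not the library's implementation):

```python
import numpy as np

def ring_edges_sketch(inner_radius, width, spacing, num_rings):
    # Each ring spans [r, r + width]; consecutive inner radii are
    # width + spacing apart.
    inner = inner_radius + np.arange(num_rings) * (width + spacing)
    return np.column_stack([inner, inner + width])

edges = ring_edges_sketch(10.0, 5.0, 5.0, 7)
print(edges[:2])
# first ring spans [10, 15], second ring spans [20, 25]
```

The result has shape `(num_rings, 2)`, matching what `roi.rings` expects as input.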
### Test when there is same spacing between rings
```
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, spacing=one_step_q,
num_rings=num_rings)
edges
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.rings(edges, center, img_shape)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Same spacing between rings")
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
```
### Test when there is different spacing between rings
```
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, spacing=step_q,
num_rings=4)
print("edges when there is different spacing between rings", edges)
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.rings(edges, center, img_shape)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Different spacing between rings")
axes.set_xlim(50, 150)
axes.set_ylim(50, 150)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
```
### Test when there is no spacing between rings
```
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, num_rings=num_rings)
edges
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.rings(edges, center, img_shape)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("There is no spacing between rings")
axes.set_xlim(50, 150)
axes.set_ylim(50, 150)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
```
### Generate a ROI of Segmented Rings
```
center = (75, 75) # center of the rings
#Image shape which is used to determine the maximum extent of output pixel coordinates
img_shape = (150, 140)
first_q = 5.0 # inner radius of the inner-most ring
delta_q = 5.0 #ring thickness
num_rings = 4 # number of rings
slicing = 4 # number of pie slices or list of angles in radians
spacing = 4 # margin between rings, 0 by default
```
#### find the inner and outer radius of each ring
```
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=delta_q, spacing=spacing,
num_rings=num_rings)
edges
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.segmented_rings(edges, slicing, center,
img_shape, offset_angle=0)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Segmented Rings")
axes.set_xlim(38, 120)
axes.set_ylim(38, 120)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
```
## Segmented rings using list of angles in radians
```
slicing = np.radians([0, 60, 120, 240, 300])
slicing
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.segmented_rings(edges, slicing, center,
img_shape, offset_angle=0)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Segmented Rings")
axes.set_xlim(38, 120)
axes.set_ylim(38, 120)
im = mpl_plot.show_label_array(axes, label_array, cmap="gray")
plt.show()
```
### Generate a ROI of Pies
```
first_q = 0
# inner and outer radius for each ring
edges = roi.ring_edges(first_q, width=50, num_rings=1)
edges
slicing = 10 # number of pie slices or list of angles in radians
#Elements not inside any ROI are zero; elements inside each
#ROI are 1, 2, 3, corresponding to the order they are specified in edges.
label_array = roi.segmented_rings(edges, slicing, center,
img_shape, offset_angle=0)
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Pies")
axes.set_xlim(20, 140)
axes.set_ylim(20, 140)
im = mpl_plot.show_label_array(axes, label_array, cmap)
plt.show()
```
## Rectangle region of interests.
```
# Image shape which is used to determine the maximum extent of output pixel coordinates
shape = (15, 26)
# coordinates of the upper-left corner and width and height of each rectangle
roi_data = np.array(([2, 2, 6, 3], [6, 7, 8, 5], [8, 18, 5, 10]),
dtype=np.int64)
#Elements not inside any ROI are zero; elements inside each ROI are 1, 2, 3, corresponding
# to the order they are specified in coords.
label_array = roi.rectangles(roi_data, shape)
roi_inds, pixel_list = roi.extract_label_indices(label_array)
```
## Generate Bar ROI's
```
edges = [[3, 4], [5, 7], [12, 15]]
edges
```
## Create Horizontal bars and Vertical bars
```
h_label_array = roi.bar(edges, (20, 25)) # Horizontal Bars
v_label_array = roi.bar(edges, (20, 25), horizontal=False) # Vertical Bars
```
## Create Box ROI's
```
b_label_array = roi.box((20, 25), edges)
```
## Plot bar rois, box rois and rectangle rois
```
fig, axes = plt.subplots(2, 2, figsize=(12, 10))
axes[1, 0].set_title("Horizontal Bars")
im = mpl_plot.show_label_array(axes[1, 0], h_label_array, cmap)
axes[0, 1].set_title("Vertical Bars")
im = mpl_plot.show_label_array(axes[0, 1], v_label_array, cmap)
axes[1, 1].set_title("Box Rois")
im = mpl_plot.show_label_array(axes[1, 1], b_label_array, cmap)
axes[0, 0].set_title("Rectangle Rois")
im = mpl_plot.show_label_array(axes[0, 0], label_array, cmap)
plt.show()
```
# Create line ROI's
```
label_lines= roi.lines(([0, 45, 50, 256], [56, 60, 80, 150]), (150, 250))
# plot the figure
fig, axes = plt.subplots(figsize=(6, 5))
axes.set_title("Lines")
im = mpl_plot.show_label_array(axes, label_lines, cmap)
plt.show()
import skbeam
print(skbeam.__version__)
```
| github_jupyter |
# Neural Networks with Keras
Final test-set evaluation: `513/513 - 0s - loss: 1.7734 - acc: 0.2710` (Loss: 1.7734287705337792, Accuracy: 0.2709551751613617).
Uses `MinMaxScaler`.
ref: 21-Machine-Learning/3/Activities/02-Evr_First_Neural_Network/Solved/First_Neural_Network.ipynb#Model-Summary
```
# Dependencies
import pandas as pd
from sklearn.preprocessing import LabelEncoder
import numpy as np
from sklearn import tree
import os
from sklearn.datasets import make_classification
```
Import the CSV that has been optimized for 10 features
```
# import processed data
path = "data/"
file = "Best_disposition_data.csv"
path_file = path + file
df = pd.read_csv(path_file)
df = df.drop("Unnamed: 0", axis=1)
df
```
# Data Preprocessing
It is really important to scale our data before using multilayer perceptron models.
Without scaling, it is often difficult for the training cycle to converge
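Min-max scaling maps each feature into [0, 1] using the training set's column minima and maxima — the same transform `MinMaxScaler` applies below, sketched in NumPy with made-up values:

```python
import numpy as np

X_train = np.array([[1.0, 200.0], [3.0, 400.0], [5.0, 300.0]])
X_test = np.array([[2.0, 250.0]])

col_min = X_train.min(axis=0)
col_range = X_train.max(axis=0) - col_min

# Fit on train, then apply the SAME transform to test
# (mirrors scaler.fit(X_train) followed by scaler.transform(X_test))
X_train_scaled = (X_train - col_min) / col_range
X_test_scaled = (X_test - col_min) / col_range
print(X_test_scaled)  # -> [[0.25 0.25]]
```

Fitting on the training set only, and reusing those statistics for the test set, avoids leaking test information into the model.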
```
X = df.drop("disposition", axis=1)
y = df["disposition"]
print(X.shape, y.shape)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from tensorflow.keras.utils import to_categorical
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=1)
# MinMaxScaler is used
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
# Step 1: Label-encode data set
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
encoded_y_train = label_encoder.transform(y_train)
encoded_y_test = label_encoder.transform(y_test)
#n= 0
#for label, original_class in zip(encoded_y, y):
# print('Original Class: ' + str(original_class))
# print('Encoded Label: ' + str(label))
# n=n+1
# print(n)
# print('-' * 12)
# Step 2: Convert encoded labels to one-hot-encoding
y_train_categorical = to_categorical(encoded_y_train)
y_test_categorical = to_categorical(encoded_y_test)
```
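Label encoding followed by one-hot encoding turns string classes into integer codes and then into indicator rows — a NumPy sketch of what the two steps above produce (the class labels here are made-up placeholders, not the actual disposition values):

```python
import numpy as np

y = np.array(['rain', 'snow', 'rain', 'clear'])

# Step 1: label-encode (sorted unique classes -> integer codes),
# like LabelEncoder.fit/transform
classes, encoded = np.unique(y, return_inverse=True)
print(classes)   # ['clear' 'rain' 'snow']
print(encoded)   # [1 2 1 0]

# Step 2: one-hot, like to_categorical
one_hot = np.eye(len(classes))[encoded]
print(one_hot.shape)  # (4, 3)
```

Each row of `one_hot` has a single 1 in the column of its class, which is the target format `categorical_crossentropy` expects.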
## Creating our Model
Decide what kind of model to apply to our data.
For numerical data, we use a regressor model.
For categorical data, we use a classifier model.
In this example, we will use a classifier to build the following network:
## Defining our Model Architecture (the layers)
Create a sequential model
```
from tensorflow.keras.models import Sequential
model = Sequential()
from tensorflow.keras.layers import Dense
from tensorflow.python.ops.init_ops import VarianceScaling
number_inputs = 11
number_hidden_nodes = 12
model.add(Dense(units=number_hidden_nodes,
activation='relu', input_dim=number_inputs))
number_classes = 6
model.add(Dense(units=number_classes, activation='softmax'))
```
## Model Summary
```
model.summary()
```
## Compile the Model
Now that we have our model architecture defined, we must compile the model using a loss function and optimizer. We can also specify additional training metrics such as accuracy.
```
# Use categorical crossentropy for categorical data and mean squared error for regression
# Hint: the output layer in this example uses softmax for logistic regression (categorical)
# If your output layer activation was `linear` then you may want to use `mse` for loss
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
```
## Training the Model
Finally, we train our model using our training data
Training consists of updating our weights using our optimizer and loss function. In this example, we choose 1000 iterations (loops) of training that are called epochs.
We also choose to shuffle our training data and increase the detail printed out during each training cycle.
```
# Fit (train) the model
model.fit(
X_train_scaled,
y_train_categorical,
epochs=1000,
shuffle=True,
verbose=2
)
```
# Save the Trained Model
```
# Save the model
model.save("z3_Nueral_network_model.h5")
```
# Quantifying the Model
Testing data to validate our model. Determine the validity of model (i.e. the ability to predict new and previously unseen data points)
```
# Evaluate the model using the testing data
model_loss, model_accuracy = model.evaluate(
X_test_scaled, y_test_categorical, verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
```
## Making Predictions with new data
Use trained model to make predictions using `model.predict`
<b>Label assignments</b>
<b>Test data was taken from the first rows:</b> `new_data` and `new_data_2`
```
import numpy as np
new_data = np.array([[2012,12, 49, 7, 23, 0, 281.34, 70, 1016, 2]])
new_data_2 = np.array([[2017,7, 28, 6, 13, 27, 292.06, 88, 1017, 1]])
# Test 1
print(f"Predicted class: {model.predict_classes(new_data)}")
# Test 2
print(f"Predicted class: {model.predict_classes(new_data_2)}")
```
# Evaluate the Model
```
# Load the model
#from tensorflow.keras.models import load_model
#from tensorflow.python.ops.init_ops import Zeros
#model = load_model("z_Nueral_network_model.h5")
```
| github_jupyter |
<p align="center">
<img src="http://www.di.uoa.gr/themes/corporate_lite/logo_el.png" title="Department of Informatics and Telecommunications - University of Athens"/> </p>
---
<h1 align="center">
Artificial Intelligence
</h1>
<h1 align="center" >
Deep Learning for Natural Language Processing
</h1>
---
<h2 align="center">
<b>Konstantinos Nikoletos</b>
</h2>
<h3 align="center">
<b>Winter 2020-2021</b>
</h3>
---
---
### __Task__
This exercise is about developing a document retrieval system to return titles of scientific
papers containing the answer to a given user question. You will use the first version of
the COVID-19 Open Research Dataset (CORD-19) in your work (articles in the folder
comm use subset).
For example, for the question “What are the coronaviruses?”, your system can return the
paper title “Distinct Roles for Sialoside and Protein Receptors in Coronavirus Infection”
since this paper contains the answer to the asked question.
To achieve the goal of this exercise, you will need first to read the paper Sentence-BERT:
Sentence Embeddings using Siamese BERT-Networks, in order to understand how you
can create sentence embeddings. In the related work of this paper, you will also find other
approaches for developing your model. For example, you can use GloVe embeddings,
etc. In this link, you can find the extended versions of this dataset to test your model, if
you want. You are required to:
<ol type="a">
<li>Preprocess the provided dataset. You will decide which data of each paper is useful
to your model in order to create the appropriate embeddings. You need to explain
your decisions.</li>
<li>Implement at least 2 different sentence embedding approaches (see the related work
of the Sentence-BERT paper), in order for your model to retrieve the titles of the
papers related to a given question.</li>
<li>Compare your 2 models based on at least 2 different criteria of your choice. Explain
why you selected these criteria, your implementation choices, and the results. Some
questions you can pose are included here. You will need to provide the extra questions
you posed to your model and the results of all the questions as well.</li>
</ol>
### __Notebook__
Same implementation as the Sentence-BERT notebook, but adding cross-encoders, which reportedly perform even better.
---
---
__Import__ of essential libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sys # only needed to determine Python version number
import matplotlib # only needed to determine Matplotlib version
import nltk
from nltk.stem import WordNetLemmatizer
import pprint
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext import data
import logging
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')
nltk.download('averaged_perceptron_tagger')
```
Selecting device (GPU - CUDA if available)
```
# First checking if GPU is available
train_on_gpu = torch.cuda.is_available()
if train_on_gpu:
    print('Training on GPU.')
else:
    print('No GPU available, training on CPU.')
```
# Loading data
---
```
# Opening data file
import io
from google.colab import drive
from os import listdir
from os.path import isfile, join
import json
drive.mount('/content/drive',force_remount=True)
```
Loading the dictionary if it has been created
```
#@title Select number of papers that will be feeded in the model { vertical-output: true, display-mode: "both" }
number_of_papers = "9000" #@param ["1000","3000", "6000","9000"]
import pickle
CORD19_Dataframe = r"/content/drive/My Drive/AI_4/CORD19_SentenceMap_"+number_of_papers+".pkl"
with open(CORD19_Dataframe, 'rb') as drivef:
    CORD19Dictionary = pickle.load(drivef)
```
OR the summary of the papers
```
#@title Select number of summarized papers that will be feeded in the model { vertical-output: true, display-mode: "both" }
number_of_papers = "9000" #@param ["1000", "3000", "6000", "9000"]
import pickle
CORD19_Dataframe = r"/content/drive/My Drive/AI_4/CORD19_SentenceMap_Summarized_"+number_of_papers+".pkl"
with open(CORD19_Dataframe, 'rb') as drivef:
    CORD19Dictionary = pickle.load(drivef)
```
## Queries
---
```
query_list = [
'What are the coronoviruses?',
'What was discovered in Wuhuan in December 2019?',
'What is Coronovirus Disease 2019?',
'What is COVID-19?',
'What is caused by SARS-COV2?', 'How is COVID-19 spread?',
'Where was COVID-19 discovered?','How does coronavirus spread?'
]
proposed_answers = [
'Coronaviruses (CoVs) are common human and animal pathogens that can transmit zoonotically and cause severe respiratory disease syndromes. ',
'In December 2019, a novel coronavirus, called COVID-19, was discovered in Wuhan, China, and has spread to different cities in China as well as to 24 other countries.',
'Coronavirus Disease 2019 (COVID-19) is an emerging disease with a rapid increase in cases and deaths since its first identification in Wuhan, China, in December 2019.',
'COVID-19 is a viral respiratory illness caused by a new coronavirus called SARS-CoV-2.',
'Coronavirus disease (COVID-19) is caused by SARS-COV2 and represents the causative agent of a potentially fatal disease that is of great global public health concern.',
'First, although COVID-19 is spread by the airborne route, air disinfection of cities and communities is not known to be effective for disease control and needs to be stopped.',
'In December 2019, a novel coronavirus, called COVID-19, was discovered in Wuhan, China, and has spread to different cities in China as well as to 24 other countries.',
'The new coronavirus was reported to spread via droplets, contact and natural aerosols from human-to-human.'
]
myquery_list = [
"How long can the coronavirus survive on surfaces?",
"What means COVID-19?",
"Is COVID19 worse than flue?",
"When the vaccine will be ready?",
"Whats the proteins that consist COVID-19?",
"Whats the symptoms of COVID-19?",
"How can I prevent COVID-19?",
"What treatments are available for COVID-19?",
"Is hand sanitizer effective against COVID-19?",
"Am I at risk for serious complications from COVID-19 if I smoke cigarettes?",
"Are there any FDA-approved drugs (medicines) for COVID-19?",
"How are people tested?",
"Why is the disease being called coronavirus disease 2019, COVID-19?",
"Am I at risk for COVID-19 from mail, packages, or products?",
"What is community spread?",
"How can I protect myself?",
"What is a novel coronavirus?",
"Was Harry Potter a good magician?"
]
```
# Results dataframes
```
resultsDf = pd.DataFrame(columns=['Number of papers','Embeddings creation time'])
queriesDf = pd.DataFrame(columns=['Query','Proposed_answer','Model_answer','Cosine_similarity'])
queriesDf['Query'] = query_list
queriesDf['Proposed_answer'] = proposed_answers
myQueriesDf = pd.DataFrame(columns=['Query','Model_answer','Cosine_similarity'])
myQueriesDf['Query'] = myquery_list
queriesDf
```
# SBERT
---
```
!pip install -U sentence-transformers
```
# Selecting transformer and Cross Encoder
```
from sentence_transformers import SentenceTransformer, util, CrossEncoder
import torch
import time
encoder = SentenceTransformer('msmarco-distilbert-base-v2')
cross_encoder = CrossEncoder('cross-encoder/ms-marco-TinyBERT-L-6')
```
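The retrieve-then-re-rank pattern used below can be illustrated with plain NumPy, with random vectors standing in for the real embeddings so no model download is needed. Only the top-k candidates from the cheap bi-encoder stage would then be passed to the expensive cross-encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
corpus_emb = rng.normal(size=(1000, 384))                 # stand-in for encoder.encode(corpus)
query_emb = corpus_emb[42] + 0.01 * rng.normal(size=384)  # query close to document 42

# Stage 1 (bi-encoder): cosine similarity against every document, keep the top_k
sims = corpus_emb @ query_emb / (
    np.linalg.norm(corpus_emb, axis=1) * np.linalg.norm(query_emb)
)
top_k = 3
hits = np.argsort(-sims)[:top_k]
# Stage 2 (cross-encoder) would re-score only these top_k (query, document) pairs
print(hits)
```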
# Initializing corpus
```
corpus = list(CORD19Dictionary.keys())
```
# Creating the embeddings
Encoding the papers
```
%%time
corpus_embeddings = encoder.encode(corpus, convert_to_tensor=True, show_progress_bar=True,device='cuda')
```
# Saving corpus as tensors to drive
```
corpus_embeddings_path = r"/content/drive/My Drive/AI_4/corpus_embeddings_6000_CrossEncoder.pt"
torch.save(corpus_embeddings,corpus_embeddings_path)
```
# Loading the embeddings if they have already been created and saved
---
```
corpus_embeddings_path = r"/content/drive/My Drive/AI_4/corpus_embeddings_6000_CrossEncoder.pt"
with open(corpus_embeddings_path, 'rb') as f:
    corpus_embeddings = torch.load(f)
```
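If the embeddings were saved on a GPU session and later loaded in a CPU-only runtime, `torch.load` will fail unless a `map_location` is given. A sketch with a throwaway tensor (not the real embeddings file):

```python
import os
import tempfile
import torch

path = os.path.join(tempfile.gettempdir(), "corpus_embeddings_demo.pt")
torch.save(torch.randn(4, 8), path)

# map_location sends CUDA-saved tensors to whatever device is actually available
device = "cuda" if torch.cuda.is_available() else "cpu"
corpus_embeddings = torch.load(path, map_location=device)
print(corpus_embeddings.shape)
```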
# Evaluation
---
```
import re
from nltk import tokenize
from termcolor import colored
def paperTitle(answer, SentenceMap):
    record = SentenceMap[answer]
    print("Paper title:", record[1])
    print("Paper id:  ", record[0])

def evaluation(query_list, top_k, resultsDf):
    query_answers = []
    scores = []
    for query in query_list:
        # Encode the query using the bi-encoder and find potentially relevant corpus entries
        start_time = time.time()
        question_embedding = encoder.encode(query, convert_to_tensor=True, device='cuda')
        hits = util.semantic_search(question_embedding, corpus_embeddings, top_k=top_k)
        hits = hits[0]  # Get the hits for the first query
        # Now score all retrieved corpus entries with the cross-encoder
        cross_inp = [[query, corpus[hit['corpus_id']]] for hit in hits]
        cross_scores = cross_encoder.predict(cross_inp)
        # Sort results by the cross-encoder scores
        for idx in range(len(cross_scores)):
            hits[idx]['cross-score'] = cross_scores[idx]
        hits = sorted(hits, key=lambda x: x['cross-score'], reverse=True)
        end_time = time.time()
        # Output the top hits
        print("\n\n======================\n\n")
        print("Query:", colored(query, 'green'))
        print("Results (after {:.3f} seconds):".format(end_time - start_time))
        rank = 0
        for hit in hits[0:top_k]:
            print("\n-> ", rank + 1)
            answer = ' '.join([re.sub(r"^\[.*\]", "", x) for x in corpus[hit['corpus_id']].split()])
            if len(tokenize.word_tokenize(answer)) > 1:
                print("Score: {:.4f}".format(hit['cross-score']))
                paperTitle(corpus[hit['corpus_id']], CORD19Dictionary)
                print("Answer size: ", len(tokenize.word_tokenize(answer)))
                print("Answer: ")
                if rank == 0:
                    query_answers.append(answer)
                    scores.append(hit['cross-score'].item())
                rank += 1
                print(colored(answer, 'yellow'))
    resultsDf['Model_answer'] = query_answers
    resultsDf['Cosine_similarity'] = scores
top_k = 3
evaluation(query_list,top_k,queriesDf)
top_k = 3
evaluation(myquery_list,top_k,myQueriesDf)
```
# Overall results
## 6000 papers with no summarization
---
### Time needed for creating the embeddings:
- CPU times:
- user 13min 10s
- sys: 5min 40s
- total: 18min 51s
- Wall time: 18min 26s
### Remarks
The best results among the notebooks so far: almost 5 of the 7 reference questions are answered, and 7 of my 17 questions. I expected even better results, since cross-encoders are known to substantially improve the performance of Sentence-BERT.
__Top-k__
The answers ranked second and third often turn out better than the first one, so the top-2 and top-3 hits contain many good answers. The results are good overall, and with some tuning they would be close to what is wanted.
### Results
```
with pd.option_context('display.max_colwidth', None):
    display(queriesDf)
with pd.option_context('display.max_colwidth', None):
    display(myQueriesDf)
```
## 9000 papers with no summarization
---
The session crashed after running out of RAM.
## 6000 papers with paraphrase-distilroberta-base-v1 model and summarization
---
### Time needed for creating the embeddings:
- CPU times:
- user: 1min 18s
- sys: 22.8 s
- total: 1min 37s
- Wall time: 1min 37s
### Remarks
The results are not good. Judging from them, the BERT summarizer parameters were not appropriate and I should experiment with them: the summarization should not have been so strict, and I may have over-summarized the papers.
__Top-k__
Not good.
### Results
```
with pd.option_context('display.max_colwidth', None):
    display(queriesDf)
with pd.option_context('display.max_colwidth', None):
    display(myQueriesDf)
```
## 9000 papers with summarization
---
### Time needed for creating the embeddings:
- CPU times:
- user: 1min 48s
- sys: 32.6 s
- total: 2min 20s
- Wall time: 2min 16s
### Remarks
Again the results are not good, and this is due to my summarization tuning.
** Again, I did not have time to re-run and reprocess.
### Results
```
with pd.option_context('display.max_colwidth', None):
    display(queriesDf)
with pd.option_context('display.max_colwidth', None):
    display(myQueriesDf)
```
# References
[1] https://colab.research.google.com/drive/1l6stpYdRMmeDBK_vw0L5NitdiAuhdsAr?usp=sharing#scrollTo=D_hDi8KzNgMM
[2] https://www.sbert.net/docs/package_reference/cross_encoder.html
# Setup
Before attending the workshop you should set up a scientific Python computing environment using the [Anaconda Python distribution by Continuum Analytics](https://www.continuum.io/downloads). This page describes how. If this doesn't work, let [me](mailto:neal.caren@gmail.com) know and I will set you up with a virtual environment you can use on my server.
## Why Python?
As is true of human languages, there are hundreds of computer programming languages. While each has its own merits, the major languages for scientific computing are C, C++, R, MATLAB, Python, Java, and Fortran. MATLAB and Python are similar in syntax and typically read as if they were written in plain English. This makes both languages useful for teaching, but they are also very powerful languages that are actively used in real-life research. MATLAB is proprietary while Python is open source. A benefit of being open source is that anyone can write and release Python packages. For science, there are many wonderful community-driven packages such as NumPy, SciPy, scikit-image, and Pandas, just to name a few.
## Installing Python 3.7 with Anaconda
There are several scientific Python distributions available for MacOS, Windows, and Linux. The most popular, [Anaconda](https://www.continuum.io/why-anaconda), is specifically designed for scientific computing and data science work. For this course, we will use the Anaconda Python 3.7 distribution. To install the correct version, follow the instructions below.
1. Navigate to the [Anaconda download page](https://www.anaconda.com/distribution/) and download the Python 3.7 graphical installer.
2. Launch the installer and follow the onscreen instructions.
3. Congratulations! You now have the beginnings of a scientific Python distribution.
## What is a Jupyter notebook?
[Jupyter](http://jupyter.org/) is a browser-based system to write code, math, and text in the same document so you can clearly explain the concepts and practices used in your program. Jupyter is not only for Python, but can be used with R, Julia, MATLAB, and about 35 other languages as of this writing. All files are saved as a [JSON](http://www.json.org/) formatted text file with the extension `.ipynb`.
## How to launch the notebook
A Jupyter Notebook server can either be launched from the command line or from a GUI program installed along with anaconda called Navigator.
### Launching from the Anaconda Navigator
Installing Python 3 from Anaconda should also install a GUI application called [Anaconda Navigator](https://docs.continuum.io/anaconda/navigator). From here, you can launch several applications such as a QTconsole, the Spyder IDE, and a data visualization software called GlueViz. We are interested in the Jupyter Notebook application tab, which is shown boxed in red below:

By clicking on 'Launch', you will instantiate a Jupyter notebook server which should open in a new window.
### Launching from the terminal
To launch a notebook server from the command line, simply open a terminal emulator (Terminal.app on macOS or Git Bash on Windows) and navigate to the directory in which you would like to start the server by typing `cd path/to/folder`
Once you are in the correct folder, you can launch a notebook server by typing:
```
jupyter notebook
```
This will open a screen in your default internet browser with a server containing your notebooks. Its address will be [`http://localhost:8888`](http://localhost:8888/) and it is only available on your computer. **Note that once you start a server, you must keep the terminal window open.** This is where the 'guts' of the Python kernel live.
## Interacting with the notebook
If everything launched correctly, you should be able to see a screen which looks something like this:

To start a new python window, click on the right-hand side of the application window and select `New`. This will give you a bunch of options for new notebook kernels. In the above screen shot, there are two available Python kernels and one Matlab kernel. When starting a notebook, you should choose `Python 3` if it is available. If you have just a tab that says "Python", choose that one.
Once you start a new notebook, you will be brought to the following screen.

Welcome to the Jupyter notebook! There are many available buttons for you to click. However, the three most important components of the notebook are highlighted in colored boxes. In blue is the name of the notebook. By clicking this, you can rename the notebook. In red is the cell formatting assignment. By default, it is registered as code, but it can also be set to markdown as described later.
Finally, in purple, is the code cell. In this cell, you can type and execute Python code, as well as text that will be rendered in a nicely readable format.
## Writing code
All code you write in the notebook will be in code cells. You can write anything from single lines to entire loops to complete functions. As an example, we can write and evaluate a print statement in a code cell, as is shown below. To execute the code, we simply hit `shift + enter` while the cursor is in the code cell.
```
# This is a comment and is not read by Python
print('Hello! This is the print function. Python will print this line below')
```
The box with the gray background contains the python code while the output is in the box with the white background.
## Next Steps
Now that you have a Python environment up and running, proceed to the [Python] notebook to learn the basics of the language.
*Note: This is a modified version of Griffin Chure's [Setting Up Python For Scientific Computing for Bi 1 - Principles of Biology](http://bi1.caltech.edu/code/t0a_setting_up_python.html). This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/).*
### Distributed MCMC Retrieval
This notebook runs the MCMC retrievals on a local cluster using `ipyparallel`.
```
import ipyparallel as ipp
c = ipp.Client(profile='gold')
lview = c.load_balanced_view()
```
## Retrieval Setup
```
%%px
%env ARTS_BUILD_PATH=/home/simonpf/build/arts
%env ARTS_INCLUDE_PATH=/home/simonpf/src/atms_simulations/:/home/simonpf/src/arts/controlfiles
%env ARTS_DATA_PATH=/home/simonpf/src/arts_xml/
%env OMP_NUM_THREADS=1
import sys
sys.path.insert(1,"/home/simonpf/src/atms_simulations/")
sys.path.insert(1, "/home/simonpf/src/typhon/")
import os
os.chdir("/home/simonpf/src/atms_simulations")
# This is important otherwise engines just crash.
import matplotlib; matplotlib.use("agg")
from typhon.arts.workspace import Workspace
import atms
import numpy as np
ws = Workspace()
channels = [0,15,16,17,19]
atms.setup_atmosphere(ws)
atms.setup_sensor(ws, channels)
atms.checks(ws)
ws.yCalc()
```
## A Priori State
The simulations are based on the a priori assumption that the profiles of specific humidity, temperature, and ozone vary independently and that their relative variations can be described by log-Gaussian distributions.
```
%%px
qt_mean = np.load("data/qt_mean.npy").ravel()
qt_cov = np.load("data/qt_cov.npy")
qt_cov_inv = np.linalg.inv(qt_cov)
```
## Jumping Functions
The jumping functions are used inside the MCMC iteration and propose new atmospheric states for specific humidity, temperature and ozone, respectively. The proposed states are generated from random walks that use scaled versions of the a priori covariances.
```
%%px
import numpy as np
from typhon.retrieval.mcmc import RandomWalk
c = (1.0 / np.sqrt(qt_mean.size)) ** 2
rw_qt = RandomWalk(c * qt_cov)
def j_qt(ws, x, revert=False):
    if revert:
        x_new = x
    else:
        x_new = rw_qt.step(x)
    q_new = np.exp(x_new[14::-1]).reshape((15,))
    q_new = atms.mmr2vmr(ws, q_new, "h2o")
    ws.vmr_field.value[0, :, 0, 0] = q_new
    ws.t_field.value[:, 0, 0] = x_new[:14:-1]
    ws.sst = np.maximum(ws.t_field.value[0, 0, 0], 270.0)
    return x_new
```
## A Priori Distributions
These functions return the likelihood (up to an additive constant) of a given state for each of the variables. Note that the states of specific humidity, temperature and ozone are given by the logs of the relative variations.
```
%%px
def p_a_qt(x):
    dx = x - qt_mean
    l = -0.5 * np.dot(dx, np.dot(qt_cov_inv, dx))
    return l
```
## Measurement Uncertainty
We assume that the uncertainty of the measured brightness temperatures can be described by independent Gaussian errors with a standard deviation of $1\,\mathrm{K}$.
```
%%px
covmat_y = np.diag(np.ones(len(channels)))
covmat_y_inv = np.linalg.inv(covmat_y)
def p_y(y, yf):
    dy = y - yf
    l = -0.5 * np.dot(dy, np.dot(covmat_y_inv, dy))
    return l
```
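As a quick sanity check of `p_y`: with the identity covariance above, a 1 K error in a single channel should contribute exactly −0.5 to the log-likelihood (up to the dropped normalization constant). The variables are redefined locally so the sketch runs standalone:

```python
import numpy as np

covmat_y_inv = np.eye(5)  # identity: 1 K standard deviation per channel

def p_y(y, yf):
    dy = y - yf
    return -0.5 * np.dot(dy, np.dot(covmat_y_inv, dy))

y = np.zeros(5)
yf = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # 1 K error in one channel
print(p_y(y, yf))  # -0.5
```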
# Running MCMC
### The Simulated Measurement
For the simulated measurement, we sample a state from the a priori distribution of atmospheric states and simulate the measured brightness temperatures.
A simple heuristic is applied to ensure that reasonable acceptance rates are obtained during the MCMC simulations. After the initial warm-up phase, several short runs of 200 steps are performed; if the acceptance rate during a run is too low or too high, the covariance matrix of the corresponding random walk is scaled by a factor of 0.7 or 1.5, respectively.
```
%%px
def adapt_covariances(a):
    if (np.sum(a[:, 0]) / a.shape[0]) < 0.2:
        rw_qt.covmat *= 0.7
    if (np.sum(a[:, 0]) / a.shape[0]) > 0.4:
        rw_qt.covmat *= 1.5
%%px
from typhon.retrieval.mcmc import MCMC
from atms import vmr2cd
dist = atms.StateDistribution()
n_burn_in = 500
n_prod = 5000
drop = 10
def run_retrieval(i):
    # Generate the true state and simulate the measurement.
    dist.sample(ws)
    ws.yCalc()
    y_true = np.copy(ws.y)
    q_true = np.copy(ws.vmr_field.value[0, :, 0, 0].ravel())
    t_true = np.copy(ws.t_field.value[:, 0, 0].ravel())
    cwv_true = atms.vmr2cd(ws)
    dist.a_priori(ws)
    qt = np.zeros(qt_mean.size)
    # Add noise to the simulated measurement.
    y_true += np.random.randn(*y_true.shape)
    mcmc = MCMC([[qt, p_a_qt, j_qt]], y_true, p_y, [vmr2cd])

    def run_chain(n_adapt_cycles=5):
        # Reset the random-walk covariance, draw a fresh starting state,
        # then warm up, adapt the step size in short runs, burn in, and produce.
        rw_qt.covmat = np.copy(c * qt_cov)
        qt_0 = dist.sample_factors()
        _, _, _, a = mcmc.warm_up(ws, [qt_0], n_burn_in)
        for _ in range(n_adapt_cycles):
            _, _, _, a = mcmc.run(ws, 200)
            adapt_covariances(a)
        mcmc.run(ws, n_burn_in)
        hist, s, _, _ = mcmc.run(ws, n_prod)
        return hist, s

    # Run eight independent chains (previously eight near-identical code copies).
    chains = [run_chain() for _ in range(8)]
    profiles_q = np.stack([h[0][::drop, :15] for h, _ in chains])
    profiles_t = np.stack([h[0][::drop, 15:] for h, _ in chains])
    cwv = np.stack([s[::drop] for _, s in chains], axis=0)
    return y_true, q_true, cwv_true, profiles_q, profiles_t, cwv
```
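The acceptance-rate heuristic can be illustrated on a toy 1-D random-walk Metropolis sampler targeting a standard normal. This is only a sketch, not part of the retrieval code; the 0.2/0.4 thresholds mirror `adapt_covariances`, and the square-root factors scale the step width rather than a covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def acceptance_rate(n_steps, step):
    """Run 1-D random-walk Metropolis on N(0, 1) and return the acceptance rate."""
    x, accepted = 0.0, 0
    for _ in range(n_steps):
        prop = x + rng.normal(scale=step)
        # log acceptance ratio for a standard normal target
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x, accepted = prop, accepted + 1
    return accepted / n_steps

step = 10.0  # deliberately far too large, so acceptance starts out low
for _ in range(6):
    rate = acceptance_rate(200, step)
    if rate < 0.2:
        step *= np.sqrt(0.7)  # shrink the proposal, as adapt_covariances does
    elif rate > 0.4:
        step *= np.sqrt(1.5)
print(step)
```

After a few adaptation cycles the step width has shrunk toward a range with a workable acceptance rate.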
## Running the Retrievals
```
import numpy as np
ids = np.arange(3500)
rs = lview.map_async(run_retrieval, ids)
from atms import create_output_file
root_group, v_y_true, v_cwv_true, v_cwv ,v_h2o = create_output_file("data/mcmc_retrievals_5.nc", 5, 15)
for y_true, h2o_true, cwv_true, profiles_q, profiles_t, cwv in rs:
    if y_true is not None:
        t = v_cwv_true.shape[0]
        print("saving simulation: " + str(t))
        steps = cwv.size
        v_y_true[t, :] = y_true
        ws.vmr_field.value[0, :, :, :] = h2o_true.reshape(-1, 1, 1)
        v_cwv_true[t] = cwv_true
        v_cwv[t, :steps] = cwv[:]
        v_h2o[t, :steps, :] = profiles_q.ravel().reshape(-1, 15)
    else:
        print("failure in a simulation")
import matplotlib_settings
import matplotlib.pyplot as plt
root_group.close()
root_group, v_y_true, v_cwv_true, v_cwv ,v_h2o = create_output_file("data/mcmc_retrievals_5.nc", 5, 27)
for i in range(1000, 1100):
plt.plot(v_cwv[i, :])
plt.gca().axhline(v_cwv_true[i], c = 'k', ls = '--')
v_h2o[118, 250:500, :].shape
p = np.load("data/p_grid.npy")
plt.plot(np.mean(profiles_t[2, 0:200], axis=0), p)
plt.plot(np.mean(profiles_t[2, 200:400], axis=0), p)
plt.title("Temperature Profiles")
plt.xlabel("T [K]")
plt.ylabel("P [hPa]")
plt.gca().invert_yaxis()
profiles_t[1, :, :].shape
plt.plot(np.mean(np.exp(profiles_q[1, 0:200]) * 18.0 / 28.9, axis=0), p)
plt.plot(np.mean(np.exp(profiles_q[1, 200:400]) * 18.0 / 28.9, axis=0), p)
plt.gca().invert_yaxis()
plt.title("Water Vapor Profiles")
plt.xlabel("$H_2O$ [mol / mol]")
plt.ylabel("P [hPa]")
```
## Validated boundaries to government unit incident density comparison
The backing theory for this notebook is proving that we will be able to use the highest-density (fire count vs government unit area) government unit to determine a department's boundary for departments that do not have boundaries.
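The selection rule can be sketched with a toy pandas example before running it in SQL. The numbers here are hypothetical; the government unit with the highest incident density (fire count divided by unit area) is the one we would pick as the department's boundary:

```python
import pandas as pd

# Hypothetical government units overlapping one department's incidents
units = pd.DataFrame({
    "name": ["City A", "County B", "Township C"],
    "fire_count": [420, 600, 35],
    "area": [0.012, 0.500, 0.020],  # degrees^2, matching the ST_Area-based query
})
units["density"] = units["fire_count"] / units["area"]
best = units.loc[units["density"].idxmax()]
print(best["name"])  # City A: small area with many incidents wins over the big county
```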
```
import psycopg2
from psycopg2.extras import RealDictCursor
import pandas as pd
# import geopandas as gpd
# from shapely import wkb
# from shapely.geometry import mapping as to_geojson
# import folium
pd.options.display.max_columns = None
pd.options.display.max_rows = None
#pd.set_option('display.float_format', lambda x: '%.3f' % x)
%matplotlib inline
conn = psycopg2.connect('service=firecares')
nfirs = psycopg2.connect('service=nfirs')
```
### DB migration/setup
```
# Create materialized view of all usgs govt units in FireCARES
q = """
create materialized view if not exists usgs_governmentunits as
(
select id, county_name as name, 'countyorequivalent' as source, geom from usgs_countyorequivalent where geom is not null
union
select id, place_name as name, 'incorporatedplace' as source, geom from usgs_incorporatedplace where geom is not null
union
select id, minorcivildivision_name as name, 'minorcivildivision' as source, geom from usgs_minorcivildivision where geom is not null
union
select id, name, 'nativeamericanarea' as source, geom from usgs_nativeamericanarea where geom is not null
union
select id, name, 'reserve' as source, geom from usgs_reserve where geom is not null
union
select id, state_name as name, 'stateorterritoryhigh' as source, geom from usgs_stateorterritoryhigh where geom is not null
union
select id, place_name as name, 'unincorporatedplace' as source, geom from usgs_unincorporatedplace where geom is not null
);
create unique index on usgs_governmentunits (id, source);
create index on usgs_governmentunits using gist (geom);
"""
with conn.cursor() as c:
    c.execute(q)
conn.commit()
# Link remote firecares usgs_governmentunits view to nfirs-local usgs_government units
q = """
create foreign table usgs_governmentunits (id integer, name character varying(120), source text, geom geometry)
server firecares
options (table_name 'usgs_governmentunits');
"""
with nfirs.cursor() as c:
    c.execute(q)
nfirs.commit()
# Old nfirs.firestation_firedepartment foreign table columns needed to be synced
q = """
alter foreign TABLE firestation_firedepartment add column archived boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column domain_name character varying(255);
alter foreign TABLE firestation_firedepartment add column owned_tracts_geom public.geometry(MultiPolygon,4326);
alter foreign TABLE firestation_firedepartment add column display_metrics boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column boundary_verified boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column cfai_accredited boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column ems_transport boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column staffing_verified boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column stations_verified boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column census_override boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column additional_fdids character varying(255);
"""
with nfirs.cursor() as c:
    c.execute(q)
nfirs.commit()
q = """
create foreign table if not exists firecares_core_address (id integer NOT NULL,
address_line1 character varying(100) NOT NULL,
address_line2 character varying(100),
city character varying(50) NOT NULL,
state_province character varying(40) NOT NULL,
postal_code character varying(10) NOT NULL,
geom public.geometry(Point,4326),
geocode_results text,
country_id character varying(2) NOT NULL)
server firecares
options (table_name 'firecares_core_address');
"""
with nfirs.cursor() as c:
    c.execute(q)
nfirs.commit()
```
### Processing
```
q = """
select id, fdid, state, name
from firestation_firedepartment
where boundary_verified = true;
"""
with nfirs.cursor(cursor_factory=RealDictCursor) as c:
    c.execute(q)
    fds = c.fetchall()
q = """
with fires as (select * from joint_buildingfires
inner join joint_incidentaddress
using (fdid, inc_no, inc_date, state, exp_no)
where state = %(state)s and fdid = %(fdid)s
),
govt_units as (
select gu.name, gu.source, gu.id, gu.geom, fd.id as fc_id, fd.geom as fd_geom, ST_Distance(addr.geom, ST_Centroid(gu.geom)) as distance_to_headquarters
from firestation_firedepartment fd
inner join firecares_core_address addr
on addr.id = fd.headquarters_address_id
join usgs_governmentunits gu
on ST_Intersects(ST_Buffer(addr.geom, 0.05), gu.geom)
where
fd.fdid = %(fdid)s and fd.state = %(state)s and source != 'stateorterritoryhigh'
)
select gu.fc_id, count(fires) / ST_Area(gu.geom) as density, count(fires), ST_Area(ST_SymDifference(gu.fd_geom, gu.geom)) / ST_Area(gu.fd_geom) as percent_difference_to_verified_boundary, ST_Area(gu.geom), gu.distance_to_headquarters, gu.name, gu.id, gu.source from fires
inner join govt_units gu
on ST_Intersects(fires.geom, gu.geom)
group by gu.name, gu.id, gu.geom, gu.source, gu.distance_to_headquarters, gu.fd_geom, gu.fc_id
order by ST_Area(gu.geom) / count(fires) asc;
"""
for fd in fds:
    with nfirs.cursor(cursor_factory=RealDictCursor) as c:
        print('Analyzing: {} (id: {} fdid: {} {})'.format(fd['name'], fd['id'], fd['fdid'], fd['state']))
        c.execute(q, dict(fdid=fd['fdid'], state=fd['state']))
        items = c.fetchall()
    df = pd.DataFrame(items)
    df.to_csv('./boundary-analysis-{}.csv'.format(fd['id']))
```
### Results
```
from glob import glob
# Concatenate all per-department CSVs (pd.concat replaces the deprecated DataFrame.append)
df = pd.concat([pd.read_csv(f) for f in glob("boundary-analysis*.csv")], ignore_index=True)
df.rename(columns={'Unnamed: 0': 'rank'}, inplace=True)
selected_government_units = df[df['rank'] == 0].set_index('fc_id')
total_validated_department_count = len(selected_government_units)
perfect_fits = len(selected_government_units[selected_government_units['percent_difference_to_verified_boundary'] == 0])
print('Perfect fits: {}/{} ({:.2%})'.format(perfect_fits, total_validated_department_count, float(perfect_fits) / total_validated_department_count))
print('Machine-selected government unit area difference mean: {:.2%}'.format(df[df['rank'] == 0].percent_difference_to_verified_boundary.mean()))
selected_government_units['percent_difference_to_verified_boundary'].hist(bins=50)
selected_government_units
df.set_index('fc_id')
df.to_csv('./validated-boundary-vs-government-unit-density.csv')
pd.read_csv('./validated-boundary-vs-government-unit-density.csv')
```
# Search for best parameters for Random Forest classifier
## Read data
```
# Pandas is used for data manipulation
import pandas as pd
time='80_100'
# Read in data as a dataframe
features = pd.read_csv('../features/features_training1/features_{}.csv'.format(time))
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# Replace NaN values with zeros
features_num = features.to_numpy()
features[:] = np.nan_to_num(features_num)
np.where(pd.isnull(features_num))
features.describe(include='all')
# Extract features and labels and print feature names
labels = features['quality']
features = features.drop('quality', axis = 1)
labels[1:6]
names=features.columns
print(names)
y = labels.map({'native':1,"non-native":0})
x = features.values
# Convert to numpy arrays
features = np.array(x)
labels = np.array(y)
```
## Specify training and test sets
```
# Training and Testing Sets
from sklearn.model_selection import train_test_split
train_features, test_features, train_labels, test_labels = train_test_split(features, labels,
test_size = 0.25, random_state = 42)
```
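Since the native/non-native labels may be imbalanced, passing `stratify=y` keeps the class ratio similar in both splits. A sketch on toy data, not the notebook's features:

```python
import numpy as np
from collections import Counter
from sklearn.model_selection import train_test_split

X = np.arange(40).reshape(20, 2)
y = np.array([1] * 15 + [0] * 5)  # imbalanced toy labels
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)
print(Counter(y_te))  # the 3:1 class ratio is approximately preserved
```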
## Set a base model with RF classifier
```
from sklearn.ensemble import RandomForestClassifier
base_model = RandomForestClassifier(n_estimators = 10,random_state = 42)
from pprint import pprint
# Look at parameters used by our current forest
print('Parameters currently in use:\n')
pprint(base_model.get_params())
from sklearn import metrics
base_model.fit(train_features,train_labels);
pred_labels=base_model.predict(test_features)
base_accuracy=metrics.accuracy_score(test_labels, pred_labels)
print("Base model Accuracy:",metrics.accuracy_score(test_labels, pred_labels))
```
## Random Search with Cross Validation
```
from sklearn.model_selection import RandomizedSearchCV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
pprint(random_grid)
# Use the random grid to search for best hyperparameters
# First create the base model to tune
rf = RandomForestClassifier(random_state = 42)
# Random search of parameters, using 3 fold cross validation,
# search across 100 different combinations, and use all available cores
rf_random = RandomizedSearchCV(estimator=rf, param_distributions=random_grid,
n_iter = 100, scoring='accuracy',
cv = 3, verbose=2, random_state=42, n_jobs=-1,
return_train_score=True)
# Fit the random search model
rf_random.fit(train_features, train_labels);
rf_random.best_params_
```
### Evaluate the Best Random Search Model
```
best_random = rf_random.best_estimator_
best_random.fit(train_features,train_labels);
pred_labels=best_random.predict(test_features)
random_accuracy = metrics.accuracy_score(test_labels, pred_labels)
print("Best random model Accuracy:", random_accuracy)
print('Improvement of {:0.2f}%.'.format( 100 * (random_accuracy - base_accuracy) / base_accuracy))
```
## Grid Search
We can now perform grid search building on the result from the random search.
We will test a range of hyperparameters around the best values returned by random search.
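Rather than hard-coding the grid, it can also be built programmatically around the values random search returned. The helper below is an illustrative sketch, not part of the original notebook, and the example `best` values are made up:

```python
# Hedged sketch: derive a narrow param_grid bracketing each numeric
# best value from random search; categorical/boolean values are kept as-is.
def grid_around(best_params, spreads):
    """Return a param_grid with a few values bracketing each numeric best value."""
    grid = {}
    for name, best in best_params.items():
        if isinstance(best, bool) or not isinstance(best, (int, float)):
            grid[name] = [best]  # categorical, boolean, or None: single option
        else:
            deltas = spreads.get(name, [0])
            grid[name] = sorted({max(1, int(best) + d) for d in deltas})
    return grid

# Example with made-up random-search results:
best = {'n_estimators': 1000, 'max_depth': 50, 'min_samples_split': 2,
        'max_features': 'sqrt', 'bootstrap': True}
param_grid = grid_around(best, {'n_estimators': [-300, 0, 300],
                                'max_depth': [-20, 0, 20],
                                'min_samples_split': [0, 3, 6]})
print(param_grid)
```

The resulting dictionary can be passed straight to `GridSearchCV` in place of the hand-written grid below.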
```
from sklearn.model_selection import GridSearchCV
# Create the parameter grid based on the results of random search
param_grid = {
'bootstrap': [True],
'max_depth': [5, 10, 50, 110],
'min_samples_leaf': [1, 3, 5],
'min_samples_split': [2, 8, 12],
'n_estimators': [100, 300, 1000, 1500],
'max_features' : ['auto', 'sqrt'],
'oob_score' : [ True],
'warm_start' : [False, True]
}
# Create a base model
rf = RandomForestClassifier(random_state = 42)
# Instantiate the grid search model
grid_search = GridSearchCV(estimator = rf, param_grid = param_grid,
cv = 5, n_jobs = -1, verbose = 2, return_train_score=True)
# Fit the grid search to the data
grid_search.fit(train_features, train_labels);
grid_search.best_params_
```
### Test RF classifier with the best parameters
```
rf_param = RandomForestClassifier(bootstrap= True, max_depth=50, max_features='auto', min_samples_leaf=1, min_samples_split=2, n_estimators = 1000,oob_score= True,
random_state = 42)
rf_param.fit(train_features, train_labels);
pred_labels_best = rf_param.predict(test_features)
best_accuracy = metrics.accuracy_score(test_labels, pred_labels_best)
print("Best Grid model Accuracy:", best_accuracy)
```
#### Evaluate the Best Model from Grid Search
```
pred_labels_best = rf_param.predict(test_features)
best_accuracy = metrics.accuracy_score(test_labels, pred_labels_best)
print("Best parameter model Accuracy:", best_accuracy)
print(rf_param.oob_score_)
best_grid = grid_search.best_estimator_
pred_labels_grid = best_grid.predict(test_features)
grid_accuracy = metrics.accuracy_score(test_labels, pred_labels_grid)
print("Best Grid model Accuracy:", grid_accuracy)
print(best_grid.oob_score_)
```
# Random Forest Demo Program
This is a demo program for random forests.
The internals of random forests are explained here:
https://yuyumoyuyu.com/2021/02/21/ensembledecisiontree/
```
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import matplotlib.figure as figure
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import time
%matplotlib inline
```
# Classification
### Comparison with a decision tree
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier as RFC
from sklearn.datasets import make_moons
# Use one of scikit-learn's toy datasets
X, y = make_moons(n_samples=500, noise=0.3, random_state=6)
plt.figure()
cmap = ListedColormap(('red', 'blue'))
plt.scatter(X[:, 0], X[:, 1], marker='o', c=y, cmap=cmap, s=8)
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()
# The input data are 2-D coordinate points
print("X =\n", X[:10])
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
random_state=42)
# Train a decision tree classifier
tree = DecisionTreeClassifier(max_depth=5, random_state=0)
tree.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(tree.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(tree.score(X_test, y_test)))
# Train a random forest classifier
rfc = RFC(max_depth=5, n_estimators=10, random_state=0)
rfc.fit(X_train, y_train)
print("Accuracy on training set: {:.3f}".format(rfc.score(X_train, y_train)))
print("Accuracy on test set: {:.3f}".format(rfc.score(X_test, y_test)))
# Visualize the learned decision boundaries
x1 = np.linspace(-2.0, 3.0, 100)
x2 = np.linspace(-1.5, 2.0, 100)
x1_mesh, x2_mesh = np.meshgrid(x1, x2)
z1 = tree.predict(np.array([x1_mesh.ravel(), x2_mesh.ravel()]).T)
z1 = z1.reshape(x1_mesh.shape)
z2 = rfc.predict(np.array([x1_mesh.ravel(), x2_mesh.ravel()]).T)
z2 = z2.reshape(x1_mesh.shape)
z_list = [z1, z2]
titles = ['Decision Tree', 'Random Forest']
fig, axes = plt.subplots(1, 2, figsize=(8,4))
for ax, z, title in zip(axes, z_list, titles):
ax.contourf(x1_mesh, x2_mesh, z, cmap=cmap, alpha=0.1, linestyles=None)
ax.scatter(X[:, 0], X[:, 1], marker='o', c=y, cmap=cmap, s=5)
ax.set_xlabel("x1")
ax.set_ylabel("x2")
ax.set_title(title)
fig.tight_layout()
```
Ensemble learning makes higher-accuracy classification possible.
### Hyperparameter optimization
```
from sklearn.datasets import load_iris
# Use the Iris dataset
iris = load_iris()
# Split the data into training and test sets
X_train_i, X_test_i, y_train_i, y_test_i = train_test_split(iris.data,
iris.target,
stratify=iris.target,
random_state=0)
# Hyperparameter optimization via grid search
t1 = time.time()
print("RFC\n")
params = {
'max_depth': [2,3,5,10], # maximum tree depth
'max_features': [1,3,'auto'], # number of features used to build each tree
'min_samples_split': [2,3,5], # minimum number of samples required to split a node
'min_samples_leaf': [1,3,5], # minimum number of samples required at a leaf
'n_estimators': [10,30,50,100,300] # number of trees
}
print("parameters: \n{}\n".format(params))
grid_search = GridSearchCV(RFC(), params, cv=5, return_train_score=True)
grid_search.fit(X_train_i, y_train_i)
print("best parameters: {}".format(grid_search.best_params_))
print("best cross-validation score: {:.3f}".format(grid_search.best_score_))
print("\nelapsed time: {:.3f} sec".format(time.time()-t1))
rfc = RFC(**grid_search.best_params_).fit(X_train_i, y_train_i)
print("==Training set==")
print("Score: {:.3f}".format(rfc.score(X_train_i, y_train_i)))
print("Confusion matrix:\n", confusion_matrix(y_train_i,rfc.predict(X_train_i),labels=sorted(set(y_train_i))))
print("\n==Test set==")
print("Score: {:.3f}".format(rfc.score(X_test_i, y_test_i)))
print("Confusion matrix:\n", confusion_matrix(y_test_i,rfc.predict(X_test_i),labels=sorted(set(y_test_i))))
```
# Regression Trees
```
from sklearn.ensemble import RandomForestRegressor as RFR
from sklearn.datasets import load_boston
# Use the Boston housing dataset
boston = load_boston()
# Split the data into training and test sets
X_train_b, X_test_b, y_train_b, y_test_b = train_test_split(boston.data,
boston.target,
random_state=0)
# Hyperparameter optimization via grid search
t1 = time.time()
print("RFR\n")
params = {
'max_depth': [10,20,30], # treeの深さの最大値
'max_features': [3,5,10,'auto'], # treeの構築に使用する特徴量の数
'min_samples_split': [2,3,5], # ノード分割に必要な最小サンプルサイズ
'min_samples_leaf': [1,3,5], # 葉を構成するのに必要な最低サンプル数
'n_estimators': [10,30,50,100] # treeの数
}
print("parameters: \n{}\n".format(params))
grid_search = GridSearchCV(RFR(), params, cv=5, return_train_score=True)
grid_search.fit(X_train_b, y_train_b)
print("best parameters: {}".format(grid_search.best_params_))
print("best cross-validation score: {:.3f}".format(grid_search.best_score_))
print("\nelapsed time: {:.3f} sec".format(time.time()-t1))
# Visualization
rfr = RFR(**grid_search.best_params_).fit(X_train_b, y_train_b)
# Training data
y_train_est = rfr.predict(X_train_b)
plt.figure(figsize=figure.figaspect(1))
plt.scatter(y_train_b, y_train_est)
y_max = max( y_train_b.max(), y_train_est.max() )
y_min = min( y_train_b.min(), y_train_est.min() )
plt.plot([y_min - 0.05 * (y_max - y_min), y_max + 0.05 * (y_max - y_min)],
[y_min - 0.05 * (y_max - y_min), y_max + 0.05 * (y_max - y_min)], 'k-')
plt.ylim(y_min - 0.05 * (y_max - y_min), y_max + 0.05 * (y_max - y_min))
plt.xlim(y_min - 0.05 * (y_max - y_min), y_max + 0.05 * (y_max - y_min))
plt.xlabel('Actual y')
plt.ylabel('Estimated y')
plt.show()
print(" Training set score: {:.3f}".format(rfr.score(X_train_b, y_train_b)))
# Test data
y_test_pred = rfr.predict(X_test_b)
plt.figure(figsize=figure.figaspect(1))
plt.scatter(y_test_b, y_test_pred)
y_max = max( y_test_b.max(), y_test_pred.max() )
y_min = min( y_test_b.min(), y_test_pred.min() )
plt.plot([y_min - 0.05 * (y_max - y_min), y_max + 0.05 * (y_max - y_min)],
[y_min - 0.05 * (y_max - y_min), y_max + 0.05 * (y_max - y_min)], 'k-')
plt.ylim(y_min - 0.05 * (y_max - y_min), y_max + 0.05 * (y_max - y_min))
plt.xlim(y_min - 0.05 * (y_max - y_min), y_max + 0.05 * (y_max - y_min))
plt.xlabel('Actual y')
plt.ylabel('Predicted y')
plt.show()
print(" Test set score: {:.3f}".format(rfr.score(X_test_b, y_test_b)))
```
# Face Transformation from Human to Anime
This notebook demonstrates how to transform a human face to an anime face using StyleGAN2.
Contact: alienncheng@gmail.com
## Data Preparation
First of all, we need a dataset to train the model.
I downloaded the [danbooru2019-portrait](https://www.gwern.net/Crops#danbooru2019-portraits) dataset and manually picked 1000 images from it.
Then we run the dataset tool provided by StyleGAN2 to create TensorFlow records.
#### Note
1. As [Doron Adler's work](https://twitter.com/Buntworthy/status/1297976798236598274) shows, 317 images should be enough.
2. Beware of mode issues. The model never learns anything that does not appear (or seldom appears) in the dataset. Due to the extreme gender ratio of the anime portraits, the model will eventually learn to generate anime faces that exclude male faces.
```
!python dataset_tool.py create_from_images datasets/custom_512 datasets_img/
```
## Train a StyleGAN2 network to generate anime portraits
We need a network to generate anime portraits.
I suggest transfer learning from a well-trained network, which not only saves time but also roughly preserves the latent space of the source network.
Here I fine-tuned the model [stylegan2-ffhq-512-config-f](https://twitter.com/AydaoAI/status/1269689136019116032?s=20) with the dataset above.
```
!python run_training.py --result-dir=D://Anime/results/ --data-dir=datasets/ --dataset=custom_512 --config=config-f --total-kimg=200
```
## Align a human face image
An aligned face image is easier for the model to understand. Furthermore, we need the image cropped in a suitable manner.
```
!python align_images.py images/raw images/aligned
```
So far we have prepared the essential materials: a human face image, an FFHQ model, and an anime face model, both of which share the latent space (or at least a similar one).
What we need to do next is transform the human face into an anime face.
Here we have several choices for getting the work done:
1. Extract the latent code of the given human face image, then simply insert it to the anime model.
2. Blend the human model and the anime model to get a mixture of them. That would be closer to the original image but the models might conflict and generate a hybrid.
3. With both models generating paired images, learn a pix2pix model.
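Option 2 above can be sketched conceptually as per-layer weight interpolation keyed by resolution. This is an illustrative toy sketch with made-up weights, not StyleGAN2's actual blending code (`blend_models.py` does the real work later), and which resolutions come from which model is a design choice:

```python
# Conceptual sketch of model blending: take one model's weights below a
# swap resolution and the other model's weights above it.
import numpy as np

def blend_weights(model_a, model_b, swap_resolution):
    """model_a/model_b: dicts mapping (resolution, layer_name) -> weight array."""
    blended = {}
    for key, w_a in model_a.items():
        resolution, _ = key
        w_b = model_b[key]
        # below the swap point take model_a's weights, above it model_b's;
        # a fractional alpha would give a smooth mixture instead
        alpha = 0.0 if resolution < swap_resolution else 1.0
        blended[key] = (1 - alpha) * w_a + alpha * w_b
    return blended

# Toy example: two made-up "layers" at resolutions 16 and 64
human = {(16, 'conv'): np.zeros((2, 2)), (64, 'conv'): np.zeros((2, 2))}
anime = {(16, 'conv'): np.ones((2, 2)), (64, 'conv'): np.ones((2, 2))}
out = blend_weights(human, anime, swap_resolution=32)
print(out[(16, 'conv')][0, 0], out[(64, 'conv')][0, 0])  # 0.0 1.0
```

Keeping low-resolution layers from one model preserves its coarse face structure while the other model supplies the fine texture.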
## Extract the latent code of the face image
We need to obtain the latent code corresponding to the given human image so that we can make use of it, and [rolux has done this](https://github.com/rolux/stylegan2encoder)!
> Note that we should replace *training/dataset.py* by *training/dataset-toonify.py*, and replace *dataset_tool.py* by *dataset_tool-toonify.py* here. The **-s2.py* files are backups.
```
!python project_images.py --num-steps 500 images/aligned images/generated --network-pkl=pretrained_models/stylegan2-ffhq-512-config-f.pkl
```
## Generate Anime face with the latent code
We get the latent code of the given image now.
The simplest way to generate the anime face image is by inserting the code directly into the anime model.
> In this way, we assume the latent space of the human face model and the anime face model is the same.
>
> However, since we only fine-tuned the model, we cannot guarantee that this assumption holds.
>
> So the output image will differ a little from the original image.
```
!python generate_from_latent.py
```
## Blend the models
As [Justin Pinkney's work](https://colab.research.google.com/drive/1tputbmA9EaXs9HL9iO21g7xN7jz_Xrko?usp=sharing) shows, StyleGAN2 models can be blended easily.
We can get a blended model to generate an image between a human face and an anime face.
> Similarly we can transform a human face to a blended face.
>
> If you want to do so, you need to revise the file *generate_from_latent.py*.
>
> Replace *pretrained_models/ffhq-to-anime-512-config-f.pkl* by *pretrained_models/blended.pkl*.
```
!python blend_models.py --low_res_pkl=pretrained_models/stylegan2-ffhq-512-config-f.pkl --high_res_pkl=pretrained_models/ffhq-to-anime-512-config-f.pkl --resolution=32 --output_pkl=pretrained_models/blended.pkl
```
## Train a pix2pix model
WIP
## Reference
[Analyzing and Improving the Image Quality of StyleGAN](https://arxiv.org/abs/1912.04958)
[Toonify yourself by Justin Pinkney](https://www.justinpinkney.com/toonify-yourself/)
[stylegan2encoder by rolux](https://github.com/rolux/stylegan2encoder)
[Making Anime Faces With StyleGAN](https://www.gwern.net/Faces)
[malnyun_faces by bryandlee](https://github.com/bryandlee/malnyun_faces)
# Introduction

This notebook provides a demo for applying the methods used in the paper to new data. If you are new to Colaboratory, please check the following [link](https://medium.com/lean-in-women-in-tech-india/google-colab-the-beginners-guide-5ad3b417dfa) to learn how to run the code.
### Import the required libraries:
```
#import
from gensim.test.utils import datapath, get_tmpfile
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
import os
import joblib
import json
import pandas as pd
import numpy as np
### ipywidgets
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from sklearn import *
from sklearn.model_selection import *
from sklearn.metrics import *
import nltk
nltk.download('stopwords')
#copy the git clone address here
!git clone https://github.com/binny-mathew/Countering_Hate_Speech.git
#Best binary classifier was XGBclassifier
#Best multilabel classifier was XGBclassifier
best_binary_classifier = joblib.load('Countering_Hate_Speech/Best_model/XGB_classifier_task_1.joblib.pkl')
best_multiclass_classifier = joblib.load('Countering_Hate_Speech/Best_model/XGB_classifier_task_3.joblib.pkl')
best_black_classifier = joblib.load('Countering_Hate_Speech/Best_model/black_XGB_classifier_task_2.joblib.pkl')
best_jew_classifier = joblib.load('Countering_Hate_Speech/Best_model/jew_XGB_classifier_task_2.joblib.pkl')
best_lgbt_classifier = joblib.load('Countering_Hate_Speech/Best_model/lgbt_XGB_classifier_task_2.joblib.pkl')
```
### Word Embeddings Loaded Here
```
####downloading the word embeddings
!wget http://nlp.stanford.edu/data/glove.840B.300d.zip
!unzip glove.840B.300d.zip
####extracting the glove model file
#import zipfile
#archive = zipfile.ZipFile('glove.840B.300d.zip', 'r')
GLOVE_MODEL_FILE ='glove.840B.300d.txt'
import numpy as np
## change the embedding dimension according to the model
EMBEDDING_DIM = 300
###change the method type
### method two
def loadGloveModel2(glove_file):
tmp_file = get_tmpfile("test_crawl_200.txt")
# call glove2word2vec script
# default way (through CLI): python -m gensim.scripts.glove2word2vec --input <glove_file> --output <w2v_file>
glove2word2vec(glove_file, tmp_file)
model=KeyedVectors.load_word2vec_format(tmp_file)
return model
word2vec_model = loadGloveModel2(GLOVE_MODEL_FILE)
```
## Dataset is loaded here
```
#@title Select the type of file used
type_of_file = 'X.json' #@param ['X.json','X.csv']
```
### File type information
If the file type is **.json** then each element should contain the following fields:-
1. Community
2. CounterSpeech
3. Category
4. commentText
5. id
If the file type is **.csv** then it must have the following columns:-
1. Community
2. CounterSpeech
3. Category
4. commentText
5. id
Note:- If you don't have the Category or Community, add a dummy element or column
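For reference, here is a minimal sketch of a record in the expected **.json** format. Every value below is made up; only the field names follow the list above:

```python
import json

# A single made-up record; the real file holds a list of such records.
record = {
    "id": "abc123",
    "Community": "jews",
    "CounterSpeech": True,
    "Category": "1",   # put a dummy value here if you have no categories
    "commentText": "An example YouTube comment."
}
print(json.dumps([record], indent=2))
```

A **.csv** file would carry the same five fields as columns.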
```
####CHANGE THE PATH OF THE FILE
path_of_file='Countering_Hate_Speech/Data/Counterspeech_Dataset.json'
def convert_class_label(input_text):
if input_text:
return 'counter'
else:
return 'noncounter'
if(type_of_file=='X.json'):
with open(path_of_file) as fp:
train_data = json.load(fp)
pd_train = pd.DataFrame(columns=['id','class','community','category','text'])
for count, each in enumerate(train_data):
try:
pd_train.loc[count] = [each['id'], convert_class_label(each['CounterSpeech']), each['Community'],each['Category'],each['commentText']]
except:
pass
print('Training Data Loading Completed...')
elif(type_of_file=='X.csv'):
pd_train = pd.read_csv(path_of_file)
pd_train.head()
#@title How your dataframe should look like after extraction {display-mode: "form"}
# This code will be hidden when the notebook is loaded.
path_of_data_file='Countering_Hate_Speech/Data/Counterspeech_Dataset.json'
def convert_class_label(input_text):
if input_text:
return 'counter'
else:
return 'noncounter'
with open(path_of_data_file) as fp:
train_data = json.load(fp)
pd_train_sample = pd.DataFrame(columns=['id','class','community','category','text'])
for count, each in enumerate(train_data):
try:
pd_train_sample.loc[count] = [each['id'], convert_class_label(each['CounterSpeech']), each['Community'],each['Category'],each['commentText']]
except:
pass
print('Training Data Loading Completed...')
pd_train_sample.head()
pd_train['text'].replace('', np.nan, inplace=True)
pd_train.dropna(subset=['text'], inplace=True)
import sys
####features module has the necessary function for feature generation
from Countering_Hate_Speech.utils import features
from Countering_Hate_Speech.utils import multi_features
###tokenize module has the tokenization function
from Countering_Hate_Speech.utils.tokenize import *
###helper prints confusion matrix and stores results
from Countering_Hate_Speech.utils.helper import *
###common preprocessing imports
from Countering_Hate_Speech.utils.commen_preprocess import *
```
#### Next few sections cover three different classifiers namely -
* Binary classification
* Multilabel classification
* Cross community
You can run the cells corresponding to the result you want to analyse.
### **Binary Classification**
```
X,y= features.combine_tf_rem_google_rem_embed(pd_train,word2vec_model)
label_map = {
'counter': 0,
'noncounter': 1
}
temp=[]
for data in y:
temp.append(label_map[data])
y=np.array(temp)
y_pred=best_binary_classifier.predict(X)
report = classification_report(y, y_pred)
cm=confusion_matrix(y, y_pred)
plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix")
plt.savefig('Confusion_matrix.png')
df_result=pandas_classification_report(y,y_pred)
df_result.to_csv('Classification_Report.csv', sep=',')
print("You can download the files from the file directory now ")
```
### **Multilabel Classification**
```
import scipy
pd_train_multilabel =pd_train.copy()
pd_train_multilabel =pd_train_multilabel[pd_train_multilabel['category']!='Default']
list1=[[],[],[],[],[],[],[],[],[],[]]
for ele in pd_train_multilabel['category']:
temp=[]
if type(ele) is int:
ele =str(ele)
for i in range(0,len(ele),2):
temp.append(ord(ele[i])-ord('0'))
#print(temp)
if(len(temp)==0):
print(temp)
for i in range(0,10):
if i+1 in temp:
list1[i].append(1)
else:
list1[i].append(0)
y_train=np.array([np.array(xi) for xi in list1])
### final dataframe for the task created
pd_train_multilabel = pd.DataFrame({'text':list(pd_train_multilabel['text']),'cat0':list1[0],'cat1':list1[1],'cat2':list1[2],'cat3':list1[3],'cat4':list1[4],'cat5':list1[5],'cat6':list1[6],'cat7':list1[7],'cat8':list1[8],'cat9':list1[9]})
### drop the entries having blank entries
pd_train_multilabel['text'].replace('', np.nan, inplace=True)
pd_train_multilabel.dropna(subset=['text'], inplace=True)
X,y= multi_features.combine_tf_rem_google_rem_embed(pd_train_multilabel,word2vec_model)
path='multilabel_res'
os.makedirs(path, exist_ok=True)
X = np.array(X)
y = np.array(y)
y_pred = best_multiclass_classifier.predict(X)
if(scipy.sparse.issparse(y_pred)):
ham,acc,pre,rec,f1=calculate_score(y,y_pred.toarray())
accuracy_test=accuracy_score(y,y_pred.toarray())
else:
ham,acc,pre,rec,f1=calculate_score(y,y_pred)
accuracy_test=my_accuracy_score(y,y_pred)
for i in range(10):
df_result=pandas_classification_report(y[:,i],y_pred[:,i])
df_result.to_csv(path+'/report'+str(i)+'.csv')
f = open(path+'/final_report.txt', "w")
f.write("best_model\n")
f.write("The hard metric score is :- " + str(accuracy_test) + "\n")
f.write("The accuracy is :- " + str(acc) + "\n")
f.write("The precision is :- " + str(pre) + "\n")
f.write("The recall is :- " + str(rec) + "\n")
f.write("The f1_score is :- " + str(f1) + "\n")
f.write("The hamming loss is :- " + str(ham) + "\n")
f.close()
!zip -r mulitlabel_results.zip multilabel_res
```
### **Cross Community Classification**
```
pd_cross=pd_train.copy()
part_j=pd_cross.loc[pd_train['community']=='jews']
part_b=pd_cross.loc[pd_train['community']=='black']
part_l=pd_cross.loc[pd_train['community']=='lgbt']
X_black,y_black= features.combine_tf_rem_google_rem_embed(part_b,word2vec_model)
X_jew,y_jew= features.combine_tf_rem_google_rem_embed(part_j,word2vec_model)
X_lgbt,y_lgbt= features.combine_tf_rem_google_rem_embed(part_l,word2vec_model)
label_map = {
'counter': 0,
'noncounter': 1
}
temp=[]
for data in y_black:
temp.append(label_map[data])
y_black=np.array(temp)
y_pred_black=best_black_classifier.predict(X_black)
report = classification_report(y_black, y_pred_black)
cm=confusion_matrix(y_black, y_pred_black)
plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix")
plt.savefig('black_Confusion_matrix.png')
df_result=pandas_classification_report(y_black,y_pred_black)
df_result.to_csv('black_Classification_Report.csv', sep=',')
print("You can download the files from the file directory now ")
label_map = {
'counter': 0,
'noncounter': 1
}
temp=[]
for data in y_jew:
temp.append(label_map[data])
y_jew=np.array(temp)
y_pred_jew=best_jew_classifier.predict(X_jew)
report = classification_report(y_jew, y_pred_jew)
cm=confusion_matrix(y_jew, y_pred_jew)
plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix")
plt.savefig('jew_Confusion_matrix.png')
df_result=pandas_classification_report(y_jew,y_pred_jew)
df_result.to_csv('jew_Classification_Report.csv', sep=',')
print("You can download the files from the file directory now ")
label_map = {
'counter': 0,
'noncounter': 1
}
temp=[]
for data in y_lgbt:
temp.append(label_map[data])
y_lgbt=np.array(temp)
y_pred_lgbt=best_lgbt_classifier.predict(X_lgbt)
report = classification_report(y_lgbt, y_pred_lgbt)
cm=confusion_matrix(y_lgbt, y_pred_lgbt)
plt=plot_confusion_matrix(cm,normalize= True,target_names = ['counter','non_counter'],title = "Confusion Matrix")
plt.savefig('lgbt_Confusion_matrix.png')
df_result=pandas_classification_report(y_lgbt,y_pred_lgbt)
df_result.to_csv('lgbt_Classification_Report.csv', sep=',')
print("You can download the files from the file directory now ")
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import numpy as np
import cvxpy as cp
from scipy import stats
import torch
from torch import nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.utils.data import DataLoader
import torch.optim as optim
import os
import random
import sys
sys.path.insert(0, './mlopt-micp')
sys.path.insert(0, './mlopt-micp/cartpole')
import optimizer
from problem import Cartpole
from src.ae import Encoder, get_cartpole_encoder
def euclidean_dist(x,y):
# x: NxD
# y: MxD
n = x.size(0)
m = y.size(0)
d = x.size(1)
assert d == y.size(1)
x = x.unsqueeze(1).expand(n, m, d)
y = y.unsqueeze(0).expand(n, m, d)
return torch.pow(x-y, 2).sum(2)
pp = Cartpole()
print('Total number of classes: {}'.format(pp.n_strategies))
print('Length of feature vector: {}'.format(pp.n_features))
dim_in, dim_z = pp.n_features, 4  # pp.n_strategies
enc = get_cartpole_encoder(dim_in, dim_z).cuda()
enc(torch.from_numpy(pp.features[:2]).float().cuda())
# training parameters
TRAINING_ITERATIONS = int(5000)
BATCH_SIZE = int(10)
CHECKPOINT_AFTER = int(1250)
SAVEPOINT_AFTER = int(2500)
rand_idx = list(np.arange(0, pp.n_strategies-1))
indices = [rand_idx[ii * BATCH_SIZE:(ii + 1) * BATCH_SIZE] for ii in range((len(rand_idx) + BATCH_SIZE - 1) // BATCH_SIZE)]
random.shuffle(indices)
enc_dict = {}
str_dict = {}
for ii in range(len(pp.features)):
str_idx = int(pp.labels[ii,0])
str_dict[ii] = str_idx
if str_idx in enc_dict.keys():
enc_dict[str_idx] += [ii]
else:
enc_dict[str_idx] = [ii]
feats = torch.from_numpy(pp.features).float().cuda()
pp.training_batch_percentage = 1.
pp.construct_strategies()
strat_lookup = {}
for k, v in pp.strategy_dict.items():
strat_lookup[v[0]] = v[1:]
pp.training_batch_percentage = 0.9
pp.n_evals = 5
#nearest neighbors
train_set_length = int(pp.training_batch_percentage*pp.n_probs)
Y = feats[:train_set_length,:]
#classifier
def nn_classifier(x,Y,k=1):
dist_inds = torch.argsort(torch.cdist(Y,x[None,:]),dim=0).cpu().numpy()
strats_sorted = pp.labels[dist_inds,0].astype(int)
return int(stats.mode(strats_sorted[:k])[0])
#_, unique_inds = np.unique(strats_sorted,return_index=True)
#return np.concatenate([strats_sorted[index] for index in sorted(unique_inds)])
nn_classifier(feats[train_set_length+8,:],Y,k=5)
#test script
n_train_strategies = pp.n_strategies #store how many strats in train set
c_k = torch.zeros((n_train_strategies,4))
embeddings = enc(feats) #embed training points
for ii in range(n_train_strategies): #compute train centroids
inds = enc_dict[ii]
c_k[ii,:] = torch.mean(embeddings[inds,:],axis=0).cuda()
#compute strategy dictionary for all problems
pp.training_batch_percentage = 1.
pp.construct_strategies()
strat_lookup = {}
for k, v in pp.strategy_dict.items():
strat_lookup[v[0]] = v[1:]
#setup for test
test_feats = torch.from_numpy(pp.features[int(0.9*pp.n_probs):,:]).float().cuda()
test_enc = enc(test_feats).cuda()
test_dists = torch.cdist(test_enc,c_k.cuda()).detach().cpu().numpy()
test_start = int(0.9*pp.n_probs)
n_test = int(0.1*pp.n_probs)
ind_max = np.argsort(test_dists)[:,:pp.n_evals]
feasible = np.zeros(n_test)
costs = np.zeros(n_test)
prob_success = False
pp.n_evals = 1
k=5
for ii in range(n_test):
#strats_sorted = nn_classifier(feats[test_start+ii,:],Y);
#for jj in range(pp.n_evals):
y_guess = strat_lookup[nn_classifier(feats[test_start+ii,:],Y,k=k)]
#y_guess = strat_lookup[int(pp.labels[ii,0])]
try:
prob_success, cost, solve_time = pp.solve_mlopt_prob_with_idx(ii+test_start, y_guess)
if prob_success:
feasible[ii] = 1.
costs[ii] = cost
print('Succeeded at {} with {} tries'.format(ii, 1))
except (KeyboardInterrupt, SystemExit):
raise
except:
print('mosek failed at {}'.format(ii))
global_acc = sum(sum(np.equal(ind_max,pp.labels[test_start:,0][:,None])))/(0.1*pp.n_probs)
global_acc
sum(feasible)/n_test
```
```
from IPython.display import HTML
css_file = './custom.css'
HTML(open(css_file, "r").read())
```
# Homogeneity Metrics
© 2018 Daniel Voigt Godoy
## 1. Definitions
### Entropy
***Entropy*** is a measure of ***uncertainty*** associated with a given ***distribution q(y)***.
From Wikipedia:
...is the average rate at which information is produced by a stochastic source of data.
...when a low-probability event occurs, the event carries more "information" ("surprisal")...
$$
H(q) = -\sum_{c=1}^{C}{q(y_c) \cdot log(q(y_c))}
$$
where:
- ***q*** is the ***distribution*** (as in the distribution of red and green balls)
- ***y*** are the ***labels*** (the respective colors of each ball)
- ***C*** is the number of ***classes*** (as in ***red*** and ***green*** - 2 classes)
- ***q(yc) represents the proportion of balls having the same color c***
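The formula above can be checked numerically. This short sketch (illustrative, not part of the original notebook) uses the natural logarithm and the equivalent form $H = \sum_c q(y_c)\log(1/q(y_c))$:

```python
import numpy as np

def entropy(q):
    # q: class proportions summing to 1
    q = np.asarray(q, dtype=float)
    q = q[q > 0]                       # terms with q(y_c) = 0 contribute nothing
    return np.sum(q * np.log(1.0 / q))

print(entropy([1.0]))        # all balls one color: 0.0 (no uncertainty)
print(entropy([0.5, 0.5]))   # 50/50 split: log(2), the two-class maximum
```

A 50/50 split of red and green balls therefore maximizes the uncertainty, and a single color removes it entirely.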
### Gini Impurity
***Gini Impurity*** is a measure of ***heterogeneity*** associated with a given ***distribution q(y)***.
$$
G(q) = \sum_{c=1}^{C}{q(y_c) \cdot (1 - q(y_c))}
$$
From Wikipedia:
...is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it was randomly labeled according to the distribution of labels in the subset.
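The same numeric check works for Gini impurity; this is an illustrative sketch, not part of the original notebook:

```python
import numpy as np

def gini(q):
    # q: class proportions summing to 1
    q = np.asarray(q, dtype=float)
    return np.sum(q * (1 - q))

print(gini([1.0, 0.0]))   # all balls one color: 0.0
print(gini([0.5, 0.5]))   # 50/50 split: 0.5, the two-class maximum
```

Like entropy, Gini impurity is zero for a single color and maximal at an even split.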
```
from intuitiveml.Information import *
X, y = data(10)
myinfo = plotInfo(X, y)
vb = VBox(build_figure(myinfo), layout={'align_items': 'center'})
```
## 2. Experiment
There are 10 balls (data points) of two possible colors (***classes***). Each ball has its own color (***label***), red or green.
The slider control at the bottom allows you to change the number of red balls and, consequently, the number of green balls (the total stays the same) - so, you are changing the ***distribution***.
This change will have an impact on both ***entropy*** and ***gini impurity*** measures.
Use the slider to play with different configurations and answer the ***questions*** below.
```
vb
```
#### Questions:
1. How to maximize (minimize) Entropy?
2. How to maximize (minimize) Gini Impurity?
3. What's the entropy when all balls have the same color?
4. What kind of distribution yields the maximum Entropy?
5. Using the formula, compute the ***entropy*** if you had 3 red balls
6. Using the formula, compute the ***gini impurity*** if you had 7 red balls
Answers: (3) Zero. (4) A uniform distribution.
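For questions 5 and 6, a quick numeric check (using the natural logarithm and assuming 10 balls in total):

```python
import numpy as np

p = np.array([0.3, 0.7])                           # Q5: 3 red, 7 green
print(round(float(-np.sum(p * np.log(p))), 3))     # entropy ~ 0.611
q = np.array([0.7, 0.3])                           # Q6: 7 red, 3 green
print(round(float(np.sum(q * (1 - q))), 2))        # Gini impurity = 0.42
```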
#### This material is copyright Daniel Voigt Godoy and made available under the Creative Commons Attribution (CC-BY) license ([link](https://creativecommons.org/licenses/by/4.0/)).
#### Code is also made available under the MIT License ([link](https://opensource.org/licenses/MIT)).
```
from IPython.display import HTML
HTML('''<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>''')
```
```
import logging
from gensim.models import ldaseqmodel
from gensim.corpora import Dictionary, bleicorpus, textcorpus
import numpy as np
from gensim.matutils import hellinger
import time
import pickle
import pyLDAvis
import matplotlib.pyplot as plt
from scipy.stats import entropy
import pandas as pd
from numpy.linalg import norm
alldata_new = pickle.load(open('output/dtm_processed_output.p', 'rb'))
# load data
doc_year=alldata_new['docs_per_year']
doc_ids =[0]+list(np.cumsum(doc_year))
term_topic = alldata_new['term_topic']# term_topic is n_years*n_topics*n_terms
terms = alldata_new['terms']
doc_topicyrs = alldata_new['doc_topic']
doc_topic = []
for year in range(len(term_topic)):
doc_topic.append(alldata_new['doc_topic'][doc_ids[year]:doc_ids[year+1]])# doc_topic is nyear*n_docs given year*n_topics
# rename topics by the hand-picked names
topic_labels = pickle.load(open('topicnames.p','rb'))
def totvar(p,q):
maxdist=np.max(abs(p-q))
maxid=np.argmax(abs(p-q))
return [maxdist,maxid]
def JSD(P, Q):
_P = P / norm(P, ord=1)
_Q = Q / norm(Q, ord=1)
_M = 0.5 * (_P + _Q)
dist=0.5 * (entropy(_P, _M) + entropy(_Q, _M))
return dist
# entropy change within a topic -- which topic's content has changed most in the past years
epsilon = 1e-15
ntopics = 20
topicdelta=np.empty((ntopics,len(term_topic))) # distance from previous year: jenson-shannon distance
topicshift=np.empty(ntopics) # distance from 2000 to 2017
topicdelta_tv=np.empty((ntopics,len(term_topic))) # distance from previous year: total variance
topicshift_tv=np.empty(ntopics) # distance from 2000 to 2017:total variance
deltaterm=[]
shiftterm=[]
for k in range(ntopics):
sftterms=[]
for iyear in range(len(term_topic)):
topic = term_topic[iyear][k]
    # why not use KL: 1) avoid asymmetry 2) avoid inf
topic = topic/sum(topic)
topicdelta[k,iyear] = JSD(topic,term_topic[max(iyear-1,0)][k]) # jensen-shannon distance
[topicdelta_tv[k,iyear],maxterm]=totvar(topic,term_topic[max(iyear-1,0)][k]) # maxterm: term of biggest change from previous year
sftterms.append(terms[maxterm])
topicshift[k] = JSD(term_topic[-1][k],term_topic[0][k])
[topicshift_tv[k],maxterm]=totvar(term_topic[-1][k],term_topic[0][k])
shiftterm.append(terms[maxterm]) # biggest shift from 2017 to 2000
deltaterm.append(sftterms) # biggest delta from prev year: max term for every year
deltaterm[4]
shiftidx=np.argsort(-topicshift)
for idx in shiftidx:
print(topic_labels[idx]+': %.3f'%topicshift[idx])
print('total variance:')
shiftidx=np.argsort(-topicshift_tv)
for idx in shiftidx:
print(topic_labels[idx]+': %.3f'%topicshift_tv[idx]+' max shift word:'+shiftterm[idx])
#TODO: get the raise and fall terms for each topic...just copy the other code; set the jsd as titles
# calculate the topic distribution for each year (should correspond to the topic evolution trend...can't find that code right now)
ntopics = len(topic_labels)
ptop_years = []
entrop_years = []
for iyear in range(len(term_topic)):
ptopics = np.zeros(ntopics)
for doc in doc_topic[iyear]:
ptopics+=doc
ptopics = ptopics/sum(ptopics)
ptop_years.append(ptopics)
entrop_years.append(entropy(ptopics))
print(entrop_years)
# plot the entropy change across years
years = np.arange(len(term_topic))+2000
plt.plot(years,entrop_years,'-o')
plt.xlabel('year')
plt.title('entropy of topic distribution')
plt.show()
# could be done: find the paper with highest / lowest entropy; find the topic with highest/lowest entropy
# KL-divergence across years
kl_years = []
gap=1
for iyear in range(len(term_topic)-gap):
# kl_years.append(entropy(ptop_years[iyear],ptop_years[iyear+gap]))
    kl_years.append(entropy(ptop_years[iyear+gap],ptop_years[iyear]))# sanity check: reversing the direction of KL makes no real difference
plt.plot(years[gap:],kl_years,'-o')
plt.xlabel('year')
plt.title('KL div with the previous year')
plt.show()
# TODO: eye-balling the distribution overlayed
```
**tentative conclusions**
- the diversity of topics seems to increase over the years
- 2002 has a relatively less diverse topic distribution, while 2013 was pretty diverse.
- the year-to-year difference has been decreasing across the years... as if the field is changing more slowly? doesn't make sense to me...
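As a quick sanity check of the Jensen-Shannon distance used above, a self-contained sketch (re-defining `JSD` exactly as in the code, on toy distributions) confirms the properties the comment appeals to: it is zero for identical distributions, symmetric, and finite even when one distribution has zero entries where the other does not — which plain KL divergence would not guarantee.

```python
import numpy as np
from numpy.linalg import norm
from scipy.stats import entropy

def JSD(P, Q):
    # Jensen-Shannon divergence between two (possibly unnormalized) distributions
    _P = P / norm(P, ord=1)
    _Q = Q / norm(Q, ord=1)
    _M = 0.5 * (_P + _Q)
    return 0.5 * (entropy(_P, _M) + entropy(_Q, _M))

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.9, 0.0, 0.1])

print(JSD(p, p))                    # identical distributions -> 0
print(abs(JSD(p, q) - JSD(q, p)))  # symmetric -> 0
print(np.isfinite(JSD(p, q)))      # finite despite mismatched supports
```

Note that with scipy's natural-log `entropy`, the JSD is bounded above by ln 2.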
```
# entropy of topics
for iyear in range(len(term_topic)):
print('\n Year='+str(years[iyear]))
entrop_terms=[]
for k in range(ntopics):
topic = term_topic[iyear][k] # already normalized
entrop_terms.append(entropy(topic))
sorted_H = np.sort(entrop_terms)
idx = np.argsort(entrop_terms)
[print(topic_labels[idx[j]]+':'+str(sorted_H[j])) for j in range(len(idx))]
# turns out the ranking of entropy is pretty stable over the years.
sum(term_topic[iyear][3])
```
"""
This file was created with the purpose of developing
a random forest classifier to identify market squeeze
This squeeze classification depends of the comparison of 2 indicators:
2 std of a 20 period bollinger bands and 2 atr of a 20 period keltner channel
our definition of squeeze:
when the upper bollinger band (bbup) is less or equal to upper keltner band (kcup)
AND lower bollinger band (bblo) is above or equal to lower keltner channel (kclo)
"""
"""
To develop the random forest model, a csv file was prepared extracting prices, bollinger bands and squeeze
classification from tradestation.
A custom squeeze_id indicator was developed in easylanguage to obtain a column with values ranging 0
or 1 depending upon the market being on a squeeze or not (based on the requirements specified above)
"""
```
# Import libraries and dependencies
import pandas as pd
import numpy as np
from pathlib import Path
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
csv_path = Path('../Resources/ts_squeeze_jpm.csv')
csv_path
ts_file_df = pd.read_csv(csv_path, parse_dates=[['Date', 'Time']])
ts_file_df.tail()
# set index as Date_Time and drop MidLine.1 column (it is a duplicate of MidLine)
ts_file_df.set_index(pd.to_datetime(ts_file_df['Date_Time'], infer_datetime_format=True), inplace=True)
ts_file_df.drop(columns=['Date_Time', 'MidLine.1'], inplace=True)
ts_file_df.head()
# Set a variable list of features to feed to our model
x_var_list = ['Open', 'High', 'Low', 'Close', 'Up', 'Down', 'kcup', 'kclo', 'MidLine', 'bbup', 'bblo', 'FastEMA', 'SlowEMA']
ts_file_df[x_var_list].head()
# Shift DataFrame values by 1
ts_file_df[x_var_list] = ts_file_df[x_var_list].shift(1)
ts_file_df[x_var_list].head()
ts_file_df.head()
ts_file_df.dropna(inplace=True)
ts_file_df.head()
# Construct training start and training end dates
training_start = ts_file_df.index.min().strftime(format='%Y-%m-%d')
training_end = '2019-01-11'
# Construct test start and test end dates
testing_start = '2019-01-12'
testing_end = '2019-06-12'
# Construct validating start and validating end dates
vali_start = '2019-06-13'
vali_end = '2020-01-12'
# Confirming training, testing and validating dates
print(f"Training Start: {training_start}")
print(f"Training End: {training_end}")
print(f"Testing Start: {testing_start}")
print(f"Testing End: {testing_end}")
print(f"validating Start: {vali_start}")
print(f"validating end: {vali_end}")
# Construct the X_train and y_train datasets
X_train = ts_file_df[x_var_list][training_start:training_end]
y_train = ts_file_df['squeeze'][training_start:training_end]
X_train.head()
y_train.tail()
# Construct the X test and y test datasets
X_test = ts_file_df[x_var_list][testing_start:testing_end]
y_test = ts_file_df['squeeze'][testing_start:testing_end]
X_test.head()
y_test.head()
# Construct the X valid and y validation datasets
X_vali = ts_file_df[x_var_list][vali_start:vali_end]
y_vali = ts_file_df['squeeze'][vali_start:vali_end]
X_vali.head()
y_vali.tail()
# Import scikit-learn classes
from sklearn.ensemble import RandomForestClassifier
# Fit a random forest classifier on the training datasets:
model = RandomForestClassifier(n_estimators=1000, max_depth=5, random_state=1)
model.fit(X_train, y_train)
# Make predictions of "y" values from the X_test dataset
predictions = model.predict(X_test)
# Assemble actual y_test with predicted values
compare_predict_df = y_test.to_frame()
compare_predict_df["predict_squeeze"] = predictions
compare_predict_df
# Save the pre-trained model
from joblib import dump, load
dump(model, 'random_forest_model_squeeze.joblib')
"""
Below the exporting code to csv files
"""
X_testoutput_path = Path('../Resources/X_test.csv')
X_test.to_csv(X_testoutput_path)
model_results_path = Path('../Resources/results.csv')
compare_predict_df.to_csv(model_results_path)
# different datasets to csv files for reconfigurations
X_valioutput_path = Path("../Resources/X_vali.csv")
X_vali.to_csv(X_valioutput_path)
y_valioutput_path = Path("../Resources/y_vali.csv")
y_vali.to_csv(y_valioutput_path)
```
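A model dumped with joblib as above can later be loaded back and should reproduce the same predictions. A minimal self-contained round-trip sketch (using a tiny synthetic dataset and a temporary file rather than the real `random_forest_model_squeeze.joblib`):

```python
import os
import tempfile
import numpy as np
from joblib import dump, load
from sklearn.ensemble import RandomForestClassifier

# Tiny synthetic classification problem for illustration
X = np.random.RandomState(1).rand(50, 4)
y = (X[:, 0] > 0.5).astype(int)
model = RandomForestClassifier(n_estimators=10, random_state=1).fit(X, y)

# Round-trip the fitted model through a temporary joblib file
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "model.joblib")
    dump(model, path)
    reloaded = load(path)

# The reloaded model gives identical predictions
print((reloaded.predict(X) == model.predict(X)).all())
```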
<center>
<img src="https://habrastorage.org/web/677/8e1/337/6778e1337c3d4b159d7e99df94227cb2.jpg"/>
## "Machine Learning and Data Analysis" specialization
<center>Author of the material: Mail.Ru Group research programmer and senior lecturer at the HSE Faculty of Computer Science, [Yury Kashnitsky](https://yorko.github.io/)
# <center> Capstone project #1 <br> User identification based on visited web pages
<img src='http://i.istockimg.com/file_thumbview_approve/21546327/5/stock-illustration-21546327-identification-de-l-utilisateur.jpg'>
# <center>Week 3. Visual data analysis and feature engineering
In week 3 we will work on visual data analysis and feature engineering. First we will build and analyze several features together, then you will be able to come up with and describe various features of your own.
**Week 3 plan:**
- Part 1. Feature engineering
- Part 2. Visual data analysis
- Part 3. Further feature engineering
- Part 4. Checking the engineered features
**In this part of the project you may find the video recordings of the following lectures from the course "Finding Structure in Data" useful:**
- [The visualization task](https://www.coursera.org/learn/unsupervised-learning/lecture/hlvlT/zadacha-vizualizatsii)
- [Data visualization in sklearn](https://www.coursera.org/learn/unsupervised-learning/lecture/ityMo/vizualizatsiia-dannykh-v-sklearn)
**The assignment also uses the Seaborn library (it can be installed with *pip install seaborn*); the [Matplotlib](http://matplotlib.org/users/) and [Seaborn](http://seaborn.pydata.org/) documentation will be useful, as well as the visualization examples described on StackOverflow.**
### Assignment
1. Fill in the code in this notebook
2. If you are taking the Yandex and MIPT specialization, submit the notebook in the corresponding Peer Review. <br> If you are taking the ODS course, select your answers in the [web form](https://docs.google.com/forms/d/1EbjK7-hF-Gepi6RH-K5I2XeiYGRoY0LNDx03QmLu9Xo).
## Part 1. Feature engineering
```
# suppress all Anaconda warnings
import warnings
warnings.filterwarnings('ignore')
from glob import glob
import os
from tqdm import tqdm
import numpy as np
import pandas as pd
import re
import datetime
from itertools import chain
pd.set_option('display.max.columns', 25)
import pickle
#pip install seaborn
import seaborn as sns
%matplotlib inline
from matplotlib import pyplot as plt
# Change this to your own data path
PATH_TO_DATA = 'capstone_user_identification'
```
**Based on the functions *prepare_train_set* and *prepare_sparse_train_set_window*, create a new one – *prepare_train_set_with_fe* (for "feature engineering") – that creates the following features:**
- `session_timespan` – session duration (difference between the maximum and minimum site-visit timestamps in the session, in seconds)
- `#unique_sites` – number of unique sites in the session
- `start_hour` – hour the session started (i.e. the hour of the minimal timestamp among the ten)
- `day_of_week` – day of the week (i.e. the day of week of the minimal timestamp among the ten)
The function must return a new DataFrame (just as *prepare_train_set* did), only with 4 more features. The order in which the features are added: *site1*, ... *site10*, *session_timespan*, *#unique_sites*, *start_hour*, *day_of_week* and *user_id* (this is also visible just below from how the function is called).
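Before implementing the full function, the four engineered features can be checked on a small hand-made toy session (hypothetical timestamps, not from the real dataset):

```python
import numpy as np
import pandas as pd

# One toy session of 5 site visits (site ids and timestamps)
sites = [3, 1, 3, 2, 1]
times = pd.to_datetime(["2014-02-20 10:02:45", "2014-02-20 10:02:48",
                        "2014-02-20 10:03:01", "2014-02-20 10:03:15",
                        "2014-02-20 10:04:40"])

session_timespan = int((times.max() - times.min()).total_seconds())
n_unique_sites = len(np.unique(sites))
start_hour = times.min().hour
day_of_week = times.min().dayofweek  # Monday=0

print(session_timespan, n_unique_sites, start_hour, day_of_week)  # 115 3 10 3
```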
```
def boundaries(N, session_length, window_size):
ran = lambda x: x+session_length
slice_list=[(i, ran(i) if ran(i) < N else N) for i in range(0, N, window_size)]
return slice_list
def prepare_train_set_with_fe(path_to_csv_files, site_freq_path, feature_names,
session_length=10, window_size=10):
user_re = re.compile("user([\d]+)[.]")
list_times_incomplete = []
list_sites = []
list_users = []
list_timediffs = []
with open(site_freq_path,"rb") as f:
site_freq = pickle.load(f)
for file in tqdm(glob(path_to_csv_files+'/*')):
sites_raw = pd.read_csv(file)['site'].apply(lambda x: site_freq[x][0])
timestamps_raw = pd.read_csv(file)['timestamp']
indices = boundaries(len(sites_raw),session_length,window_size)
list_users += [int(re.search(user_re, file).group(1))] * len(indices)
list_times_incomplete += [timestamps_raw.values[ind[0]:ind[1]].reshape(-1) for ind in indices]
list_sites += [sites_raw.values[ind[0]:ind[1]].reshape(-1) for ind in indices]
list_times = [list(map(np.datetime64, i)) for i in list_times_incomplete]
total_time = [(i[-1]-i[0]).astype(int) for i in list_times]
unique = [len(np.unique(i)) for i in list_sites]
for session in list_times:
localdiff = [(session[i]-session[i-1]).astype(int) for i in range(1, len(session))]
list_timediffs.append(localdiff)
df_tstamps = pd.DataFrame(list_times, columns=[f'time{i}' for i in range(session_length)])
df_sites = pd.DataFrame(list_sites)
df_timediffs = pd.DataFrame(list_timediffs)
df = pd.concat([df_sites, df_timediffs], axis=1)
df = df.fillna(0).astype('int')
df['total'] = total_time
df['unique'] = unique
df['hours'] = df_tstamps['time0'].dt.hour
df['days'] = df_tstamps['time0'].dt.dayofweek
df['user_id'] = list_users
df.columns = feature_names
return df
```
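The `boundaries` helper above splits a visit log of length `N` into (start, end) index pairs; a standalone copy illustrates how windows overlap when `window_size < session_length`:

```python
def boundaries(N, session_length, window_size):
    # (start, end) index pairs for windows of session_length, stepped by window_size
    ran = lambda x: x + session_length
    return [(i, ran(i) if ran(i) < N else N) for i in range(0, N, window_size)]

# Non-overlapping sessions: window_size == session_length
print(boundaries(25, 10, 10))  # [(0, 10), (10, 20), (20, 25)]

# Overlapping sessions: the window slides by 5
print(boundaries(25, 10, 5))
```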
**Let's check the function on a toy example.**
```
feature_names = ['site' + str(i) for i in range(1,11)] + \
['time_diff' + str(j) for j in range(1,10)] + \
['session_timespan', '#unique_sites', 'start_hour',
'day_of_week', 'target']
train_data_toy = prepare_train_set_with_fe(os.path.join(PATH_TO_DATA,
'3users'),
site_freq_path=os.path.join(PATH_TO_DATA,
'site_freq_3users.pkl'),
feature_names=feature_names, session_length=10)
train_data_toy
```
**Apply the function *prepare_train_set_with_fe* to the data of 10 users, with *session_length*=10.**
```
%%time
train_data_10users = prepare_train_set_with_fe(os.path.join(PATH_TO_DATA,
'10users'),
site_freq_path=os.path.join(PATH_TO_DATA,
'site_freq_10users.pkl'),
feature_names=feature_names, session_length=10)
train_data_10users.head()
```
**Apply the function *prepare_train_set_with_fe* to the data of 150 users, with *session_length*=10.**
```
%%time
train_data_150users = prepare_train_set_with_fe(os.path.join(PATH_TO_DATA,
'150users'),
site_freq_path=os.path.join(PATH_TO_DATA,
'site_freq_150users.pkl'),
feature_names=feature_names, session_length=10)
```
**Save the features *session_timespan*, *#unique_sites*, *start_hour* and *day_of_week* for the 10 and 150 users to pickle files.**
```
new_features_10users = train_data_10users.loc[:,['session_timespan', '#unique_sites', 'start_hour', 'day_of_week']]
new_features_150users = train_data_150users.loc[:,['session_timespan', '#unique_sites', 'start_hour', 'day_of_week']]
with open(os.path.join(PATH_TO_DATA,
'new_features_10users.pkl'), 'wb') as new_features_10users_pkl:
pickle.dump(new_features_10users, new_features_10users_pkl)
with open(os.path.join(PATH_TO_DATA,
'new_features_150users.pkl'), 'wb') as new_features_150users_pkl:
pickle.dump(new_features_150users, new_features_150users_pkl)
```
**<font color='red'>Question 1. </font> Print the median session duration (*session_timespan*) over the sessions of the 10 users.**
```
np.median(train_data_10users.session_timespan)
```
**<font color='red'>Question 2. </font> Print the median day of the week on which the session started, over the sessions of the 10 users.**
```
np.median(train_data_10users.day_of_week)
```
**<font color='red'>Question 3. </font>Print the median session start hour over the sessions of the 150 users.**
```
np.median(train_data_150users.start_hour)
```
**<font color='red'>Question 4. </font>Print the median number of unique sites in the sessions of the 150 users.**
```
np.median(train_data_150users['#unique_sites'])
```
## Part 2. Visual data analysis
**Just for fun, let's give the users names and associate colors with them.**
```
id_name_dict = {128: 'Mary-Kate', 39: 'Ashley', 207: 'Lindsey', 127: 'Naomi', 237: 'Avril',
33: 'Bob', 50: 'Bill', 31: 'John', 100: 'Dick', 241: 'Ed'}
train_data_10users['target'] = train_data_10users['target'].map(id_name_dict)
color_dic = {'Mary-Kate': 'pink', 'Ashley': 'darkviolet', 'Lindsey':'blueviolet',
'Naomi': 'hotpink', 'Avril': 'orchid',
'Bob': 'firebrick', 'Bill': 'gold', 'John': 'forestgreen',
'Dick': 'slategrey', 'Ed':'brown'}
```
**1. Plot a histogram of the distribution of session length in seconds (*session_timespan*). Limit *x* to 200 (otherwise the tail is too heavy). Make the histogram *darkviolet* in color and label the axes in Russian.**
```
train_data_10users['session_timespan'].hist(color='darkviolet', range=(0, 200))
plt.xlabel('Длина сессии')
plt.ylabel('Частота')
```
**2. Plot a histogram of the distribution of the number of unique sites in a session (*#unique_sites*). Make the histogram *aqua* in color and label the axes in Russian.**
```
train_data_10users['#unique_sites'].hist(color='aqua')
plt.xlabel('Число уникальных сайтов')
plt.ylabel('Частота')
```
**3. Plot histograms of the distribution of the number of unique sites in a session (*#unique_sites*) for each of the 10 users separately. Use *subplots* to place all 10 plots in one large figure. Add a legend to each plot showing the user's name. Color each user's histogram with his/her color (*color_dic*). Label the axes in Russian in each of the 10 histograms.**
```
fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(16, 10))
# one possible approach; it could be done differently
indices = [(i, j) for i in range(3) for j in range(4)]
for idx, (user, sub_df) in enumerate(train_data_10users.groupby('target')):
sub_df['#unique_sites'].hist(ax=axes[indices[idx]], color=color_dic[user])
axes[indices[idx]].legend([user])
```
**4. Plot a histogram of the distribution of the session start hour (*start_hour*). Make the histogram *darkgreen* in color and label the axes in Russian.**
```
train_data_10users['start_hour'].hist(color='darkgreen')
plt.xlabel('Час начала сессии')
plt.ylabel('Частота')
```
**5. Plot histograms of the distribution of the session start hour (*start_hour*) for each of the 10 users separately. Use *subplots* to place all 10 plots in one large figure. Add a legend to each plot showing the user's name. Color each user's histogram with his/her color (*color_dic*). Label the axes in Russian in each of the 10 histograms.**
```
fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(16, 10))
# one possible approach; it could be done differently
indices = [(i, j) for i in range(3) for j in range(4)]
for idx, (user, sub_df) in enumerate(train_data_10users.groupby('target')):
sub_df['start_hour'].hist(ax=axes[indices[idx]], color=color_dic[user])
axes[indices[idx]].legend([user])
```
**6. Plot a histogram of the distribution of the day of week on which the session started (*day_of_week*). Make the histogram *sienna* in color and label the axes in Russian.**
```
train_data_10users['day_of_week'].hist(color='sienna')
plt.xlabel('День недели начала сессии')
plt.ylabel('Частота')
```
**7. Plot histograms of the distribution of the day of week on which the session started (*day_of_week*) for each of the 10 users separately. Use *subplots* to place all 10 plots in one large figure. Change the *x*-axis tick labels to ['Пн', 'Вт', 'Ср', 'Чт', 'Пт', 'Сб', 'Вс'] (Mon–Sun) using the *set_xticklabels* method. Add a legend to each plot showing the user's name. Color each user's histogram with his/her color (*color_dic*). Give each of the 10 histograms a title in Russian.**
```
fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(16, 10))
# one possible approach; it could be done differently
weekdays = ['Пн', 'Вт', 'Ср', 'Чт', 'Пт', 'Сб', 'Вс']
indices = [(i, j) for i in range(3) for j in range(4)]
for idx, (user, sub_df) in enumerate(train_data_10users.groupby('target')):
sub_df['day_of_week'].hist(ax=axes[indices[idx]], color=color_dic[user])
axes[indices[idx]].legend([user])
axes[indices[idx]].set_xticklabels(weekdays)
```
**8. Draw conclusions about each user from the plots you have built.**
Bill and Ashley often visit a single site; Dick and Mary-Kate visit two. For the rest, the distribution over unique sites is roughly the same. Ashley most often browses during the day. Avril's peak falls at lunchtime and in the evening; Bill's in the morning and at lunchtime. Bob apparently does not browse later than 5 pm. Dick's and John's peaks fall in the morning and at lunchtime. John and Lindsey rarely go online in the evening. Mary-Kate's peak falls in the evening. Naomi hardly goes online in the morning; her peak falls at lunchtime. Ashley most often goes online on Thursdays. Avril's distribution over days of the week is roughly uniform. Bill's and Lindsey's peaks fall at the beginning of the week, with a gradual decline toward the end; John shows the opposite trend. Dick has two peaks, on Wednesday and at the weekend; on Tuesday and Thursday he hardly browses at all. Mary-Kate's and Naomi's distributions are roughly uniform overall, but their peaks fall in the second half of the week.
**Load the site frequency dictionary for 10 users that was saved earlier to a pickle file.**
**Determine the top-10 most visited sites (*top10_sites*) and the corresponding numbers of visits (*top10_freqs*).**
```
with open(os.path.join(PATH_TO_DATA, 'site_freq_10users.pkl'), 'rb') as f:
vocab = pickle.load(f)
vocab_sort = list(vocab.items())
vocab_sort.sort(key=lambda x: x[1][1], reverse=True)
top10_freqs = [i[1][1] for i in vocab_sort[:10]]
top10_sites = [i[0] for i in vocab_sort[:10]]
```
**9. Draw a *seaborn barplot* showing the visit frequencies of the top-10 sites. Make the site labels vertical, otherwise they merge together (*xticks*).**
```
sns.barplot(x=top10_sites, y=top10_freqs)
plt.xticks(rotation='vertical')
```
## Part 3. Further feature engineering
This is a creative task: here you have to think up how else to take the times of web-page visits and other features into account.
Next week we will use a "bag of sites" to classify sessions by which user they belong to, and then we will add the new features you create now and see whether the model improves. So you can create them as separate matrices and save them separately as well.
In this part of the assignment you can build and visually explore a wide variety of features (nothing limits your imagination):
- year, month and day the session started
- hour the session started (taking year, month and day into account)
- time of day
- average time spent on a site, which can be computed, say, for the top-30 popular sites
- indicators of visits to popular sites (say, also for the top-30 popular sites)
- frequency of visits to Facebook
- ...
**Write a function to create the new features and apply it to the original data – the directories with 10 and 150 files. Do this only for the dataset obtained with the parameters *session_length=10* and *window_size=10*. Serialize the resulting matrices with pickle. The function may return either only the new features or the old ones together with the new ones. Its signature may differ as well – here you are free to choose.**
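For instance, the "time of day" idea from the list above can be sketched as a categorical feature derived from the start hour (the bin boundaries here are hypothetical, one of many reasonable choices):

```python
import pandas as pd

# Hypothetical binning of start_hour into times of day
def time_of_day(hour):
    if 6 <= hour < 12:
        return "morning"
    if 12 <= hour < 18:
        return "day"
    if 18 <= hour < 23:
        return "evening"
    return "night"

start_hours = pd.Series([8, 13, 20, 2, 11])
print(start_hours.apply(time_of_day).tolist())
# ['morning', 'day', 'evening', 'night', 'morning']
```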
```
def feature_engineering(path_to_csv_files, site_freq_path, features, session_length=10):
user_re = re.compile("user([\d]+)[.]")
list_times_incomplete = []
list_sites = []
list_users = []
list_timediffs = []
with open(site_freq_path,"rb") as f:
site_freq = pickle.load(f)
for file in tqdm(glob(path_to_csv_files+'/*')):
sites_raw = pd.read_csv(file)['site'].apply(lambda x: site_freq[x][0])
timestamps_raw = pd.read_csv(file)['timestamp']
indices = boundaries(len(sites_raw),session_length, 10)
list_users += [int(re.search(user_re, file).group(1))] * len(indices)
list_times_incomplete += [timestamps_raw.values[ind[0]:ind[1]].reshape(-1) for ind in indices]
list_sites += [sites_raw.values[ind[0]:ind[1]].reshape(-1) for ind in indices]
list_times = [list(map(np.datetime64, i)) for i in list_times_incomplete]
total_time = [(i[-1]-i[0]).astype(int) for i in list_times]
unique = [len(np.unique(i)) for i in list_sites]
facebook_id = site_freq.get('www.facebook.com', (-1, -1))[0]
google_id = site_freq.get('www.google.com', (-1, -1))[0]
facebook_count = [list(i).count(facebook_id) for i in list_sites]
google_count = [list(i).count(google_id) for i in list_sites]
total_time = [(i[-1]-i[0]).astype(int) for i in list_times]
for session in list_times:
localdiff = [(session[i]-session[i-1]).astype(int) for i in range(1, len(session))]
list_timediffs.append(localdiff)
df_tstamps = pd.DataFrame(list_times, columns=[f'time{i}' for i in range(session_length)])
df_sites = pd.DataFrame(list_sites, columns=[f'site{i}' for i in range(1, session_length+1)])
df_timediffs = pd.DataFrame(list_timediffs, columns=[f'time{i}' for i in range(1, session_length)])
df = pd.concat([df_sites, df_timediffs], axis=1)
df = df.fillna(0).astype('int')
df['session_timespan'] = total_time
df['#unique_sites'] = unique
df['start_hour'] = df_tstamps['time0'].dt.hour
df['day_of_week'] = df_tstamps['time0'].dt.dayofweek
df['target'] = list_users
df['facebook_visits'] = facebook_count
df['google_visits'] = google_count
df = df.loc[:, features]
return df, df.values
features = ['facebook_visits', 'google_visits']
new_features_10users, _ = feature_engineering(os.path.join(PATH_TO_DATA, '10users'), site_freq_path=os.path.join(PATH_TO_DATA,
'site_freq_10users.pkl'), features=features)
new_features_150users, _ = feature_engineering(os.path.join(PATH_TO_DATA, '150users'), site_freq_path=os.path.join(PATH_TO_DATA,
'site_freq_150users.pkl'), features=features)
```
**10. Plot the new features, explore them, and comment on the results.**
```
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))
new_features_10users['facebook_visits'].hist(ax=ax1, color='red')
new_features_10users['google_visits'].hist(ax=ax2, color='blue')
new_features_10users['facebook_visits'].value_counts()
new_features_10users['google_visits'].value_counts()
```
We can see that the share of users who visit Google is larger than the share who do not. The share of users who visited Google 10 times in a session is also larger.
```
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))
new_features_150users['facebook_visits'].hist(ax=ax1, color='red')
new_features_150users['google_visits'].hist(ax=ax2, color='blue')
new_features_150users['facebook_visits'].value_counts()
new_features_150users['google_visits'].value_counts()
```
We can see that the same trend holds for 150 users as well.
**Finally, save to pickle files only those features that you expect will help identify a user more accurately. This applies both to the features we created together at the beginning (*session_timespan, #unique_sites, start_hour, day_of_week*) and to your own. You can create all these features not only for sessions of 10 sites, but also for other combinations of the *session_length* and *window_size* parameters.**
```
features = ['session_timespan', '#unique_sites', 'start_hour', 'day_of_week']
selected_features_10users, _ = feature_engineering(os.path.join(PATH_TO_DATA, '10users'), site_freq_path=os.path.join(PATH_TO_DATA,
'site_freq_10users.pkl'), features=features)
selected_features_150users, _ = feature_engineering(os.path.join(PATH_TO_DATA, '150users'), site_freq_path=os.path.join(PATH_TO_DATA,
'site_freq_150users.pkl'), features=features)
with open(os.path.join(PATH_TO_DATA,
'selected_features_10users.pkl'), 'wb') as selected_features_10users_pkl:
pickle.dump(selected_features_10users, selected_features_10users_pkl,
protocol=2)
with open(os.path.join(PATH_TO_DATA,
'selected_features_150users.pkl'), 'wb') as selected_features_150users_pkl:
pickle.dump(selected_features_150users, selected_features_150users_pkl,
protocol=2)
```
### Grading criteria (only for Peer Review in the specialization):
- Is the session_timespan histogram from item 1 displayed correctly? (max. 3 points)
- Is the #unique_sites histogram from item 2 displayed correctly? (max. 3 points)
- Are the per-user #unique_sites histograms from item 3 displayed correctly? (max. 6 points)
- Is the start_hour histogram from item 4 displayed correctly? (max. 3 points)
- Are the per-user start_hour histograms from item 5 displayed correctly? (max. 6 points)
- Is the day_of_week histogram from item 6 displayed correctly? (max. 3 points)
- Are the per-user day_of_week histograms from item 7 displayed correctly? (max. 6 points)
- How well do the conclusions in item 8 match the plots? (max. 6 points)
- Is the barplot of the 10 most popular sites from item 9 displayed correctly? (max. 6 points)
- Is the median session duration in item 10 computed correctly? (max. 3 points)
- Is the median day of week the session started in item 11 computed correctly? (max. 3 points)
- Is the median session start hour in item 12 computed correctly? (max. 3 points)
- Is the median number of unique sites in the sessions of 150 users in item 13 computed correctly? (max. 3 points)
- Are there original engineered features with plots for them? The quality of the plots is also assessed. (max. 8 points)
## Ways to improve
What else could be added for part 3 of the project:
- IPython widgets, interactivity and animation (worthwhile articles on this craft – [one](https://habrahabr.ru/post/308162/) and [two](https://habrahabr.ru/company/ods/blog/323210/))
- you could try representing the original data in some space, e.g. Word2Vec, then extracting principal components or applying t-SNE (just use efficient implementations such as [Multicore-TSNE](https://github.com/DmitryUlyanov/Multicore-TSNE), not sklearn's) and coloring points by the target class. But there is no guarantee you will get anything meaningfully different from mush
Next week we will finally start training classification models.
# 531 - Lab 1 - Visualizing world health data
There are two versions of this lab, one in Python and one in R.
The R lab will use `ggplot` and the Python lab will use `Altair`.
This is the Python version.
Please choose a version to complete, though keep in mind that you are required to alternate between completing the R labs and the Python labs to get experience using both languages.
<div class="alert alert-info" style="color:black">
## Submission instructions
rubric={mechanics:2}
<p>You receive marks for submitting your lab correctly, please follow these instructions:</p>
<ul>
<li><a href="https://ubc-mds.github.io/resources_pages/general_lab_instructions/">
Follow the general lab instructions.</a></li>
<li><a href="https://github.com/UBC-MDS/public/tree/master/rubric">
Click here to view a description of the rubrics used to grade the questions</a></li>
<li>Push your <code>.ipynb</code> file to your GitHub repository for this lab.</li>
<li>Upload a <code>.html</code> version of your assignment to Canvas.
<ul>
<li> Either manually or using the last cell of this notebook.</li>
</ul>
</li>
<li>Include a clickable link to your GitHub repo for the lab just below this cell
<ul>
<li>It should look something like this https://github.ubc.ca/MDS-2020-21/DSCI_531_labX_yourcwl.</li>
</ul>
</li>
</ul>
</div>
https://github.com/ubco-mds-2020-labs/data-550-lab-1-group-11
```
# Run this cell to ensure that altair plots show up in the exported HTML
# and that the R cell magic works
import altair as alt
# Save a vega-lite spec and a PNG blob for each plot in the notebook
alt.renderers.enable('mimetype')
# Handle large data sets without embedding them in the notebook
alt.data_transformers.enable('data_server')
```
# 1. Get motivated!
You have already worked with the Gapminder world health data set in the previous block
and we will revisit an updated version of it in this lab.
The Gapminder foundation strives to educate people about the public health status
in countries all around the world
and fight devastating misconceptions that hinder world development.
This information is important both for our capacity to make considerate choices as individuals,
and from an industry perspective in understanding where markets are emerging.
In their research,
Gapminder has discovered that most people don't really know what the world looks like today.
Do you?
[Take this 7-8 min quiz to find out](https://forms.gapminder.org/s3/test-2018).
This quiz is not easy,
so don't worry if you get a low score.
I took this quiz for the first time a few years back and I didn't do too well myself =)
It is primarily meant to spark your curiosity to learn more about this lab's data set!
When you are done,
[please submit your score in this Google form](https://docs.google.com/forms/d/e/1FAIpQLSc2B0wlF-QWqAeJnHbu534WT-Twhpetk_4uUMM3LZvV0wv0mg/viewform?usp=sf_link).
This is anonymous,
I just want to explore if we can use the distribution of scores
for something interesting in class or future labs.
<div class="alert alert-info" style="color:black">
### Question 1.1
rubric={reasoning:1,writing:1}
<p>To answer the first lab question,
<a href=https://www.youtube.com/watch?v=usdJgEwMinM>watch this 20 min video of Hans Rosling</a>,
a public health professor at the Karolinska Institute
who founded Gapminder together with his son and his son's wife.
Although the video is almost 15 years old,
it is a formidable demonstration of how to present data in a way that engages your audience
while conveying a strong, important message.
(The original clip has over 3 million views,
but I linked one of better video quality.)</p>
<p>Briefly describe (<=90 words)
what you think is the most important message conveyed in the video
and which data visualization you think was the most effective
in getting this message across to the viewers.</p>
</div>
# YOUR ANSWER GOES HERE
The average-income group (about 60% of the total population) receives only 24% of the total income.
The most effective visualization, in my opinion, was the continuous income histogram: Asia, which was the poorest region in 1970 and caused the large hump at the low end, had moved towards the average by 2003 as people moved out of poverty, flattening that hump.
Also striking was the progress of South Korea within Asia (both the speed and direction of its development), and the observation that countries develop much faster if they become healthy first rather than wealthy first.
# 2. The Gapminder bubble chart
The "bubble chart" has become quite famous from its appearance in the Gapminder talks,
and is widely used in other areas as well.
Let's start by recreating a simple version of this chart ourselves!
There will be some data wrangling involved in this lab,
and since 531 is primarily about visualization and this is the first lab,
I will give you some hints for most data wrangling parts of this lab.
Often I will link documentation or StackOverflow,
so that you get practice finding information on these sources,
and sometimes you will need to search them yourself if I haven't included a link.
To make this more interesting,
I have compiled a more recent version of the Gapminder dataset,
which contains values up until 2018 for most of the features.
We will not use all the columns in the data set,
but here is a description of what they contain
that you can refer back to throughout the lab.
| Column | Description |
|-----------------------|----------------------------------------------------------------------------------------------|
| country | Country name |
| year | Year of observation |
| population | Population in the country at each year |
| region | Continent the country belongs to |
| sub_region | Sub-region the country belongs to |
| income_group | Income group [as specified by the world bank in 2018] |
| life_expectancy | The mean number of years a newborn would <br>live if mortality patterns remained constant |
| income | GDP per capita (in USD) <em>adjusted <br>for differences in purchasing power</em> |
| children_per_woman | Average number of children born per woman |
| child_mortality       | Deaths of children under 5 years <br>of age per 1000 live births                             |
| pop_density | Average number of people per km<sup>2</sup> |
| co2_per_capita | CO2 emissions from fossil fuels (tonnes per capita) |
| years_in_school_men | Mean number of years in primary, secondary,<br>and tertiary school for 25-36 years old men |
| years_in_school_women | Mean number of years in primary, secondary,<br>and tertiary school for 25-36 years old women |
[as specified by the world bank in 2018]: https://datahelpdesk.worldbank.org/knowledgebase/articles/378833-how-are-the-income-group-thresholds-determined
<div class="alert alert-info" style="color:black">
### Question 2
rubric={accuracy:1,quality:1,viz:2}
<h4>Python</h4>
<ol type="1">
<li>I have uploaded the <a href=https://raw.githubusercontent.com/UofTCoders/workshops-dc-py/master/data/processed/world-data-gapminder.csv> 2018 Gapminder data at this URL.</a> Use <code>read_csv</code> from <code>pandas</code> to load the data directly from the URL and assign it a suitable variable name. Set the <code>parse_dates</code> parameter to <code>['year']</code> to ensure that Altair recognizes this column as time data.</li>
<li>Now let’s create a similar bubble chart to what you saw in the video:
<ul>
<li>Filter the dataframe to only keep observations from a single year, 1962. You can create a new data frame variable or perform the filtering directly as you pass the data to Altair. Dates can be matched as strings when filtering.</li>
<li>Use a circle mark to recreate the appearance of the plot in the video.</li>
<li>Encode the proper variables so that children per woman is on the x-axis, life expectancy on the y-axis, and so that the circles’ color corresponds to their region, and the size reflects the population.</li>
</ul></li>
</ol>
<p> Don't worry about getting axis labels and sizes exactly like in the video;
we will return to this code later in the lab to customize it.</p>
</div>
```
# YOUR PYTHON ANSWER GOES HERE
import altair as alt
import pandas as pd

df = pd.read_csv("https://raw.githubusercontent.com/UofTCoders/workshops-dc-py/master/data/processed/world-data-gapminder.csv", parse_dates=['year'])

# Keep only the observations from 1962
df_1962 = df[df['year'].dt.year == 1962]

alt.Chart(df_1962).mark_circle().encode(
    alt.X('children_per_woman'),
    alt.Y('life_expectancy'),
    size='population',
    color='region'
)
```
# 3. Education balance
A common misconception is that women around the world go to school many years less than men. Let’s find out what the data actually says about this.
```
# Ratio of women's to men's years in school (1 = equal years for both)
df['ratio'] = df['years_in_school_women'] / df['years_in_school_men']
# Keep 1970-2015, the years for which education data was recorded
df1 = df.loc[df["year"].between('1970-01-01', '2015-12-31')]
# Mean ratio per income group and year
df2 = df1.groupby(['income_group', 'year'], as_index=False).agg({"ratio": "mean"})
```
<div class="alert alert-info" style="color:black">
### Question 3
rubric={accuracy:2,quality:1,viz:2}
<h4>
Python
</h4>
<ol type="1">
<li>Compute a new column in your dataframe that represents the ratio between the number of years in school for women and men (calculate it so that the value 1 means as many years for both, and 0.5 means half as many for women compared to men).</li>
<li>Filter the dataframe to only contain values from 1970 to 2015, since those are the years for which the education data has been recorded. Again, you can either create a new variable or perform the filtering as you pass the data to the plotting function.</li>
<li>Create a line plot showing how the ratio of women’s to men’s years in school has changed over time. Group the data by income group and plot the mean for each group.</li>
<li>Use layering to add a square mark for every data point in your line plot (so one per yearly mean in each group).</li>
</ol>
</div>
```
# YOUR PYTHON ANSWER GOES HERE
lines = alt.Chart(df2).mark_line().encode(
    x='year',
    y='ratio',
    color='income_group'
)
# Layer a square mark on top of the lines, one per yearly mean in each group
lines + lines.mark_square()
```
<div class="alert alert-warning" style="color:black">
### Question 3.1 (Optional)
rubric={accuracy:1}
<h4>
Python
</h4>
Add <a href=https://altair-viz.github.io/gallery/line_with_ci.html> a confidence interval band</a>
to your line + square plot by assigning the plot in the previous question to a variable name
and then using layering to add the band.
The default in the link above is a 95% bootstrapped confidence interval.
</div>
```
# YOUR PYTHON ANSWER GOES HERE
```
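For intuition about what the band would represent, here is a minimal NumPy sketch of a 95% bootstrapped confidence interval of the mean, the statistic behind the band in the linked example; the ratio values below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up sample of education ratios for one income group in one year
sample = rng.normal(loc=0.8, scale=0.1, size=50)

# Bootstrap: resample with replacement many times, recording each resample's mean
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5000)
])

# The 2.5th and 97.5th percentiles of the bootstrap distribution
# bound the 95% confidence interval of the mean
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean={sample.mean():.3f}, 95% CI=({ci_low:.3f}, {ci_high:.3f})")
```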
# 4. Family planning
Another common misconception is that saving the lives of children in low income countries
will lead to overpopulation.
Rather,
lower child mortality is actually correlated with smaller family sizes.
As more children survive,
parents feel more secure with a smaller family size.
Let's have a look in the data to see how this relationship has evolved over time.
In the plots we are going to make,
it is important to note that they can only show correlation,
not causation.
However,
in the [Gapminder](https://www.gapminder.org/videos/) video library
there are a few videos on this topic
(including [this](https://www.gapminder.org/answers/will-saving-poor-children-lead-to-overpopulation/)
and [this](https://www.gapminder.org/videos/population-growth-explained-with-ikea-boxes/) one),
discussing how reducing poverty can help slow down population growth
through decreased family sizes.
Current estimates suggest that the world population
will stabilize around 11 billion people
and the average number of children per woman
will be close to two worldwide by the year 2100.
<div class="alert alert-info" style="color:black">
### Question 4
rubric={accuracy:1,viz:2,reasoning:1}
<h4>
Python
</h4>
<ol type="1">
<li>Filter the data to include only the years 1918, 1938, 1958, 1978, 1998, and 2018. To do this, you need to write out the full date strings, <code>'1918-01-01'</code> etc., or use <code>pd.to_datetime</code> with <code>format='%Y'</code> on a list of the year integers only; it is up to you which one.</li>
<li>Use filled circles to make a scatter plot with children per women on the x-axis, child mortality on the y-axis, and the circles colored by the income group.</li>
<li>Facet your data into six subplots, one for each year laid out in 3 columns and 2 rows. To avoid taking too much space, set the width and height of the plots to suitable numbers.</li>
<li>Briefly describe your interpretation of the data. Does it support what was written in the introduction to this section of the lab? Why / why not?</li>
</ol>
</div>
```
# YOUR PYTHON ANSWER GOES HERE
# Keep only the six selected years
years = [1918, 1938, 1958, 1978, 1998, 2018]
df4 = df[df['year'].dt.year.isin(years)]
alt.Chart(df4).mark_circle().encode(
    x='children_per_woman',
    y='child_mortality',
    color='income_group',
    facet=alt.Facet('year', columns=3)
).properties(
    width=180,
    height=180
)
```
YOUR ANSWER TO 4 GOES HERE
Gradually over the years, countries have moved towards fewer children per woman, and child mortality has decreased at the same time. This supports the introduction: lower child mortality goes together with smaller family sizes.
# 5. Carbon dioxide emissions
CO2 emissions are often discussed in relation to climate change.
Let's explore the data to see which countries emit the most CO2 per capita
and which regions have emitted the most in total over time.
<div class="alert alert-info" style="color:black">
### Question 5
rubric={accuracy:1,quality:1,viz:2}
<h4>
Python
</h4>
<ol type="1">
<li>Filter the data to include only the most recent year when <code>'co2_per_capita'</code> was measured (it is up to you how you find out which year this is).</li>
<li>Use the data frame <code>nlargest</code> method to select the top 40 countries in CO2 production per capita for that year.</li>
<li>Since we have only one value per country per year, let’s create a bar chart to visualize it. Encode the CO2 per capita as on the x-axis, the country on the y-axis, and the region as the color.</li>
<li>Sort your bar chart so that the highest CO2 per capita is the closest to the x-axis (the bottom of the chart). <a href="https://altair-viz.github.io/gallery/bar_chart_sorted.html">Here is an example of how to sort in Altair</a>.</li>
</ol>
</div>
```
# YOUR PYTHON ANSWER GOES HERE
# The most recent year with CO2 measurements can be found via
# df.dropna(subset=['co2_per_capita'])['year'].max()
df_2014 = df[df['year'].dt.year == 2014]
df_2014_large = df_2014.nlargest(40, "co2_per_capita")
alt.Chart(df_2014_large).mark_bar().encode(
x=alt.X('co2_per_capita'),
y=alt.Y('country', sort='x'),
color='region'
).properties(
width=400,
height=400
)
```
<div class="alert alert-info" style="color:black">
### Question 5.1
rubric={accuracy:1,quality:1,viz:2}
<h4>
Python
</h4>
<ol type="1">
<li>In addition to the CO2 per capita, the total population also matters for a country’s overall CO2 emissions. Compute a new column in your data set called <code>'co2_total'</code> which contains the total CO2 emissions per observation.</li>
<li>Plot this new column over time in an area chart, but instead of plotting one area for each country, plot one for each region, representing the sum of all countries’ CO2 emissions in that region.</li>
</ol>
</div>
```
# YOUR PYTHON ANSWER GOES HERE
df['co2_total'] = df['co2_per_capita'] * df['population']
# One area per region: sum the emissions of all countries in each region
alt.Chart(df).mark_area().encode(
    x=alt.X('year'),
    y='sum(co2_total)',
    color='region'
).properties(
width=600,
height=600
)
```
# 6. Income distribution
In his talk back in 2003, Rosling showed a projection of how the world income distribution would look in 2015. Let’s eyeball whether the suggested trend was accurate.
<div class="alert alert-warning" style="color:black">
### Question 6 (Optional)
rubric={accuracy:1,viz:1}
<h4>Python</h4>
<ol type="1">
<li>Wrangle your data to include the years 1979, 1991, 2003 and 2015.</li>
<li>Create a histogram (binned bar chart) of the income distribution with an appropriate number of bins.</li>
<li>Facet by year and make the plots smaller so that they fit in a single row.</li>
<li>It is a little hard to tell if the data is exactly the same as the prediction since we are not using a log scale and a histogram instead of a density plot (we’ll learn about these things later). But in general, briefly explain whether you think the trend is the same or not?</li>
</ol>
</div>
```
# YOUR PYTHON ANSWER GOES HERE
```
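For reference, the binning a histogram performs can be sketched directly with NumPy; the income values below are made up for illustration:

```python
import numpy as np

# Hypothetical GDP-per-capita values (USD)
incomes = np.array([500, 800, 1200, 2500, 4000, 7000, 15000, 30000, 45000])

# Split the observed range into 5 equal-width bins and count observations
# per bin; a binned bar chart plots exactly these counts
counts, edges = np.histogram(incomes, bins=5)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:8.0f} - {hi:8.0f}: {n}")
```

With real income data, log-spaced bins (or a log scale) usually show the shape of such a skewed distribution better, as hinted at in point 4 of the question.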
# 7. Chart beautification
Let's make our chart from question 2 look more like the Gapminder bubble chart! Beautifying charts can take a long time, but it is also satisfying when you end up with a really nice-looking chart. We will learn more about how to create charts for communication later, but these parameters are usually enough to create basic communication charts and to help you in your data exploration.
<div class="alert alert-info" style="color:black">
### Question 7
rubric={accuracy:2,quality:1,viz:1}
<h4>
Python
</h4>
<ol type="1">
<li>Copy in your code from question 2.1 and confirm that your scatter plot is generated properly, so that you know you didn't miss anything when copying.</li>
<li>Add a title of your choice to the chart.</li>
<li>Set the x-axis and y-axis scale so that they don’t include zero and are zoomed in to the extent of the data instead.</li>
<li>Set proper titles for the axis and the legends, which include spaces instead of underscores and are capitalized.</li>
<li>Some of the dots are really hard to see because they are so small and it is a bit difficult to distinguish the changes in size as well. Let’s make everything bigger and emphasize the size difference by using the <a href="https://altair-viz.github.io/gallery/airport_connections.html">range argument to <code>alt.Scale</code></a> (there is a lot of other things going on in this example, so just focus on how they specify <code>size</code>).</li>
<li>Enlarge the axis title font by finding and setting the <a href="https://altair-viz.github.io/user_guide/configuration.html?highlight=titlefont#axis-configuration">right parameter of <code>.configure_axis</code></a></li>
</ol>
</div>
```
# YOUR PYTHON ANSWER GOES HERE
alt.Chart(df_1962, title='Children per Woman vs. Life Expectancy in 1962').mark_circle().encode(
    alt.X('children_per_woman', scale=alt.Scale(zero=False), title='Children Per Woman'),
    alt.Y('life_expectancy', scale=alt.Scale(zero=False), title='Life Expectancy'),
    color=alt.Color('region', title='Region'),
    size=alt.Size('population', scale=alt.Scale(range=[10, 1000]), title='Population')
).configure_axis(labelFontSize=10, titleFontSize=20
).configure_title(fontSize=30)
```
---
# Submission to Canvas
When you are ready to submit your assignment do the following:
1. Run all cells in your notebook to make sure there are no errors by doing `Kernel -> Restart Kernel and Run All Cells...`
2. Convert your notebook to .html format using the `convert_notebook()` function below or by `File -> Export Notebook As... -> Export Notebook to HTML`
3. Submit your exported .html file to Canvas.
4. Don't forget to also push all your work (including the .ipynb file) to GitHub.
# Modeling resting-state MEG-Data
In this example we will learn how to use `neurolib` to simulate resting state functional connectivity of MEG recordings.
In the first part of the notebook, we will compute the frequency-specific functional connectivity matrix of an exemplary resting-state MEG recording from the [YouR-Study](https://doi.org/10.1186/s12888-017-1206-5) *Uhlhaas, P.J., Gajwani, R., Gross, J. et al. The Youth Mental Health Risk and Resilience Study (YouR-Study). BMC Psychiatry 17, 43 (2017)*.
To this end we will:
* Band-Pass filter the signal
* Apply the `hilbert`-transformation to extract the signal envelope
* Orthogonalize the signal envelopes of two exemplary regions
* Low-Pass filter the signal envelopes
* and compute the pairwise envelope correlations which yields the `functional connectivity` matrix.
We follow the approach presented in *[Hipp, J., Hawellek, D., Corbetta, M. et al.](https://doi.org/10.1038/nn.3101), Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat Neurosci 15, 884–890 (2012)*
In the second part of this notebook, we will use a whole-brain model to simulate brain activity and compute functional connectivity matrix of the simulated signal envelope, as was done for the empirical MEG data. The parameters of this model have been previously optimized with `neurolib`'s evolutionary algorithms (not shown here).
Finally, we will compute the fit (Pearson correlation) of the simulated functional connectivity to the empirical MEG data, which was used as a fitting objective in a previous optimization procedure.
```
# change to the root directory of the project
import os
if os.getcwd().split("/")[-1] == "examples":
os.chdir('..')
# This will reload all imports as soon as the code changes
%load_ext autoreload
%autoreload 2
import os
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import seaborn as sns
import ipywidgets as widgets
from IPython.utils import io
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import time
import pandas as pd
```
## Empirical Functional Connectivity
### Load MEG-Data
First off, let's load the MEG data using the `Signal` class from `neurolib`. Our example data has already been preprocessed and projected into source space using the [AAL2](https://www.gin.cnrs.fr/en/tools/aal/) atlas.
```
from neurolib.utils.signal import Signal
signal = Signal.from_file(os.path.join('examples', 'data','rs-meg.nc'))
region_labels = signal.data.regions.values
nr_regions = len(region_labels)
display(signal.data)
```
### Band-Pass filter and Hilbert transform
We will now filter the signal into the desired frequency band and apply the [hilbert transform](https://en.wikipedia.org/wiki/Hilbert_transform) to the band-pass filtered signal. This will provide us with the analytic representation of the signal, which we can then use to extract the signal's envelope and its phase.
In the following, we plot each processing step for an example target region that you can choose using the widgets below *(default: left Precentral Gyrus)*. Furthermore, you can also choose the frequency range that the signal is filtered in *(default: alpha (8-12 Hz))*.
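To illustrate what the Hilbert transform does, here is a self-contained sketch on a synthetic amplitude-modulated oscillation (the frequencies are made up and not taken from the MEG data):

```python
import numpy as np
from scipy.signal import hilbert

fs = 250                                        # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)                     # 2 s of signal
env_true = 1 + 0.5 * np.sin(2 * np.pi * 1 * t)  # slow amplitude modulation
x = env_true * np.sin(2 * np.pi * 10 * t)       # 10 Hz "alpha-like" carrier

# The analytic signal: its magnitude is the envelope, its angle the phase
analytic = hilbert(x)
env_est = np.abs(analytic)
phase = np.angle(analytic)

# Away from the window edges, the estimate tracks the true envelope closely
mid = slice(50, -50)
print(np.max(np.abs(env_est[mid] - env_true[mid])))
```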
```
print('Select a region from the AAL2 atlas and a frequency range')
# Select a Region
target = widgets.Select(options=region_labels, value='PreCG.L', description='Regions',
                        layout=widgets.Layout(width='50%', height='150px'))
display(target)
# Select Frequency Range
freq = widgets.IntRangeSlider(min=1, max=46, description='Frequency (Hz)', value=[8, 12], layout=widgets.Layout(width='80%'),
style={'description_width': 'initial'})
display(freq)
# Define how many timepoints you'd like to plot
plot_timepoints = 1000
# Plot unfiltered Signal
fig, ax = plt.subplots(2,1,figsize=(12,8), sharex=True)
sns.lineplot(x=signal.data.time[:plot_timepoints], y=signal.data.sel(regions=target.value)[:plot_timepoints],
ax=ax[0], color='k', alpha=0.6)
ax[0].set_title(f'Unfiltered Signal ({target.value})');
# Band Pass Filter the Signal
signal.filter(freq.value[0], freq.value[1], inplace=True);
# Apply hilbert-transform to extract the signal envelope
complex_signal = signal.hilbert_transform('complex', inplace=False)
signal_env = np.abs(complex_signal.data)
# Plot filtered Signal and Signal Envelope
sns.lineplot(x=signal.data.time[:plot_timepoints], y=signal.data.sel(regions=target.value)[:plot_timepoints],
ax=ax[1], label='Bandpass-Filtered Signal')
sns.lineplot(x=signal_env.time[:plot_timepoints], y=signal_env.sel(regions=target.value)[:plot_timepoints],
ax=ax[1], label='Signal Envelope')
ax[1].set_title(f'Filtered Signal ({target.value})');
ax[1].legend(bbox_to_anchor=(1.2, 1),borderaxespad=0)
sns.despine(trim=True)
```
### Orthogonalized signal envelope
Now we are going to address the main methodological issue of MEG when it comes to the analysis of the cortical
functional connectivity structure, i.e. its low spatial resolution. The electric field
generated by any given neural source spreads widely over the cortex so that the signal captured at the MEG sensors is a complex mixture of signals from multiple underlying neural sources.
To account for the effect of electric field spread on our MEG connectivity measures, we adapted the orthogonalization approach by *Hipp, J., Hawellek, D., Corbetta, M. et al. Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat Neurosci 15, 884–890 (2012) __[link](https://doi.org/10.1038/nn.3101)__*.
The basic idea here is that a signal generated by one neural source and measured at two separate sensors must have exactly the same phase at both sensors. In contrast, signals from different neural sources have different phases. And thus it is possible to eliminate the effect of a reference signal on the target signal by removing the signal component that has the same phase as a reference region.
Formally, this can be expressed as $Y_{\perp X}(t,f) = \mathrm{imag}\big(Y(t,f)\,\frac{X(t,f)^\star}{|X(t,f)|}\big)$. Here, $Y$ represents the analytic signal from our target region that is being orthogonalized with respect to the signal from region $X$.
Using the widgets below, you can choose the reference region $X$ *(default: right Precentral Gyrus)*
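As a quick sanity check of this formula (a self-contained NumPy sketch, independent of the notebook's data): orthogonalizing a signal against itself removes it completely, since a signal trivially shares its own phase, while a signal from a different source keeps most of its envelope.

```python
import numpy as np
from scipy.signal import hilbert

t = np.arange(0, 2, 1 / 250)
X = hilbert(np.sin(2 * np.pi * 10 * t))   # analytic signal of the reference source
Y = hilbert(np.cos(2 * np.pi * 17 * t))   # analytic signal of an unrelated source

def orthogonalize(y, x):
    """Remove from y the component that shares x's phase: imag(y * conj(x) / |x|)."""
    return (y * x.conj() / np.abs(x)).imag

# Orthogonalized against itself, a signal vanishes (same source, same phase) ...
self_orth = orthogonalize(X, X)
# ... while an unrelated signal retains a substantial envelope
cross_orth = orthogonalize(Y, X)
print(np.abs(self_orth).max(), np.abs(cross_orth).mean())
```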
```
print('Select a reference region for the orthogonalization')
# Select a Region
referenz = widgets.Select(options=region_labels, value='PreCG.R', description='Regions',
                          layout=widgets.Layout(width='50%', height='150px'))
display(referenz)
# Perform Orthogonalization
signal_conj = complex_signal.data.conj()
conj_div_env = signal_conj/signal_env
orth_signal = (complex_signal.data.sel(regions=target.value) * conj_div_env.sel(regions=referenz.value)).imag
orth_env = np.abs(orth_signal)
# Plot
fig, ax = plt.subplots(2,1,figsize=(12,8), sharex=True)
sns.lineplot(x=signal.data.time[:plot_timepoints], y=signal.data.sel(regions=referenz.value)[:plot_timepoints], ax=ax[0])
sns.lineplot(x=signal_env.time[:plot_timepoints], y=signal_env.sel(regions=referenz.value)[:plot_timepoints], ax=ax[0])
ax[0].set_title(f'Reference Region X ({referenz.value})');
sns.lineplot(x=signal.data.time[:plot_timepoints], y=signal.data.sel(regions=target.value)[:plot_timepoints],
ax=ax[1], label='Bandpass-Filtered Signal')
sns.lineplot(x=signal_env.time[:plot_timepoints], y=signal_env.sel(regions=target.value)[:plot_timepoints],
ax=ax[1], label='Signal Envelope')
sns.lineplot(x = orth_env.time[:plot_timepoints], y=orth_env[:plot_timepoints], ax=ax[1], label='Orthogonalized Envelope')
ax[1].legend(bbox_to_anchor=(1.2, 1),borderaxespad=0)
ax[1].set_title(f'Target Region Y ({target.value})');
sns.despine(trim=True)
```
### Low-Pass filtering of the envelopes
As a last step, before calculating the envelope correlations, we need to low-pass filter the signal envelopes since the connectivity measures of (ultra)-low frequency components of the MEG-signal correspond best to the functional connectivity as measured using fMRI.
Below, you can choose the low-pass frequency *(default: 0.2 Hz)*.
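The effect of such a low-pass filter can be sketched with a standard Butterworth filter from SciPy; this mirrors the idea behind `Signal.filter`, not its exact implementation, and the frequencies below are chosen for illustration only:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100                                  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 0.5 * t)        # 0.5 Hz component (below cutoff, kept)
fast = 0.3 * np.sin(2 * np.pi * 15 * t)   # 15 Hz component (above cutoff, removed)
x = slow + fast

# 4th-order Butterworth low-pass at 2 Hz; sosfiltfilt applies it
# forward and backward, so the result has no phase shift
sos = butter(4, 2, btype='low', fs=fs, output='sos')
x_low = sosfiltfilt(sos, x)

# Away from the edges, only the slow component survives
mid = slice(200, -200)
print(np.max(np.abs(x_low[mid] - slow[mid])))
```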
```
low_pass = widgets.FloatSlider(value=0.2, min=0, max=2.0, step=0.1, description='Low-Pass Frequency (Hz)',
disabled=False, readout=True, readout_format='.1f', layout=widgets.Layout(width='80%'),
style={'description_width': 'initial'})
display(low_pass)
with io.capture_output() as captured:
low_orth_env = Signal(orth_env).filter(low_freq=None, high_freq=low_pass.value, inplace=False)
low_signal_env = Signal(signal_env.sel(regions=referenz.value)).filter(low_freq=None, high_freq=low_pass.value, inplace=False)
# Plot
fig, ax = plt.subplots(1,2,figsize=(15,4), sharey=True)
sns.lineplot(x=signal_env.time[:plot_timepoints], y=signal_env.sel(regions=referenz.value)[:plot_timepoints], ax=ax[0])
sns.lineplot(x=low_signal_env.data.time[:plot_timepoints], y=low_signal_env.data[:plot_timepoints], ax=ax[0])
ax[0].set_title(f'Reference Region X ({referenz.value})');
sns.lineplot(x = orth_env.time[:plot_timepoints], y=orth_env[:plot_timepoints], ax=ax[1], label='Orthogonalized Envelope')
sns.lineplot(x = low_orth_env.data.time[:plot_timepoints], y=low_orth_env.data[:plot_timepoints], ax=ax[1], label='Low-Passed Orthogonalized Envelope')
ax[1].legend(bbox_to_anchor=(1, -0.18),borderaxespad=0)
ax[1].set_title(f'Target Region Y ({target.value})');
sns.despine(trim=True)
print(f'Orthogonalized envelope correlation between {referenz.value} and {target.value}: ', np.round(np.corrcoef(low_orth_env.data,low_signal_env.data)[0,1],2))
```
### Computing the functional connectivity matrix
We will now define a function that iterates over each pair of brain regions and performs the previously presented processing steps, i.e. that extracts the envelopes, performs the orthogonalization, applies the low-pass filter, and returns the functional connectivity matrix that contains the pairwise envelope correlations.
This step may take a minute.
```
def orth_fc(signal, low_pass):
nr_regions = signal.data.shape[0]
progress = widgets.IntProgress(min=0, max=nr_regions, description=('Calculating FC Matrix'),
layout=widgets.Layout(width='80%'), style={'description_width': 'initial'})
display(progress)
complex_signal = signal.hilbert_transform('complex', inplace=False)
signal_env = signal.hilbert_transform('amplitude', inplace=False);
conj_div_env = complex_signal.data.conj()/signal_env.data
# Low-pass filter Signal envelope
with io.capture_output() as captured:
signal_env.filter(low_freq=None, high_freq=low_pass)
corr = []
for complex_region in complex_signal.data:
orth_signal = (complex_region * conj_div_env).imag
orth_env = np.abs(orth_signal).T
orth_env = Signal(orth_env)
with io.capture_output() as captured:
orth_env.filter(low_freq=None, high_freq=low_pass)
corr_mat = np.corrcoef(orth_env.data, signal_env.data)
corr.append(np.diag(corr_mat, k=nr_regions))
progress.value += 1
fc = np.array(corr)
# Since the orthogonalization process is not symmetric we take the mean of both directions.
fc = (fc.T + fc) / 2.
np.fill_diagonal(fc,0)
return fc
# Execute Function
fc = orth_fc(signal, low_pass.value)
```
Let's now plot the functional connectivity matrix. We label only every second row/column since right and left regions alternate in the AAL2 atlas.
```
fig, ax = plt.subplots(figsize=(10,8))
sns.heatmap(fc, square=True, ax=ax, cmap='YlGnBu', linewidth=0.005, cbar_kws={"shrink": .8})
ticks = [tick[:-2] for tick in region_labels[::2]]
ax.set_xticks(np.arange(0,94,2)); ax.set_yticks(np.arange(0,94,2))
ax.set_xticklabels(ticks, rotation=90, fontsize=8); ax.set_yticklabels(ticks, rotation=0, fontsize=8);
```
#### Exclude subcortical regions
For the following whole-brain simulation we are only interested in the cortical regions. So we'll now exclude all subcortical regions:
* Hippocampus: 41-44
* Amygdala: 45-46
* Basal Ganglia: 75-80
* Thalamus: 81-82
> Attention: AAL indices start with 1
```
exclude = list(range(40, 46)) + list(range(74, 82))
tmp = np.delete(fc, exclude, axis=0)
emp_fc = np.delete(tmp, exclude, axis=1)
# Exclude regions from the list of region labels
emp_labels = np.delete(region_labels, exclude)
```
## Whole-brain model
In this part of the notebook, we will use `neurolib` to simulate the functional connectivity. We will therefore:
* Load structural connectivity matrices from the *Human Connectome Project* and initiate the whole-brain model using the Wilson-Cowan model to simulate each brain region
* Set the *global coupling strength*, *exc. background input*, and the *noise strength* parameters of the model
* Run the simulation
* Compute the functional connectivity using the signal envelopes
Please refer to the `wc-minimal` example for an introduction to the Wilson-Cowan model.
#### Initiate whole-brain model
```
# Let's import the neurolib
from neurolib.models.wc import WCModel
from neurolib.utils.loadData import Dataset
# First we load the structural data set from the Human Connectome Project
ds = Dataset("hcp")
# We initiate the Wilson-Cowan model
wc = WCModel(Cmat = ds.Cmat, Dmat = ds.Dmat, seed=0)
```
#### Parameter settings
You may now choose parameter settings for the *global coupling*, the *excitatory background input*, and the *noise strength*, which will be used when we run the model. The final fit between the simulated and empirical connectivity matrices will depend on the parameters chosen here.
```
global_coupling = widgets.FloatSlider(value=6.55, min=0., max=20.0, step=0.01, description='Global Coupling',
disabled=False, readout=True, readout_format='.2f', layout=widgets.Layout(width='80%'),
style={'description_width': 'initial'})
exc_drive = widgets.FloatSlider(value=1.58, min=0.0, max=4.0, step=0.01, description='Exc. Background Drive',
disabled=False, readout=True, readout_format='.2f', layout=widgets.Layout(width='80%'),
style={'description_width': 'initial'})
inh_drive = widgets.FloatSlider(value=2.83, min=0.0, max=4.0, step=0.01, description='Inh. Background Drive',
disabled=False, readout=True, readout_format='.2f', layout=widgets.Layout(width='80%'),
style={'description_width': 'initial'})
noise_level = widgets.FloatSlider(value=0.02, min=0.001, max=0.05, step=0.001, description='Noise Level',
disabled=False, readout=True, readout_format='.3f', layout=widgets.Layout(width='80%'),
style={'description_width': 'initial'})
display(global_coupling)
display(exc_drive)
display(inh_drive)
display(noise_level)
```
#### Run the simulation
Let's now run the whole-brain model using the defined parameter settings. This may take some time since we're simulating a complete minute here.
```
# Let's set the previously defined parameters
# note: the duration here is short for testing:
wc.params['duration'] = 10*1000
# use longer simulation for real run:
#wc.params['duration'] = 1*60*1000
wc.params['K_gl'] = global_coupling.value
wc.params['exc_ext'] = exc_drive.value
wc.params['inh_ext'] = inh_drive.value
wc.params['sigma_ou'] = noise_level.value
# Run the model
wc.run()
```
### Simulated functional connectivity
We'll now compute the functional connectivity matrix containing the pairwise envelope correlations between all cortical regions of the AAL2 atlas. We thus follow the same processing steps as before, i.e. band-pass filter the signal, extract the signal envelopes using the hilbert transform, low-pass filter the envelopes, and compute the pairwise Pearson correlations. Note that we don't apply the orthogonalization scheme here, since this was only done to account for the electric field spread in the empirical data.
```
# Create xr DataArray from the simulated excitatory timeseries (keeping the region labels)
sim_signal = xr.DataArray(wc.exc[:, int(1000/wc.params.dt):], dims=("regions", "time"), coords={"regions": emp_labels, "time": wc.t[int(1000/wc.params.dt):]/1000},
attrs={'atlas':'AAL2'})
# Initialize Figure
fig, ax = plt.subplots(figsize=(12,4))
# Filter signal
sim_signal = Signal(sim_signal)
sim_signal.resample(to_frequency=100)
with io.capture_output() as captured:
sim_signal.filter(freq.value[0], freq.value[1], inplace=True);
sns.lineplot(x=sim_signal.data.time[:plot_timepoints], y=sim_signal.data.sel(regions=target.value)[:plot_timepoints], ax=ax, label='Filtered Signal')
# Extract signal envelope
sim_signal.hilbert_transform('amplitude', inplace=True)
sns.lineplot(x=sim_signal.data.time[:plot_timepoints], y=sim_signal.data.sel(regions=target.value)[:plot_timepoints], ax=ax, label='Signal Envelope')
# Low-Pass Filter
with io.capture_output() as captured:
sim_signal.filter(low_freq=None, high_freq=low_pass.value, inplace=True)
sns.lineplot(x=sim_signal.data.time[:plot_timepoints], y=sim_signal.data.sel(regions=target.value)[:plot_timepoints], ax=ax, label='Low-Pass Signal Envelope')
ax.legend(bbox_to_anchor=(1.2, 1),borderaxespad=0)
ax.set_title(f'Simulated Signal of Target Region Y ({target.value})');
sns.despine(trim=True)
```
To compute the simulated functional connectivity matrix we use the `fc` function from `neurolib.utils.functions`.
```
import neurolib.utils.functions as func
# Compute the functional connectivity matrix
sim_fc = func.fc(sim_signal.data)
# Set diagonal to zero
np.fill_diagonal(sim_fc, 0)
# Plot Empirical and simulated connectivity matrix
fig, ax = plt.subplots(1,2, figsize=(16,10))
sns.heatmap(emp_fc, square=True, ax=ax[0], cmap='YlGnBu', linewidth=0.005, cbar_kws={"shrink": .5})
ax[0].set_title('Empirical FC',pad=10);
sns.heatmap(sim_fc, square=True, ax=ax[1], cmap='YlGnBu', linewidth=0.005, cbar_kws={"shrink": .5})
ax[1].set_title('Simulated FC',pad=10);
ticks = [tick[:-2] for tick in emp_labels[::2]]
for a in ax:
a.set_xticks(np.arange(0,80,2)); a.set_yticks(np.arange(0,80,2))
a.set_xticklabels(ticks, rotation=90, fontsize=8); a.set_yticklabels(ticks, rotation=0, fontsize=8);
```
## Model fit
Lastly, we evaluate the model fit by computing the Pearson correlation between our simulated functional connectivity matrix and the empirical one. For reference, we'll also plot the correlation between the structural and empirical functional connectivity matrices.
```
# Compare structural and simulated connectivity to the empirical functional connectivity
struct_emp = np.corrcoef(emp_fc.flatten(), ds.Cmat.flatten())[0,1]
sim_emp = np.corrcoef(emp_fc.flatten(), sim_fc.flatten())[0,1]
# Plot
fig, ax = plt.subplots(figsize=(6,6))
splot = sns.barplot(x=['Structural Connectivity', 'Simulated Connectivity'], y=[struct_emp, sim_emp], ax=ax)
ax.set_title('Correlation to Empirical Functional Connectivity', pad=10)
for p in splot.patches:
splot.annotate(format(p.get_height(), '.2f'),
(p.get_x() + p.get_width() / 2., p.get_height()),
ha = 'center', va = 'center',
size=20, color='white',
xytext = (0, -12),
textcoords = 'offset points')
sns.despine()
print(f"Parameters: \tGlobal Coupling: {wc.params['K_gl']}\n\t\tExc. Background Drive: {wc.params['exc_ext']}")
print(f"\t\tNoise Level: {wc.params['sigma_ou']}")
```
| github_jupyter |
## Outline
* Recap of data
* Feedforward network with PyTorch tensors and autograd
* Using PyTorch's `nn` module (Functional, Linear, Sequential) and `optim`
* Moving things to CUDA
```
import numpy as np
import math
import matplotlib.pyplot as plt
import matplotlib.colors
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error, log_loss
from tqdm import tqdm_notebook
import seaborn as sns
import time
from IPython.display import HTML
import warnings
warnings.filterwarnings('ignore')
from sklearn.preprocessing import OneHotEncoder
from sklearn.datasets import make_blobs
import torch
torch.manual_seed(0)
my_cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red","yellow","green"])
```
## Generate Dataset
```
data, labels = make_blobs(n_samples=1000, centers=4, n_features=2, random_state=0)
print(data.shape, labels.shape)
plt.scatter(data[:,0], data[:,1], c=labels, cmap=my_cmap)
plt.show()
X_train, X_val, Y_train, Y_val = train_test_split(data, labels, stratify=labels, random_state=0)
print(X_train.shape, X_val.shape, labels.shape)
```
## Using torch tensors and autograd
```
X_train, Y_train, X_val, Y_val = map(torch.tensor, (X_train, Y_train, X_val, Y_val))
print(X_train.shape, Y_train.shape)
def model(x):
a1 = torch.matmul(x, weights1) + bias1 # (N, 2) x (2, 2) -> (N, 2)
h1 = a1.sigmoid() # (N, 2)
a2 = torch.matmul(h1, weights2) + bias2 # (N, 2) x (2, 4) -> (N, 4)
h2 = a2.exp()/a2.exp().sum(-1).unsqueeze(-1) # (N, 4)
return h2
y_hat = torch.tensor([[0.1, 0.2, 0.3, 0.4], [0.8, 0.1, 0.05, 0.05]])
y = torch.tensor([2, 0])
(-y_hat[range(y_hat.shape[0]), y].log()).mean().item()
(torch.argmax(y_hat, dim=1) == y).float().mean().item()
def loss_fn(y_hat, y):
return -(y_hat[range(y.shape[0]), y].log()).mean()
def accuracy(y_hat, y):
pred = torch.argmax(y_hat, dim=1)
return (pred == y).float().mean()
torch.manual_seed(0)
weights1 = torch.randn(2, 2) / math.sqrt(2)
weights1.requires_grad_()
bias1 = torch.zeros(2, requires_grad=True)
weights2 = torch.randn(2, 4) / math.sqrt(2)
weights2.requires_grad_()
bias2 = torch.zeros(4, requires_grad=True)
learning_rate = 0.2
epochs = 10000
X_train = X_train.float()
Y_train = Y_train.long()
loss_arr = []
acc_arr = []
for epoch in range(epochs):
y_hat = model(X_train)
loss = loss_fn(y_hat, Y_train)
loss.backward()
loss_arr.append(loss.item())
acc_arr.append(accuracy(y_hat, Y_train))
with torch.no_grad():
weights1 -= weights1.grad * learning_rate
bias1 -= bias1.grad * learning_rate
weights2 -= weights2.grad * learning_rate
bias2 -= bias2.grad * learning_rate
weights1.grad.zero_()
bias1.grad.zero_()
weights2.grad.zero_()
bias2.grad.zero_()
plt.plot(loss_arr, 'r-')
plt.plot(acc_arr, 'b-')
plt.show()
print('Loss before training', loss_arr[0])
print('Loss after training', loss_arr[-1])
```
## Using NN.Functional
```
import torch.nn.functional as F
torch.manual_seed(0)
weights1 = torch.randn(2, 2) / math.sqrt(2)
weights1.requires_grad_()
bias1 = torch.zeros(2, requires_grad=True)
weights2 = torch.randn(2, 4) / math.sqrt(2)
weights2.requires_grad_()
bias2 = torch.zeros(4, requires_grad=True)
learning_rate = 0.2
epochs = 10000
loss_arr = []
acc_arr = []
for epoch in range(epochs):
y_hat = model(X_train)
loss = F.cross_entropy(y_hat, Y_train)  # note: F.cross_entropy expects raw logits; y_hat has already been through a softmax, so softmax is effectively applied twice here
loss.backward()
loss_arr.append(loss.item())
acc_arr.append(accuracy(y_hat, Y_train))
with torch.no_grad():
weights1 -= weights1.grad * learning_rate
bias1 -= bias1.grad * learning_rate
weights2 -= weights2.grad * learning_rate
bias2 -= bias2.grad * learning_rate
weights1.grad.zero_()
bias1.grad.zero_()
weights2.grad.zero_()
bias2.grad.zero_()
plt.plot(loss_arr, 'r-')
plt.plot(acc_arr, 'b-')
plt.show()
print('Loss before training', loss_arr[0])
print('Loss after training', loss_arr[-1])
```
## Using NN.Parameter
```
import torch.nn as nn
class FirstNetwork(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.weights1 = nn.Parameter(torch.randn(2, 2) / math.sqrt(2))
self.bias1 = nn.Parameter(torch.zeros(2))
self.weights2 = nn.Parameter(torch.randn(2, 4) / math.sqrt(2))
self.bias2 = nn.Parameter(torch.zeros(4))
def forward(self, X):
a1 = torch.matmul(X, self.weights1) + self.bias1
h1 = a1.sigmoid()
a2 = torch.matmul(h1, self.weights2) + self.bias2
h2 = a2.exp()/a2.exp().sum(-1).unsqueeze(-1)
return h2
def fit(epochs = 1000, learning_rate = 1):
loss_arr = []
acc_arr = []
for epoch in range(epochs):
y_hat = fn(X_train)
loss = F.cross_entropy(y_hat, Y_train)
loss_arr.append(loss.item())
acc_arr.append(accuracy(y_hat, Y_train))
loss.backward()
with torch.no_grad():
for param in fn.parameters():
param -= learning_rate * param.grad
fn.zero_grad()
plt.plot(loss_arr, 'r-')
plt.plot(acc_arr, 'b-')
plt.show()
print('Loss before training', loss_arr[0])
print('Loss after training', loss_arr[-1])
fn = FirstNetwork()
fit()
```
## Using NN.Linear and Optim
```
class FirstNetwork_v1(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.lin1 = nn.Linear(2, 2)
self.lin2 = nn.Linear(2, 4)
def forward(self, X):
a1 = self.lin1(X)
h1 = a1.sigmoid()
a2 = self.lin2(h1)
h2 = a2.exp()/a2.exp().sum(-1).unsqueeze(-1)
return h2
fn = FirstNetwork_v1()
fit()
from torch import optim
def fit_v1(epochs = 1000, learning_rate = 1):
loss_arr = []
acc_arr = []
opt = optim.SGD(fn.parameters(), lr=learning_rate)
for epoch in range(epochs):
y_hat = fn(X_train)
loss = F.cross_entropy(y_hat, Y_train)
loss_arr.append(loss.item())
acc_arr.append(accuracy(y_hat, Y_train))
loss.backward()
opt.step()
opt.zero_grad()
plt.plot(loss_arr, 'r-')
plt.plot(acc_arr, 'b-')
plt.show()
print('Loss before training', loss_arr[0])
print('Loss after training', loss_arr[-1])
fn = FirstNetwork_v1()
fit_v1()
```
## Using NN.Sequential
```
class FirstNetwork_v2(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.net = nn.Sequential(
nn.Linear(2, 2),
nn.Sigmoid(),
nn.Linear(2, 4),
nn.Softmax(dim=-1)
)
def forward(self, X):
return self.net(X)
fn = FirstNetwork_v2()
fit_v1()
def fit_v2(x, y, model, opt, loss_fn, epochs = 1000):
for epoch in range(epochs):
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
opt.zero_grad()
return loss.item()
fn = FirstNetwork_v2()
loss_fn = F.cross_entropy
opt = optim.SGD(fn.parameters(), lr=1)
fit_v2(X_train, Y_train, fn, opt, loss_fn)
```
## Running it on GPUs
```
device = torch.device("cuda")
X_train=X_train.to(device)
Y_train=Y_train.to(device)
fn = FirstNetwork_v2()
fn.to(device)
opt = optim.SGD(fn.parameters(), lr=1)  # re-create the optimizer: the old `opt` still points at the previous model's parameters
tic = time.time()
print('Final loss', fit_v2(X_train, Y_train, fn, opt, loss_fn))
toc = time.time()
print('Time taken', toc - tic)
class FirstNetwork_v3(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(0)
self.net = nn.Sequential(
nn.Linear(2, 1024*4),
nn.Sigmoid(),
nn.Linear(1024*4, 4),
nn.Softmax(dim=-1)
)
def forward(self, X):
return self.net(X)
device = torch.device("cpu")
X_train=X_train.to(device)
Y_train=Y_train.to(device)
fn = FirstNetwork_v3()
fn.to(device)
opt = optim.SGD(fn.parameters(), lr=1)  # again bind the optimizer to the new model's parameters
tic = time.time()
print('Final loss', fit_v2(X_train, Y_train, fn, opt, loss_fn))
toc = time.time()
print('Time taken', toc - tic)
```
## Exercises
1. Try out a deeper neural network, e.g. with 2 hidden layers
2. Try out different parameters in the optimizer (e.g. momentum, Nesterov) -> check the `optim.SGD` docs
3. Try out other optimization methods (e.g. RMSprop and Adam) which are supported in `optim`
4. Try out different initialisation methods which are supported in `nn.init`
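As a starting point for exercises 1 and 3, here is one possible sketch (not part of the original notebook): a network with two hidden layers trained with Adam on stand-in blob-like data. The layer sizes, cluster centers, and learning rate are arbitrary choices, not prescribed by the tutorial.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import optim

torch.manual_seed(0)

# Stand-in data: four well-separated 2-D clusters, one per class
centers = torch.tensor([[3., 3.], [-3., 3.], [3., -3.], [-3., -3.]])
X = torch.cat([torch.randn(50, 2) * 0.5 + c for c in centers])
Y = torch.arange(4).repeat_interleave(50)

# Exercise 1: two hidden layers; exercise 3: Adam instead of SGD
net = nn.Sequential(
    nn.Linear(2, 16),
    nn.Sigmoid(),
    nn.Linear(16, 16),
    nn.Sigmoid(),
    nn.Linear(16, 4),  # raw logits; F.cross_entropy applies log-softmax itself
)
opt = optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(200):
    loss = F.cross_entropy(net(X), Y)
    loss.backward()
    opt.step()
    opt.zero_grad()

acc = (net(X).argmax(dim=1) == Y).float().mean().item()
print(f'final loss: {loss.item():.4f}, train accuracy: {acc:.2f}')
```

Note that the last layer outputs raw logits rather than a softmax, which is the form `F.cross_entropy` actually expects.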
# Import modules
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import gridspec
from matplotlib import cm
from pysheds.grid import Grid
from matplotlib import colors
import seaborn as sns
import warnings
from partition import differentiated_linear_weights, controller_placement_algorithm
warnings.filterwarnings('ignore')
sns.set()
sns.set_palette('husl', 8)
%matplotlib inline
```
# Generate graph
```
grid = Grid.from_raster('../data/n30w100_dir', data_name='dir')
dirmap = (64, 128, 1, 2, 4, 8, 16, 32)
# Specify pour point
x, y = -97.294167, 32.73750
# Delineate the catchment
grid.catchment(data='dir', x=x, y=y, dirmap=dirmap, out_name='catch',
recursionlimit=15000, xytype='label')
# Clip the bounding box to the catchment
grid.clip_to('catch', pad=(1,1,1,1))
# Compute flow distance
grid.accumulation(data='catch', out_name='acc', dirmap=dirmap)
grid.flow_distance(data='catch', x=x, y=y, dirmap=dirmap, out_name='dist', xytype='label')
dist = grid.view('dist', nodata=0, dtype=np.float64)
dist_weights = (np.where(grid.view('acc') >= 100, 0.1, 0)
+ np.where((0 < grid.view('acc')) & (grid.view('acc') <= 100), 1, 0)).ravel()
dists = grid.flow_distance(data='catch', x=x, y=y, weights=dist_weights,
dirmap=dirmap, out_name='dist', xytype='label', inplace=False)
weights = differentiated_linear_weights(dists)
```
# Determine weighted accumulation
```
acc = grid.accumulation(data='catch', dirmap=dirmap, inplace=False)
wacc = grid.accumulation(data='catch', weights=weights, dirmap=dirmap, inplace=False)
ratio = np.where(grid.mask & acc.astype(bool), wacc / acc, np.nan).ravel()
mask = (dists != 0)
hist, bin_edges = np.histogram(dists[mask].ravel(), range=(0,dists.max()+1e-5), bins=40)
```
# Ratio of accumulation within critical range to total accumulation
```
k = 7
c = 2000
fdir = grid.view('catch')
subs, ixes = controller_placement_algorithm(fdir, c, k, weights=weights, grid=grid,
compute_weights=differentiated_linear_weights,
dist_weights=dist_weights)
ixy, ixx = np.unravel_index(ixes, wacc.shape)
fig = plt.figure(figsize=(12, 4))
fig.patch.set_alpha(0)
gs = gridspec.GridSpec(1, 2, width_ratios=[2, 3])
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
cmap = cm.get_cmap('plasma', len(subs))
im = np.zeros_like(wacc)
for i, sub in enumerate(subs):
im += (1 + i)*(sub != 0).astype(int)
im[im == 0] = np.nan
im0 = ax0.imshow(im, cmap=cmap, zorder=2)
ax0.scatter(ixx, ixy, zorder=4, c='k', s=15, marker='x')
ax0.grid(zorder=-1)
ax0.xaxis.set_ticklabels([])
ax0.yaxis.set_ticklabels([])
ax0.set_title('Ordered partitions (k = 7)', size=14)
plt.colorbar(im0, ax=ax0)
plotlist = [np.bincount(np.digitize(dists.flat[np.where(sub.ravel())[0]], bin_edges[1:]),
minlength=len(bin_edges) - 1).astype(int)
for sub in subs]
ax1.stackplot(bin_edges[1:], *plotlist, linewidth=0.7,
colors=sns.color_palette('plasma', k), edgecolor='0.4')
ax1.set_xlim(0, int(dists.max()))
ax1.set_title('Stacked width function of partitions', size=14)
ax1.set_xlabel('Normalized travel time [-]', size=13)
ax1.set_ylabel('Frequency', size=13)
ax1.yaxis.tick_right()
ax1.yaxis.set_label_position('right')
plt.tight_layout()
plt.savefig('../img/partitions_k7_phi10.png', bbox_inches='tight', dpi=200)
k = 15
c = 900
fdir = grid.view('catch')
subs, ixes = controller_placement_algorithm(fdir, c, k, weights=weights, grid=grid,
compute_weights=differentiated_linear_weights,
dist_weights=dist_weights)
ixy, ixx = np.unravel_index(ixes, wacc.shape)
fig = plt.figure(figsize=(12, 4))
fig.patch.set_alpha(0)
gs = gridspec.GridSpec(1, 2, width_ratios=[2, 3])
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
cmap = cm.get_cmap('plasma', len(subs))
im = np.zeros_like(wacc)
for i, sub in enumerate(subs):
im += (1 + i)*(sub != 0).astype(int)
im[im == 0] = np.nan
im0 = ax0.imshow(im, cmap=cmap, zorder=2)
ax0.scatter(ixx, ixy, zorder=4, c='k', s=15, marker='x')
ax0.grid(zorder=-1)
ax0.xaxis.set_ticklabels([])
ax0.yaxis.set_ticklabels([])
ax0.set_title('Ordered partitions (k = 15)', size=14)
plt.colorbar(im0, ax=ax0)
plotlist = [np.bincount(np.digitize(dists.flat[np.where(sub.ravel())[0]], bin_edges[1:]), minlength=40).astype(int)
for sub in subs]
ax1.stackplot(bin_edges[1:], *plotlist, linewidth=0.4,
colors=sns.color_palette('plasma', k), edgecolor='0.6')
ax1.set_xlim(0, int(dists.max()))
ax1.set_title('Stacked width function of partitions', size=14)
ax1.set_xlabel('Normalized travel time [-]', size=13)
ax1.set_ylabel('Frequency', size=13)
ax1.yaxis.tick_right()
ax1.yaxis.set_label_position('right')
plt.tight_layout()
plt.savefig('../img/partitions_k15_phi10.png', bbox_inches='tight', dpi=200)
k = 10
c = 1350
fdir = grid.view('catch')
subs, ixes = controller_placement_algorithm(fdir, c, k, weights=weights, grid=grid,
compute_weights=differentiated_linear_weights,
dist_weights=dist_weights)
ixy, ixx = np.unravel_index(ixes, wacc.shape)
fig = plt.figure(figsize=(12, 4))
fig.patch.set_alpha(0)
gs = gridspec.GridSpec(1, 2, width_ratios=[2, 3])
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
cmap = cm.get_cmap('plasma', len(subs))
im = np.zeros_like(wacc)
for i, sub in enumerate(subs):
im += (1 + i)*(sub != 0).astype(int)
im[im == 0] = np.nan
im0 = ax0.imshow(im, cmap=cmap, zorder=2)
ax0.scatter(ixx, ixy, zorder=4, c='k', s=15, marker='x')
ax0.grid(zorder=-1)
ax0.xaxis.set_ticklabels([])
ax0.yaxis.set_ticklabels([])
ax0.set_title('Ordered partitions (k = 10)', size=14)
plt.colorbar(im0, ax=ax0)
plotlist = [np.bincount(np.digitize(dists.flat[np.where(sub.ravel())[0]], bin_edges[1:]), minlength=40).astype(int)
for sub in subs]
ax1.stackplot(bin_edges[1:], *plotlist, linewidth=0.4,
colors=sns.color_palette('plasma', k), edgecolor='0.4')
ax1.set_xlim(0, int(dists.max()))
ax1.set_title('Stacked width function of partitions', size=14)
ax1.set_xlabel('Normalized travel time [-]', size=13)
ax1.set_ylabel('Frequency', size=13)
ax1.yaxis.tick_right()
ax1.yaxis.set_label_position('right')
plt.tight_layout()
plt.savefig('../img/partitions_k10_phi10.png', bbox_inches='tight', dpi=200)
k = 25
c = 530
fdir = grid.view('catch')
subs, ixes = controller_placement_algorithm(fdir, c, k, weights=weights, grid=grid,
compute_weights=differentiated_linear_weights,
dist_weights=dist_weights)
ixy, ixx = np.unravel_index(ixes, wacc.shape)
fig = plt.figure(figsize=(12, 4))
fig.patch.set_alpha(0)
gs = gridspec.GridSpec(1, 2, width_ratios=[2, 3])
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
cmap = cm.get_cmap('plasma', len(subs))
im = np.zeros_like(wacc)
for i, sub in enumerate(subs):
im += (1 + i)*(sub != 0).astype(int)
im[im == 0] = np.nan
im0 = ax0.imshow(im, cmap=cmap, zorder=2)
ax0.scatter(ixx, ixy, zorder=4, c='k', s=15, marker='x')
ax0.grid(zorder=-1)
ax0.xaxis.set_ticklabels([])
ax0.yaxis.set_ticklabels([])
ax0.set_title('Ordered partitions (k = 25)', size=14)
plt.colorbar(im0, ax=ax0)
plotlist = [np.bincount(np.digitize(dists.flat[np.where(sub.ravel())[0]], bin_edges[1:]), minlength=40).astype(int)
for sub in subs]
ax1.stackplot(bin_edges[1:], *plotlist, linewidth=0.1,
colors=sns.color_palette('plasma', k), edgecolor='0.2')
ax1.set_xlim(0, int(dists.max()))
ax1.set_title('Stacked width function of partitions', size=14)
ax1.set_xlabel('Normalized travel time [-]', size=13)
ax1.set_ylabel('Frequency', size=13)
ax1.yaxis.tick_right()
ax1.yaxis.set_label_position('right')
plt.tight_layout()
plt.savefig('../img/partitions_k25_phi10.png', bbox_inches='tight', dpi=200)
1350 / np.count_nonzero(grid.mask)
900 / np.count_nonzero(grid.mask)
# Time run
k = 15
c = 900
fdir = grid.view('catch')
%%timeit
subs, ixes = controller_placement_algorithm(fdir, c, k, weights=weights, grid=grid,
compute_weights=differentiated_linear_weights,
dist_weights=dist_weights)
```
# NSF COA author/affiliation tool
I was inspired by [this awesome tool](https://github.com/ejfertig/NSFBiosketch) from Dr. Elana Fertig, but couldn't get it to run in time due to a Java install problem with the xlsx package in my perpetually infuriating R environment, so I whipped up something similar for the Pythonistas.
This tool takes a list of PMIDs and returns the list of authors and affiliations, along with the most recent authorship date for each author.
```
import pandas as pd
from pymed import PubMed
from time import sleep
```
## Import papers
Import a list of your publication PMIDs, one per line in a plaintext file
```
pmids = []
with open('PMID-export.txt', 'r') as f:
for line in f:
pmids.append(line.strip())
pmids
```
We'll sort them in chronological order, to ensure we get the most recent conflict dates per author
```
pmids.sort(key=int)  # sort numerically: a plain string sort would misorder PMIDs of different lengths
# Create a PubMed object that GraphQL can use to query
# Note that the parameters are not required but kindly requested by PubMed Central
# https://www.ncbi.nlm.nih.gov/pmc/tools/developers/
pubmed = PubMed(tool="BioSketchify", email="my@email.address")
```
## Retrieve and parse PubMed entries
Query PubMed one publication at a time, and parse the author and affiliation list.
Due to API limits, we have to limit the rate at which we query.
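The fixed `sleep(1)` used below is the simplest approach; a small reusable rate limiter is sketched here as an alternative. The ~3 requests/second figure is our reading of NCBI's E-utilities guidance for keyless access, not something stated in this notebook.

```python
import time

class RateLimiter:
    """Enforce a minimum interval between successive calls.
    The 0.34 s default assumes NCBI's ~3 requests/second limit for
    requests made without an API key."""
    def __init__(self, min_interval=0.34):
        self.min_interval = min_interval
        self._last = float('-inf')

    def wait(self):
        # Sleep only for whatever part of the interval hasn't elapsed yet
        now = time.monotonic()
        remaining = self.min_interval - (now - self._last)
        if remaining > 0:
            time.sleep(remaining)
        self._last = time.monotonic()

limiter = RateLimiter()
for _ in range(3):
    limiter.wait()  # call this before each pubmed.query(...)
```

Unlike a fixed sleep, this only pauses for the part of the interval not already spent parsing the previous response.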
```
authors = {}
for pmid in pmids:
results = pubmed.query(pmid, max_results=1)
for article in results:
for author in article.authors:
name = '%s, %s' % (author['lastname'], author['firstname'])
year = article.publication_date.year
affiliation = author['affiliation']
authors[name] = (year, affiliation)
print(article.title)
sleep(1)
```
Make an author dataframe, with blank columns for "Organization" and "Department"
```
author_df = pd.DataFrame.from_dict(authors, orient='index', columns=['year','affiliation'])
author_df['Organization'] = ''
author_df['Department'] = ''
author_df.head()
```
## Split affiliation into department and organization
This might be optional, but PubMed stores affiliation in a single column, and NSF requests 'Organization' be in its own column. This function will loop over the author dataframe, and present each comma-separated element of the 'affiliation' value to you and prompt for input. Press 1 to store that chunk to the 'Department' column, 2 to store that chunk to the 'Organization' column, and any other key to move to the next author.
It will only parse authors that have no entry for the required 'Organization' column, so if you miss that and re-run this cell it will pick up where you left off.
```
print("Enter 1 for Department, 2 for Organization, or nothing to skip rest")
for i, author in author_df.iterrows():
if author['Organization'] != '':
continue
try:
for bit in author['affiliation'].split(','):
print(bit)
choice = input("Input:")
if choice == '1':
author_df.loc[i, 'Department'] = author_df.loc[i, 'Department'] + bit
elif choice == '2':
author_df.loc[i, 'Organization'] = author_df.loc[i, 'Organization'] + bit
else:
break
except AttributeError:  # some authors have no affiliation (None), so .split() fails
continue
author_df.head()
```
## Export author dataframe to CSV file
You can now open this in your favorite spreadsheet program to clean it up and add to the NSF workbook.
```
author_df.to_csv('authors_with_affiliations.csv')
```
```
# default_exp optimizer
#export
from local.torch_basics import *
from local.test import *
from local.notebook.showdoc import *
```
# Optimizer
> Define the general fastai optimizer and the variants
## Optimizer -
```
#export
class _BaseOptimizer():
"Common functionality between `Optimizer` and `OptimWrapper`"
def all_params(self, n=slice(None), with_grad=False):
res = L((p,pg,self.state[p],hyper) for pg,hyper in zip(self.param_groups[n],self.hypers[n]) for p in pg)
return L(o for o in res if o[0].grad is not None) if with_grad else res
def _set_require_grad(self, rg, p,pg,state,h): p.requires_grad_(rg or state.get('force_train', False))
def freeze_to(self, n):
self.frozen_idx = n if n >= 0 else len(self.param_groups) + n
if self.frozen_idx >= len(self.param_groups):
warn(f"Freezing {self.frozen_idx} groups; model has {len(self.param_groups)}; whole model is frozen.")
for o in self.all_params(slice(n, None)): self._set_require_grad(True, *o)
for o in self.all_params(slice(None, n)): self._set_require_grad(False, *o)
def freeze(self):
assert(len(self.param_groups)>1)
self.freeze_to(-1)
def unfreeze(self): self.freeze_to(0)
def set_hypers(self, **kwargs): L(kwargs.items()).starmap(self.set_hyper)
def _set_hyper(self, k, v):
for v_,h in zip(v, self.hypers): h[k] = v_
def set_hyper(self, k, v):
if isinstance(v, slice):
if v.start: v = even_mults(v.start, v.stop, len(self.param_groups))
else: v = [v.stop/10]*(len(self.param_groups)-1) + [v.stop]
v = L(v, use_list=None)
if len(v)==1: v = v*len(self.param_groups)
assert len(v) == len(self.hypers), f"Trying to set {len(v)} values for {k} but there are {len(self.param_groups)} parameter groups."
self._set_hyper(k, v)
add_docs(_BaseOptimizer,
all_params="List of param_groups, parameters, and hypers",
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
unfreeze="Unfreeze the entire model",
set_hypers="`set_hyper` for all `kwargs`",
set_hyper="Set the value(s) in `v` for hyper-parameter `k`")
# export
class Optimizer(_BaseOptimizer):
"Base optimizer class for the fastai library, updating `params` with `steppers`"
_keep_on_clear = ['force_train', 'do_wd']
def __init__(self, params, steppers, stats=None, train_bn=True, **defaults):
params = L(params)
self.steppers,self.stats,self.state,self.train_bn = L(steppers),L(stats),defaultdict(dict),train_bn
defaults = merge(*self.stats.attrgot('defaults'), *self.steppers.attrgot('defaults'), defaults)
self.param_groups = L(L(p) for p in params) if isinstance(params[0], (L,list)) else L([params])
#self.step_func = compose(*steppers)
self.hypers = L({} for _ in range_of(self.param_groups))
self.set_hypers(**defaults)
self.frozen_idx = 0
def zero_grad(self):
for p,*_ in self.all_params(with_grad=True):
p.grad.detach_()
p.grad.zero_()
def step(self):
for p,pg,state,hyper in self.all_params(with_grad=True):
for stat in self.stats: state = stat(state, p, **hyper)
for step in self.steppers: step(p, **{**state, **hyper})
self.state[p] = state
def clear_state(self):
for p,pg,state,hyper in self.all_params():
self.state[p] = {k: state[k] for k in self._keep_on_clear if k in state}
def state_dict(self):
state = [self.state[p] for p,*_ in self.all_params()]
return {'state': state, 'hypers': self.hypers}
def load_state_dict(self, sd):
assert len(sd["hypers"]) == len(self.param_groups)
assert len(sd["state"]) == sum([len(pg) for pg in self.param_groups])
self.hypers = sd['hypers']
self.state = {p: s for p,s in zip(self.all_params().itemgot(0), sd['state'])}
add_docs(Optimizer,
zero_grad="Standard PyTorch API: Zero all the grad attributes of the parameters",
step="Standard PyTorch API: Update the stats and execute the steppers on all parameters that have a grad",
state_dict="Return the state of the optimizer in a dictionary",
load_state_dict="Load the content of `sd`",
clear_state="Reset the state of the optimizer")
```
### Initializing an Optimizer
`params` will be used to create the `param_groups` of the optimizer. If it's a collection (or a generator) of parameters, it will be a `L` containing one `L` with all the parameters. To define multiple parameter groups `params` should be passed as a collection (or a generator) of `L`s.
> Note: In PyTorch, `model.parameters()` returns a generator with all the parameters, that you can directly pass to `Optimizer`.
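To make the note concrete, here is a plain-PyTorch sketch (independent of the fastai classes above) showing that `model.parameters()` yields the parameter tensors one by one, which is why it can be passed directly as a single parameter group:

```python
import torch.nn as nn

model = nn.Linear(2, 4)

# parameters() is a generator over the module's tensors,
# in registration order (weight first, then bias)
shapes = [tuple(p.shape) for p in model.parameters()]
print(shapes)  # [(4, 2), (4,)]
```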
```
opt = Optimizer([1,2,3], noop)
test_eq(opt.param_groups, [[1,2,3]])
opt = Optimizer(range(3), noop)
test_eq(opt.param_groups, [[0,1,2]])
opt = Optimizer([[1,2],[3]], noop)
test_eq(opt.param_groups, [[1,2],[3]])
opt = Optimizer(([o,o+1] for o in range(0,4,2)), noop)
test_eq(opt.param_groups, [[0,1],[2,3]])
```
`steppers` is a list of functions that will be composed when applying the step. For instance, you can compose a function making the SGD step, with another one applying weight decay. Additionally, each `stepper` can have a `defaults` attribute that contains hyper-parameters and their default value. Those are all gathered at initialization, and new values can be passed to override those defaults with the `defaults` kwargs. The steppers will be called by `Optimizer.step` (which is the standard PyTorch name), and gradients can be cleared with `Optimizer.zero_grad` (also a standard PyTorch name).
Once the defaults have all been pulled off, they are copied as many times as there are `param_groups` and stored in `hypers`. To apply different hyper-parameters to different groups (differential learning rates, or no weight decay for certain layers, for instance), you will need to adjust those values after the init.
```
def tst_arg(p, lr=0, **kwargs): return p
tst_arg.defaults = dict(lr=1e-2)
def tst_arg2(p, lr2=0, **kwargs): return p
tst_arg2.defaults = dict(lr2=1e-3)
def tst_arg3(p, mom=0, **kwargs): return p
tst_arg3.defaults = dict(mom=0.9)
def tst_arg4(p, **kwargs): return p
opt = Optimizer([1,2,3], [tst_arg,tst_arg2], tst_arg3)
test_eq(opt.hypers, [{'lr2': 1e-3, 'mom': 0.9, 'lr': 1e-2}])
opt = Optimizer([1,2,3], tst_arg, lr=0.1)
test_eq(opt.hypers, [{'lr': 0.1}])
opt = Optimizer([[1,2],[3]], tst_arg)
test_eq(opt.hypers, [{'lr': 1e-2}, {'lr': 1e-2}])
opt = Optimizer([[1,2],[3]], tst_arg, lr=0.1)
test_eq(opt.hypers, [{'lr': 0.1}, {'lr': 0.1}])
```
For each hyper-parameter, you can pass a slice or a collection to set them, if there are multiple parameter groups. A slice will be converted to a log-uniform collection from its beginning to its end, or if it only has an end `e`, to a collection of as many values as there are parameter groups that are `...,e/10,e/10,e`.
Setting a hyper-parameter with a collection that has a different number of elements than the optimizer has parameter groups will raise an error.
```
opt = Optimizer([[1,2],[3]], tst_arg, lr=[0.1,0.2])
test_eq(opt.hypers, [{'lr': 0.1}, {'lr': 0.2}])
opt = Optimizer([[1,2],[3],[4]], tst_arg, lr=slice(1e-2))
test_eq(opt.hypers, [{'lr': 1e-3}, {'lr': 1e-3}, {'lr': 1e-2}])
opt = Optimizer([[1,2],[3],[4]], tst_arg, lr=slice(1e-4,1e-2))
test_eq(opt.hypers, [{'lr': 1e-4}, {'lr': 1e-3}, {'lr': 1e-2}])
test_fail(lambda: Optimizer([[1,2],[3],[4]], tst_arg, lr=np.array([0.1,0.2])))
```
### Basic steppers
To be able to give examples of optimizer steps, we will need some steppers, like the following:
```
#export
def sgd_step(p, lr, **kwargs):
p.data.add_(-lr, p.grad.data)
return p
def tst_param(val, grad=None):
"Create a tensor with `val` and a gradient of `grad` for testing"
res = tensor([val]).float()
res.grad = tensor([val/10 if grad is None else grad]).float()
return res
p = tst_param(1., 0.1)
p = sgd_step(p, 1.)
test_eq(p, tensor([0.9]))
test_eq(p.grad, tensor([0.1]))
#export
def weight_decay(p, lr, wd, do_wd=True, **kwargs):
"Weight decay as decaying `p` with `lr*wd`"
if do_wd and wd!=0: p.data.mul_(1 - lr*wd)
return p
weight_decay.defaults = dict(wd=0.)
p = tst_param(1., 0.1)
p = weight_decay(p, 1., 0.1)
test_eq(p, tensor([0.9]))
test_eq(p.grad, tensor([0.1]))
#export
def l2_reg(p, lr, wd, do_wd=True, **kwargs):
"L2 regularization as adding `wd*p` to `p.grad`"
if do_wd and wd!=0: p.grad.data.add_(wd, p.data)
return p
l2_reg.defaults = dict(wd=0.)
p = tst_param(1., 0.1)
p = l2_reg(p, 1., 0.1)
test_eq(p, tensor([1.]))
test_eq(p.grad, tensor([0.2]))
```
> Warning: Weight decay and L2 regularization is the same thing for basic SGD, but for more complex optimizers, they are very different. See [Decoupled Weight Decay Regularization](https://arxiv.org/abs/1711.05101) for more information.
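A quick numeric sketch of why the two coincide for vanilla SGD (plain floats, not the steppers above): shrinking the weight by `lr*wd` before the step gives the same result as adding `wd*p` to the gradient first.

```python
p, grad, lr, wd = 1.0, 0.1, 0.5, 0.2

# Weight decay: shrink the weight, then take the plain SGD step
p_wd = p * (1 - lr * wd) - lr * grad

# L2 regularization: add wd*p to the gradient, then take the step
p_l2 = p - lr * (grad + wd * p)

print(p_wd, p_l2)  # identical for vanilla SGD
```

For optimizers like Adam, the L2 term goes through the adaptive statistics while true weight decay does not, which is where the two diverge.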
### Making the step
```
show_doc(Optimizer.step)
```
This method will loop over all param groups, then over all parameters for which `grad` is not None, and call each function in `steppers`, passing it the parameter `p` with the hyper-parameters in the corresponding dict in `hypers`.
```
#test basic step
r = L.range(4)
def tst_params(): return r.map(tst_param)
params = tst_params()
opt = Optimizer(params, sgd_step, lr=0.1)
opt.step()
test_close([p.item() for p in params], r.map(mul(0.99)))
#test two steps
params = tst_params()
opt = Optimizer(params, [weight_decay, sgd_step], lr=0.1, wd=0.1)
opt.step()
test_close([p.item() for p in params], r.map(mul(0.98)))
#test None gradients are ignored
params = tst_params()
opt = Optimizer(params, sgd_step, lr=0.1)
params[-1].grad = None
opt.step()
test_close([p.item() for p in params], [0., 0.99, 1.98, 3.])
#test discriminative lrs
params = tst_params()
opt = Optimizer([params[:2], params[2:]], sgd_step, lr=0.1)
opt.hypers[0]['lr'] = 0.01
opt.step()
test_close([p.item() for p in params], [0., 0.999, 1.98, 2.97])
show_doc(Optimizer.zero_grad)
params = tst_params()
opt = Optimizer(params, [weight_decay, sgd_step], lr=0.1, wd=0.1)
opt.zero_grad()
[test_eq(p.grad, tensor([0.])) for p in params];
```
`Optimizer` has `stats` which are functions taking the state associated with a parameter. `stats` use that parameter, plus the optimizer hyper-parameters, to update the state.
That state can then be used by any stepper. The best example is a momentum calculation.
The state is initialized to an empty dictionary the first time we try to access it; after that, each `stat` function is responsible for properly initializing the values it needs.
```
def tst_stat(state, p, **kwargs):
state['sum'] = state.get('sum', torch.zeros_like(p)) + p.data
return state
tst_stat.defaults = {'mom': 0.9}
#Test Optimizer init
opt = Optimizer([1,2,3], noop, stats=tst_stat)
test_eq(opt.hypers, [{'mom': 0.9}])
opt = Optimizer([1,2,3], noop, stats=tst_stat, mom=0.99)
test_eq(opt.hypers, [{'mom': 0.99}])
#Test stat
x = torch.randn(4,5)
state = tst_stat({}, x)
assert 'sum' in state
test_eq(state['sum'], x)
state = tst_stat(state, x)
test_eq(state['sum'], 2*x)
```
## Statistics
```
# export
def average_grad(state, p, mom, dampening=False, **kwargs):
"Keeps track of the avg grads of `p` in `state` with `mom`."
if 'grad_avg' not in state: state['grad_avg'] = torch.zeros_like(p.grad.data)
damp = 1-mom if dampening else 1.
state['grad_avg'].mul_(mom).add_(damp, p.grad.data)
return state
average_grad.defaults = dict(mom=0.9)
```
`dampening=False` gives the classical formula for momentum in SGD:
```
new_val = old_val * mom + grad
```
whereas `dampening=True` makes it an exponential moving average:
```
new_val = old_val * mom + grad * (1-mom)
```
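A quick numeric check of the two formulas (plain Python, constant gradient of 1): with a constant gradient, the classical version grows toward `grad/(1-mom)` while the dampened version stays on the scale of `grad` itself.

```
mom, grad = 0.9, 1.0
avg = 0.0
for _ in range(3): avg = avg * mom + grad            # classical momentum
ema = 0.0
for _ in range(3): ema = ema * mom + grad * (1-mom)  # dampened (EMA)
print(avg, ema)  # 1 + 0.9 + 0.81 = 2.71, and (1-mom) * 2.71 = 0.271
```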
```
p = tst_param([1,2,3], [4,5,6])
state = {}
state = average_grad(state, p, mom=0.9)
test_eq(state['grad_avg'], p.grad)
state = average_grad(state, p, mom=0.9)
test_eq(state['grad_avg'], p.grad * 1.9)
#Test dampening
state = {}
state = average_grad(state, p, mom=0.9, dampening=True)
test_eq(state['grad_avg'], 0.1*p.grad)
state = average_grad(state, p, mom=0.9, dampening=True)
test_eq(state['grad_avg'], (0.1*0.9+0.1)*p.grad)
# export
def average_sqr_grad(state, p, sqr_mom, dampening=True, **kwargs):
if 'sqr_avg' not in state: state['sqr_avg'] = torch.zeros_like(p.grad.data)
damp = 1-sqr_mom if dampening else 1.
state['sqr_avg'].mul_(sqr_mom).addcmul_(damp, p.grad.data, p.grad.data)
return state
average_sqr_grad.defaults = dict(sqr_mom=0.99)
```
`dampening=False` gives the classical accumulation of squared gradients:
```
new_val = old_val * sqr_mom + grad**2
```
whereas `dampening=True` makes it an exponential moving average:
```
new_val = old_val * sqr_mom + (grad**2) * (1-sqr_mom)
```
```
p = tst_param([1,2,3], [4,5,6])
state = {}
state = average_sqr_grad(state, p, sqr_mom=0.99, dampening=False)
test_eq(state['sqr_avg'], p.grad.pow(2))
state = average_sqr_grad(state, p, sqr_mom=0.99, dampening=False)
test_eq(state['sqr_avg'], p.grad.pow(2) * 1.99)
#Test dampening
state = {}
state = average_sqr_grad(state, p, sqr_mom=0.99)
test_close(state['sqr_avg'], 0.01*p.grad.pow(2))
state = average_sqr_grad(state, p, sqr_mom=0.99)
test_close(state['sqr_avg'], (0.01*0.99+0.01)*p.grad.pow(2))
```
### Freezing part of the model
```
show_doc(Optimizer.freeze)
show_doc(Optimizer.freeze_to)
show_doc(Optimizer.unfreeze)
#Freezing the first layer
params = [tst_params(), tst_params(), tst_params()]
opt = Optimizer(params, sgd_step, lr=0.1)
opt.freeze_to(1)
req_grad = Self.requires_grad()
test_eq(L(params[0]).map(req_grad), [False]*4)
for i in {1,2}: test_eq(L(params[i]).map(req_grad), [True]*4)
#Unfreezing
opt.unfreeze()
for i in range(2): test_eq(L(params[i]).map(req_grad), [True]*4)
#TODO: test warning
# opt.freeze_to(3)
```
Parameters such as batchnorm weights/biases can be marked to always be in training mode: just put `force_train=True` in their state.
```
params = [tst_params(), tst_params(), tst_params()]
opt = Optimizer(params, sgd_step, lr=0.1)
for p in L(params[1])[[1,3]]: opt.state[p] = {'force_train': True}
opt.freeze()
test_eq(L(params[0]).map(req_grad), [False]*4)
test_eq(L(params[1]).map(req_grad), [False, True, False, True])
test_eq(L(params[2]).map(req_grad), [True]*4)
```
### Serializing
```
show_doc(Optimizer.state_dict)
show_doc(Optimizer.load_state_dict)
p = tst_param([1,2,3], [4,5,6])
opt = Optimizer(p, noop, stats=average_grad)
opt.step()
test_eq(opt.state[p]['grad_avg'], tensor([[4., 5., 6.]]))
sd = opt.state_dict()
p1 = tst_param([10,20,30], [40,50,60])
opt = Optimizer(p1, noop, stats=average_grad, mom=0.99)
test_eq(opt.hypers[0]['mom'], 0.99)
test_eq(opt.state, {})
opt.load_state_dict(sd)
test_eq(opt.hypers[0]['mom'], 0.9)
test_eq(opt.state[p1]['grad_avg'], tensor([[4., 5., 6.]]))
show_doc(Optimizer.clear_state)
p = tst_param([1,2,3], [4,5,6])
opt = Optimizer(p, noop, stats=average_grad)
opt.state[p] = {'force_train': True}
opt.step()
test_eq(opt.state[p]['grad_avg'], tensor([[4., 5., 6.]]))
opt.clear_state()
test_eq(opt.state[p], {'force_train': True})
```
## Optimizers
### SGD with momentum
```
#export
def momentum_step(p, lr, grad_avg, **kwargs):
"Step for SGD with momentum with `lr`"
p.data.add_(-lr, grad_avg)
return p
#export
def SGD(params, lr, mom=0., wd=0., decouple_wd=True):
"An `Optimizer` for SGD with `lr`, `mom` and `params`"
steppers = [weight_decay] if decouple_wd else [l2_reg]
steppers.append(sgd_step if mom==0 else momentum_step)
if mom == 0.: return Optimizer(params, steppers, lr=lr, wd=wd)
else: return Optimizer(params, steppers, stats=average_grad, lr=lr, mom=mom, wd=wd)
```
Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True` else as L2 regularization (add the decay to the gradients).
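To see why the two differ (a standalone scalar sketch, not the library code): for plain SGD the two schemes coincide, but once momentum is involved, L2 regularization sends `wd*p` through the momentum buffer while decoupled weight decay does not.

```
lr, wd, mom, g = 0.1, 0.1, 0.9, 0.5

# L2 regularization: wd*p is folded into the gradient, so it accumulates
# in the momentum buffer
buf, p_l2 = 0.0, 1.0
for _ in range(2):
    buf = buf * mom + (g + wd * p_l2)
    p_l2 -= lr * buf

# decoupled weight decay: shrink the weight directly; momentum only sees g
buf, p_wd = 0.0, 1.0
for _ in range(2):
    buf = buf * mom + g
    p_wd = p_wd * (1 - lr * wd) - lr * buf

print(p_l2, p_wd)  # ~0.8266 vs ~0.8356: already different after two steps
```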
```
#Vanilla SGD
params = tst_params()
opt = SGD(params, lr=0.1)
opt.step()
test_close([p.item() for p in params], [i*0.99 for i in range(4)])
opt.step()
[p.item() for p in params]
test_close([p.item() for p in params], [i*0.98 for i in range(4)])
#SGD with momentum
params = tst_params()
opt = SGD(params, lr=0.1, mom=0.9)
assert isinstance(opt, Optimizer)
opt.step()
test_close([p.item() for p in params], [i*0.99 for i in range(4)])
opt.step()
[p.item() for p in params]
test_close([p.item() for p in params], [i*(1 - 0.1 * (0.1 + 0.1*1.9)) for i in range(4)])
for i,p in enumerate(params): test_close(opt.state[p]['grad_avg'].item(), i*0.19)
```
Test weight decay; notice that L2 regularization differs from weight decay even for simple SGD with momentum.
```
params = tst_params()
#Weight decay
opt = SGD(params, lr=0.1, mom=0.9, wd=0.1)
opt.step()
test_close([p.item() for p in params], [i*0.98 for i in range(4)])
#L2 reg
opt = SGD(params, lr=0.1, mom=0.9, wd=0.1, decouple_wd=False)
opt.step()
test_close([p.item() for p in params], [i*0.97 for i in range(4)])
```
### RMSProp
```
#export
def rms_prop_step(p, lr, sqr_avg, eps, grad_avg=None, **kwargs):
"Step for RMSProp with `lr` on `p`"
denom = sqr_avg.sqrt().add_(eps)
p.data.addcdiv_(-lr, (grad_avg if grad_avg is not None else p.grad), denom)
return p
rms_prop_step.defaults = dict(eps=1e-8)
#export
def RMSProp(params, lr, sqr_mom=0.99, mom=0., wd=0., decouple_wd=True):
"An `Optimizer` for RMSProp with `lr`, `sqr_mom`, `mom` and `params`"
steppers = [weight_decay] if decouple_wd else [l2_reg]
steppers.append(rms_prop_step)
stats = [average_sqr_grad] if mom==0. else [average_grad, average_sqr_grad]
return Optimizer(params, steppers, stats=stats, lr=lr, mom=mom, sqr_mom=sqr_mom, wd=wd)
```
RMSProp was introduced by Geoffrey Hinton in his [course](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). What is named `sqr_mom` here is the `alpha` in the course. Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True` else as L2 regularization (add the decay to the gradients).
```
#Without momentum
import math
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = RMSProp(params, lr=0.1)
opt.step()
test_close(params[0], tensor([0.,1.,2.]))
opt.step()
step = - 0.1 * 0.1 / (math.sqrt((0.01*0.99+0.01) * 0.1**2) + 1e-8)
test_close(params[0], tensor([step, 1+step, 2+step]))
#With momentum
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = RMSProp(params, lr=0.1, mom=0.9)
opt.step()
test_close(params[0], tensor([0.,1.,2.]))
opt.step()
step = - 0.1 * (0.1 + 0.9*0.1) / (math.sqrt((0.01*0.99+0.01) * 0.1**2) + 1e-8)
test_close(params[0], tensor([step, 1+step, 2+step]))
```
### Adam
```
#export
def step_stat(state, p, **kwargs):
"Register the number of steps done in `state` for `p`"
if 'step' not in state: state['step'] = 0
state['step'] += 1
return state
p = tst_param(1,0.1)
state = {}
state = step_stat(state, p)
test_eq(state['step'], 1)
for _ in range(5): state = step_stat(state, p)
test_eq(state['step'], 6)
#export
def debias(mom, damp, step): return damp * (1 - mom**step) / (1-mom)
#export
def adam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, **kwargs):
"Step for Adam with `lr` on `p`"
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
p.data.addcdiv_(-lr / debias1, grad_avg, (sqr_avg/debias2).sqrt() + eps)
return p
adam_step._defaults = dict(eps=1e-5)
#export
def Adam(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-5, wd=0., decouple_wd=True):
"An `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
steppers = [weight_decay] if decouple_wd else [l2_reg]
steppers.append(adam_step)
stats = [partial(average_grad, dampening=True), average_sqr_grad, step_stat]
return Optimizer(params, steppers, stats=stats, lr=lr, mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd)
```
Adam was introduced by Diederik P. Kingma and Jimmy Ba in [Adam: A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980). For consistency across optimizers, we renamed `beta1` and `beta2` in the paper to `mom` and `sqr_mom`. Note that our defaults also differ from the paper (0.99 for `sqr_mom` or `beta2`, 1e-5 for `eps`). These values seemed to work better in our experiments across a wide range of situations.
Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True` else as L2 regularization (add the decay to the gradients).
> Note: Don't forget that `eps` is a hyper-parameter you can change. Some models won't train without a very high `eps` like 0.1 (intuitively, the higher `eps` is, the closer we are to plain SGD). The usual default of 1e-8 is often too extreme, in the sense that we don't manage to get results as good as with SGD.
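A standalone numeric illustration of that note: after bias correction, a first Adam step on a single weight has size `lr * g / (|g| + eps)` (as in the test below), so a tiny `eps` gives a step of magnitude ~`lr` regardless of how small the gradient is, while a large `eps` makes the step roughly proportional to the gradient, as in SGD.

```
import math

lr, g = 0.1, 0.01
steps = {eps: -lr * g / (math.sqrt(g * g) + eps) for eps in (1e-8, 0.1)}
print(steps[1e-8])  # ~ -0.1: step size is ~lr, the gradient scale is ignored
print(steps[0.1])   # ~ -0.009: step is roughly proportional to g, SGD-like
```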
```
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = Adam(params, lr=0.1)
opt.step()
step = -0.1 * 0.1 / (math.sqrt(0.1**2) + 1e-8)
test_close(params[0], tensor([1+step, 2+step, 3+step]))
opt.step()
test_close(params[0], tensor([1+2*step, 2+2*step, 3+2*step]), eps=1e-3)
```
### RAdam
RAdam (for rectified Adam) was introduced by Liu et al. in [On the Variance of the Adaptive Learning Rate and Beyond](https://arxiv.org/abs/1908.03265) as a slight modification of the Adam optimizer that is more stable at the beginning of training (and thus doesn't require a long warmup). The authors use an estimate of the variance of the moving average of the squared gradients (the term in the denominator of traditional Adam) and rescale this moving average by this term before performing the update.
```
#export
def radam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, **kwargs):
"Step for RAdam with `lr` on `p`"
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
r_inf = 2/(1-sqr_mom) - 1
r = r_inf - 2*step*sqr_mom**step/(1-sqr_mom**step)
if r > 4:
v = math.sqrt(((r-4) * (r-2) * r_inf)/((r_inf-4)*(r_inf-2)*r))
p.data.addcdiv_(-lr*v / debias1, grad_avg, (sqr_avg/debias2).sqrt() + eps)
else: p.data.add_(-lr / debias1, grad_avg)
return p
radam_step._defaults = dict(eps=1e-5)
#export
def RAdam(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-5, wd=0., decouple_wd=True):
"An `Optimizer` for RAdam with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
steppers = [weight_decay] if decouple_wd else [l2_reg]
steppers.append(radam_step)
stats = [partial(average_grad, dampening=True), average_sqr_grad, step_stat]
return Optimizer(params, steppers, stats=stats, lr=lr, mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd)
```
This is the effective correction applied to the Adam step over 500 iterations in RAdam. We can see how it goes from 0 to 1, mimicking the effect of a warm-up.
```
beta = 0.99
r_inf = 2/(1-beta) - 1
rs = np.array([r_inf - 2*s*beta**s/(1-beta**s) for s in range(5,500)])
v = np.sqrt(((rs-4) * (rs-2) * r_inf)/((r_inf-4)*(r_inf-2)*rs))
plt.plot(v)
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = RAdam(params, lr=0.1)
#The r factor is at most 4 during the first 4 steps, so updates use the average of gradients (all the same)
r_inf = 2/(1-0.99) - 1
for i in range(4):
r = r_inf - 2*(i+1)*0.99**(i+1)/(1-0.99**(i+1))
assert r <= 4
opt.step()
p = tensor([0.96, 1.92, 2.88])
test_close(params[0], p)
#The r factor is greater than 4 for the fifth step, so we update with RAdam
r = r_inf - 2*5*0.99**5/(1-0.99**5)
assert r > 4
opt.step()
v = math.sqrt(((r-4) * (r-2) * r_inf)/((r_inf-4)*(r_inf-2)*r))
step = -0.1*0.1*v/(math.sqrt(0.1**2) + 1e-8)
test_close(params[0], p+step)
```
### LARS/LARC
```
#export
def larc_layer_lr(state, p, lr, trust_coeff, wd, eps, clip=True, **kwargs):
"Computes the local lr before weight decay is applied"
p_norm,g_norm = torch.norm(p.data),torch.norm(p.grad.data)
local_lr = lr*trust_coeff * (p_norm) / (g_norm + p_norm * wd + eps)
state['local_lr'] = min(lr, local_lr) if clip else local_lr
return state
larc_layer_lr.defaults = dict(trust_coeff=0.02, wd=0., eps=1e-8)
#export
def larc_step(p, local_lr, grad_avg=None, **kwargs):
"Step for LARC with `local_lr` on `p`"
p.data.add_(-local_lr, p.grad.data if grad_avg is None else grad_avg)
return p
#export
def Larc(params, lr, mom=0.9, clip=True, trust_coeff=0.02, eps=1e-8, wd=0., decouple_wd=True):
"An `Optimizer` for LARC with `lr`, `mom`, `trust_coeff` and `params`"
steppers = [weight_decay] if decouple_wd else [l2_reg]
steppers.append(larc_step)
stats = [] if mom==0. else [average_grad]
stats.append(partial(larc_layer_lr, clip=clip))
return Optimizer(params, steppers, stats=stats, lr=lr, mom=mom, trust_coeff=trust_coeff, eps=eps, wd=wd)
```
The LARS optimizer was first introduced in [Large Batch Training of Convolutional Networks](https://arxiv.org/abs/1708.03888), then refined in its LARC variant (the original LARS corresponds to `clip=False`). A learning rate is computed for each individual layer using a `trust_coeff`, then clipped so it never exceeds `lr`.
Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True` else as L2 regularization (add the decay to the gradients).
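The local learning rate computation can be checked on toy numbers (a standalone sketch with made-up norms for one layer):

```
lr, trust_coeff, wd, eps = 0.1, 0.02, 0.0, 1e-8
p_norm, g_norm = 1.0, 0.1   # toy weight and gradient norms for one layer
local_lr = lr * trust_coeff * p_norm / (g_norm + p_norm * wd + eps)
print(local_lr)             # ~0.02, below lr, so clipping leaves it alone
print(min(lr, local_lr))    # the value actually used when clip=True
```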
```
params = [tst_param([1,2,3], [0.1,0.2,0.3]), tst_param([1,2,3], [0.01,0.02,0.03])]
opt = Larc(params, lr=0.1)
opt.step()
#First param local lr is 0.02 < lr so it's not clipped
test_close(opt.state[params[0]]['local_lr'], 0.02)
#Second param local lr is 0.2 > lr so it's clipped
test_eq(opt.state[params[1]]['local_lr'], 0.1)
test_close(params[0], tensor([0.998,1.996,2.994]))
test_close(params[1], tensor([0.999,1.998,2.997]))
params = [tst_param([1,2,3], [0.1,0.2,0.3]), tst_param([1,2,3], [0.01,0.02,0.03])]
opt = Larc(params, lr=0.1, clip=False)
opt.step()
#No clipping
test_close(opt.state[params[0]]['local_lr'], 0.02)
test_close(opt.state[params[1]]['local_lr'], 0.2)
test_close(params[0], tensor([0.998,1.996,2.994]))
test_close(params[1], tensor([0.998,1.996,2.994]))
```
### LAMB
```
#export
def lamb_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, **kwargs):
"Step for LAMB with `lr` on `p`"
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
r1 = p.data.pow(2).mean().sqrt()
step = (grad_avg/debias1) / ((sqr_avg/debias2).sqrt()+eps)
r2 = step.pow(2).mean().sqrt()
q = 1 if r1 == 0 or r2 == 0 else min(r1/r2,10)
p.data.add_(-lr * q, step)
return p
lamb_step._defaults = dict(eps=1e-6, wd=0.)
#export
def Lamb(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-5, wd=0., decouple_wd=True):
"An `Optimizer` for LAMB with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
steppers = [weight_decay] if decouple_wd else [l2_reg]
steppers.append(lamb_step)
stats = [partial(average_grad, dampening=True), average_sqr_grad, step_stat]
return Optimizer(params, steppers, stats=stats, lr=lr, mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd)
```
LAMB was introduced in [Large Batch Optimization for Deep Learning: Training BERT in 76 minutes](https://arxiv.org/abs/1904.00962). Intuitively, it's LARC applied to Adam. As in `Adam`, we renamed `beta1` and `beta2` in the paper to `mom` and `sqr_mom`. Note that our defaults also differ from the paper (0.99 for `sqr_mom` or `beta2`, 1e-5 for `eps`). These values seemed to work better in our experiments across a wide range of situations.
Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True` else as L2 regularization (add the decay to the gradients).
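The layer-wise trust ratio `q = r1/r2` used in `lamb_step` can also be checked on toy numbers (a standalone sketch, not the library code):

```
import math

def rms(xs): return math.sqrt(sum(x * x for x in xs) / len(xs))

w    = [1.0, 2.0, 3.0]   # layer weights
step = [0.5, 0.5, 0.5]   # debiased Adam step for that layer
r1, r2 = rms(w), rms(step)
q = 1 if r1 == 0 or r2 == 0 else min(r1 / r2, 10)
new_w = [wi - 0.1 * q * si for wi, si in zip(w, step)]
print(q)  # the step is scaled up here because the weight norm dominates
```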
```
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = Lamb(params, lr=0.1)
opt.step()
test_close(params[0], tensor([0.7840,1.7840,2.7840]), eps=1e-3)
```
## Lookahead -
Lookahead was introduced by Zhang et al. in [Lookahead Optimizer: k steps forward, 1 step back](https://arxiv.org/abs/1907.08610). It can be run on top of any optimizer and consists in having the final weights of the model be a moving average. In practice, we update the model using the internal optimizer but keep a copy of the old weights; every `k` steps, we replace the weights with a moving average of the *fast weights* (the ones updated by the inner optimizer) and the *slow weights* (the copy of the old weights). Those *slow weights* act as a stability mechanism.
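The mechanism can be sketched on a single scalar weight (standalone, with `k=6`, `alpha=0.5`, and a constant gradient purely for illustration):

```
alpha, k = 0.5, 6
slow = 1.0
fast = slow
for _ in range(k):             # k inner-optimizer (fast) steps
    fast -= 0.1 * 0.1          # e.g. SGD, lr=0.1, constant grad of 0.1
slow += alpha * (fast - slow)  # pull slow weights toward the fast weights
fast = slow                    # and restart the fast weights from there
print(slow)                    # halfway between start (1.0) and fast (0.94)
```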
```
#export
class Lookahead(Optimizer, GetAttr):
"Wrap `opt` in a lookahead optimizer"
_default='opt'
def __init__(self, opt, k=6, alpha=0.5):
store_attr(self, 'opt,k,alpha')
self._init_state()
def step(self):
if self.slow_weights is None: self._copy_weights()
self.opt.step()
self.count += 1
if self.count%self.k != 0: return
for slow_pg,fast_pg in zip(self.slow_weights,self.param_groups):
for slow_p,fast_p in zip(slow_pg,fast_pg):
slow_p.data.add_(self.alpha, fast_p.data-slow_p.data)
fast_p.data.copy_(slow_p.data)
def clear_state(self):
self.opt.clear_state()
self._init_state()
def state_dict(self):
state = self.opt.state_dict()
state.update({'count': self.count, 'slow_weights': self.slow_weights})
return state
def load_state_dict(self, sd):
self.count = sd.pop('count')
self.slow_weights = sd.pop('slow_weights')
self.opt.load_state_dict(sd)
def _init_state(self): self.count,self.slow_weights = 0,None
def _copy_weights(self): self.slow_weights = L(L(p.clone().detach() for p in pg) for pg in self.param_groups)
@property
def param_groups(self): return self.opt.param_groups
@param_groups.setter
def param_groups(self, v): self.opt.param_groups = v
params = tst_param([1,2,3], [0.1,0.2,0.3])
p,g = params[0].data.clone(),tensor([0.1,0.2,0.3])
opt = Lookahead(SGD(params, lr=0.1))
for k in range(5): opt.step()
#first 5 steps are normal SGD steps
test_close(params[0], p - 0.5*g)
#Since k=6, the sixth step is a moving average of the 6 SGD steps with the initial weights
opt.step()
test_close(params[0], p * 0.5 + (p-0.6*g) * 0.5)
```
## OptimWrapper -
```
#export
def detuplify_pg(d):
res = {}
for k,v in d.items():
if k == 'params': continue
if is_listy(v): res.update(**{f'{k}__{i}': v_ for i,v_ in enumerate(v)})
else: res[k] = v
return res
tst = {'lr': 1e-2, 'mom': 0.9, 'params':[0,1,2]}
test_eq(detuplify_pg(tst), {'lr': 1e-2, 'mom': 0.9})
tst = {'lr': 1e-2, 'betas': (0.9,0.999), 'params':[0,1,2]}
test_eq(detuplify_pg(tst), {'lr': 1e-2, 'betas__0': 0.9, 'betas__1': 0.999})
#export
def set_item_pg(pg, k, v):
if '__' not in k: pg[k] = v
else:
name,idx = k.split('__')
pg[name] = tuple(v if i==int(idx) else pg[name][i] for i in range_of(pg[name]))
return pg
tst = {'lr': 1e-2, 'mom': 0.9, 'params':[0,1,2]}
test_eq(set_item_pg(tst, 'lr', 1e-3), {'lr': 1e-3, 'mom': 0.9, 'params':[0,1,2]})
tst = {'lr': 1e-2, 'betas': (0.9,0.999), 'params':[0,1,2]}
test_eq(set_item_pg(tst, 'betas__0', 0.95), {'lr': 1e-2, 'betas': (0.95,0.999), 'params':[0,1,2]})
#export
pytorch_hp_map = {'momentum': 'mom', 'weight_decay': 'wd', 'alpha': 'sqr_mom', 'betas__0': 'mom', 'betas__1': 'sqr_mom'}
#export
class OptimWrapper(_BaseOptimizer, GetAttr):
_xtra=['zero_grad', 'step', 'state_dict', 'load_state_dict']
_default='opt'
def __init__(self, opt, hp_map=None):
self.opt = opt
if hp_map is None: hp_map = pytorch_hp_map
self.fwd_map = {k: hp_map[k] if k in hp_map else k for k in detuplify_pg(opt.param_groups[0]).keys()}
self.bwd_map = {v:k for k,v in self.fwd_map.items()}
self.state = defaultdict(dict, {})
self.frozen_idx = 0
@property
def param_groups(self): return [pg['params'] for pg in self.opt.param_groups]
@param_groups.setter
def param_groups(self, v):
for pg,v_ in zip(self.opt.param_groups,v): pg['params'] = v_
@property
def hypers(self):
return [{self.fwd_map[k]:v for k,v in detuplify_pg(pg).items() if k != 'params'} for pg in self.opt.param_groups]
def _set_hyper(self, k, v):
for pg,v_ in zip(self.opt.param_groups,v): pg = set_item_pg(pg, self.bwd_map[k], v_)
def clear_state(self): self.opt.state = defaultdict(dict, {})
sgd = SGD([tensor([1,2,3])], lr=1e-3, mom=0.9, wd=1e-2)
tst_sgd = OptimWrapper(torch.optim.SGD([tensor([1,2,3])], lr=1e-3, momentum=0.9, weight_decay=1e-2))
#Access to param_groups
test_eq(tst_sgd.param_groups, sgd.param_groups)
#Set param_groups
tst_sgd.param_groups = [[tensor([4,5,6])]]
test_eq(tst_sgd.opt.param_groups[0]['params'], [tensor(4,5,6)])
#Access to hypers
test_eq(tst_sgd.hypers, [{**sgd.hypers[0], 'dampening': 0., 'nesterov': False}])
#Set hypers
tst_sgd.set_hyper('mom', 0.95)
test_eq(tst_sgd.opt.param_groups[0]['momentum'], 0.95)
tst_sgd = OptimWrapper(torch.optim.SGD([{'params': [tensor([1,2,3])], 'lr': 1e-3},
{'params': [tensor([4,5,6])], 'lr': 1e-2}], momentum=0.9, weight_decay=1e-2))
sgd = SGD([[tensor([1,2,3])], [tensor([4,5,6])]], lr=[1e-3, 1e-2], mom=0.9, wd=1e-2)
#Access to param_groups
test_eq(tst_sgd.param_groups, sgd.param_groups)
#Set param_groups
tst_sgd.param_groups = [[tensor([4,5,6])], [tensor([1,2,3])]]
test_eq(tst_sgd.opt.param_groups[0]['params'], [tensor(4,5,6)])
test_eq(tst_sgd.opt.param_groups[1]['params'], [tensor(1,2,3)])
#Access to hypers
test_eq(tst_sgd.hypers, [{**sgd.hypers[i], 'dampening': 0., 'nesterov': False} for i in range(2)])
#Set hypers
tst_sgd.set_hyper('mom', 0.95)
test_eq([pg['momentum'] for pg in tst_sgd.opt.param_groups], [0.95,0.95])
tst_sgd.set_hyper('lr', [1e-4,1e-3])
test_eq([pg['lr'] for pg in tst_sgd.opt.param_groups], [1e-4,1e-3])
#hide
#check it works with tuply hp names like in Adam
tst_adam = OptimWrapper(torch.optim.Adam([tensor([1,2,3])], lr=1e-2, betas=(0.9, 0.99)))
test_eq(tst_adam.hypers, [{'lr': 0.01, 'mom': 0.9, 'sqr_mom': 0.99, 'eps': 1e-08, 'wd': 0, 'amsgrad': False}])
tst_adam.set_hyper('mom', 0.95)
test_eq(tst_adam.opt.param_groups[0]['betas'], (0.95, 0.99))
tst_adam.set_hyper('sqr_mom', 0.9)
test_eq(tst_adam.opt.param_groups[0]['betas'], (0.95, 0.9))
def _mock_train(m, x, y, opt):
m.train()
for i in range(0, 100, 25):
z = m(x[i:i+25])
loss = F.mse_loss(z, y[i:i+25])
loss.backward()
opt.step()
opt.zero_grad()
m = nn.Linear(4,5)
x = torch.randn(100, 3, 4)
y = torch.randn(100, 3, 5)
try:
torch.save(m.state_dict(), 'tmp.pth')
wgt,bias = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt1 = OptimWrapper(torch.optim.AdamW(m.parameters(), betas=(0.9, 0.99), eps=1e-5, weight_decay=1e-2))
_mock_train(m, x.clone(), y.clone(), opt1)
wgt1,bias1 = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt2 = Adam(m.parameters(), 1e-3, wd=1e-2)
_mock_train(m, x.clone(), y.clone(), opt2)
wgt2,bias2 = m.weight.data.clone(),m.bias.data.clone()
test_close(wgt1,wgt2,eps=1e-3)
test_close(bias1,bias2,eps=1e-3)
finally: os.remove('tmp.pth')
m = nn.Linear(4,5)
x = torch.randn(100, 3, 4)
y = torch.randn(100, 3, 5)
try:
torch.save(m.state_dict(), 'tmp.pth')
wgt,bias = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt1 = OptimWrapper(torch.optim.Adam(m.parameters(), betas=(0.9, 0.99), eps=1e-5, weight_decay=1e-2))
_mock_train(m, x.clone(), y.clone(), opt1)
wgt1,bias1 = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt2 = Adam(m.parameters(), 1e-3, wd=1e-2, decouple_wd=False)
_mock_train(m, x.clone(), y.clone(), opt2)
wgt2,bias2 = m.weight.data.clone(),m.bias.data.clone()
test_close(wgt1,wgt2,eps=1e-3)
test_close(bias1,bias2,eps=1e-3)
finally: os.remove('tmp.pth')
```
## Export -
```
#hide
from local.notebook.export import *
notebook2script(all_fs=True)
```
# Grid algorithm for the beta-binomial hierarchical model
[Bayesian Inference with PyMC](https://allendowney.github.io/BayesianInferencePyMC)
Copyright 2021 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
# If we're running on Colab, install PyMC and ArviZ
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install pymc3
!pip install arviz
# PyMC generates a FutureWarning we don't need to deal with yet
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import seaborn as sns
def plot_hist(sample, **options):
"""Plot a histogram of goals.
sample: sequence of values
"""
sns.histplot(sample, stat='probability', discrete=True,
alpha=0.5, **options)
def plot_kde(sample, **options):
"""Plot a distribution using KDE.
sample: sequence of values
"""
sns.kdeplot(sample, cut=0, **options)
import matplotlib.pyplot as plt
def legend(**options):
"""Make a legend only if there are labels."""
handles, labels = plt.gca().get_legend_handles_labels()
if len(labels):
plt.legend(**options)
def decorate(**options):
plt.gca().set(**options)
legend()
plt.tight_layout()
def decorate_heads(ylabel='Probability'):
"""Decorate the axes."""
plt.xlabel('Number of heads (k)')
plt.ylabel(ylabel)
plt.title('Distribution of heads')
legend()
def decorate_proportion(ylabel='Likelihood'):
"""Decorate the axes."""
plt.xlabel('Proportion of heads (x)')
plt.ylabel(ylabel)
plt.title('Distribution of proportion')
legend()
from empiricaldist import Cdf
def compare_cdf(pmf, sample):
pmf.make_cdf().plot(label='grid')
Cdf.from_seq(sample).plot(label='mcmc')
print(pmf.mean(), sample.mean())
decorate()
```
## The Grid Algorithm
```
import numpy as np
from scipy.stats import gamma
alpha = 4
beta = 0.5
qs = np.linspace(0.1, 25, 100)
ps = gamma(alpha, scale=1/beta).pdf(qs)
from empiricaldist import Pmf
prior_alpha = Pmf(ps, qs)
prior_alpha.normalize()
prior_alpha.index.name = 'alpha'
prior_alpha.shape
prior_alpha.plot()
prior_alpha.mean()
qs = np.linspace(0.1, 25, 90)
ps = gamma(alpha, scale=1/beta).pdf(qs)
prior_beta = Pmf(ps, qs)
prior_beta.normalize()
prior_beta.index.name = 'beta'
prior_beta.shape
prior_beta.plot()
prior_beta.mean()
def make_hyper(prior_alpha, prior_beta):
PA, PB = np.meshgrid(prior_alpha.ps, prior_beta.ps, indexing='ij')
hyper = PA * PB
return hyper
hyper = make_hyper(prior_alpha, prior_beta)
hyper.shape
import pandas as pd
from utils import plot_contour
plot_contour(pd.DataFrame(hyper))
```
## Make Prior
```
from scipy.stats import beta as betadist
xs = np.linspace(0.01, 0.99, 80)
prior_x = Pmf(betadist.pdf(xs, 2, 2), xs)
prior_x.plot()
from scipy.stats import beta as betadist
def make_prior(hyper, prior_alpha, prior_beta, xs):
A, B, X = np.meshgrid(prior_alpha.qs, prior_beta.qs, xs, indexing='ij')
ps = betadist.pdf(X, A, B)
totals = ps.sum(axis=2)
nc = hyper / totals
shape = nc.shape + (1,)
prior = ps * nc.reshape(shape)
return prior
xs = np.linspace(0.01, 0.99, 80)
prior = make_prior(hyper, prior_alpha, prior_beta, xs)
prior.sum()
def marginal(joint, axis):
axes = [i for i in range(3) if i != axis]
return joint.sum(axis=tuple(axes))
prior_a = Pmf(marginal(prior, 0), prior_alpha.qs)
prior_alpha.plot()
prior_a.plot()
prior_a.mean()
prior_b = Pmf(marginal(prior, 1), prior_beta.qs)
prior_beta.plot()
prior_b.plot()
prior_x = Pmf(marginal(prior, 2), xs)
prior_x.plot()
```
## The Update
```
from scipy.stats import binom
n = 250
ks = 140
X, K = np.meshgrid(xs, ks)
like_x = binom.pmf(K, n, X).prod(axis=0)
like_x.shape
plt.plot(xs, like_x)
def update(prior, data):
n, ks = data
X, K = np.meshgrid(xs, ks)
like_x = binom.pmf(K, n, X).prod(axis=0)
posterior = prior * like_x
posterior /= posterior.sum()
return posterior
data = 250, 140
posterior = update(prior, data)
marginal_x = Pmf(marginal(posterior, 2), xs)
marginal_x.plot()
marginal_x.mean()
marginal_alpha = Pmf(marginal(posterior, 0), prior_alpha.qs)
marginal_alpha.plot()
marginal_alpha.mean()
marginal_beta = Pmf(marginal(posterior, 1), prior_beta.qs)
marginal_beta.plot()
marginal_beta.mean()
```
## One coin with PyMC
```
import pymc3 as pm
n = 250
with pm.Model() as model1:
alpha = pm.Gamma('alpha', alpha=4, beta=0.5)
beta = pm.Gamma('beta', alpha=4, beta=0.5)
x1 = pm.Beta('x1', alpha, beta)
k1 = pm.Binomial('k1', n=n, p=x1, observed=140)
pred = pm.sample_prior_predictive(1000)
```
Here's the graphical representation of the model.
```
pm.model_to_graphviz(model1)
from utils import kde_from_sample
kde_from_sample(pred['alpha'], prior_alpha.qs).plot()
prior_alpha.plot()
kde_from_sample(pred['beta'], prior_beta.qs).plot()
prior_beta.plot()
kde_from_sample(pred['x1'], prior_x.qs).plot()
prior_x.plot()
```
Now let's run the sampler.
```
with model1:
trace1 = pm.sample(500)
```
Here are the posterior distributions, compared with the grid marginals.
```
compare_cdf(marginal_alpha, trace1['alpha'])
compare_cdf(marginal_beta, trace1['beta'])
compare_cdf(marginal_x, trace1['x1'])
```
## Two coins
```
def get_hyper(joint):
return joint.sum(axis=2)
posterior_hyper = get_hyper(posterior)
posterior_hyper.shape
prior2 = make_prior(posterior_hyper, prior_alpha, prior_beta, xs)
data = 250, 110
posterior2 = update(prior2, data)
marginal_alpha2 = Pmf(marginal(posterior2, 0), prior_alpha.qs)
marginal_alpha2.plot()
marginal_alpha2.mean()
marginal_beta2 = Pmf(marginal(posterior2, 1), prior_beta.qs)
marginal_beta2.plot()
marginal_beta2.mean()
marginal_x2 = Pmf(marginal(posterior2, 2), xs)
marginal_x2.plot()
marginal_x2.mean()
```
## Two coins with PyMC
```
with pm.Model() as model2:
alpha = pm.Gamma('alpha', alpha=4, beta=0.5)
beta = pm.Gamma('beta', alpha=4, beta=0.5)
x1 = pm.Beta('x1', alpha, beta)
x2 = pm.Beta('x2', alpha, beta)
k1 = pm.Binomial('k1', n=n, p=x1, observed=140)
k2 = pm.Binomial('k2', n=n, p=x2, observed=110)
```
Here's the graph for this model.
```
pm.model_to_graphviz(model2)
```
Let's run the sampler.
```
with model2:
trace2 = pm.sample(500)
```
And here are the results.
```
kde_from_sample(trace2['alpha'], marginal_alpha.qs).plot()
marginal_alpha2.plot()
trace2['alpha'].mean(), marginal_alpha2.mean()
kde_from_sample(trace2['beta'], marginal_beta.qs).plot()
marginal_beta2.plot()
trace2['beta'].mean(), marginal_beta2.mean()
kde_from_sample(trace2['x2'], marginal_x.qs).plot()
marginal_x2.plot()
```
## Heart Attack Data
This example is based on [Chapter 10 of *Probability and Bayesian Modeling*](https://bayesball.github.io/BOOK/bayesian-hierarchical-modeling.html#example-deaths-after-heart-attack); it uses data on death rates due to heart attack for patients treated at various hospitals in New York City.
We can use Pandas to read the data into a `DataFrame`.
```
import os
filename = 'DeathHeartAttackManhattan.csv'
if not os.path.exists(filename):
!wget https://github.com/AllenDowney/BayesianInferencePyMC/raw/main/DeathHeartAttackManhattan.csv
import pandas as pd
df = pd.read_csv(filename)
df
```
The columns we need are `Cases`, which is the number of patients treated at each hospital, and `Deaths`, which is the number of those patients who died.
```
# shuffled = df.sample(frac=1)
data_ns = df['Cases'].values
data_ks = df['Deaths'].values
```
Here's a hierarchical model that estimates the death rate for each hospital, and simultaneously estimates the distribution of rates across hospitals.
## Hospital Data with grid
```
alpha = 4
beta = 0.5
qs = np.linspace(0.1, 25, 100)
ps = gamma(alpha, scale=1/beta).pdf(qs)
prior_alpha = Pmf(ps, qs)
prior_alpha.normalize()
prior_alpha.index.name = 'alpha'
qs = np.linspace(0.1, 50, 90)
ps = gamma(alpha, scale=1/beta).pdf(qs)
prior_beta = Pmf(ps, qs)
prior_beta.normalize()
prior_beta.index.name = 'beta'
prior_beta.shape
prior_alpha.plot()
prior_beta.plot()
prior_alpha.mean()
hyper = make_hyper(prior_alpha, prior_beta)
hyper.shape
xs = np.linspace(0.01, 0.99, 80)
prior = make_prior(hyper, prior_alpha, prior_beta, xs)
prior.shape
for data in zip(data_ns, data_ks):
print(data)
posterior = update(prior, data)
hyper = get_hyper(posterior)
prior = make_prior(hyper, prior_alpha, prior_beta, xs)
marginal_alpha = Pmf(marginal(posterior, 0), prior_alpha.qs)
marginal_alpha.plot()
marginal_alpha.mean()
marginal_beta = Pmf(marginal(posterior, 1), prior_beta.qs)
marginal_beta.plot()
marginal_beta.mean()
marginal_x = Pmf(marginal(posterior, 2), prior_x.qs)
marginal_x.plot()
marginal_x.mean()
```
## Hospital Data with PyMC
```
with pm.Model() as model4:
alpha = pm.Gamma('alpha', alpha=4, beta=0.5)
beta = pm.Gamma('beta', alpha=4, beta=0.5)
xs = pm.Beta('xs', alpha, beta, shape=len(data_ns))
ks = pm.Binomial('ks', n=data_ns, p=xs, observed=data_ks)
trace4 = pm.sample(500)
```
Here's the graph representation of the model, showing that the observable is an array of 13 values.
```
pm.model_to_graphviz(model4)
```
Here's the trace.
```
kde_from_sample(trace4['alpha'], marginal_alpha.qs).plot()
marginal_alpha.plot()
trace4['alpha'].mean(), marginal_alpha.mean()
kde_from_sample(trace4['beta'], marginal_beta.qs).plot()
marginal_beta.plot()
trace4['beta'].mean(), marginal_beta.mean()
trace_xs = trace4['xs'].transpose()
trace_xs.shape
kde_from_sample(trace_xs[-1], marginal_x.qs).plot()
marginal_x.plot()
trace_xs[-1].mean(), marginal_x.mean()
xs = np.linspace(0.01, 0.99, 80)
hyper = get_hyper(posterior)
post_all = make_prior(hyper, prior_alpha, prior_beta, xs)
def forget(posterior, data):
n, ks = data
X, K = np.meshgrid(xs, ks)
like_x = binom.pmf(K, n, X).prod(axis=0)
prior = posterior / like_x
prior /= prior.sum()
return prior
def get_marginal_x(post_all, data):
prior = forget(post_all, data)
hyper = get_hyper(prior)
prior = make_prior(hyper, prior_alpha, prior_beta, xs)
posterior = update(prior, data)
marginal_x = Pmf(marginal(posterior, 2), prior_x.qs)
return marginal_x
data = 270, 16
marginal_x = get_marginal_x(post_all, data)
kde_from_sample(trace_xs[0], marginal_x.qs).plot()
marginal_x.plot()
trace_xs[0].mean(), marginal_x.mean()
```
## One at a time
```
prior.shape, prior.sum()
likelihood = np.empty((len(df), len(xs)))
for i, row in df.iterrows():
n = row['Cases']
k = row['Deaths']
likelihood[i] = binom.pmf(k, n, xs)
prod = likelihood.prod(axis=0)
prod.shape
i = 3
all_but_one = prod / likelihood[i]
prior
hyper_i = get_hyper(prior * all_but_one)
hyper_i.sum()
prior_i = make_prior(hyper_i, prior_alpha, prior_beta, xs)
data = df.loc[i, 'Cases'], df.loc[i, 'Deaths']
data
posterior_i = update(prior_i, data)
marginal_alpha = Pmf(marginal(posterior_i, 0), prior_alpha.qs)
marginal_beta = Pmf(marginal(posterior_i, 1), prior_beta.qs)
marginal_x = Pmf(marginal(posterior_i, 2), prior_x.qs)
compare_cdf(marginal_alpha, trace4['alpha'])
compare_cdf(marginal_beta, trace4['beta'])
compare_cdf(marginal_x, trace_xs[i])
```
| github_jupyter |
```
from datascience import *
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn')
from scipy import stats
from scipy.stats import norm
import matplotlib
matplotlib.__version__
import seaborn as sns
sns.set(color_codes = True)
#Data or Fe-based, Cuprates, Hydrides
#There were no high T hydrides in the original data set
features8 = pd.read_csv("https://raw.githubusercontent.com/9161AD/superconduct-/master/features_H_Cu_Fe2.csv")
features8
len(features8)
# Remove the one outlier that contains Hg but no Cu, to isolate the hydrides
# Already determined that all Fe-based SCs in this set also contain Cu
features_Hydrides1 = features8[~features8.material_name.str.contains("Cu")]
features_Hydrides2 = features_Hydrides1[~features_Hydrides1.material_name.str.contains("Hg")]
features_Hydrides3 = features_Hydrides2[~features_Hydrides2.material_name.str.contains("Hf")]
features_Hydrides4 = features_Hydrides3[~features_Hydrides3.material_name.str.contains("Hs")]
features_Hydrides5 = features_Hydrides4[~features_Hydrides4.material_name.str.contains("Ho")]
features_Hydrides6 = features_Hydrides5[~features_Hydrides5.material_name.str.contains("Fe")]
features_Hydrides6.head()
#Hydrides Groups
Hydrides = features_Hydrides6.assign(Group='Hydride')[['Group'] + features_Hydrides6.columns.tolist()]
Hydrides = Hydrides.drop(Hydrides.columns[1], axis=1)
Hydrides.head()
len(Hydrides)
(len(Hydrides)/len(features8)) * 100
#9% Hydrides
#Cuprate Groups --> Isolating Fe then picking out Cu
features_Cuprates1 = features8[~features8.material_name.str.contains("Fe")]
features_Cuprates2 = features_Cuprates1[features_Cuprates1.material_name.str.contains("Cu")]
#Cuprates Groups
Cuprates = features_Cuprates2.assign(Group='Cuprate')[['Group'] + features_Cuprates2.columns.tolist()]
Cuprates = Cuprates.drop(Cuprates.columns[1], axis=1)
Cuprates.head()
len(Cuprates)
(len(Cuprates)/len(features8)) * 100
#60 % Cuprates
features_Fe = features8[features8.material_name.str.contains("Fe")]
#Iron Groups
Iron_Based = features_Fe.assign(Group='Iron-Based')[['Group'] + features_Fe.columns.tolist()]
Iron_Based = Iron_Based.drop(Iron_Based.columns[1], axis=1)
Iron_Based.head()
len(Iron_Based)
(len(Iron_Based)/len(features8)) * 100
# 7% Iron Based
#Isolated 3 desired Classes
Classes = pd.concat([Hydrides, Cuprates, Iron_Based])
len(Classes)
(len(Classes)) / 21263 * 100
#Now down to 5.66 % of dataset
Box1 = sns.violinplot(x='Group', y='critical_temp', data=Classes)
plt.title("Classes Critical Temperature Distributions", loc = "left")
plt.xlabel("Class")
plt.ylabel("Critical Temperature (K)")
#Superposition of Jitter with Boxplot
Box2 =sns.boxplot(x='Group', y='critical_temp', data = Classes)
Box2 = sns.stripplot(x='Group', y = 'critical_temp', data= Classes, color = "orange", jitter = 0.2, size = 2.5)
plt.title("Classes Critical Temperature Distributions", loc = "left")
plt.xlabel("Class")
plt.ylabel("Critical Temperature (K)")
g = sns.pairplot(Classes, vars=["critical_temp", "number_of_elements"], hue = "Group")
import seaborn as sns; sns.set(style="ticks", color_codes=True)
g = sns.pairplot(Classes, hue="Group")
g
#Normalized for all classes
#features8.hist('critical_temp', bins = 16, range = (10,160), color = 'r', density=1)
#plots.title('Critical Temperature for Iron-Based,Cuprates,Hydrides-- High T Superconductors')
#plots.xlabel("Temperature (K)")
#plots.ylabel("Count")
import statsmodels.formula.api as smf
#Begins groundwork for setting a linear regression
model = 'critical_temp ~ %s'%(" + ".join(Classes.columns.values[2:]))
#Multiple Regression Analysis on 3 combined classes
linear_regression = smf.ols(model, data = Classes).fit()
linear_regression.summary()
import statsmodels.formula.api as smf
#Begins groundwork for setting a linear regression
model = 'critical_temp ~ %s'%(" + ".join(Hydrides.columns.values[2:]))
#Train Test on Combined Classes
#X contains predictors
X1 = Classes.drop(['Group','material_name','critical_temp'], axis=1)
X1.head()
#Make Y a true column vector containing the critical temperature for each superconductor
Y1 = Classes[['critical_temp']]
#Removed material_name because it is not a statistical predictor, just a label
Z1 = Classes[['Group', 'material_name']]
from sklearn.model_selection import train_test_split
# Split X1 and Y1 into training and test sets
#test size = 0.66 to match previous literature
X1_train, X1_test, Y1_train, Y1_test = train_test_split(X1, Y1, test_size=0.66, random_state=1)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
lineReg = LinearRegression()
lineReg.fit(X1_train, Y1_train)
lineReg.score(X1_test, Y1_test)
#Recent literature reported 74% for the full data set; I matched this as well
#prior to splitting up by class
#See how reducing to single classes affects correlation
#Train Test on HYDRIDES
#X2 contains predictors
X2 = Hydrides.drop(['Group','material_name','critical_temp'], axis=1)
len(X2)
#Make Y2 a true column vector containing the critical temperature for each superconductor
Y2 = Hydrides[['critical_temp']]
#Removed material_name because it is not a statistical predictor, just a label
Z2 = Hydrides[['Group', 'material_name']]
from sklearn.model_selection import train_test_split
# Split X2 and Y2 into training and test sets
#test size = 0.66 to match previous literature
X2_train, X2_test, Y2_train, Y2_test = train_test_split(X2, Y2, test_size=0.66, random_state=1)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
lineReg = LinearRegression()
lineReg.fit(X2_train, Y2_train)
lineReg.score(X2_test, Y2_test)
#Test Cuprates Variable-3
#X2 contains predictors
X3 = Cuprates.drop(['Group','material_name','critical_temp'], axis=1)
Y3 = Cuprates[['critical_temp']]
Z3 = Cuprates[['Group', 'material_name']]
from sklearn.model_selection import train_test_split
X3_train, X3_test, Y3_train, Y3_test = train_test_split(X3, Y3, test_size=0.66, random_state=1)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
lineReg = LinearRegression()
lineReg.fit(X3_train, Y3_train)
abs(lineReg.score(X3_test, Y3_test))
#Test Fe-Based Variable - 4
#X4 contains predictors
X4 = Iron_Based.drop(['Group','material_name','critical_temp'], axis=1)
Y4 = Iron_Based[['critical_temp']]
Z4 = Iron_Based[['Group', 'material_name']]
from sklearn.model_selection import train_test_split
X4_train, X4_test, Y4_train, Y4_test = train_test_split(X4, Y4, test_size=0.66, random_state=1)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
lineReg = LinearRegression()
lineReg.fit(X4_train, Y4_train)
abs(lineReg.score(X4_test, Y4_test))
Groups = ['Hydrides', 'Cuprates', 'Iron-Based']
Number_Entries =[len(Hydrides),len(Cuprates),len(Iron_Based)]
MR_Scores = [-0.78, 0.31, 0.27]
Summary = pd.DataFrame({'Class': Groups,
                        'Number of Materials': Number_Entries,
                        'Coefficient of Multiple Determination': MR_Scores,
                       })
Summary
sns.lmplot(x='Number of Materials', y='Coefficient of Multiple Determination', data=Summary)
plt.ylim(-1,1)
plt.xlim(0,1000)
```
# Generate reference data
Make reference data for `carsus/io/output/base.py` module.
```
import pandas as pd
from pandas.testing import assert_frame_equal, assert_series_equal
from carsus.io.nist import NISTWeightsComp, NISTIonizationEnergies
from carsus.io.kurucz import GFALLReader
from carsus.io.zeta import KnoxLongZeta
from carsus.io.chianti_ import ChiantiReader
from carsus.io.cmfgen import CMFGENEnergyLevelsParser, CMFGENOscillatorStrengthsParser, CMFGENReader
from carsus.io.output import TARDISAtomData
GFALL_IONS = "H-Si"
CHIANTI_IONS = "H-He"
CMFGEN_IONS = "Si_I-II"
fname = f"test_data_ku_{GFALL_IONS}_ch_{CHIANTI_IONS}_cm_{CMFGEN_IONS}.h5"
refdata = pd.HDFStore(fname)
atomic_weights = NISTWeightsComp()
ionization_energies = NISTIonizationEnergies(GFALL_IONS)
gfall_reader = GFALLReader(ions=GFALL_IONS)
chianti_reader = ChiantiReader(ions=CHIANTI_IONS, collisions=True, priority=20)
zeta_data = KnoxLongZeta()
si_0_lvl = CMFGENEnergyLevelsParser('./cmfgen/energy_levels/SiI_OSC')
si_0_osc = CMFGENOscillatorStrengthsParser('./cmfgen/energy_levels/SiI_OSC')
si_1_lvl = CMFGENEnergyLevelsParser('./cmfgen/energy_levels/si2_osc_kurucz')
si_1_osc = CMFGENOscillatorStrengthsParser('./cmfgen/energy_levels/si2_osc_kurucz')
cmfgen_data = {'Si 0': {'levels': si_0_lvl, 'lines': si_0_osc},
'Si 1': {'levels': si_1_lvl, 'lines': si_1_osc},}
cmfgen_reader = CMFGENReader(cmfgen_data, priority=20)
atom_data = TARDISAtomData(atomic_weights,
ionization_energies,
gfall_reader,
zeta_data,
chianti_reader,
cmfgen_reader)
atomic_weights = atom_data.atomic_weights.base.loc[1:14] # H-Si to do: make more consistent
ionization_energies = atom_data.ionization_energies.base # to do: make more consistent
levels_all = atom_data._get_all_levels_data().drop(columns=["ds_id"])
levels = atom_data.levels.drop(columns=["ds_id"])
levels_prepared = atom_data.levels_prepared
lines_all = atom_data._get_all_lines_data().drop(columns=["ds_id"])
lines = atom_data.lines.drop(columns=["ds_id"])
lines_prepared = atom_data.lines_prepared
macro_atom = atom_data.macro_atom
macro_atom_prepared = atom_data.macro_atom_prepared
macro_atom_references = atom_data.macro_atom_references
macro_atom_references_prepared = atom_data.macro_atom_references_prepared
collisions = atom_data.collisions.drop(columns=["btemp", "bscups"])
collisions_prepared = atom_data.collisions_prepared
zeta_data = atom_data.zeta_data.base # to do: make more consistent
refdata.put('atomic_weights', atomic_weights)
refdata.put('ionization_energies', ionization_energies)
refdata.put('levels_all', levels_all)
refdata.put('levels', levels)
refdata.put('levels_prepared', levels_prepared)
refdata.put('lines_all', lines_all)
refdata.put('lines', lines)
refdata.put('lines_prepared', lines_prepared)
refdata.put('macro_atom', macro_atom)
refdata.put('macro_atom_prepared', macro_atom_prepared)
refdata.put('macro_atom_references', macro_atom_references)
refdata.put('macro_atom_references_prepared', macro_atom_references_prepared)
refdata.put('collisions', collisions)
refdata.put('collisions_prepared', collisions_prepared)
refdata.put('zeta_data', zeta_data)
```
```
import pandas as pd
import numpy as np
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
import sys
import re
import webbrowser
from time import sleep
# https://stackoverflow.com/questions/4028904/how-to-get-the-home-directory-in-python
from os.path import expanduser
from os import listdir
executable_path = expanduser("~") + "/chromedriver"
if 'chromedriver.exe' not in listdir(expanduser("~")):
    print('chromedriver.exe not found in the home directory! Refer to Selenium docs.')
sys.exit()
driver = webdriver.Chrome(executable_path=executable_path)
#opportunities = pd.read_csv('opportunities.csv')
#use this if restarting scraper
opportunities = pd.read_csv('opportunities_scraped.csv')
opportunities.head()
#opportunities['scraped_address']=np.nan
opportunities[opportunities.scraped_address!='not found'].head()
opportunities.head(33)
driver = webdriver.Chrome(executable_path="/Users/rajchakrabarty/Downloads/chromedriver")
#export to csv
opportunities.to_csv('opportunities_scraped.csv',index_label=False)
def scrape():
driver = webdriver.Chrome(executable_path="/Users/rajchakrabarty/Downloads/chromedriver")
counter = 0
for i, row in opportunities[(opportunities['scraped_address'].isnull()==True)].iterrows():
search_term = ''
url = ''
address = ''
search_term = row['Opportunity Name'].replace(' ','+')+'+address'
        url = 'https://www.google.com/search?q=' + search_term
driver.get(url)
search_bar = driver.find_element_by_id('lst-ib')
search_bar.clear()
search_bar.send_keys(search_term)
response = driver.page_source
html = BeautifulSoup(response, 'lxml')
span = html.find('span',{'class':'LrzXr'})
#print (i, search_term)
try:
address = span.text
except:
address = 'not found'
#print(address)
opportunities.loc[opportunities['Opportunity ID'] == row['Opportunity ID']
,'scraped_address'] = address
counter += 1
if counter%20 == 0:
print(i, search_term)
print(address)
#sleep(10)
print('start')
keep_going = True
while keep_going:
try:
scrape()
keep_going = False
except:
opportunities.to_csv('opportunities_scraped.csv',index_label=False)
opportunities.to_csv('opportunities_scraped.csv',index_label=False)
```
# Starting a local neuroglancer session with FAFB dataset
### This example shows how to start a local neuroglancer session and further add neurons, synapses, neuropil meshes from a public catmaid instance
### Import necessary library modules now
```
import navis
import fafbseg
import pymaid
import pandas as pd
import numpy as np
import os
from copy import deepcopy
import io
from PIL import Image
from pyroglancer.layers import create_nglayer, setlayerproperty
from pyroglancer.localserver import startdataserver, closedataserver
from pyroglancer.ngviewer import openviewer, closeviewer,setviewerstate, get_ngscreenshot
from pyroglancer.ngspaces import create_ngspace
from pyroglancer.createconfig import createconfig
```
### Set configurations to fetch data from CATMAID
```
publicurl = 'https://fafb.catmaid.virtualflybrain.org/'
working_rm = pymaid.CatmaidInstance(publicurl, api_token=None, project_id = 1)
```
### Get sample skids and neuropil meshes from CATMAID
```
sample_skids = ['40637','27295','57311','2863104','57323']
catmiad_neuronlist=pymaid.get_neurons(sample_skids,remote_instance = working_rm)
vols = pymaid.get_volume(['AL_L', 'AL_R'], color=(255, 0, 0, .2))
vols['AL_R'].id = 200
vols['AL_L'].id = 300
vols
```
### Start the dataserver to host precomputed data..
```
startdataserver()
```
### Start a basic neuroglancer local session with all FAFB configurations..
```
configdata = [dict(
ngspace='FAFB',
dimension=dict(x=1, y=1,z=1,units='um'),
voxelsize=dict(x=4,y=4,z=40,units='nm'),
layers=dict(
fafb_v14_clahe=dict(
type='image',
source='precomputed://gs://neuroglancer-fafb-data/fafb_v14/fafb_v14_clahe'),
fafb_surf=dict(
type='surfacemesh',
source='vtk://https://storage.googleapis.com/neuroglancer-fafb-data/elmr-data/FAFB.surf.vtk.gz'
))
)]
configfileloc = '/Users/sri/.pyroglancer/config_temp.yml'
createconfig(configdata, configfileloc)
layer_kws = {'ngspace': 'FAFB'}
create_ngspace(layer_kws)
```
### Add skids to neuroglancer layers..
```
tmpviewer = create_nglayer(layer_kws = {'type': 'skeletons',
'source': catmiad_neuronlist,
'name':'catmaid_skels',
'color': 'green',
'alpha': 0.5})
```
### Add synapses to neuroglancer layers..
```
tmpviewer = create_nglayer(layer_kws = {'type': 'synapses',
'linked_layername': 'catmaid_skels',
'source': catmiad_neuronlist})
```
### Add neuropil meshes to neuroglancer layers..
```
tmpviewer = create_nglayer(layer_kws = {'type': 'volumes','source': [vols['AL_R'],vols['AL_L']],
'name': 'neuropils','color': ['magenta', 'blue'], 'alpha': 0.3})
```
### Add point annotations to neuroglancer layers..
```
temp_pts = pd.DataFrame([[123072, 47001, 3375], [120000, 17001, 3000]], columns=['x', 'y', 'z'])
temp_pts['description'] = ['center_pt','above_pt']
#plot landmarks..
tmpviewer = create_nglayer(layer_kws = {'type': 'points','name': 'landmarks',
"annotationstatetype": 'precomputed',
'source': temp_pts,'color': 'orange'})
```
### Set settings of the viewer/segments
```
tmpviewer = setlayerproperty(tmpviewer, property_kws = {'name': 'synapses_buhmann2019','visibility': False})
tmpviewer = setlayerproperty(tmpviewer, property_kws = {'name': 'catmaid_skels','segments': sample_skids})
tmpviewer = setlayerproperty(tmpviewer, property_kws = {'name': 'neuropils','segments': [vols['AL_R'].id, vols['AL_L'].id]})
tmpviewer = setviewerstate(axis_lines = False, bounding_box = False)
#adjust the zoom factor a bit according your settings, screen, viewer state before etc.
tmpviewer = setviewerstate(tmpviewer, axis_lines=False, bounding_box=False, layout='3d', zoom_factor = 208000)
```
### Screenshot of the neuroglancer instance
```
screenshot = get_ngscreenshot(tmpviewer, viewer_size=[1000, 1000])
imageStream = io.BytesIO(screenshot.image)
imageFile = Image.open(imageStream)
current_folder = globals()['_dh'][0]
imagefilepath = os.path.join(current_folder, 'pics/local_neuroglancersession.png')
imagefilepath
imageFile.save(imagefilepath)
```

### Close the viewer and dataserver
```
closeviewer()
closedataserver()
```
# BB84 Quantum Key Distribution (QKD) Protocol using Qiskit
This notebook is a _demonstration_ of the BB84 Protocol for QKD using Qiskit.
BB84 is a quantum key distribution scheme developed by Charles Bennett and Gilles Brassard in 1984 ([paper]).
The first three sections of the paper are readable and should give you all the necessary background.

[paper]: http://researcher.watson.ibm.com/researcher/files/us-bennetc/BB84highest.pdf
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, execute
from qiskit.providers.aer import QasmSimulator
from qiskit.visualization import *
```
## Choosing bases and encoding states
Alice generates two binary strings. One encodes the basis for each qubit:
$0 \rightarrow$ Computational basis
$1 \rightarrow$ Hadamard basis
The other encodes the state:
$0 \rightarrow|0\rangle$ or $|+\rangle $
$1 \rightarrow|1\rangle$ or $|-\rangle $
Bob also generates a binary string and uses the same convention to choose a basis for measurement.
```
num_qubits = 32
alice_basis = np.random.randint(2, size=num_qubits)
alice_state = np.random.randint(2, size=num_qubits)
bob_basis = np.random.randint(2, size=num_qubits)
print(f"Alice's State:\t {np.array2string(alice_state, separator='')}")
print(f"Alice's Bases:\t {np.array2string(alice_basis, separator='')}")
print(f"Bob's Bases:\t {np.array2string(bob_basis, separator='')}")
```
## Creating the circuit
Based on the following results:
$X|0\rangle = |1\rangle$
$H|0\rangle = |+\rangle$
$ HX|0\rangle = |-\rangle$
Our algorithm to construct the circuit is as follows:
1. Whenever Alice wants to encode 1 in a qubit, she applies an $X$ gate to the qubit. To encode 0, no action is needed.
2. Wherever she wants to encode it in the Hadamard basis, she applies an $H$ gate. No action is necessary to encode a qubit in the computational basis.
3. She then _sends_ the qubits to Bob (symbolically represented in this circuit using wires)
4. Bob measures the qubits according to his binary string. To measure a qubit in the Hadamard basis, he applies an $H$ gate to the corresponding qubit and then performs a measurement in the computational basis.
```
def make_bb84_circ(enc_state, enc_basis, meas_basis):
'''
enc_state: array of 0s and 1s denoting the state to be encoded
enc_basis: array of 0s and 1s denoting the basis to be used for encoding
0 -> Computational Basis
1 -> Hadamard Basis
meas_basis: array of 0s and 1s denoting the basis to be used for measurement
0 -> Computational Basis
1 -> Hadamard Basis
'''
num_qubits = len(enc_state)
bb84_circ = QuantumCircuit(num_qubits)
# Sender prepares qubits
for index in range(len(enc_basis)):
if enc_state[index] == 1:
bb84_circ.x(index)
if enc_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
# Receiver measures the received qubits
for index in range(len(meas_basis)):
if meas_basis[index] == 1:
bb84_circ.h(index)
bb84_circ.barrier()
bb84_circ.measure_all()
return bb84_circ
```
## Creating the key
Alice and Bob only keep the bits where their bases match.
The following outcomes are possible for each bit sent using the BB84 protocol
| Alice's bit | Alice's basis | Alice's State | Bob's basis | Bob's outcome | Bob's bit | Probability |
|---------------------- |------------------------ |------------------------ |---------------------- |------------------------ |-------------------- |-------------------- |
| 0 | C | 0 | C | 0 | 0 | 1/8 |
| 0 | C | 0 | H | + | 0 | 1/16 |
| 0 | C | 0 | H | - | 1 | 1/16 |
| 0 | H | + | C | 0 | 0 | 1/16 |
| 0 | H | + | C | 1 | 1 | 1/16 |
| 0 | H | + | H | + | 0 | 1/8 |
| 1 | C | 1 | C | 1 | 1 | 1/8 |
| 1 | C | 1 | H | + | 0 | 1/16 |
| 1 | C | 1 | H | - | 1 | 1/16 |
| 1 | H | - | C | 0 | 0 | 1/16 |
| 1 | H | - | C | 1 | 1 | 1/16 |
| 1 | H | - | H | - | 1 | 1/8 |
\begin{align*}
P_{\text{same basis}} &= P_A(C)\times P_B(C) + P_A(H)\times P_B(H)\\
&= \frac{1}{2} \times \frac{1}{2} + \frac{1}{2} \times \frac{1}{2} \\
&= \frac{1}{2}
\end{align*}
Thus, on average, only half of the total bits will be in the final key. It is also interesting to note that half of the key bits will be 0 and the other half will be 1 (again, on average)
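As a quick sanity check on the one-half figure, we can simulate many independent basis choices. This is a sketch separate from the protocol code, and the seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

# 0 -> computational basis, 1 -> Hadamard basis
alice_bases = rng.integers(2, size=trials)
bob_bases = rng.integers(2, size=trials)

# Fraction of positions where the two bases happen to agree
match_rate = np.mean(alice_bases == bob_bases)
print(f"Fraction of matching bases: {match_rate:.3f}")  # close to 0.5
```

With $10^5$ trials the empirical fraction lands within a fraction of a percent of $1/2$, matching the calculation above.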
```
bb84_circ = make_bb84_circ(alice_state, alice_basis, bob_basis)
temp_key = execute(bb84_circ.reverse_bits(),backend=QasmSimulator(),shots=1).result().get_counts().most_frequent()
key = ''
for i in range(num_qubits):
if alice_basis[i] == bob_basis[i]: # Only choose bits where Alice and Bob chose the same basis
key += str(temp_key[i])
print(f'The length of the key is {len(key)}')
print(f"The key contains {(key).count('0')} zeroes and {(key).count('1')} ones")
print(f"Key: {key}")
```
**[Back to Fan's Intro Stat Table of Content](https://fanwangecon.github.io/Stat4Econ/)**
# Rescaling Standard Deviation and Covariance
We have various tools at our disposal to summarize variables and the relationship between variables. Imagine that we have multiple toolboxes. This is the first one. There are two levels to this toolbox.
## Three Basic Tools
Our three basic tools are:
1. (sample) Mean of X (or Y)
2. (sample) Standard Deviation of X (or Y)
3. (sample) Covariance of X and Y
## Two Rescaling Tools
Additionally, we have two tools that combine the tools from the first level:
1. Coefficient of Variation = (Standard Deviation)/(Mean)
2. Correlation = (Covariance of X and Y)/((Standard Deviation of X)*(Standard Deviation of Y))
The tools on the second level rescale the standard deviation and covariance statistics.
# Data Examples
**The dataset, *EPIStateEduWage2017.csv*, can be downloaded [here](../data/EPIStateEduWage2017.csv).**
## College Education Share and Hourly Wage
Two variables:
1. Fraction of individual with college degree in a state
+ this is in Fraction units, the minimum is 0.00, the maximum is 100 percent, which is 1.00
2. Average hourly salary in the state
+ this is in Dollar units
```
# Load in Data Tools
# For Reading/Loading Data
library(readr)
library(tibble)
library(dplyr)
library(ggplot2)
# Load in Data
df_wgedu <- read_csv('../data/EPIStateEduWage2017.csv')
```
## A Scatter Plot
We can Visualize the Data with a Scatter Plot. There seems to be a positive relationship between the share of individuals in a state with a college education, and the average hourly salary in that state.
While most states are along the trend line, we have some states, like WY, that are outliers. WY has a high hourly salary but low share with college education.
```
# Control Graph Size
options(repr.plot.width = 5, repr.plot.height = 5)
# Draw Scatter Plot
# 1. specify x and y
# 2. label each state
# 3. add in trend line
scatter <- ggplot(df_wgedu, aes(x=Share.College.Edu, y=Hourly.Salary)) +
geom_point(size=1) +
geom_text(aes(label=State), size=3, hjust=-.2, vjust=-.2) +
geom_smooth(method=lm) +
labs(title = 'Hourly Wage and College Share by States',
x = 'Fraction with College Education',
y = 'Hourly Wage',
caption = 'Economic Policy Institute\n www.epi.org/data/') +
theme_bw()
print(scatter)
```
## Standard Deviations and Coefficient of Variation
The two variables above are in different units. We first calculate the mean, standard deviation, and covariance. With just these, it is hard to compare the standard deviation of the two variables, which are on different scales.
The sample standard deviations for the two variables are $0.051$ and $1.51$, in fraction and dollar units. Can we say the hourly salary has a larger standard deviation? It is simply on a different scale: $1.51$ is the larger number, but that does not mean the variable has greater variation than the fraction-with-college-education variable.
Converting the statistics to coefficients of variation, we now have: $0.16$ and $0.09$. Because of the division, these are both in fraction units--standard deviations as a fraction of the mean. Now these are more comparable.
```
# We can compute the three basic statistics
stats.msdv <- list(
# Mean, SD and Var for the College Share variable
Shr.Coll.Mean = mean(df_wgedu$Share.College.Edu),
Shr.Coll.Std = sd(df_wgedu$Share.College.Edu),
Shr.Coll.Var = var(df_wgedu$Share.College.Edu),
# Mean, SD and Var for the Hourly Wage Variable
Hr.Wage.Mean = mean(df_wgedu$Hourly.Salary),
Hr.Wage.Std = sd(df_wgedu$Hourly.Salary),
Hr.Wage.Var = var(df_wgedu$Hourly.Salary)
)
# We can compute the two rescaling statistics
stats.coefvari <- list(
# Coefficient of Variation
Shr.Coll.Coef.Variation = (stats.msdv$Shr.Coll.Std)/(stats.msdv$Shr.Coll.Mean),
Hr.Wage.Coef.Variation = (stats.msdv$Hr.Wage.Std)/(stats.msdv$Hr.Wage.Mean)
)
# Let's Print the Statistics we Computed
as_tibble(stats.msdv)
as_tibble(stats.coefvari)
```
## Covariance and Correlation
For covariance, it is hard to tell whether it is large or small. To make comparisons possible, we calculate the coefficient of variation and correlation statistics.
The covariance we get is positive: $0.06$, but is this actually a large positive relationship? $0.06$ seems like a small number.
Rescaling covariance to correlation, the correlation between the two variables is: $0.78$. Since the correlation of two variables is bounded between $-1$ and $+1$, we can now say the two variables are in fact strongly positively related. A higher share of individuals with a college education is strongly positively correlated with a higher hourly salary.
```
# We can compute covariance and correlation
states.covcor <- list(
# Covariance between the two variables
Shr.Wage.Cov = cov(df_wgedu$Hourly.Salary,
df_wgedu$Share.College.Edu),
# Correlation
Shr.Wage.Cor = cor(df_wgedu$Hourly.Salary, df_wgedu$Share.College.Edu),
Shr.Wage.Cor.Formula = (cov(df_wgedu$Hourly.Salary, df_wgedu$Share.College.Edu)
/(stats.msdv$Shr.Coll.Std*stats.msdv$Hr.Wage.Std))
)
# Let's Print the Statistics we Computed
as_tibble(states.covcor)
```
Iterations, in programming, let coders repeat a set of instructions until a condition is met. Think about this as being stuck in a loop that will continue until something tells you to break out.
## While loop
The `while` loop is one of two iteration types you'll learn about. In this loop, you must specify a condition first and then include the code that you want the loop to iterate over. The loop first checks if the condition is `True` and if it is, then it looks at the code inside the loop. When the condition becomes `False`, the code in the loop is skipped over and the program continues executing the rest of your code.
If the condition in the loop is `False` to begin with, the code within the loop never executes. During a single loop, the program then goes through the loop and runs the code. Once the code is finished, it looks back at the condition to see if it is still `True`. It's necessary to change the variables in your loop to eventually have a condition that is `False`, or else an infinite loop will occur.
As shown in the code below, to write a `while` loop, you must first type "while" and then the condition you'll check before every loop. End the line with a colon and be sure to indent the next line, which will be the actual loop. The code below prints out a countdown for a rocket. As you can see, the countdown variable in the condition section decreases in every loop until it reaches -1, at which point the condition is `False` and the loop ends.
Predict what will happen when you run this code, then click the run button to verify you've understood.
```
countdown = 5
while countdown >= 0:
print(countdown)
countdown = countdown - 1
print("Lift Off")
```
In the following example, the condition is never met and the loop continues forever (if we don't stop it). In this code, the developer forgot to decrease the timer variable, so the condition is always true.
```
# Trying to find life outside our planet
timer = 10
while timer > 0:
print("Hello, I am from Earth")
```
This is an infinite loop and you must either wait for Python to terminate it or select the stop button at the top of the window. It's best to avoid infinite loops, if that wasn't already apparent.
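A minimal fix for the stuck loop above is to change the variable inside the body so the condition eventually becomes `False`:

```python
timer = 10
while timer > 0:
    print("Hello, I am from Earth")
    timer = timer - 1  # without this line, timer stays 10 forever
```

Now the message prints ten times and the loop ends on its own.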
## For loop
`For` loops essentially perform the same task as `while` loops: they tend to focus on iterating a set number of times. `For` loops are great when you want to go through a list and look at every single element. In the code below, we make a list and then go through all the elements and print them out.
```
planets = ["Mars", "Saturn", "Jupiter"]
for planet in planets:
print(planet)
```
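The rocket countdown from the `while` section can also be written as a `for` loop using `range`, which handles the counting for you:

```python
for countdown in range(5, -1, -1):  # counts 5, 4, 3, 2, 1, 0
    print(countdown)
print("Lift Off")
```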
```
'''
-*- coding: utf-8 -*-
2019 - Fall Semester - School of Information Convergence, Data Science
Assigned project for the Big Data Processing and Applications course
Topic: "Crawling and analysis of real-time trending search rankings from Naver, Daum, and Google"
Blog : https://blog.naver.com/sooftware
GitHub : https://github.com/sh951011
Kwangwoon University Electronic-Communication Dept. 2014707073 Soohwan Kim
'''
from PyQt5 import QtCore, QtGui, QtWidgets, Qt
from multi import MultiCrawler
from matplotlink import MatplotWidget
from keyword_trend import connect_btn, KeywordTrendWindow
import logging
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import font_manager, rc
from matplotlib import style
from datetime import datetime
import queue
# MAIN WINDOW ==
MAIN_WINDOW_WIDTH = 1280
MAIN_WINDOW_HEIGHT = 1000
RANK_NUM = 10
# ==========================
# TITLE ==
TITLE_COORD_X = 160
TITLE_COORD_Y = 25
TITLE_WIDTH = 1000
TITLE_HEIGHT = 50
MIDDLE_COORD_Y = 410
MIDDLE_WIDTH = 300
MIDDLE_HEIGHT = TITLE_HEIGHT
MIDDLE1_COORD_X = 85
MIDDLE2_COORD_X = 275
MIDDLE3_COORD_X = 650
MIDDLE4_COORD_X = 1020
# ==========================
# RANK CONTAINERS ==
RANK_WIDTH = 350
RANK_HEIGHT = 30
RANK_COORD_X = 150
RANK_COORD_Y = 485
RANK_GAP_X = 380
RANK_GAP_Y = 50
SHOW_RANK_WIDTH = 60
SHOW_RANK_HEIGHT = RANK_HEIGHT
# ==========================
# CORR ==
CORR_COORD_X = 180
CORR_COORD_Y = 365
CORR_WIDTH = 80
CORR_HEIGHT = 30
# =========================
# TIME ==
TIME_COORD_X = 50
TIME_COORD_Y = CORR_COORD_Y
TIME_WIDTH = 380
TIME_HEIGHT = CORR_HEIGHT
# =========================
# MATPLOT ==
PLOT_COORD_X = 50
PLOT_COORD_Y = 90
PLOT_GAP_X = 420
PLOT_WIDTH = TIME_WIDTH
PLOT_HEIGHT = 270
PLOT_COMMENT_COORD_X = 560
PLOT_COMMENT_GAP_X = 80
PLOT_COMMENT_COORD_Y = TIME_COORD_Y
PLOT_COMMENT_WIDTH = 60
PLOT_COMMENT_HEIGHT = TIME_HEIGHT
# ==========================
# KEYWORD ==
KEYWORD_HEIGHT = 25
KEYWORD_WIDTH = PLOT_WIDTH
KEYWORD_COORD_X = PLOT_COORD_X + 2 * PLOT_GAP_X
KEYWORD_COORD_Y = 105
KEYWORD_GAP_Y = 43
# =========================
# FONT ==
MARGUN_FONT = "맑은 고딕"
NANUM_BOLD_FONT = "나눔스퀘어 ExtraBold"
NANUM_FONT = "나눔스퀘어"
TITLE_FONT_SIZE = 24
MEDIUM_FONT_SIZE = 14
RANK_FONT_SIZE = 12
# ==========================
# Basic Setting ==
logger = logging.getLogger('root')
FORMAT = "[%(asctime)s %(filename)s:%(lineno)s - %(funcName)s()] %(message)s"
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, format=FORMAT)
logger.setLevel(logging.INFO)
font_name = font_manager.FontProperties(fname="c:/Windows/Fonts/malgun.ttf").get_name()
rc('font', family=font_name)
style.use('ggplot')
# ==========================
class MainWindow(object):
def __init__(self):
self.main_window = None
self.centralwidget = None
self.queue = queue.Queue(3) # queue for multi-threaded crawling
self.data_list = list() # combined list of the crawled data
self.n_ranks = None # Naver search terms
self.d_ranks = None # Daum search terms
self.g_ranks = None # Google search terms
self.rank_containers_list = None
# accept integer input only
while True:
self.update_period = input("Enter Update Period (sec) : ")
if self.update_period.isdecimal():
break
def setup(self, main_window):
# Main_Window Set ===
translate = QtCore.QCoreApplication.translate
main_window.resize(MAIN_WINDOW_WIDTH, MAIN_WINDOW_HEIGHT)
main_window.setMinimumSize(QtCore.QSize(MAIN_WINDOW_WIDTH, MAIN_WINDOW_HEIGHT))
self.centralwidget = QtWidgets.QWidget(main_window)
main_window.setWindowTitle(translate("MainWindow", "Naver Daum Google Search Ranking"))
# ============================================
# Matplot ===
self.keywords_score = MatplotWidget(self.centralwidget)
self.keywords_score.setGeometry(QtCore.QRect(PLOT_COORD_X + PLOT_GAP_X, PLOT_COORD_Y, PLOT_WIDTH, PLOT_HEIGHT))
self.corr = MatplotWidget(self.centralwidget)
self.corr.setGeometry(QtCore.QRect(PLOT_COORD_X, PLOT_COORD_Y, PLOT_WIDTH, PLOT_HEIGHT))
denote_colors = ['#c2c2f0', '#ff9999', '#ffb3e6']
denote_text = ['Naver','Daum','Google']
self.color_denote = [0] * 3
for i, label in enumerate(self.color_denote):
label = QtWidgets.QLabel(self.centralwidget)
label.setGeometry(QtCore.QRect(PLOT_COMMENT_COORD_X + i * PLOT_COMMENT_GAP_X, TIME_COORD_Y, PLOT_COMMENT_WIDTH, TIME_HEIGHT))
font = QtGui.QFont(NANUM_BOLD_FONT)
font.setPointSize(11)
label.setFont(font)
label.setAlignment(QtCore.Qt.AlignCenter)
label.setStyleSheet("color: " + denote_colors[i] + ";")
label.setText(denote_text[i])
# ============================================
# Title ===
self.title = QtWidgets.QLabel(self.centralwidget)
self.title.setGeometry(QtCore.QRect(TITLE_COORD_X, TITLE_COORD_Y, TITLE_WIDTH, TITLE_HEIGHT))
font = QtGui.QFont(NANUM_BOLD_FONT)
font.setPointSize(TITLE_FONT_SIZE)
self.title.setFont(font)
self.title.setAlignment(QtCore.Qt.AlignCenter)
self.title.setText(translate("MainWindow", "Naver - Daum - Google 실시간 검색어 순위"))
self.title.setStyleSheet("color: purple;")
# =============================
# Time ===
now = datetime.now()
time = str(now.year)
format_ = [now.month, now.day, now.hour, now.minute]
delimiters = ['-', '-', ' ', ':']
for i, item in enumerate(format_):
time += delimiters[i] + str(item)
self.time_plot = QtWidgets.QLabel(self.centralwidget)
self.time_plot.setGeometry(QtCore.QRect(TIME_COORD_X, TIME_COORD_Y, TIME_WIDTH, TIME_HEIGHT))
font = QtGui.QFont(MARGUN_FONT)
font.setPointSize(12)
self.time_plot.setFont(font)
self.time_plot.setAlignment(QtCore.Qt.AlignCenter)
self.time_plot.setText(translate("MainWindow", "<"+time+"> 기준"))
# ============================
# Middle ===
labels = ['순위', 'Naver', 'Daum', 'Google']
colors = ['black', 'green', 'brown', 'blue']
geometrys = [
[MIDDLE1_COORD_X, MIDDLE_COORD_Y, MIDDLE_WIDTH, MIDDLE_HEIGHT],
[MIDDLE2_COORD_X, MIDDLE_COORD_Y, MIDDLE_WIDTH, MIDDLE_HEIGHT],
[MIDDLE3_COORD_X, MIDDLE_COORD_Y, MIDDLE_WIDTH, MIDDLE_HEIGHT],
[MIDDLE4_COORD_X, MIDDLE_COORD_Y, MIDDLE_WIDTH, MIDDLE_HEIGHT]
]
fonts = [MARGUN_FONT] + [NANUM_BOLD_FONT] * 3
font_sizes = [MEDIUM_FONT_SIZE] + [TITLE_FONT_SIZE] * 3
for i in range(4):
self.middle = QtWidgets.QLabel(self.centralwidget)
self.middle.setGeometry(QtCore.QRect(geometrys[i][0], geometrys[i][1], geometrys[i][2], geometrys[i][3]))
font = QtGui.QFont(fonts[i])
font.setPointSize(font_sizes[i])
self.middle.setFont(font)
self.middle.setText(translate("MainWindow", labels[i]))
self.middle.setStyleSheet("color: " + colors[i] + ";")
# ===========================
# Keyword Label ===
self.max_keyword_label = [0] * 3
self.min_keyword_label = [0] * 3
blue_font = QtGui.QFont(NANUM_BOLD_FONT)
blue_font.setPointSize(13)
black_font = QtGui.QFont(NANUM_BOLD_FONT)
black_font.setPointSize(12)
for i in range(3):
self.max_keyword_label[i] = QtWidgets.QLabel(self.centralwidget)
self.max_keyword_label[i].setGeometry(QtCore.QRect(KEYWORD_COORD_X,KEYWORD_COORD_Y + 2 * i * KEYWORD_GAP_Y,KEYWORD_WIDTH,KEYWORD_HEIGHT))
self.max_keyword_label[i].setFont(blue_font)
self.max_keyword_label[i].setAlignment(QtCore.Qt.AlignCenter)
self.max_keyword_label[i].setStyleSheet("color: blue;")
self.min_keyword_label[i] = QtWidgets.QLabel(self.centralwidget)
self.min_keyword_label[i].setGeometry(QtCore.QRect(KEYWORD_COORD_X,KEYWORD_COORD_Y + KEYWORD_GAP_Y + 2 * i * KEYWORD_GAP_Y,KEYWORD_WIDTH,KEYWORD_HEIGHT))
self.min_keyword_label[i].setFont(black_font)
self.min_keyword_label[i].setAlignment(QtCore.Qt.AlignCenter)
self.min_keyword_label[i].setStyleSheet("color: black;")
self.keyword_comment = QtWidgets.QLabel(self.centralwidget)
self.keyword_comment.setGeometry(QtCore.QRect(KEYWORD_COORD_X, CORR_COORD_Y, KEYWORD_WIDTH, KEYWORD_HEIGHT))
font = QtGui.QFont(MARGUN_FONT)
font.setPointSize(11)
self.keyword_comment.setFont(font)
self.keyword_comment.setAlignment(QtCore.Qt.AlignCenter)
# ============================
# Rank Containers ===
def _create_rank_containers(self):
self.rank_containers_list = list()
for i in range(3):
self.rank_containers_list.append(list())
for i, rank_containers in enumerate(self.rank_containers_list):
for j in range(RANK_NUM):
rank_containers.append(QtWidgets.QPushButton(self.centralwidget))
#rank_containers.append(QtWidgets.QLabel(self.centralwidget))
for j, rank in enumerate(rank_containers):
rank.setGeometry(QtCore.QRect(RANK_COORD_X + RANK_GAP_X * i,
RANK_COORD_Y + RANK_GAP_Y * j,
RANK_WIDTH, RANK_HEIGHT))
font = QtGui.QFont(MARGUN_FONT)
font.setPointSize(RANK_FONT_SIZE)
rank.setFont(font)
rank.setStyleSheet("border-radius: 5px;\n""color: black;\n")
for i in range(10):
rank_label = QtWidgets.QLabel(self.centralwidget)
rank_label.setGeometry(QtCore.QRect(MIDDLE1_COORD_X, RANK_COORD_Y + RANK_GAP_Y * i, SHOW_RANK_WIDTH, SHOW_RANK_HEIGHT))
font = QtGui.QFont(NANUM_FONT)
font.setPointSize(MEDIUM_FONT_SIZE)
rank_label.setFont(font)
rank_label.setObjectName("rank_label" + str(i))
rank_label.setText(translate("MainWindow", " " + str(i+1) + "위"))
# ================================================
# Event Connect
def _set_connect(self):
self.crawling() # crawl once on startup
connect_btn(self.rank_containers_list, self.keyword_clicked) # connect each search-term button to keyword_clicked
self.timer = QtCore.QTimer() # set up the timer
self.timer.setInterval(1000 * int(self.update_period)) # interval is in ms, so multiply the entered period (sec) by 1000
self.timer.timeout.connect(self.crawling)
self.timer.start()
_create_rank_containers(self)
_set_connect(self)
main_window.setCentralWidget(self.centralwidget)
# Executed when a keyword is clicked
def keyword_clicked(self, keyword):
rank_changes = KeywordTrendWindow(keyword=keyword)
rank_changes.show()
# Crawling
def crawling(self):
# Save the search-term rankings collected from Naver - Daum - Google to a csv file in the given format
def update_data(self):
columns = ['Time', 'Search Engine']
columns += ['Rank' + str(i) for i in range(1, 11)]
data_list = list()
for i in range(3):
data_list += self.queue.get() # get the data stored by the threads
self.data_list = data_list
new_data = pd.DataFrame(np.array(data_list).reshape(3, 12), columns=columns) # newly collected data
read_data = pd.read_csv('./data/data.csv', encoding='utf-8') # existing csv data
merge_data = pd.concat([read_data, new_data], sort=False) # merge (existing + new)
merge_data.to_csv('./data/data.csv', encoding='utf-8', sep=',', index=False) # save the merged data
# Show the combined Top 5 keyword scores as a pie chart
# - scores are summed across the 3 portals and the five highest are displayed
def update_pie(self):
# Compute a score for each keyword
# On each portal, rank 1 gets 10 points, rank 2 gets 9 ... rank 10 gets 1, and unranked keywords get 0
def _get_keywords_score(self):
scores = [] # list of scores per keyword
keywords = [] # list of keywords
except_case = [0,1,12,13,24,25] # skip the indices of the csv 'Time' and 'Search Engine' columns
k = 0
self.g_ranks = self.data_list[2:12] # Google search terms
self.n_ranks = self.data_list[14:24] # Naver search terms
self.d_ranks = self.data_list[26:36] # Daum search terms
# pull keywords one by one from self.data_list (the combined list for the 3 portals)
for i, keyword in enumerate(self.data_list):
if i in except_case: # skip the pre-declared exception indices
continue
score = 10 - ( k % 10 ) # score runs from 10 down to 1; across the 30 entries the sequence 10 9 ... 1 repeats 3 times
k += 1
# if the keyword is not yet in keywords, insert it along with its score
if keyword not in keywords:
keywords.append(keyword)
scores.append(score)
# if the keyword is already in keywords (it matches another portal's keyword),
# find its index and add the score
else:
index = keywords.index(keyword)
scores[index] += (score)
scores, keywords = zip(*sorted(zip(scores, keywords), reverse = True)) # sort together (sort keywords along with scores)
return keywords, scores
# Test
def _top5_engines_score(self, keywords):
self.g_ranks = self.data_list[2:12] # Google search terms
self.n_ranks = self.data_list[14:24] # Naver search terms
self.d_ranks = self.data_list[26:36] # Daum search terms
g_scores = list()
n_scores = list()
d_scores = list()
for keyword in keywords:
if keyword in self.g_ranks:
g_scores.append(10 - self.g_ranks.index(keyword))
else:
g_scores.append(0)
if keyword in self.n_ranks:
n_scores.append(10 - self.n_ranks.index(keyword))
else:
n_scores.append(0)
if keyword in self.d_ranks:
d_scores.append(10 - self.d_ranks.index(keyword))
else:
d_scores.append(0)
return n_scores, d_scores, g_scores
# Draw the Top 5 computed by _get_keywords_score as a pie chart
def _draw(self, keywords, scores, n_scores, d_scores, g_scores):
explode = [0.07] * 5
outer_colors = ['#ff6666', '#ffcc99', '#99ff99', '#66b3ff', 'skyblue']
inner_colors = ['#c2c2f0', '#ff9999', '#ffb3e6'] * 5
site_ratio = list()
for i in range(5):
site_ratio.extend([n_scores[i],d_scores[i],g_scores[i]])
self.keywords_score.canvas.axes.clear()
self.keywords_score.canvas.axes.pie(scores, labels=keywords, shadow=True,
startangle=90, colors = outer_colors, explode = explode)
self.keywords_score.canvas.axes.pie(site_ratio, shadow=True,radius=0.75,
startangle=90, colors = inner_colors, explode = explode * 3)
circle = plt.Circle((0, 0), 0.5, color='white')
self.keywords_score.canvas.axes.add_artist(circle)
self.keywords_score.canvas.axes.set_title("종합 검색어 스코어 Top 5")
self.keywords_score.canvas.axes.grid(linewidth=0.2)
self.keywords_score.canvas.draw()
keywords, scores = _get_keywords_score(self)
n_scores, d_scores, g_scores = _top5_engines_score(self, keywords)
_draw(self, keywords[:5], scores[:5], n_scores, d_scores, g_scores)
# Show the keywords with the largest and smallest rank differences among ranks 1-10
def update_keywords(self,ranks1, ranks2,engine1,engine2,loc = 0):
# Compute the distance of the keywords that appear in both top-10 lists
def _get_distances(self, ranks1, ranks2):
# Extract the keywords common to both lists
def _extract_keywords(self, ranks1, ranks2):
keywords = list()
for item in ranks1:
if item in ranks2:
keywords.append(item)
return keywords
# Compute the distances of the common keywords
def _cal_distance(self, keywords, ranks1, ranks2):
distances = list()
for keyword in keywords:
distances.append(abs(ranks1.index(keyword) - ranks2.index(keyword)))
return distances
keywords = _extract_keywords(self, ranks1=ranks1, ranks2=ranks2)
distances = _cal_distance(self, keywords=keywords,ranks1=ranks1,ranks2=ranks2)
return keywords, distances
# Set the keywords based on the distances computed by _get_distances()
def _set_keywords(self, keywords, distances, engine1, engine2, loc = 0):
# Compute max_corr and min_corr from the computed distances
def _get_max_n_min_corr(keywords, distances):
# If there are no common keywords, display '해당없음' (not applicable)
if len(distances) == 0:
return '해당없음', '해당없음'
# The keyword with the smallest distance is max_corr; the one with the largest distance is min_corr
return keywords[(distances.index(min(distances)))], keywords[(distances.index(max(distances)))]
# Set the max_corr and min_corr keywords
# Also updates the current time
def _set_text(self, max_corr_keyword, min_corr_keyword, engine1, engine2, loc):
translate = QtCore.QCoreApplication.translate
self.max_keyword_label[loc].setText(translate("MainWindow", engine1 + " - " + engine2 + " " + max_corr_keyword))
self.min_keyword_label[loc].setText(translate("MainWindow", engine1 + " - " + engine2 + " " + min_corr_keyword))
self.keyword_comment.setText(translate("MainWindow", "blue : max distance black : min distance"))
now = datetime.now()
time = str(now.year)
format_ = [now.month, now.day, now.hour, now.minute]
delimiters = ['-', '-', ' ', ':']
for i, item in enumerate(format_):
time += delimiters[i] + str(item)
self.time_plot.setText(translate("MainWindow", "<" + time + "> 기준"))
max_corr_keyword, min_corr_keyword = _get_max_n_min_corr(keywords, distances)
_set_text(self, max_corr_keyword, min_corr_keyword, engine1=engine1, engine2=engine2, loc=loc)
keywords, distances = _get_distances(self, ranks1=ranks1,ranks2=ranks2)
_set_keywords(self, keywords=keywords,distances=distances,engine1=engine1,engine2=engine2, loc=loc)
# Correlation
def update_corrs(self, ranks1, ranks2, engine1, engine2):
# Compute the correlation between ranks1 and ranks2.
# Only ranks 1-10 are used for the comparison, and since some keywords do not
# match at all, a custom definition is used instead of the usual correlation
# formula. Correlation normally ranges from -1.0 to 1.0, but a negative
# correlation cannot occur between search rankings, so the range is
# restricted to 0.0 - 1.0. If, say, portal N has a keyword at rank 1 and
# portal D has it at rank 10, the usual formula would give a low correlation;
# but merely appearing in the top 10 already implies some correlation, so
# such a pair is assigned 0.3, the value conventionally taken as the start of
# a meaningful correlation.
def _get_corrs(ranks1, ranks2):
ranks1.reverse()
ranks2.reverse()
corrs = list()
for i, keyword1 in enumerate(ranks1):
corrs.append(0)
for j, keyword2 in enumerate(ranks2):
if keyword1 == keyword2:
# If the ranks are equal, corr == 1
if i == j:
corrs[i] = 1.0
# If the keyword appears in both rankings but at different positions
# (say rank 1 on Naver and rank 10 on Daum), the pair is judged to still
# have some correlation and gets at least 0.3, the conventional threshold
# for a meaningful correlation; values in between are computed from the
# distance, scaled across the 0.3 - 1.0 range
else:
corrs[i] = ( 1 - (0.7 / 9) * abs(i-j) )
break
return corrs
# Plot the corrs computed by _get_corrs
def _draw(self, corrs, engine1='Naver', engine2='Daum'):
self.corr.canvas.axes.clear()
self.corr.canvas.axes.scatter([str(x) for x in range(10)], corrs, color='lightcoral', label = "corr " + str(round(np.mean(corrs),2)))
self.corr.canvas.axes.legend(fontsize='medium', loc='upper left')
self.corr.canvas.axes.set_xticklabels(['Rank10', '.', '.', '.', '.', '.', '.', '.', '.', 'Rank1'])
self.corr.canvas.axes.set_title(engine1 + ' - ' + engine2 + ' corr')
self.corr.canvas.axes.grid(linewidth=0.2)
self.corr.canvas.draw()
corrs = _get_corrs(ranks1, ranks2)
_draw(self, corrs, engine1=engine1, engine2=engine2)
# web_crawling Func Execute code
multi_crawler = MultiCrawler(self.rank_containers_list, self.queue) # Multi Threading Crawling
multi_crawler.start() # Multi Thread Run
multi_crawler.join() # Wait Threads
update_data(self)
update_pie(self)
update_corrs(self, ranks1=self.n_ranks, ranks2=self.d_ranks, engine1='Naver', engine2='Daum')
engine_list = [['Naver', 'Daum'], ['Daum', 'Google'], ['Google', 'Naver']]
self.g_ranks.reverse() # not yet clear why g_ranks comes back reversed
ranks_list = [[self.n_ranks, self.d_ranks], [self.d_ranks, self.g_ranks], [self.g_ranks, self.n_ranks]]
for i in range(3):
update_keywords(self, ranks1=ranks_list[i][0],
ranks2=ranks_list[i][1], engine1=engine_list[i][0],
engine2=engine_list[i][1], loc=i)
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
main_window = QtWidgets.QMainWindow()
process = MainWindow()
process.setup(main_window)
main_window.show()
sys.exit(app.exec_())
```
My family know I like puzzles so they gave me this one recently:

When you take it out the box it looks like this:

And very soon after it looked like this (which explains why I've christened the puzzle "the snake puzzle"):

The way it works is that there is a piece of elastic running through each block. On the majority of the blocks the elastic runs straight through, but on some of them it goes through a 90 degree bend. The puzzle is to fold it back into a cube.
After playing with it a while, I realised that it really is quite hard so I decided to write a program to solve it.
The first thing to do is find a representation for the puzzle. Here is the one I chose.
```
# definition - number of straight bits, before 90 degree bend
snake = [3,2,2,2,1,1,1,2,2,1,1,2,1,2,1,1,2]
assert sum(snake) == 27
```
If you look at the picture of it above where it is flattened you can see where the numbers came from. Start from the right hand side.
That also gives us a way of calculating how many combinations there are. At each 90 degree joint, there are 4 possible rotations (ignoring the rotations of the 180 degree blocks) so there are
```
4**len(snake)
```
17 billion combinations. That will include some rotations and reflections, but either way it is a big number.
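Evaluating that expression gives the exact count:

```
snake = [3,2,2,2,1,1,1,2,2,1,1,2,1,2,1,1,2]
print(4**len(snake))  # 17179869184, about 17 billion
```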
However it is very easy to know when you've gone wrong with this kind of puzzle - as soon as you place a piece outside of the boundary of the 3x3x3 block you know it is wrong and should try something different.
So how to represent the solution? The way I've chosen is to represent it as a 5x5x5 cube. This is larger than it needs to be but if we fill in the edges then we don't need to do any complicated comparisons to see if a piece is out of bounds. This is a simple trick but it saves a lot of code.
I've also chosen to represent the 3d structure not as a 3d array but as a 1D array (or `list` in python speak) of length 5*5*5 = 125.
To move in the `x` direction you add 1, to move in the `y` direction you add 5 and to move in the `z` direction you move 25. This simplifies the logic of the solver considerably - we don't need to deal with vectors.
The basic definitions of the cube look like this:
```
N = 5
xstride=1 # number of pieces to move in the x direction
ystride=N # number of pieces to move in the y direction
zstride=N*N # number of pieces to move in the z direction
```
In our `list` we will represent empty space with `0` and space which can't be used with `-1`.
```
empty = 0
```
Now define the empty cube with the boundary round the edges.
```
# Define cube as 5 x 5 x 5 with filled in edges but empty middle for
# easy edge detection
top = [-1]*N*N
middle = [-1]*5 + [-1,0,0,0,-1]*3 + [-1]*5
cube = top + middle*3 + top
```
We're going to want a function to turn `x, y, z` co-ordinates into an index in the `cube` list.
```
def pos(x, y, z):
    """Convert x,y,z into position in cube list"""
    return x+y*ystride+z*zstride
```
So let's see what that cube looks like...
```
def print_cube(cube, margin=1):
    """Print the cube"""
    for z in range(margin,N-margin):
        for y in range(margin,N-margin):
            for x in range(margin,N-margin):
                v = cube[pos(x,y,z)]
                if v == 0:
                    s = " . "
                else:
                    s = "%02d " % v
                print(s, sep="", end="")
            print()
        print()

print_cube(cube, margin = 0)
```
Normally we'll print it without the margin.
Now let's work out how to place a segment.
Assuming that the last piece was placed at `position` we want to place a segment of `length` in `direction`. Note the `assert` to check we aren't placing stuff on top of previous things, or out of the edges.
```
def place(cube, position, direction, length, piece_number):
    """Place a segment in the cube"""
    for _ in range(length):
        position += direction
        assert cube[position] == empty
        cube[position] = piece_number
        piece_number += 1
    return position
```
Let's just try placing some segments and see what happens.
```
cube2 = cube[:] # copy the cube
place(cube2, pos(0,1,1), xstride, 3, 1)
print_cube(cube2)
place(cube2, pos(3,1,1), ystride, 2, 4)
print_cube(cube2)
place(cube2, pos(3,3,1), zstride, 2, 6)
print_cube(cube2)
```
The next thing we'll need is to undo a place. You'll see why in a moment.
```
def unplace(cube, position, direction, length):
    """Remove a segment from the cube"""
    for _ in range(length):
        position += direction
        cube[position] = empty

unplace(cube2, pos(3,3,1), zstride, 2)
print_cube(cube2)
```
Now let's write a function which returns whether a move is valid given a current `position` and a `direction` and a `length` of the segment we are trying to place.
```
def is_valid(cube, position, direction, length):
    """Returns True if a move is valid"""
    for _ in range(length):
        position += direction
        if cube[position] != empty:
            return False
    return True
is_valid(cube2, pos(3,3,1), zstride, 2)
is_valid(cube2, pos(3,3,1), zstride, 3)
```
Given `is_valid` it is now straightforward to work out what moves are possible at a given time, given a `cube` with a `position`, a `direction` and a `length` we are trying to place.
```
# directions next piece could go in
directions = [xstride, -xstride, ystride, -ystride, zstride, -zstride]

def moves(cube, position, direction, length):
    """Returns the valid moves for the current position"""
    valid_moves = []
    for new_direction in directions:
        # Can't carry on in same direction, or the reverse of the same direction
        if new_direction == direction or new_direction == -direction:
            continue
        if is_valid(cube, position, new_direction, length):
            valid_moves.append(new_direction)
    return valid_moves

moves(cube2, pos(3,3,1), ystride, 2)
```
So that is telling us that you can insert a segment of length 2 using a direction of `-xstride` or `zstride`. If you look at previous `print_cube()` output you'll see those are the only possible moves.
Now we have all the bits to build a recursive solver.
```
def solve(cube, position, direction, snake, piece_number):
    """Recursive cube solver"""
    if len(snake) == 0:
        print("Solution")
        print_cube(cube)
        return
    length, snake = snake[0], snake[1:]
    valid_moves = moves(cube, position, direction, length)
    for new_direction in valid_moves:
        new_position = place(cube, position, new_direction, length, piece_number)
        solve(cube, new_position, new_direction, snake, piece_number+length)
        unplace(cube, position, new_direction, length)
```
This works by being passed in the `snake` of moves left. If there are no moves left then it must be solved, so we print the solution. Otherwise it takes the head off the `snake` with `length, snake = snake[0], snake[1:]` and makes the list of valid moves of that `length`.
Then we `place` each move, and try to `solve` that cube using a recursive call to `solve`. We `unplace` the move so we can try again.
This very quickly runs through all the possible solutions.
```
# Start just off the side
position = pos(0,1,1)
direction = xstride
length = snake[0]
# Place the first segment along one edge - that is the only possible place it can go
position = place(cube, position, direction, length, 1)
# Now solve!
solve(cube, position, direction, snake[1:], length+1)
```
Wow! It came up with 2 solutions! However they are the same solution just rotated and reflected.
But how do you use the solution? Starting from the correct end of the snake, place each piece into its corresponding number. Take the first layer of the solution as being the bottom (or top - whatever is easiest), the next layer is the middle and the one after the top.

After a bit of fiddling around you'll get...

I hope you enjoyed that introduction to puzzle solving with a computer.
If you want to try one yourselves, use the same technique to solve solitaire.
```
import cv2
import numpy as np

# Define a function to track the object
def start_tracking():
    # Initialize the video capture object
    cap = cv2.VideoCapture(0)

    # Define the scaling factor for the frames
    scaling_factor = 0.5

    # Number of frames to track
    num_frames_to_track = 5

    # Skipping factor
    num_frames_jump = 2

    # Initialize variables
    tracking_paths = []
    frame_index = 0

    # Define tracking parameters
    tracking_params = dict(winSize=(11, 11), maxLevel=2,
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

    # Iterate until the user hits the 'Esc' key
    while True:
        # Capture the current frame
        _, frame = cap.read()

        # Resize the frame
        frame = cv2.resize(frame, None, fx=scaling_factor,
                fy=scaling_factor, interpolation=cv2.INTER_AREA)

        # Convert to grayscale
        frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Create a copy of the frame
        output_img = frame.copy()

        if len(tracking_paths) > 0:
            # Get images
            prev_img, current_img = prev_gray, frame_gray

            # Organize the feature points
            feature_points_0 = np.float32([tp[-1] for tp in
                    tracking_paths]).reshape(-1, 1, 2)

            # Compute optical flow
            feature_points_1, _, _ = cv2.calcOpticalFlowPyrLK(
                    prev_img, current_img, feature_points_0,
                    None, **tracking_params)

            # Compute reverse optical flow
            feature_points_0_rev, _, _ = cv2.calcOpticalFlowPyrLK(
                    current_img, prev_img, feature_points_1,
                    None, **tracking_params)

            # Compute the difference between forward and
            # reverse optical flow
            diff_feature_points = abs(feature_points_0 -
                    feature_points_0_rev).reshape(-1, 2).max(-1)

            # Extract the good points
            good_points = diff_feature_points < 1

            # Initialize variable
            new_tracking_paths = []

            # Iterate through all the good feature points
            for tp, (x, y), good_points_flag in zip(tracking_paths,
                        feature_points_1.reshape(-1, 2), good_points):
                # If the flag is not true, then continue
                if not good_points_flag:
                    continue

                # Append the X and Y coordinates and check if
                # its length is greater than the threshold
                tp.append((x, y))
                if len(tp) > num_frames_to_track:
                    del tp[0]

                new_tracking_paths.append(tp)

                # Draw a circle around the feature points
                # (cast to int: recent OpenCV versions reject float coordinates)
                cv2.circle(output_img, (int(x), int(y)), 3, (0, 255, 0), -1)

            # Update the tracking paths
            tracking_paths = new_tracking_paths

            # Draw lines
            cv2.polylines(output_img, [np.int32(tp) for tp in
                    tracking_paths], False, (0, 150, 0))

        # Go into this 'if' condition after skipping the
        # right number of frames
        if not frame_index % num_frames_jump:
            # Create a mask and draw the circles
            mask = np.zeros_like(frame_gray)
            mask[:] = 255
            for x, y in [np.int32(tp[-1]) for tp in tracking_paths]:
                cv2.circle(mask, (int(x), int(y)), 6, 0, -1)

            # Compute good features to track
            feature_points = cv2.goodFeaturesToTrack(frame_gray,
                    mask=mask, maxCorners=500, qualityLevel=0.3,
                    minDistance=7, blockSize=7)

            # Check if feature points exist. If so, append them
            # to the tracking paths
            if feature_points is not None:
                for x, y in np.float32(feature_points).reshape(-1, 2):
                    tracking_paths.append([(x, y)])

        # Update variables
        frame_index += 1
        prev_gray = frame_gray

        # Display output
        cv2.imshow('Optical Flow', output_img)

        # Check if the user hit the 'Esc' key
        c = cv2.waitKey(1)
        if c == 27:
            break

if __name__ == '__main__':
    # Start the tracker
    start_tracking()

    # Close all the windows
    cv2.destroyAllWindows()
```
# Random Forest Regression Example
## Boston housing prices
The objective is to predict the median price of a home in Boston. The variables are crime rate, zoning information,
proportion of non-retail business, etc. This dataset has median prices in Boston for 1972. Even though the data is pretty old, the methodology for analytics is valid for more recent datasets.
<b>The purpose of this demonstration is to show the use of SAP HANA's Predictive Analytics Library to create a Random Forest model.</b>
The dataset is from Kaggle. https://www.kaggle.com/c/boston-housing. For tutorials use only.
## Housing Values in Suburbs of Boston in 1972
The <font color='red'>medv</font> variable is the target variable.
### Data description
The Boston data frame has 506 rows and 14 columns.
This data frame contains the following columns:
1. __crim__: per capita crime rate by town.
2. __zn__: proportion of residential land zoned for lots over 25,000 sq.ft.
3. __indus__: proportion of non-retail business acres per town.
4. __chas__: Charles River dummy variable (1 if tract bounds river; 0 otherwise).
5. __nox__: nitrogen oxides concentration (parts per 10 million).
6. __rm__: average number of rooms per dwelling.
7. __age__: proportion of owner-occupied units built prior to 1940.
8. __dis__: weighted mean of distances to five Boston employment centres.
9. __rad__: index of accessibility to radial highways.
10. __tax__: full-value property-tax rate per \$10000
11. __ptratio__: pupil-teacher ratio by town.
12. __black__: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
13. __lstat__: lower status of the population (percent).
14. __medv__: median value of owner-occupied homes in $1000s.
### Factoids
The prices in Boston across years is below. If we had a historical dataset, an analysis could be done to account for the macro trends as well.
The second graph shows the intuition we have with respect to prices in relation to crime rate. It is expected that house prices will be lower in areas where crime rates are higher.
The third figure is a chart showing how inflation may affect prices. So, for deeper analysis and prediction, we may want to consider inflation.
In this notebook, these factors are not considered. They are here to demonstrate the need for deep domain analysis.
<table><tr>
<td><img src="images/boston_prices_by_year.png" alt="Boston home prices" title="Boston housing prices" style="float:left;" /></td>
<td><img src="images/Crime-Rate-and-Median-House-Prices.png" alt="Boston home prices" title="Boston housing prices" /></td>
<td><img src="images/Inflation_Adjusted_Housing_Prices_1890_2006.jpg" alt="Inflation adjusted prices" title="Inflation adjusted prices" style="float:left;" />
</td></tr></table>
In this notebook, we will use the dataset for Boston housing prices and predict the price based on numerous factors.
```
from hana_ml import dataframe
from hana_ml.algorithms.pal import clustering
from hana_ml.algorithms.pal import trees
import numpy as np
import matplotlib.pyplot as plt
import logging
```
## Load data
The data is loaded into 4 tables, for full, training, validation, and test sets:
<li>BOSTON_HOUSING_PRICES</li>
<li>BOSTON_HOUSING_PRICES_TRAINING</li>
<li>BOSTON_HOUSING_PRICES_VALIDATION</li>
<li>BOSTON_HOUSING_PRICES_TEST</li>
To do that, a connection is created and passed to the loader.
There is a config file, config/e2edata.ini that controls the connection parameters and whether or not to reload the data from scratch. In case the data is already loaded, there would be no need to load the data. A sample section is below. If the config parameter, reload_data is true then the tables for test, training, and validation are (re-)created and data inserted into them.
Although this ini file has other sections, please do not modify them. Only the [hana] section should be modified.
#########################<br>
[hana]<br>
url=host.sjc.sap.corp<br>
user=username<br>
passwd=userpassword<br>
port=3xx15<br>
#########################<br>
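For illustration, a section like the one above can be parsed with Python's standard `configparser`; this sketch inlines the placeholder values from the sample rather than reading the actual file:

```python
import configparser

# Parse a [hana] section like the sample above (placeholder values, not real credentials)
config = configparser.ConfigParser()
config.read_string("""
[hana]
url=host.sjc.sap.corp
user=username
passwd=userpassword
port=3xx15
""")

hana = config["hana"]
print(hana["url"], hana["user"], hana["port"])
```

In the actual notebook, Settings.load_config() performs this parsing for you and returns the connection parameters.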
```
from data_load_utils import DataSets, Settings
url, port, user, pwd = Settings.load_config("../../config/e2edata.ini")
connection_context = dataframe.ConnectionContext(url, port, user, pwd)
full_tbl, training_tbl, validation_tbl, test_tbl = DataSets.load_boston_housing_data(connection_context)
```
# Create Data Frames
Create the data frames for the full, test, training, and validation sets.
Let us also do some data exploration.
## Define Datasets - Training, validation, and test sets
Data frames are used to keep references to data so that computation on large data sets can happen inside HANA. Trying to bring the entire data set into the client will likely result in out-of-memory exceptions.
The original/full dataset is split into training, test and validation sets. In the example below, they reside in different tables.
```
full_set = connection_context.table(full_tbl)
training_set = connection_context.table(training_tbl)
validation_set = connection_context.table(validation_tbl)
test_set = connection_context.table(test_tbl)
```
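For illustration only: if the split did not already exist as separate HANA tables, a comparable split could be sketched locally with pandas on synthetic data. The column names and the 60/20/20 proportions here are assumptions for the sketch, not what the data loader necessarily uses.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the full Boston housing table (506 rows)
rng = np.random.default_rng(42)
full = pd.DataFrame({"ID": range(506), "MEDV": rng.uniform(5, 50, size=506)})

# Shuffle once, then slice into 60% training / 20% validation / 20% test
shuffled = full.sample(frac=1, random_state=42).reset_index(drop=True)
n = len(shuffled)
training_set = shuffled.iloc[:int(0.6 * n)]
validation_set = shuffled.iloc[int(0.6 * n):int(0.8 * n)]
test_set = shuffled.iloc[int(0.8 * n):]
print(len(training_set), len(validation_set), len(test_set))
```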
## Simple Exploration
Let us look at the number of rows in the data set
```
print('Number of rows in full set: {}'.format(full_set.count()))
print('Number of rows in training set: {}'.format(training_set.count()))
print('Number of rows in validation set: {}'.format(validation_set.count()))
print('Number of rows in test set: {}'.format(test_set.count()))
```
### Let's look at the columns
```
print(full_set.columns)
```
### Let's look at the data types
```
full_set.dtypes()
```
### Set up the features and labels for the model
```
features=['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'BLACK', 'LSTAT']
label='MEDV'
```
# Create model using training data
For demonstration, we will create two models, model and model_with_id, one where we have a unique id in the training set and one where there is none.
We are using the Random Forest regression routine in this example.
Documentation is <a href="https://help.sap.com/http.svc/rc/DRAFT/3f0dbe754b194c42a6bf3405697b711f/2.0.031/en-US/html/index.html">here</a>
## Preprocessing
SAP HANA Predictive Analytics Library takes DOUBLE and INTEGER data types for most numeric types. Since we have DECIMALs and TINYINTs in our data set, we cast them to the types required by PAL.
```
# Cast to correct types so PAL can consume it.
dfts = training_set.cast(['CRIM', "ZN", "INDUS", "NOX", "RM", "AGE", "DIS", "PTRATIO", "BLACK", "LSTAT", "MEDV"], "DOUBLE")
dfts = dfts.cast(["CHAS", "RAD", "TAX"], "INTEGER")
dfts = dfts.to_head("ID")
dfts.head(5).collect()
```
## Create the model
Although we had seen graphically that only a few features had an impact on housing prices, let us use all the features to create a model. We will then use the model to check for importance of the features.
```
# We build the model without IDs. Project only the features and the label.
df = dfts.select(features, label)
model = trees.RandomForestRegressor(connection_context)
model.fit(df, features=features, label=label)
```
### SQL statements executed
Calling PAL directly would require a number of SQL statements; all of that is encapsulated in the Python library functions.
## Model analysis
Let's just see what features are most important.
Note that we are using a sort function. The property `feature_importances_` is automatically set when the fit() method is called above.
```
model.feature_importances_.sort(['IMPORTANCE'], desc=True).collect()
```
__As you can see above, LSTAT, RM, NOX, and PTRATIO seem to have the most impact on prices.__
# Predict using test set
Let us now do some predictions and see how well the model generalizes.
The predict() method always takes a unique identifier so each prediction can be tied to a specific data row. This way, the caller (the Python programmer) can join with the original data set to get the remaining values for that unique row. The test_set has columns of types that PAL does not handle, so those columns are cast to accepted types.
In order to look at the predicted value as well as the true value, the name of the unique identifier for rows in the result table is renamed to PREDICTED_ID. This result table is joined with the test set so the predicted and true value can be compared.
For the predictions we look at the standard error. The standard error is defined as the number of standard deviations away the prediction is from the true value.
```
df_test = test_set.cast(['CRIM', "ZN", "INDUS", "NOX", "RM", "AGE", "DIS", "PTRATIO", "BLACK", "LSTAT", "MEDV"], "DOUBLE")
df_test = df_test.cast(["CHAS", "RAD", "TAX"], "INTEGER")
df_test = df_test.to_head("ID")
# Note that we are renaming the column ID in the result of predict()
result_df = model.predict(df_test, key= 'ID', features=features).rename_columns({'ID': 'PREDICTED_ID'})
# Note the use of join() method to join two tables.
jdf = result_df.join(test_set, '{}."PREDICTED_ID"={}."ID"'.format(result_df.name, test_set.name), how='inner')
```
### Predictions
Let us look at the predictions. The predicted values are in 'SCORE' and the actual values are in 'MEDV', so we rename the 'SCORE' column to 'PREDICTED'.
In addition, the column 'CONFIDENCE' holds the standard error, the number of standard deviations the actual value is from the predicted value. This column is renamed to 'STANDARD_ERROR'.
```
jdf.select(['ID', 'SCORE', 'MEDV', 'CONFIDENCE']).rename_columns({"CONFIDENCE": "STANDARD_ERROR", "SCORE": "PREDICTED"}).sort("STANDARD_ERROR", desc=False).head(5).collect()
```
### Out of bag error
Let us look at the out-of-bag error, which is a method of measuring the prediction error.
Here we look at the first 4 rows
```
model.oob_error_.head(4).collect()
```
## Scoring
We now score the results from our test data. The scoring function we use is R^2.
__In the function below, PAL is not invoked but a query is directly executed against data in HANA__
```
r2_score = model.score(df_test, key='ID', features=features, label=label)
print("r2 score is {}".format(r2_score))
```
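Behind the score, R^2 compares the residual sum of squares to the total sum of squares around the mean. A small self-contained illustration with made-up values (not taken from the Boston data):

```python
import numpy as np

# Hypothetical true and predicted MEDV values (illustrative only)
y_true = np.array([24.0, 21.6, 34.7, 33.4])
y_pred = np.array([23.1, 22.0, 35.2, 30.9])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2 = 1 - ss_res / ss_tot
print("r2 score is {:.4f}".format(r2))
```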
## Model
The model is available and can be saved for later predictions.
```
# The generated model is in the database.
model.model_.head(4).collect()
```
```
%reload_ext autoreload
%autoreload 2
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import os
import re
import pickle
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import rc
rc('text', usetex=True)
def bold_text(string):
return r'\textbf{{{}}}'.format(string)
from IPython.display import Markdown
def printmd(string):
"""Embed the input string into Markdown."""
display(Markdown(string))
def list_files(startpath):
level_colours = {0: '#339fff', 1: '#ff5b33'}
for root, dirs, files in os.walk(startpath):
if os.path.basename(root) == startpath:
continue
level = root.replace(startpath, '').count(os.sep) - 1
indent = ' ' * 4 * (level)
printmd('<pre>{}<b><font color={}>{}</font></b></pre>'.format(indent, level_colours[level], os.path.basename(root)))
if len(files) > 0:
print('{}{}'.format(indent, files))
```
# Importing data
Explore the contents of the folder with all data files
```
data_folder = 'session_210302'
printmd('**Data contents**')
list_files(data_folder)
```
Store all data in the form ```{(market, treatment): {'deals': df_deals, 'games': df_games, 'offers': df_offers, 'players': df_players}}```
```
all_data = {}
data_types = []
for path, folders, files in os.walk(data_folder):
    for file in files:
        treatment = tuple(path.split(os.sep)[1:])  # os.sep keeps this portable across OSes
        dtype = re.match(r'^.*_(.*)\.csv.*$', file).group(1)
        data_types.append(dtype)
        if treatment not in all_data.keys():
            all_data[treatment] = {}
        all_data[treatment][dtype] = pd.read_csv(os.path.join(path, file))
data_types = set(data_types)
```
Check whether all .csv files share the same structure and print out the names of their columns
```
for dtype in data_types:
printmd('**{}**'.format(dtype))
data = [d[dtype] for d in all_data.values()]
all([(data[0].columns.intersection(df.columns) == data[0].columns).all() for df in data])
data[0].columns.to_list()
```
Note:\
```var_id``` global id\
```var_iid``` local id
## Game information
```
all_data[('Externalities', 'bystanders_negative')]['games'].columns.to_list()
```
Find all columns with non-constant values
```
for treatment, data in all_data.items():
print(treatment, list(data['games'].columns[data['games'].nunique() > 1]))
for treatment, data in all_data.items():
printmd('**{}**'.format(treatment))
data['games'][['game_iid', 'title', 'elapsed_time']]
```
## Player information
```
all_data[('Externalities', 'bystanders_negative')]['players'].columns.to_list()
```
Find all columns with non-constant values
```
for treatment, data in all_data.items():
print(treatment, list(data['players'].columns[data['players'].nunique() > 1]))
```
## Offer information
```
all_data[('Externalities', 'bystanders_negative')]['offers'].columns.to_list()
for treatment, data in all_data.items():
printmd('**{}**'.format(treatment))
data_offers = data['offers']
print('status: {}'.format(set(data_offers['status'])))
print('type: {}'.format(set(data_offers['type'])))
printmd('status == ```Accepted``` if and only if the bid/ask resulted in a deal')
set(data_offers[data_offers['status'] == 'Replaced']['matched_price'].dropna())
set(data_offers[data_offers['status'] == 'Expired']['matched_price'].dropna())
set(data_offers[data_offers['matched_price'].notna()]['status'])
printmd('type == ```Auto``` corresponds to accepting a deal')
data_offers[(data_offers['type'] == 'Auto') & (data_offers['matched_price'].isna())]
```
Add treatments information and remove redundant/unnecessary columns
```
all_data.keys()
treatment_names = {
('Externalities', 'bystanders_negative'): 'FullExtNeg',
('Externalities', 'bystanders_positive'): 'FullExtPos',
('Externalities', 'normal'): 'FullExtNorm',
('LimitedAsks', 'black_box'): 'BBLimS',
('LimitedAsks', 'open_book'): 'FullLimS'
}
for treatment, data in all_data.items():
#data['offers'].drop(['game_id', 'round_id', 'status'], axis=1, inplace=True)
# Keep the status column
data['offers'].drop(['game_id', 'round_id'], axis=1, inplace=True)
data['offers']['treatment'] = treatment_names[treatment]
data['offers'].rename({'game_iid': 'game', 'round_iid': 'round', 'amount': 'bid',
'player_id': 'id', 'matched_price': 'price'}, axis=1, inplace=True)
```
Add ```match_id``` and ```match_time```
```
for treatment, data in all_data.items():
for idx, row in data['deals'].iterrows():
game, rnd, match_time, buyer, seller, askID, bidID, bprice, sprice = row[['game_iid', 'round_iid', 'time', 'buyer_id',
'seller_id', 'ask_id', 'bid_id', 'bprice', 'sprice']]
game_round = (data['offers']['game'] == game) & (data['offers']['round'] == rnd)
ask_row = (data['offers']['offer_db_id'] == askID)
bid_row = (data['offers']['offer_db_id'] == bidID)
data['offers'].loc[game_round & ask_row, 'match_time'] = match_time
data['offers'].loc[game_round & ask_row, 'match_id'] = buyer
data['offers'].loc[game_round & ask_row, 'price_temp'] = sprice
data['offers'].loc[game_round & bid_row, 'match_time'] = match_time
data['offers'].loc[game_round & bid_row, 'match_id'] = seller
data['offers'].loc[game_round & bid_row, 'price_temp'] = bprice
for treatment, data in all_data.items():
data['offers']['price'].equals(data['offers']['price_temp'])
for treatment, data in all_data.items():
data['offers'].drop(['price_temp'], axis=1, inplace=True)
```
Add ```valuation```
```
for treatment, data in all_data.items():
for (game, idx), dfi in data['offers'].groupby(['game', 'id']):
val = data['players'][data['players']['player_id'] == idx]['rprice'].values[0]
data['offers'].loc[dfi.index, 'valuation'] = val
```
Rearrange to match the order in the rest of the data
```
for treatment, data in all_data.items():
data['offers'] = data['offers'][['treatment', 'game', 'round', 'time', 'id', 'side', 'valuation',
'bid', 'price', 'match_id', 'match_time', 'type', 'status']]
```
# Merging data
Store all datasets in a single dataframe
```
# DataFrame.append() was removed in pandas 2.0; pd.concat is the idiomatic replacement
df = pd.concat([data['offers'] for data in all_data.values()], ignore_index=True)
```
Create globally unique subject IDs
```
# Create globally unique subject IDs
df['old_id'] = df['id']
df['id'] = df.groupby(['treatment', 'game', 'id']).ngroup()
# Update the column with match IDs accordingly
for (treatment, game), df_game in df.groupby(['treatment', 'game']):
for idx, row in df_game[df_game['match_id'].notna()].iterrows():
df.loc[idx, 'match_id'] = df_game[df_game['old_id'] == row['match_id']]['id'].iloc[0]
df.drop(columns=['old_id'], axis=1, inplace=True)
```
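The `ngroup()` call above assigns one integer label per unique (treatment, game, id) combination; a toy illustration of its behavior:

```python
import pandas as pd

# Three distinct (treatment, game, id) combinations -> labels 0, 1, 2
demo = pd.DataFrame({
    "treatment": ["A", "A", "A", "B"],
    "game": [1, 1, 2, 1],
    "id": [7, 7, 7, 7],
})
demo["global_id"] = demo.groupby(["treatment", "game", "id"]).ngroup()
print(demo["global_id"].tolist())  # [0, 0, 1, 2]
```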
Cast the valuations to ```int```
```
(df['valuation'] % 1 == 0).all()
df['valuation'] = df['valuation'].astype(int)
```
When a buyer and a seller are automatically matched under the first-price mechanism, a new entry with the bid/ask equal to the resulting price is automatically generated for the buyer/seller who submitted the bid/ask last. Remove all such entries and copy the corresponding prices to the entries with the bids/asks submitted last.
```
df[['type', 'status']].drop_duplicates()
df.groupby(['type', 'status']).size()
```
The status of type ```Auto``` can only be ```Accepted```
```
set(df[df['type'] == 'Auto']['status'])
```
The status of a bid/ask is set to ```Accepted``` if and only if it results in a deal
```
set(df[df['price'].notna()]['status'])
df[df['status'] == 'Accepted']['price'].isna().any()
```
Each bid–ask pair striking a deal is stored as follows: the first of the two is recorded as ``Manual``, the second as ``Auto``.
```
df_prices = df[df['price'].notna()]
bid_ask_pairs = {'MM': 0, 'MA': 0, 'AA': 0}
for (treatment, game, rnd), dfr in df_prices.groupby(['treatment', 'game', 'round']):
for row_id, row in dfr.iterrows():
if row['id'] < row['match_id']:
id1 = row['id']
id2 = row['match_id']
types = {dfr[dfr['id'] == id1]['type'].iloc[0], dfr[dfr['id'] == id2]['type'].iloc[0]}
if len(types) == 2:
bid_ask_pairs['MA'] += 1
elif types == {'Manual'}:
bid_ask_pairs['MM'] += 1
else:
bid_ask_pairs['AA'] += 1
bid_ask_pairs
```
```Auto``` always takes place after ```Manual``` (or, possibly, simultaneously)
A match is made at most 1 second after a bid and an ask are compatible
```
times = {'same': 0, 'M then A': 0, 'A then M': 0}
indices = {'M then A': 0, 'A then M': 0}
delays_to_match = []
for (treatment, game, rnd), dfr in df_prices.groupby(['treatment', 'game', 'round']):
for row_id, row in dfr.iterrows():
if row['id'] < row['match_id']:
match = dfr[dfr['id'].isin([row['id'], row['match_id']])]
types = set(match['type'])
if len(types) == 2:
M_time = match[match['type'] == 'Manual']['time'].iloc[0]
A_time = match[match['type'] == 'Auto']['time'].iloc[0]
M_id = match[match['type'] == 'Manual'].index
A_id = match[match['type'] == 'Auto'].index
if M_time == A_time:
times['same'] += 1
elif M_time < A_time:
times['M then A'] += 1
else:
times['A then M'] += 1
if M_id < A_id:
indices['M then A'] += 1
else:
indices['A then M'] += 1
if int(match['match_time'].iloc[0]) != max(match['time']):
delays_to_match.append(int(match['match_time'].iloc[0]) - max(match['time']))
times
indices
delays_to_match
```
<font color=blue>The redundant rows (automatic matching enforced by the computer) correspond to ```Auto``` bids/asks following ```Replaced``` bids/asks which were high/low enough to result in a deal</font>
```
df_new = df.copy()
df_new['redundant'] = False
status = {'Accepted': 0, 'Replaced': 0, 'Expired': 0}
for (treatment, game, rnd, idx), dfi in df_new.groupby(['treatment', 'game', 'round', 'id']):
for row_id, row in dfi.iterrows():
if row['type'] == 'Auto':
if len(dfi) > 1:
preceding = dfi.loc[:row.name].iloc[-2]
status[preceding['status']] += 1
if preceding['status'] == 'Replaced':
if row['side'] == 'Buyer':
if preceding['bid'] >= row['bid']:
df_new.loc[row.name, 'redundant'] = True
df_new.loc[preceding.name, 'price'] = row['price']
df_new.loc[preceding.name, 'match_id'] = row['match_id']
df_new.loc[preceding.name, 'match_time'] = row['match_time']
else:
if preceding['bid'] <= row['bid']:
df_new.loc[row.name, 'redundant'] = True
df_new.loc[preceding.name, 'price'] = row['price']
df_new.loc[preceding.name, 'match_id'] = row['match_id']
df_new.loc[preceding.name, 'match_time'] = row['match_time']
status
len(df_new)
len(df)
df_new.drop(['redundant', 'price', 'match_id', 'match_time'], axis=1).equals(df.drop(['price', 'match_id', 'match_time'], axis=1))
df_new = df_new[~df_new['redundant']]
df_new.drop('redundant', axis=1, inplace=True)
len(df_new)
df_new.groupby('type').size()
df_prices = df_new[df_new['price'].notna()]
delays_to_match = []
for (treatment, game, rnd), dfr in df_prices.groupby(['treatment', 'game', 'round']):
for row_id, row in dfr.iterrows():
if row['id'] < row['match_id']:
match = dfr[dfr['id'].isin([row['id'], row['match_id']])]
if (len(match) != 2) or (match['match_time'].count() != 2) or (match['match_id'].count() != 2) or (match['price'].count() != 2):
                print('Some data is missing')
if int(match['match_time'].iloc[0]) != max(match['time']):
delays_to_match.append(int(match['match_time'].iloc[0]) - max(match['time']))
delays_to_match
for treatment, df_treatment in df.groupby('treatment'):
printmd(treatment)
diff = pd.merge(df, df_new, how='outer', suffixes=('','_y'), indicator=True)
diff = diff[diff['_merge'] != 'both']
diff.sort_values(['treatment', 'game', 'round', 'time', 'id']).iloc[1:51]
df = df_new.copy()
```
# Overview of the data
```
index = pd.MultiIndex.from_tuples(df[['treatment', 'game']].drop_duplicates().itertuples(index=False, name=None),
names=['Treatment', 'Game'])
overview = pd.DataFrame(index=index, columns=['Buyers', 'Sellers', 'Bids', 'Asks'])
for (treatment, game, side), df_side in df.groupby(['treatment', 'game', 'side']):
if side == 'Buyer':
overview.loc[(treatment, game), 'Buyers'] = len(set(df_side['id']))
overview.loc[(treatment, game), 'Bids'] = len(df_side)
elif side == 'Seller':
overview.loc[(treatment, game), 'Sellers'] = len(set(df_side['id']))
overview.loc[(treatment, game), 'Asks'] = len(df_side)
else:
print('No side provided.')
overview
```
# Exporting data
## Externalities
```
df_ext = df[df['treatment'].str.contains('Ext')].copy()
```
Create globally unique subject IDs
```
# Create globally unique subject IDs
df_ext['old_id'] = df_ext['id']
df_ext['id'] = df_ext.groupby(['treatment', 'game', 'id']).ngroup()
# Update the column with match IDs accordingly
for (treatment, game), df_game in df_ext.groupby(['treatment', 'game']):
for idx, row in df_game[df_game['match_id'].notna()].iterrows():
df_ext.loc[idx, 'match_id'] = df_game[df_game['old_id'] == row['match_id']]['id'].iloc[0]
df_ext.drop(columns=['old_id'], axis=1, inplace=True)
df_ext
df_ext.to_csv('../Data/data_externalities.csv', index=False)
```
## Restricted asks
```
df_LimS = df[df['treatment'].str.contains('LimS')].copy()
```
Create globally unique subject IDs
```
# Create globally unique subject IDs
df_LimS['old_id'] = df_LimS['id']
df_LimS['id'] = df_LimS.groupby(['treatment', 'game', 'id']).ngroup()
# Update the column with match IDs accordingly
for (treatment, game), df_game in df_LimS.groupby(['treatment', 'game']):
for idx, row in df_game[df_game['match_id'].notna()].iterrows():
df_LimS.loc[idx, 'match_id'] = df_game[df_game['old_id'] == row['match_id']]['id'].iloc[0]
df_LimS.drop(columns=['old_id'], axis=1, inplace=True)
df_LimS
df_LimS.to_csv('../Data/data_restricted_asks.csv', index=False)
```
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
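As a rough sketch of what `cv2.inRange()` computes for color selection, here is a NumPy re-implementation (illustrative only, not the OpenCV API itself): pixels inside the per-channel bounds map to 255, everything else to 0.

```python
import numpy as np

def in_range(img, lower, upper):
    """NumPy sketch of cv2.inRange(): 255 where every channel is within bounds."""
    mask = np.all((img >= lower) & (img <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

# One bright (near-white) pixel and one dark pixel
img = np.array([[[200, 200, 200], [30, 30, 30]]], dtype=np.uint8)
mask = in_range(img, np.array([180, 180, 180]), np.array([255, 255, 255]))
print(mask)  # the bright pixel is selected (255), the dark one rejected (0)
```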
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
    # This pipeline reads images with cv2.imread(), which loads BGR
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Use cv2.COLOR_RGB2GRAY instead if the image was read with mpimg.imread()
    # return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
def canny(img, method):
"""Applies the Canny transform"""
""" Bi-Modal Distribution"""
    if method == "OTSU":
        high_threshold, th3 = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        low_threshold = high_threshold * 0.5
elif method == "auto":
sigma = 0.33
v = np.median(img)
low_threshold = int(max(0, (1.0 - sigma) * v))
high_threshold = int(min(255, (1.0 + sigma) * v))
else:
ratio = 1/3
high_threshold = 150
low_threshold = high_threshold * ratio
#print("Lowth: {}, Highth: {}".format(low_threshold,high_threshold) )
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
xLeft = []
xRight = []
yLeft = []
yRight = []
for line in lines:
for x1,y1,x2,y2 in line:
m = (y2-y1)/(x2-x1)
#import pdb; pdb.set_trace()
angle = abs(math.atan(m) * 180/math.pi)
if angle >=20 and angle <= 70 :
if m < 0:
xLeft.append(x1)
xLeft.append(x2)
yLeft.append(y1)
yLeft.append(y2)
ext_line(img,xLeft,yLeft)
else:
xRight.append(x1)
xRight.append(x2)
yRight.append(y1)
yRight.append(y2)
ext_line(img,xRight,yRight)
def ext_line(img,x,y):
z = np.polyfit(x,y,1)
#z_right = np.polyfit(xRight,yRight,1)
# extrapolate the top and bottom of the lane. y_top = vertices, and y_bottom = img.shape[0]
#boundry = vertices[0]
top_y = 330
top_x = int((top_y-z[1])/z[0])
bottom_y = img.shape[0]
bottom_x = int((bottom_y-z[1])/z[0])
cv2.line(img, (bottom_x, bottom_y), (top_x, top_y), [255, 0, 0], 8)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
def save_image(img, name):
mpimg.imsave('./output_images/{0}'.format(name),img)
```
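The 20-70 degree slope filter inside `draw_lines()` can be isolated as a small helper to show what it accepts and rejects (a sketch for illustration, not part of the original pipeline):

```python
import math

def keep_segment(x1, y1, x2, y2):
    """Return True if the segment's absolute inclination is between 20 and 70 degrees."""
    m = (y2 - y1) / (x2 - x1)
    angle = abs(math.atan(m) * 180 / math.pi)
    return 20 <= angle <= 70

print(keep_segment(0, 0, 10, 10))  # 45 degrees -> kept
print(keep_segment(0, 0, 10, 1))   # roughly 5.7 degrees -> rejected
```

Segments close to horizontal (shadows, car edges) or close to vertical are unlikely to be lane lines, which is why the angle band is used before fitting the left and right lines.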
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
test_images = os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
```
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
# Read in and grayscale the image
def my_pipeline(img):
gray_img = grayscale(img)
#plt.imshow(gray_img, cmap= "gray")
#plt.show()
#Define a kernel size and apply Gaussian smoothing
kernel_size = 3
blur_gray = gaussian_blur(gray_img,kernel_size)
#Define our parameters for Canny and apply
edges = canny(blur_gray,"manual")
# This time we are defining a four sided polygon to mask
imshape = img.shape
vertices = np.array([[(0,imshape[0]),(450, 325), (490, 325), (imshape[1],imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
#Define the Hough transform parameters
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 40 # minimum number of votes (intersections in Hough grid cell)
min_line_len = 90 #minimum number of pixels making up a line
max_line_gap = 150 # maximum gap in pixels between connectable line segments
line_img = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)
end_img = weighted_img(line_img, img)
return end_img
def plot_allimages():
for image in test_images:
path = 'test_images/' + str(image)
#path = os.path.join(os.getcwd(),image_name)
img = cv2.imread(path)
processed_img = my_pipeline(img)
save_image(processed_img,image)
plt.imshow(processed_img)
plt.show()
plot_allimages()
```
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result = my_pipeline(image)
return result
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
#clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you built the pipeline and tuned its parameters successfully, you probably have the Hough line segments drawn onto the road. But what about identifying the full extent of each lane and marking it clearly, as in the example video (P1_example.mp4)? Think about defining a single line that runs the full length of the visible lane, based on the segments you identified with the Hough transform. As mentioned previously, try averaging and/or extrapolating the detected segments to map out the full extent of the lane lines.**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
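One common approach (a minimal sketch, not the project's reference solution): split the Hough segments into left and right groups by slope, least-squares fit each group, and extrapolate the fit between the bottom of the image and the top of the region of interest. The slope threshold and the choice to fit x as a function of y are assumptions:

```
import numpy as np

def split_segments(lines, slope_threshold=0.3):
    """Split Hough segments into left/right lane candidates by slope.
    In image coordinates y grows downward, so the left lane has a
    negative slope and the right lane a positive slope."""
    left, right = [], []
    for x1, y1, x2, y2 in lines:
        if x1 == x2:
            continue  # skip vertical segments
        slope = (y2 - y1) / (x2 - x1)
        if slope < -slope_threshold:
            left.append((x1, y1, x2, y2))
        elif slope > slope_threshold:
            right.append((x1, y1, x2, y2))
    return left, right

def fit_lane_line(segments, y_bottom, y_top):
    """Least-squares fit one line through all segment endpoints and
    extrapolate it between y_bottom and y_top.
    Fitting x as a function of y keeps near-vertical lanes stable."""
    xs = [x for x1, y1, x2, y2 in segments for x in (x1, x2)]
    ys = [y for x1, y1, x2, y2 in segments for y in (y1, y2)]
    slope, intercept = np.polyfit(ys, xs, 1)
    x_bottom = int(round(slope * y_bottom + intercept))
    x_top = int(round(slope * y_top + intercept))
    return (x_bottom, y_bottom, x_top, y_top)
```

Calling `fit_lane_line` on each group inside `draw_lines` (with `y_bottom` the image height and `y_top` the top of your region-of-interest mask) yields one solid line per lane.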
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to write up the report as a PDF or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
# Dask Tutorial
<div class="alert-info">
### Overview
* **teaching:** 20 minutes
* **exercises:** 0
* **questions:**
* How does Dask parallelize computations in Python?
</div>
### Table of contents
1. [**Dask primer**](#Dask-primer)
1. [**Dask clusters**](#Dask-Clusters)
1. [**Dask dataframe**](#Dask-Dataframe)
1. [**Dask arrays**](#Dask-Arrays)
1. [**Dask delayed**](#Dask-Delayed)
## Dask Primer
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg"
width="30%"
align=right
alt="Dask logo">
Dask is a flexible parallel computing library for analytic computing. Dask provides dynamic parallel task scheduling and high-level big-data collections like `dask.array` and `dask.dataframe`. More on dask here: https://docs.dask.org/en/latest/
_Note: Pieces of this notebook come from the following sources:_
- https://github.com/rabernat/research_computing
- https://github.com/dask/dask-examples
## Dask Clusters
Dask needs a collection of computing resources in order to perform parallel computations. Dask Clusters have different names corresponding to different computing environments (for example, [LocalCluster](https://distributed.dask.org/en/latest/local-cluster.html) for your Laptop, [PBSCluster](http://jobqueue.dask.org/) for your HPC, or [Kubernetes Cluster](http://kubernetes.dask.org/) for machines on the Cloud). Each cluster has a certain number of computing resources called 'Workers', that each get allocated CPU and RAM. The dask scheduling system maps jobs to each worker on a cluster for you, so the syntax is mostly the same once you initialize a cluster!
```
# Let's start simple with a LocalCluster that makes use of all the cores and RAM we have on a single machine
from dask.distributed import Client, LocalCluster
cluster = LocalCluster()
# explicitly connect to the cluster we just created
client = Client(cluster)
client
```
## Dask Dataframe
If you are working with a very large Pandas dataframe, you can consider parallelizing computations by turning it into a Dask Dataframe. Dask Dataframes split a dataframe into partitions along an index. They support a large subset of the Pandas API. You can find additional details and examples here: https://examples.dask.org/dataframe.html
```
# Although this is a small CSV file, we'll reuse our same example from before!
# Load csv results from server into a Pandas DataFrame
import dask.dataframe as dd
server = 'https://webservices.volcano.si.edu/geoserver/GVP-VOTW/ows?'
query = 'service=WFS&version=2.0.0&request=GetFeature&typeName=GVP-VOTW:Smithsonian_VOTW_Holocene_Volcanoes&outputFormat=csv'
# blocksize=None means use a single partition
df = dd.read_csv(server+query, blocksize=None)
# We only see the metadata, the actual data are only computed when requested.
df
# We can break up the table into 4 partitions to map out to each core:
df = df.repartition(npartitions=4)
df
# Let's say we want to know the minimum last eruption year for all volcanoes
last_eruption_year_min = df.Last_Eruption_Year.min()
last_eruption_year_min
# Instead of getting the actual value we see dd.Scalar, which represents a recipe for actually calculating this value
last_eruption_year_min.visualize(format='svg')
# To get the value, call the .compute() method
# NOTE: this is slower than using pandas directly... for small data you often don't need parallel computing!
last_eruption_year_min.compute()
```
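What that `dd.Scalar` recipe encodes is a two-level reduction: a minimum per partition, then the minimum of the partial results. The same idea can be sketched with plain pandas on a hypothetical toy column (the values below are made up):

```
import pandas as pd

# Hypothetical stand-in for the volcano table
toy = pd.DataFrame({'Last_Eruption_Year': [1991, -2460, 1883, 2018, 1783, -550, 79, 1707]})

# Split into 4 "partitions" by row position, reduce each one,
# then reduce the partial results
size = -(-len(toy) // 4)  # ceiling division
partitions = [toy.iloc[i:i + size] for i in range(0, len(toy), size)]
partial_mins = [p.Last_Eruption_Year.min() for p in partitions]
overall_min = min(partial_mins)

assert overall_min == toy.Last_Eruption_Year.min()
```

Dask builds exactly this kind of plan for you and runs the per-partition pieces in parallel.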
## Dask Arrays
A dask array looks and feels a lot like a numpy array.
However, a dask array doesn't directly hold any data.
Instead, it symbolically represents the computations needed to generate the data.
Nothing is actually computed until the actual numerical values are needed.
This mode of operation is called "lazy"; it allows one to build up complex, large calculations symbolically before turning them over to the scheduler for execution.
If we want to create a numpy array of all ones, we do it like this:
```
import numpy as np
shape = (1000, 4000)
ones_np = np.ones(shape)
ones_np
```
This array contains exactly 32 MB of data:
```
print('%.1f MB' % (ones_np.nbytes / 1e6))
```
Now let's create the same array using dask's array interface.
```
import dask.array as da
ones = da.ones(shape)
ones
```
This works, but we didn't tell dask how to split up the array, so it is not optimized for distributed computation.
A crucial difference with dask is that we must specify the `chunks` argument. "Chunks" describes how the array is split up over many sub-arrays.

_source: [Dask Array Documentation](http://dask.pydata.org/en/latest/array-overview.html)_
There are [several ways to specify chunks](http://dask.pydata.org/en/latest/array-creation.html#chunks).
In this lecture, we will use a block shape.
```
chunk_shape = (1000, 1000)
ones = da.ones(shape, chunks=chunk_shape)
ones
```
Notice that we just see a symbolic representation of the array, including its shape, dtype, and chunksize.
No data has been generated yet.
When we call `.compute()` on a dask array, the computation is triggered and the dask array becomes a numpy array.
```
ones.compute()
```
In order to understand what happened when we called `.compute()`, we can visualize the dask _graph_, the symbolic operations that make up the array
```
ones.visualize(format='svg')
```
Our array has four chunks. To generate it, dask calls `np.ones` four times and then concatenates the results into one array.
Rather than immediately loading a dask array (which puts all the data into RAM), it is more common to reduce the data somehow. For example:
```
sum_of_ones = ones.sum()
sum_of_ones.visualize(format='svg')
```
Here we see dask's strategy for finding the sum. This simple example illustrates the beauty of dask: it automatically designs an algorithm appropriate for custom operations with big data.
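That per-chunk strategy can be imitated in plain NumPy (chunk boundaries assumed to match the (1000, 1000) chunks above): sum each chunk, then sum the partial sums.

```
import numpy as np

full = np.ones((1000, 4000))

# Sum each (1000, 1000) chunk independently, then combine the partials --
# the same two-level plan the dask graph encodes
partials = [full[:, i:i + 1000].sum() for i in range(0, 4000, 1000)]
total = sum(partials)

assert total == full.sum()
```

The per-chunk sums are independent, which is what lets dask schedule them in parallel.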
If we make our operation more complex, the graph gets more complex.
```
fancy_calculation = (ones * ones[::-1, ::-1]).mean()
fancy_calculation.visualize(format='svg')
```
### A Bigger Calculation
The examples above were toy examples; the data (32 MB) is nowhere near big enough to warrant the use of dask.
We can make it a lot bigger!
```
bigshape = (200000, 4000)
big_ones = da.ones(bigshape, chunks=chunk_shape)
big_ones
print('%.1f MB' % (big_ones.nbytes / 1e6))
```
This dataset is 6.4 GB, rather than 32 MB! This is probably close to or greater than the amount of available RAM that you have in your computer. Nevertheless, dask has no problem working on it.
_Do not try to `.visualize()` this array!_
When doing a big calculation, dask also has some tools to help us understand what is happening under the hood. Let's watch the dashboard again as we do a bigger computation.
```
big_calc = (big_ones * big_ones[::-1, ::-1]).mean()
result = big_calc.compute()
result
```
### Reduction
All the usual numpy methods work on dask arrays.
You can also apply numpy functions directly to a dask array, and it will stay lazy.
```
big_ones_reduce = (np.cos(big_ones)**2).mean(axis=1)
big_ones_reduce
```
Plotting also triggers computation, since we need the actual values.
```
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (12,8)
plt.plot(big_ones_reduce)
```
## Dask Delayed
Dask.delayed is a simple and powerful way to parallelize existing code. It allows users to delay function calls into a task graph with dependencies. Dask.delayed doesn't provide any fancy parallel algorithms like Dask.dataframe, but it does give the user complete control over what they want to build.
Systems like Dask.dataframe are built with Dask.delayed. If you have a problem that is parallelizable, but isn't as simple as just a big array or a big dataframe, then dask.delayed may be the right choice for you.
## Create simple functions
These functions do simple operations like add two numbers together, but they sleep for a random amount of time to simulate real work.
```
import time
def inc(x):
time.sleep(0.1)
return x + 1
def dec(x):
time.sleep(0.1)
return x - 1
def add(x, y):
time.sleep(0.2)
return x + y
```
We can run them like normal Python functions below
```
%%time
x = inc(1)
y = dec(2)
z = add(x, y)
z
```
These ran one after the other, in sequence. Note though that the first two lines `inc(1)` and `dec(2)` don't depend on each other; we *could* have called them in parallel had we been clever.
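For comparison, the "clever" manual version can be sketched with the standard library: submit the two independent calls to a thread pool so their sleeps overlap, and only the final addition waits for both.

```
import time
from concurrent.futures import ThreadPoolExecutor

def inc(x):
    time.sleep(0.1)
    return x + 1

def dec(x):
    time.sleep(0.1)
    return x - 1

with ThreadPoolExecutor() as pool:
    fx = pool.submit(inc, 1)  # runs concurrently with dec(2)
    fy = pool.submit(dec, 2)
    z = fx.result() + fy.result()  # the add step waits for both

print(z)
```

Dask.delayed, introduced next, discovers this same parallelism automatically from the task graph instead of requiring manually managed futures.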
## Annotate functions with Dask Delayed to make them lazy
We can call `dask.delayed` on our functions to make them lazy. Rather than compute their results immediately, they record what we want to compute as a task into a graph that we'll run later on parallel hardware.
```
import dask
inc = dask.delayed(inc)
dec = dask.delayed(dec)
add = dask.delayed(add)
```
Calling these lazy functions is now almost free. We're just constructing a graph
```
%%time
x = inc(1)
y = dec(2)
z = add(x, y)
z
```
## Visualize computation
```
z.visualize(format='svg', rankdir='LR')
```
## Run in parallel
Call `.compute()` when you want your result as a normal Python object
If you started `Client()` above then you may want to watch the status page during computation.
```
%%time
z.compute()
```
## Parallelize Normal Python code
Now we use Dask in normal for-loopy Python code. This generates graphs instead of doing computations directly, but still looks like the code we had before. Dask is a convenient way to add parallelism to existing workflows.
```
%%time
zs = []
for i in range(256):
x = inc(i)
y = dec(x)
z = add(x, y)
zs.append(z)
zs = dask.persist(*zs) # trigger computation in the background
```
```
%load_ext autoreload
%autoreload 2
from aflow.entries import Entry
a = {
    "compound": "Be2O2",
    "auid": "aflow:ed51b7b3938f117f",
    "aurl": "aflowlib.duke.edu:AFLOWDATA/ICSD_WEB/HEX/Be1O1_ICSD_15620",
    "agl_thermal_conductivity_300K": "53.361",
    "Egap": "7.4494"
}
A = Entry(**a)
A.kpoints
from aflow.caster import _kpoints
_kpoints(r"16,16,8;17,17,9;\Gamma-M,M-K,K-\Gamma,\Gamma-A,A-L,L-H,H-A,L-M,K-H;20")
from aflow.keywords import *
from aflow.keywords import reset
reset()
k = ((Egap > 6) | (Egap < 21)) & (PV_cell < 13)
reset()
k1 = ((Egap > 6) | (Egap < 21)) & ((PV_cell < 13) | (PV_cell > 2))
str(k1)
reset()
k3 = ((Egap > 0) & (Egap < 2) | (Egap == 5))
str(k3)
str(PV_cell)
str(~PV_cell)
str(PV_cell)
k = (data_source == 'aflowlib') | (species % 'Si')
str(k)
reset()
k2 = (data_source < 'aflow') & (species < 'Ag')
str(k2)
%load_ext autoreload
%autoreload 2
import aflow
from aflow.keywords import *
Si = aflow.search(catalog="icsd").filter(species == 'Si').select(positions_cartesian)
for i, entry in enumerate(Si[90:110]):
print(i, entry.aurl)
sorted(Si.responses[2].keys())
sisl = Si[0:10]
sisl._iter
sisl._iter, sisl._max_entry
len(Si.responses)
for entry in sisl:
print(entry.positions_cartesian)
ss = slice(0, 10)
ss
import json
keys = json.loads("""{"__schema^2__":{"__comment__":["The zeroth element of any object or array in this document is meta.","If last element is null, element parent considered optional.","If last element is '.', element value can be anything.","If last element is '', element value can be nothing.","This document is the AAPI schema, it is self validating and order sensitive.","."],"class":["intended for document organization, defines major section. Must be one of","API only","chemistry","crystal","electronics","thermodynamics","magnetics","scintillation","mechanical","optical properties","other","calculation"],"delimiter":["An ordered set of single character seperators for distinguishing plural type property values",null],"description":["intended for popup help boxes, describes the current property: freeform text","."],"example":["Actual result that may occur in API or search context, developmental: structured text","."],"expression":["intended for materials reports, developmental. Must be one of","declarative","directive","derivative"],"format":["intended for printf style formating of property value: corresponds to the type attribute","."],"inclusion":["intended for search filters and materials reports. Must be one of","mandatory","conditional","optional","forbidden"],"search":[["intended for search and stat, Must be one of","equals -> exact match input (select or freeform) to value","contains -> substring match (select or freeform) in value","range -> bounded match (select or freeform) in value"],"equals","contains","range",null],"status":["Development stage of property. 
Must be one of","production","development","deprecated","reserved"],"subclass":["intended for document organization, defines minor section","label","calculation parameters","computational resources","version","provenance","real space lattice","bravais lattice of the crystal","point group of the crystal","bravais lattice of the lattice","super lattice","reciprocal space lattice","space group","parameters",""],"syntax":["Actual setting that may be used in API or search context, developmental: structured text","."],"title":["intended for labeling property in document rendering: freeform text (HTML?)","."],"type":["intended for DB and document type handling: must be one of","string","strings","number","numbers"],"units":["units for search filter number in HTML: optional",null],"verification":["Optional list of property references designed to certify that the result is contextually relevant.",null]},"Bravais_lattice_orig":{"__comment__":[""],"description":"Returns the Bravais lattice of the original unrelaxed structure before the calculation.","title":"original bravais lattice","format":"%s","class":"crystal","subclass":"bravais lattice of the crystal","type":"string","inclusion":"optional","expression":"declarative","example":"Bravais_lattice_orig=MCLC","status":"production","syntax":"$aurl/?Bravais_lattice_orig"},"Bravais_lattice_relax":{"__comment__":[""],"description":"Returns the Bravais lattice of the original relaxed structure after the calculation.","title":"relaxed bravais lattice","format":"%s","class":"crystal","subclass":"bravais lattice of the crystal","type":"string","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","forces","kpoints","stress_tensor"],"example":"Bravais_lattice_relax=MCLC","status":"production","syntax":"$aurl/?Bravais_lattice_relax"},"Egap":{"__comment__":[""],"description":"Band gap calculated with the approximations and pseudopotentials described by other keywords.","title":"energy 
gap","format":"%s","class":"electronics","subclass":"","type":"number","units":"eV","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"Egap=2.5","status":"production","syntax":"$aurl/?Egap"},"Egap_fit":{"__comment__":[""],"description":"Simple cross-validated correction (fit) of Egap.","title":"fitted band gap","format":"%s","class":"electronics","subclass":"","type":"number","units":"eV","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"Egap_fit=3.5","status":"production","syntax":"$aurl/?Egap_fit"},"Egap_type":{"__comment__":[""],"description":"Given a band gap, this keyword describes if the system is a metal, a semi-metal, an insulator with direct or indirect band gap.","title":"band gap type","format":"%s","class":"electronics","subclass":"","type":"string","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"Egap_type=insulator_direct","status":"production","syntax":"$aurl/?Egap_type"},"PV_atom":{"__comment__":[""],"description":"Pressure multiplied by volume of the atom.","title":"atomic pressure*volume","format":"%s","class":"mechanical","subclass":"","type":"number","units":"eV/atom","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"PV_atom=12.13","status":"production","syntax":"$aurl/?PV_atom"},"PV_cell":{"__comment__":[""],"description":"Pressure multiplied by volume of the unit cell.","title":"unit cell pressure*volume","format":"%s","class":"mechanical","subclass":"","type":"number","units":"eV","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"PV_cell=12.13","status":"production","syntax":"$aurl/?PV_cell"},"Pearson_symbol_orig":{"__comment__":[""],"description":"Returns the Pearson symbol of the original-unrelaxed structure before the calculation.","title":"original pearson 
symbol","format":"%s","class":"crystal","subclass":"bravais lattice of the crystal","type":"string","inclusion":"mandatory","expression":"declarative","example":"Pearson_symbol_orig=mS32","status":"production","syntax":"$aurl/?Pearson_symbol_orig"},"Pearson_symbol_relax":{"__comment__":[""],"description":"Returns the Pearson symbol of the relaxed structure after the calculation.","title":"relaxed pearson symbol","format":"%s","class":"crystal","subclass":"bravais lattice of the crystal","type":"string","inclusion":"mandatory","expression":"derivative","verification":["stress_tensor"],"example":"Pearson_symbol_relax=mS32","status":"production","syntax":"$aurl/?Pearson_symbol_relax"},"Pulay_stress":{"__comment__":[""],"description":"Returns a metric of the basis set inconsistency for the calculation.","title":"Pulay Stress","format":"%s","class":"mechanical","subclass":"","type":"number","units":"kbar","inclusion":"mandatory","expression":"derivative","example":"pulay_stress=10.0","status":"development","syntax":"$aurl/?pulay_stress"},"Pullay_stress":{"__comment__":[""],"description":"Returns a metric of the basis set inconsistency for the calculation.","title":"Pulay Stress","format":"%s","class":"mechanical","subclass":"","type":"number","units":"kbar","inclusion":"mandatory","expression":"derivative","example":"Pullay_stress=10.0","status":"deprecated","syntax":"$aurl/?Pullay_stress"},"ael_bulk_modulus_reuss":{"__comment__":[""],"description":"Returns the bulk modulus as calculated using the Reuss method with AEL.","title":"AEL Reuss bulk modulus","format":"%s","class":"mechanical","subclass":"","type":"number","units":"GPa","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"ael_bulk_modulus_reuss=105.315","status":"production","syntax":"$aurl/?ael_bulk_modulus_reuss"},"ael_bulk_modulus_voigt":{"__comment__":[""],"description":"Returns the bulk modulus as calculated using the Voigt method with AEL.","title":"AEL 
Voigt bulk modulus","format":"%s","class":"mechanical","subclass":"","type":"number","units":"GPa","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"ael_bulk_modulus_voiht=105.315","status":"production","syntax":"$aurl/?ael_bulk_modulus_voigt"},"ael_bulk_modulus_vrh":{"__comment__":[""],"description":"Returns the bulk modulus as calculated using the Voigt-Reuss-Hill average with AEL.","title":"AEL VRH bulk modulus","format":"%s","class":"mechanical","subclass":"","type":"number","units":"GPa","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"ael_bulk_modulus_vrh=105.315","status":"production","syntax":"$aurl/?ael_bulk_modulus_vrh"},"ael_elastic_anisotropy":{"__comment__":[""],"description":"Returns the elastic anisotropy as calculated with AEL.","title":"AEL elastic anisotropy","format":"%s","class":"mechanical","subclass":"","type":"number","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"ael_elastic_anisotropy=0.0008165","status":"production","syntax":"$aurl/?ael_elastic_anisotropy"},"ael_poisson_ratio":{"__comment__":[""],"description":"Returns the istropic Poisson ratio as calculated with AEL.","title":"AEL Poisson ratio","format":"%s","class":"mechanical","subclass":"","type":"number","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"ael_poisson_ratio=0.216","status":"production","syntax":"$aurl/?ael_poisson_ratio"},"ael_shear_modulus_reuss":{"__comment__":[""],"description":"Returns the shear modulus as calculated using the Reuss method with AEL.","title":"AEL Reuss shear 
modulus","format":"%s","class":"mechanical","subclass":"","type":"number","units":"GPa","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"ael_shear_modulus_reuss=73.787","status":"production","syntax":"$aurl/?ael_shear_modulus_reuss"},"ael_shear_modulus_voigt":{"__comment__":[""],"description":"Returns the shear modulus as calculated using the Voigt method with AEL.","title":"AEL Voigt shear modulus","format":"%s","class":"mechanical","subclass":"","type":"number","units":"GPa","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"ael_shear_modulus_voigt=73.799","status":"production","syntax":"$aurl/?ael_shear_modulus_voigt"},"ael_shear_modulus_vrh":{"__comment__":[""],"description":"Returns the shear modulus as calculated using the Voigt-Reuss-Hill average with AEL.","title":"AEL VRH shear modulus","format":"%s","class":"mechanical","subclass":"","type":"number","units":"GPa","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"ael_shear_modulus_vrh=73.793","status":"production","syntax":"$aurl/?ael_shear_modulus_vrh"},"aflow_version":{"__comment__":[""],"description":"Returns the version number of AFLOW used to perform the calculation.","title":"aflow version","format":"%s","class":"calculation","subclass":"version","type":"string","inclusion":"optional","expression":"declarative","example":"aflow_version=aflow30641","status":"production","syntax":"$aurl/?aflow_version"},"aflowlib_date":{"__comment__":[""],"description":"Returns the date of the AFLOW post-processor which generated the entry for the library.","title":"material generation 
date","format":"%s","class":"calculation","subclass":"version","type":"string","inclusion":"optional","expression":"declarative","example":"aflowlib_date=20140204_13:10:39_GMT-5","status":"production","syntax":"$aurl/?aflowlib_date"},"aflowlib_entries":{"__comment__":[""],"description":"For projects and set-layer entries, aflowlib_entries lists the available sub-entries which are associated with the $aurl of the subdirectories. By parsing $aurl/?aflowlib_entries (containing $aurl/aflowlib_entries_number entries) the user finds further locations to interrogate.","title":"aflowlib entries","format":"%s","class":"API only","subclass":"","type":"strings","delimiter":",","inclusion":"conditional","expression":"directive","example":"aflowlib_entries=AgAl,AgAs,AgAu,AgB_h,AgBa_sv,AgBe_sv,AgBi_d,AgBr,AgCa_sv,...","status":"production","syntax":"$aurl/?aflowlib_entries"},"aflowlib_entries_number":{"__comment__":[""],"description":"For projects and set-layer entries, aflowlib_entrieslists the available sub-entries which are associated with the $aurl of the subdirectories. 
By parsing $aurl/?aflowlib_entries (containing $aurl/aflowlib_entries_number entries) the user finds further locations to interrogate.","title":"aflowlib entry count","format":"%s","class":"API only","subclass":"","type":"number","inclusion":"conditional","expression":"directive","example":"aflowlib_entries_number=654","status":"production","syntax":"$aurl/?aflowlib_entries_number"},"aflowlib_version":{"__comment__":[""],"description":"Returns the version of the AFLOW post-processor which generated the entry for the library.","title":"aflowlib version","format":"%s","class":"calculation","subclass":"version","type":"string","inclusion":"optional","expression":"declarative","example":"aflowlib_version=3.1.103","status":"production","syntax":"$aurl/?aflowlib_version"},"agl_acoustic_debye":{"__comment__":[""],"description":"Returns the acoustic Debye temperature as calculated with AGL.","title":"AGL acoustic Debye temperature","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"K","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"agl_acoustic_debye=492","status":"production","syntax":"$aurl/?agl_acoustic_debye"},"agl_bulk_modulus_isothermal_300K":{"__comment__":[""],"description":"Returns the isothermal bulk modulus at 300K as calculated with AGL.","title":"AGL isothermal bulk modulus 300K","format":"%s","class":"mechanical","subclass":"","type":"number","units":"GPa","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"agl_bulk_modulus_isothermal_300K=96.6","status":"production","syntax":"$aurl/?agl_bulk_modulus_isothermal_300K"},"agl_bulk_modulus_static_300K":{"__comment__":[""],"description":"Returns the static bulk modulus at 300K as calculated with AGL.","title":"AGL static bulk modulus 
300K","format":"%s","class":"mechanical","subclass":"","type":"number","units":"GPa","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"agl_bulk_modulus_static_300K=99.6","status":"production","syntax":"$aurl/?agl_bulk_modulus_static_300K"},"agl_debye":{"__comment__":[""],"description":"Returns the Debye temperature as calculated with AGL.","title":"AGL Debye temperature","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"K","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"agl_debye=620","status":"production","syntax":"$aurl/?agl_debye"},"agl_gruneisen":{"__comment__":[""],"description":"Returns the Gruneisen parameter as calculated with AGL.","title":"AGL Gruneisen parameter","format":"%s","class":"thermodynamics","subclass":"","type":"number","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"agl_gruneisen=2.06","status":"production","syntax":"$aurl/?agl_gruneisen"},"agl_heat_capacity_Cp_300K":{"__comment__":[""],"description":"Returns the heat capacity at constant pressure as calculated with AGL at 300K.","title":"AGL heat capacity Cp","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"kB/cell","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"agl_heat_capacity_Cp_300K=5.502","status":"production","syntax":"$aurl/?agl_heat_capacity_Cp_300K"},"agl_heat_capacity_Cv_300K":{"__comment__":[""],"description":"Returns the heat capacity at constant volume as calculated with AGL at 300K.","title":"AGL heat capacity 
Cv","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"kB/cell","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"agl_heat_capacity_Cv_300K=4.901","status":"production","syntax":"$aurl/?agl_heat_capacity_Cv_300K"},"agl_thermal_conductivity_300K":{"__comment__":[""],"description":"Returns the thermal conductivity as calculated with AGL at 300K.","title":"AGL thermal conductivity","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"W/m*K","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"agl_thermal_conductivity_300K=24.41","status":"production","syntax":"$aurl/?agl_thermal_conductivity_300K"},"agl_thermal_expansion_300K":{"__comment__":[""],"description":"Returns the thermal expansion as calculated with AGL at 300K.","title":"AGL thermal expansion","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"1/K","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"agl_thermal_expansion_300K=4.997e-05","status":"production","syntax":"$aurl/?agl_thermal_expansion_300K"},"auid":{"__comment__":[""],"description":"AFLOWLIB Unique Identifier for the entry, AUID, which can be used as a publishable object identifier.","title":"AFLOWLIB Unique Identifier","format":"%s","class":"calculation","subclass":"","type":"string","inclusion":"mandatory","expression":"declarative","example":"auid=aflow:e9c6d914c4b8d9ca","status":"production","syntax":"$aurl/?auid"},"aurl":{"__comment__":[""],"description":"AFLOWLIB Uniform Resource Locator returns the AURL of the entry.","title":"AFLOWLIB Uniform Resource 
Locator","format":"%s","class":"calculation","subclass":"","type":"string","inclusion":"mandatory","expression":"declarative","example":"aurl=aflowlib.duke.edu:AFLOWDATA/LIB3_RAW/Bi_dRh_pvTi_sv/T0003.ABC:LDAU2","status":"production","syntax":"$aurl/?aurl"},"author":{"__comment__":[""],"description":"Returns the name (not necessarily an individual) and affiliation associated with authorship of the data.","title":"author","format":"%s","class":"calculation","subclass":"provenance","type":"strings","delimiter":",","inclusion":"optional","expression":"declarative","example":"author=Marco_Buongiorno_Nardelli,Ohad_Levy,Jesus_Carrete","status":"development","syntax":"$aurl/?author"},"bader_atomic_volumes":{"__comment__":[""],"description":"Returns the volume of each atom of the primitive cell as calculated by the Bader Atoms in Molecules Analysis. This volume encapsulates the electron density associated with each atom above a threshold of 0.0001 electrons.","title":"atomic volume per atom","format":"%s","class":"chemistry","subclass":"","type":"numbers","delimiter":",","units":"Å<sup>3</sup>","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"bader_atomic_volumes=15.235,12.581,13.009","status":"production","syntax":"$aurl/?bader_atomic_volumes"},"bader_net_charges":{"__comment__":[""],"description":"Returns a comma delimited set of partial charges per atom of the primitive cell as calculated by the Bader Atoms in Molecules Analysis.","title":"partial charge per atom","format":"%s","class":"chemistry","subclass":"","type":"numbers","delimiter":",","units":"electrons","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"bader_net_charges=0.125,0.125,-0.25","status":"production","syntax":"$aurl/?bader_net_charges"},"calculation_cores":{"__comment__":[""],"description":"Number of processors/cores used for the calculation.","title":"used CPU 
cores","format":"%s","class":"calculation","subclass":"computational resources","type":"number","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"calculation_cores=32","status":"production","syntax":"$aurl/?calculation_cores"},"calculation_memory":{"__comment__":[""],"description":"The maximum memory used for the calculation.","title":"used RAM","format":"%s","class":"calculation","subclass":"computational resources","type":"number","units":"Megabytes","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"calculation_memory=32","status":"production","syntax":"$aurl/?calculation_memory"},"calculation_time":{"__comment__":[""],"description":"Total time taken for the calculation.","title":"used time","format":"%s","class":"calculation","subclass":"computational resources","type":"number","units":"seconds","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"calculation_time=32","status":"production","syntax":"$aurl/?calculation_time"},"catalog":{"__comment__":[""],"description":"Returns the context set for the calculation.","title":"catalog","format":"%s","class":"calculation","subclass":"version","type":"string","inclusion":"optional","expression":"declarative","example":"catalog=icsd","status":"production","syntax":"$aurl/?catalog"},"code":{"__comment__":[""],"description":"Returns the software name and version used to perform the simulation.","title":"ab initio code","format":"%s","class":"calculation","subclass":"version","type":"string","inclusion":"optional","expression":"declarative","example":"code=vasp.4.6.35","status":"production","syntax":"$aurl/?code"},"composition":{"__comment__":[""],"description":"Returns a comma delimited composition description of the structure entry in the calculated 
cell.","title":"composition","format":"%s","class":"chemistry","subclass":"","type":"numbers","delimiter":",","inclusion":"optional","expression":"declarative","example":"composition=2,6,6","status":"production","syntax":"$aurl/?composition"},"compound":{"__comment__":[""],"description":"Returns the composition description of the compound in the calculated cell.","title":"chemical formula","format":"%s","class":"chemistry","subclass":"","type":"string","inclusion":"mandatory","expression":"declarative","example":"compound=Co2Er6Si6","status":"production","syntax":"$aurl/?compound"},"corresponding":{"__comment__":[""],"description":"Returns the name (not necessarily an individual) and affiliation associated with the data origin concerning correspondence about data.","title":"coresponding","format":"%s","class":"calculation","subclass":"provenance","type":"strings","delimiter":",","inclusion":"optional","expression":"declarative","example":"corresponding=M_Buongiorno_Nardelli_mbn@unt.edu","status":"development","syntax":"$aurl/?corresponding"},"data_api":{"__comment__":[""],"description":"AFLOWLIB version of the entry, API.}","title":"REST API version","format":"%s","class":"API only","subclass":"","type":"string","inclusion":"mandatory","expression":"declarative","example":"data_api=aapi1.0","status":"production","syntax":"$aurl/?data_api"},"data_language":{"__comment__":[""],"description":"Gives the language of the data in AFLOWLIB.","title":"data language","format":"%s","class":"calculation","subclass":"version","type":"strings","delimiter":",","inclusion":"optional","expression":"declarative","example":"data_language=aflowlib","status":"production","syntax":"$aurl/?data_language"},"data_source":{"__comment__":[""],"description":"Gives the source of the data in AFLOWLIB.","title":"data 
source","format":"%s","class":"calculation","subclass":"version","type":"strings","delimiter":",","inclusion":"optional","expression":"declarative","example":"data_source=aflowlib","status":"production","syntax":"$aurl/?data_source"},"delta_electronic_energy_convergence":{"__comment__":[""],"description":"Returns the change in energy from the last step of the convergence iteration.","title":"Electronic Energy of Convergence Step","format":"%s","class":"calculation","subclass":"","type":"number","inclusion":"optional","expression":"derivative","example":"delta_electronic_energy_convergence=6.09588e-05","status":"development","syntax":"$aurl/?delta_electronic_energy_convergence"},"delta_electronic_energy_threshold":{"__comment__":[""],"description":"Returns the maximimum change in energy required for the convergence iteration.","title":"Electronic Energy of Convergence Threshold","format":"%s","class":"calculation","subclass":"","type":"number","inclusion":"optional","expression":"declarative","example":"delta_electronic_energy_threshold=0.0001","status":"development","syntax":"$aurl/?delta_electronic_energy_threshold"},"density":{"__comment__":[""],"description":"Returns the mass density in grams/cm3.","title":"mass density","format":"%s","class":"chemistry","subclass":"real space lattice","type":"number","units":"grams/cm<sup>3</sup>","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints","pressure_residual","stress_tensor"],"example":"density=7.76665","status":"production","syntax":"$aurl/?density"},"dft_type":{"__comment__":[""],"description":"Returns information about the pseudopotential type, the exchange correlation functional used (normal or hybrid) and use of GW.","title":"DFT 
type","format":"%s","class":"chemistry","subclass":"parameters","type":"strings","delimiter":",","inclusion":"optional","expression":"declarative","example":"dft_type=PAW_PBE,HSE06","status":"production","syntax":"$aurl/?dft_type"},"eentropy_atom":{"__comment__":[""],"description":"Returns the electronic entropy of the atom used to converge the ab initio calculation (smearing).","title":"atomistic electronic entropy","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"eV/atom","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"eentropy_atom=0.0011","status":"production","syntax":"$aurl/?eentropy_atom"},"eentropy_cell":{"__comment__":[""],"description":"Returns the electronic entropy of the unit cell used to converge the ab initio calculation (smearing).","title":"unit cell electronic entropy","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"eV/atom","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"eentropy_cell=0.0011","status":"production","syntax":"$aurl/?eentropy_cell"},"energy_atom":{"__comment__":[""],"description":"Returns the total ab initio energy per atom- the value of energy_cell/$N$).","title":"atomic energy","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"eV/atom","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints","pressure_residual","stress_tensor"],"example":"energy_atom=-82.1656","status":"production","syntax":"$aurl/?energy_atom"},"energy_cell":{"__comment__":[""],"description":"Returns the total ab initio energy of the unit cell, E. 
At T=0K and p=0, this is the internal energy of the system (per unit cell).","title":"unit cell energy","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"eV","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints","pressure_residual","stress_tensor"],"example":"energy_cell=-82.1656","status":"production","syntax":"$aurl/?energy_cell"},"energy_cutoff":{"__comment__":[""],"description":"Set of energy cut-offs used during the various steps of the calculations.","title":"energy cutoff","format":"%s","class":"calculation","subclass":"parameters","type":"numbers","delimiter":",","units":"eV","inclusion":"optional","expression":"declarative","example":"energy_cutoff=384.1,384.1,384.1","status":"production","syntax":"$aurl/?energy_cutoff"},"enthalpy_atom":{"__comment__":[""],"description":"Returns the enthalpy per atom- the value of enthalpy_cell/N).","title":"atomic enthalpy","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"eV/atom","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints","pressure_residual","stress_tensor"],"example":"enthalpy_atom=-82.1656","status":"production","syntax":"$aurl/?enthalpy_atom"},"enthalpy_cell":{"__comment__":[""],"description":"Returns the enthalpy of the system of the unit cell, H = E + PV.","title":"unit cell enthalpy","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"eV","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints","pressure_residual","stress_tensor"],"example":"enthalpy_cell=-82.1656","status":"production","syntax":"$aurl/?enthalpy_cell"},"enthalpy_formation_atom":{"__comment__":[""],"description":"Returns the formation enthalpy DeltaHFatomic per atom).","title":"atomic formation 
enthalpy","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"eV/atom","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"enthalpy_formation_atom=-33.1587","status":"production","syntax":"$aurl/?enthalpy_formation_atom"},"enthalpy_formation_cell":{"__comment__":[""],"description":"Returns the formation enthalpy DeltaHF per unit cell.","title":"unit cell formation enthalpy","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"eV","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"enthalpy_formation_cell=-33.1587","status":"production","syntax":"$aurl/?enthalpy_formation_cell"},"entropic_temperature":{"__comment__":[""],"description":"Returns the entropic temperature for the structure.","title":"entropic temperature","format":"%s","class":"thermodynamics","subclass":"","type":"number","units":"Kelvin","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"entropic_temperature=1072.1","status":"production","syntax":"$aurl/?entropic_temperature"},"files":{"__comment__":[""],"description":"Provides access to the input and output files used in the simulation (provenance data).","title":"I/O files","format":"%s","class":"calculation","subclass":"","type":"strings","delimiter":",","inclusion":"conditional","expression":"directive","example":"files=Bi_dRh_pv.33.cif,Bi_dRh_pv.33.png,CONTCAR.relax,CONTCAR.relax1,","status":"production","syntax":"$aurl/?files"},"forces":{"__comment__":[""],"description":"Final quantum mechanical forces (Fi,Fj,Fk) in the notation of the code.","title":"Quantum 
Forces","format":"%s","class":"mechanical","subclass":"","type":"numbers","delimiter":";,","units":"eV/Å","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"forces=0,-0.023928,0.000197;0,0.023928,-0.000197;...","status":"development","syntax":"$aurl/?forces"},"geometry":{"__comment__":[""],"description":"Returns geometrical data describing the unit cell in the usual a,b,c,alpha,beta,gamma notation.","title":"unit cell basis","format":"%s","class":"chemistry","subclass":"real space lattice","type":"numbers","delimiter":",","units":"Å","inclusion":"mandatory","expression":"declarative","example":"geometry=18.82,18.82,18.82,32.41,32.41,32.41","status":"production","verification":["energy_cutoff","kpoints","pressure_residual","stress_tensor"],"syntax":"$aurl/?geometry"},"keywords":{"__comment__":[""],"description":"This includes the list of keywords available in the entry, separated by commas.","title":"Title","format":"%s","class":"API only","subclass":"","type":"strings","delimiter":",","inclusion":"mandatory","expression":"directive","example":"keywords=aurl,auid,loop,code,compound,prototype,nspecies,natoms,...","status":"production","syntax":"$aurl/?keywords"},"kpoints":{"__comment__":[""],"description":"Set of k-point meshes uniquely identifying the various steps of the calculations, e.g. 
relaxation, static and electronic band structure (specifying the k-space symmetry points of the structure).","title":"K-point mesh","format":"%s","class":"calculation","subclass":"parameters","type":"numbers","delimiter":":,","inclusion":"optional","expression":"declarative","example":"kpoints=10,10,10;16,16,16;G-X-W-K-G-L-U-W-L-K+U-X","status":"production","syntax":"$aurl/?kpoints"},"lattice_system_orig":{"__comment__":[""],"description":"Return the lattice system and lattice variation (Brillouin zone) of the original-unrelaxed structure before the calculation.","title":"original lattice system","format":"%s","class":"crystal","subclass":"bravais lattice of the crystal","type":"string","inclusion":"mandatory","expression":"declarative","example":"lattice_system_orig=rhombohedral","status":"production","syntax":"$aurl/?lattice_system_orig"},"lattice_system_relax":{"__comment__":[""],"description":"Return the lattice system and lattice variation (Brillouin zone) of the relaxed structure after the calculation.","title":"relaxed lattice system","format":"%s","class":"crystal","subclass":"bravais lattice of the crystal","type":"string","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","forces","kpoints","stress_tensor"],"example":"lattice_system_relax=rhombohedral","status":"production","syntax":"$aurl/?lattice_system_relax"},"lattice_variation_orig":{"__comment__":[""],"description":"Return the lattice system and lattice variation (Brillouin zone) of the original-unrelaxed structure before the calculation.","title":"original lattice variation","format":"%s","class":"crystal","subclass":"bravais lattice of the crystal","type":"string","inclusion":"mandatory","expression":"declarative","example":"lattice_variation_orig=rhombohedral","status":"production","syntax":"$aurl/?lattice_variation_orig"},"lattice_variation_relax":{"__comment__":[""],"description":"Return the lattice system and lattice variation (Brillouin zone) of the relaxed 
structure after the calculation.","title":"relaxed lattice variation","format":"%s","class":"crystal","subclass":"bravais lattice of the crystal","type":"string","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","forces","kpoints","stress_tensor"],"example":"lattice_variation_relax=rhombohedral","status":"production","syntax":"$aurl/?lattice_variation_relax"},"ldau_TLUJ":{"__comment__":[""],"description":"This vector of numbers contains the parameters of the DFT+U calculations, based on a corrective functional inspired by the Hubbard model.","title":"on site coulomb interaction","format":"%s","class":"chemistry","subclass":"parameters","type":"numbers","delimiter":";,","inclusion":"mandatory","expression":"declarative","example":"ldau_TLUJ=2;2,0,0;5,0,0;0,0,0","status":"development","syntax":"$aurl/?ldau_TLUJ"},"loop":{"__comment__":[""],"description":"Informs the user of the type of post-processing that was performed.","title":"process category","format":"%s","class":"calculation","subclass":"parameters","type":"strings","delimiter":",","inclusion":"optional","expression":"directive","example":"loop=thermodynamics,bands,magnetic","status":"production","syntax":"$aurl/?loop"},"natoms":{"__comment__":[""],"description":"Returns the number of atoms in the unit cell of the structure entry. 
The number can be non integer if partial occupation is considered within appropriate approximations.","title":"unit cell atom count","format":"%s","class":"crystal","subclass":"real space lattice","type":"number","inclusion":"mandatory","expression":"declarative","example":"natoms=12","status":"production","syntax":"$aurl/?natoms"},"nbondxx":{"__comment__":[""],"description":"Nearest neighbors bond lengths of the relaxed structure per ordered set of species Ai,Aj greater than or equal to i.","title":"Nearest neighbors bond lengths","format":"%s","class":"crystal","subclass":"","type":"numbers","delimiter":",","units":"Å","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","forces","kpoints","pressure_residual","stress_tensor"],"example":"nbondxx=1.2599,1.0911,1.0911,1.7818,1.2599,1.7818","status":"production","syntax":"$aurl/?nbondxx"},"node_CPU_Cores":{"__comment__":[""],"description":"Information about the number of cores in the node/cluster where the calculation was performed.","title":"available CPU cores","format":"%s","class":"calculation","subclass":"computational resources","type":"number","inclusion":"optional","expression":"declarative","example":"node_CPU_Cores=12","status":"production","syntax":"$aurl/?node_CPU_Cores"},"node_CPU_MHz":{"__comment__":[""],"description":"Information about the CPU speed in the node/cluster where the calculation was performed.","title":"CPU rate","format":"%s","class":"calculation","subclass":"computational resources","type":"number","units":"Megahertz","inclusion":"optional","expression":"declarative","example":"node_CPU_MHz=12","status":"production","syntax":"$aurl/?node_CPU_MHz"},"node_CPU_Model":{"__comment__":[""],"description":"Information about the CPU model in the node/cluster where the calculation was performed.","title":"CPU model","format":"%s","class":"calculation","subclass":"computational 
resources","type":"string","inclusion":"optional","expression":"declarative","example":"node_CPU_Model=12","status":"production","syntax":"$aurl/?node_CPU_Model"},"node_RAM_GB":{"__comment__":[""],"description":"Information about the RAM in the node/cluster where the calculation was performed.","title":"available RAM","format":"%s","class":"calculation","subclass":"","type":"number","units":"Gigabytes","inclusion":"optional","expression":"declarative","example":"node_RAM_GB=12","status":"production","syntax":"$aurl/?node_RAM_GB"},"nspecies":{"__comment__":[""],"description":"Returns the number of species in the system (e.g., binary = 2, ternary = 3, etc.).","title":"species count","format":"%s","class":"chemistry","subclass":"","type":"number","inclusion":"mandatory","expression":"declarative","example":"nspecies=3","status":"production","syntax":"$aurl/?nspecies"},"positions_cartesian":{"__comment__":[""],"description":"Final Cartesian positions (xi,xj,xk) in the notation of the code.","title":"relaxed absolute positions","format":"%s","class":"other","subclass":"bravais lattice of the crystal","type":"numbers","delimiter":";,","units":"Å","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","forces","kpoints","pressure_residual","stress_tensor"],"example":"positions_cartesian=0,0,0;18.18438,0,2.85027;...","status":"development","syntax":"$aurl/?positions_cartesian"},"positions_fractional":{"__comment__":[""],"description":"Final fractional positions (xi,xj,xk) with respect to the unit cell as specified in $geometry.","title":"relaxed relative positions","format":"%s","class":"other","subclass":"","type":"numbers","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","forces","kpoints","pressure_residual","stress_tensor"],"example":"positions_fractional=0,0,0;0.25,0.25,0.25;...","status":"development","syntax":"$aurl/?positions_fractional"},"pressure":{"__comment__":[""],"description":"Returns the target 
pressure selected for the simulation.","title":"external pressure","format":"%s","class":"mechanical","subclass":"","type":"number","units":"kbar","inclusion":"mandatory","expression":"declarative","example":"pressure=10.0","status":"production","syntax":"$aurl/?pressure"},"pressure_final":{"__comment__":[""],"description":"Returns the external pressure achieved by the simulation.","title":"resulting pressure","format":"%s","class":"mechanical","subclass":"","type":"number","units":"kbar","inclusion":"mandatory","expression":"derivative","example":"pressure_final=10.0","status":"development","syntax":"$aurl/?pressure_final"},"pressure_residual":{"__comment__":[""],"description":"Returns the external pressure achieved by the simulation.","title":"residual pressure","format":"%s","class":"mechanical","subclass":"","type":"number","units":"kbar","inclusion":"mandatory","expression":"derivative","example":"pressure_residual=10.0","status":"development","syntax":"$aurl/?pressure_residual"},"prototype":{"__comment__":[""],"description":"Returns the AFLOW unrelaxed prototype which was used for the calculation.","title":"original prototype","format":"%s","class":"crystal","subclass":"label","type":"string","inclusion":"mandatory","expression":"declarative","example":"prototype=T0001.A2BC","status":"production","syntax":"$aurl/?prototype"},"scintillation_attenuation_length":{"__comment__":[""],"description":"Returns the scintillation attenuation length of the compound in cm.","title":"attenuation length","format":"%s","class":"scintillation","subclass":"","type":"number","units":"cm","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"scintillation_attenuation_length=2.21895","status":"production","syntax":"$aurl/?scintillation_attenuation_length"},"sg":{"__comment__":[""],"description":"Evolution of the space group of the compound. 
The first, second and third string represent space group name/number before the first, after the first, and after the last relaxation of the calculation.","title":"compound space group","format":"%s","class":"crystal","subclass":"space group","type":"strings","delimiter":",","inclusion":"mandatory","expression":"directive","verification":["energy_cutoff","forces","kpoints","stress_tensor"],"example":"sg=Fm-3m#225,Fm-3m#225,Fm-3m#225","status":"production","syntax":"$aurl/?sg"},"sg2":{"__comment__":[""],"description":"Evolution of the space group of the compound. The first, second and third string represent space group name/number before the first, after the first, and after the last relaxation of the calculation.","title":"refined compound space group","format":"%s","class":"crystal","subclass":"space group","type":"strings","delimiter":",","inclusion":"mandatory","expression":"directive","verification":["energy_cutoff","forces","kpoints","stress_tensor"],"example":"sg2=Fm-3m#225,Fm-3m#225,Fm-3m#225","status":"production","syntax":"$aurl/?sg2"},"spacegroup_orig":{"__comment__":[""],"description":"Returns the spacegroup number of the original-unrelaxed structure before the calculation.","title":"original space group number","format":"%s","class":"crystal","subclass":"bravais lattice of the crystal","type":"number","inclusion":"mandatory","expression":"declarative","example":"spacegroup_orig=225","status":"production","syntax":"$aurl/?spacegroup_orig"},"spacegroup_relax":{"__comment__":[""],"description":"Returns the spacegroup number of the relaxed structure after the calculation.","title":"relaxed space group number","format":"%s","class":"crystal","subclass":"bravais lattice of the crystal","type":"number","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","forces","kpoints","stress_tensor"],"example":"spacegroup_relax=225","status":"production","syntax":"$aurl/?spacegroup_relax"},"species":{"__comment__":[""],"description":"Species 
of the atoms in this material.","title":"atomic species","format":"%s","class":"chemistry","subclass":"","type":"strings","delimiter":",","inclusion":"mandatory","expression":"declarative","example":"species=Y,Zn,Zr","status":"production","syntax":"$aurl/?species"},"species_pp":{"__comment__":[""],"description":"Pseudopotentials of the atomic species.","title":"species pseudopotential(s)","format":"%s","class":"chemistry","subclass":"","type":"strings","delimiter":",","inclusion":"mandatory","expression":"declarative","example":"species_pp=Y,Zn,Zr","status":"production","syntax":"$aurl/?species_pp"},"species_pp_ZVAL":{"__comment__":[""],"description":"Returns the number of valence electrons of the atomic species.","title":"valence atoms per species","format":"%s","class":"calculation","subclass":"","type":"numbers","delimiter":",","units":"electrons","inclusion":"optional","expression":"declarative","example":"species_pp_ZVAL=3","status":"production","syntax":"$aurl/?species_pp_ZVAL"},"species_pp_version":{"__comment__":[""],"description":"Species of the atoms, pseudopotentials species, and pseudopotential versions.","title":"pseudopotential species/version","format":"%s","class":"chemistry","subclass":"","type":"strings","delimiter":",","inclusion":"mandatory","expression":"declarative","example":"species_pp_version=Y,Zn,Zr","status":"production","syntax":"$aurl/?species_pp_version"},"spinD":{"__comment__":[""],"description":"For spin polarized calculations, the spin decomposition over the atoms of the cell.","title":"atomic spin decomposition","format":"%s","class":"magnetics","subclass":"","type":"numbers","delimiter":",","units":"μ<sub>B</sub>","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"spinD=0.236,0.236,-0.023,1.005","status":"production","syntax":"$aurl/?spinD"},"spinF":{"__comment__":[""],"description":"For spin polarized calculations, the magnetization of the cell at the Fermi 
level.","title":"fermi level spin decomposition","format":"%s","class":"magnetics","subclass":"","type":"number","units":"μ<sub>B</sub>","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"spinF=0.410879","status":"production","syntax":"$aurl/?spinF"},"spin_atom":{"__comment__":[""],"description":"For spin polarized calculations, the magnetization per atom.","title":"atomic spin polarization","format":"%s","class":"magnetics","subclass":"","type":"number","units":"μ<sub>B</sub>/atom","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"spin_atom=2.16419","status":"production","syntax":"$aurl/?spin_atom"},"spin_cell":{"__comment__":[""],"description":"For spin polarized calculations, the total magnetization of the cell.","title":"unit cell spin polarization","format":"%s","class":"magnetics","subclass":"","type":"number","units":"μ<sub>B</sub>","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"spin_cell=2.16419","status":"production","syntax":"$aurl/?spin_cell"},"sponsor":{"__comment__":[""],"description":"Returns information about funding agencies and other sponsors for the data.","title":"sponsor","format":"%s","class":"calculation","subclass":"provenance","type":"strings","delimiter":",","inclusion":"optional","expression":"declarative","example":"sponsor=DOD_N000141310635,NIST_70NANB12H163","status":"development","syntax":"$aurl/?sponsor"},"stoich":{"__comment__":[""],"description":"Similar to composition, returns a comma delimited stoichiometry description of the structure entry in the calculated cell.","title":"unit cell 
stoichiometry","format":"%s","class":"chemistry","subclass":"","type":"numbers","delimiter":",","inclusion":"optional","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"stoichiometry=0.5,0.25,0.25","status":"deprecated","syntax":"$aurl/?stoichiometry"},"stoichiometry":{"__comment__":[""],"description":"Similar to composition, returns a comma delimited stoichiometry description of the structure entry in the calculated cell.","title":"unit cell stoichiometry","format":"%s","class":"chemistry","subclass":"","type":"numbers","delimiter":",","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"stoichiometry=0.5,0.25,0.25","status":"production","syntax":"$aurl/?stoichiometry"},"stress_tensor":{"__comment__":[""],"description":"Returns the stress tensor of the completed calculation.","title":"Stress Tensor","format":"%s","class":"mechanical","subclass":"","type":"numbers","inclusion":"mandatory","expression":"derivative","example":"stress_tensor=-0.96,-0,-0,-0,-0.96,-0,-0,-0,-0.96","status":"development","syntax":"$aurl/?stress_tensor"},"valence_cell_iupac":{"__comment__":[""],"description":"Returns IUPAC valence, the maximum number of univalent atoms that may combine with the atoms.","title":"unit cell IUPAC valence","format":"%s","class":"chemistry","subclass":"","type":"number","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"valence_cell_iupac=22","status":"production","syntax":"$aurl/?valence_cell_iupac"},"valence_cell_std":{"__comment__":[""],"description":"Returns standard valence, the maximum number of univalent atoms that may combine with the atoms.","title":"unit cell standard 
valence","format":"%s","class":"chemistry","subclass":"","type":"number","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","kpoints"],"example":"valence_cell_std=22","status":"production","syntax":"$aurl/?valence_cell_std"},"volume_atom":{"__comment__":[""],"description":"Returns the volume per atom in the unit cell.","title":"atomic volume","format":"%s","class":"crystal","subclass":"real space lattice","type":"number","units":"Å<sup>3</sup>/atom","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","forces","kpoints","pressure_residual","stress_tensor"],"example":"volume_atom=100.984","status":"production","syntax":"$aurl/?volume_atom"},"volume_cell":{"__comment__":[""],"description":"Returns the volume of the unit cell.","title":"unit cell volume","format":"%s","class":"crystal","subclass":"real space lattice","type":"number","units":"Å<sup>3</sup>","inclusion":"mandatory","expression":"derivative","verification":["energy_cutoff","forces","kpoints","pressure_residual","stress_tensor"],"example":"volume_cell=100.984","status":"production","syntax":"$aurl/?volume_cell"}}""")
keys["energy_cutoff"]
from aflow.entries import Entry
hasattr(Entry, "Egap")
```
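The `syntax` field of each keyword above shows that a property is retrieved by appending `?keyword` to an entry's AURL. Below is a minimal sketch of turning the example AURL from the listing into a fetchable URL; splitting host from path on the first colon is an assumption about the AURL convention, and actually retrieving the value requires network access:

```
# Build a REST query following the "$aurl/?keyword" syntax from the listing.
aurl = "aflowlib.duke.edu:AFLOWDATA/LIB3_RAW/Bi_dRh_pvTi_sv/T0003.ABC:LDAU2"
keyword = "energy_cutoff"

# Split host from path on the first colon only, so suffixes like ":LDAU2" survive.
host, path = aurl.split(":", 1)
query_url = f"http://{host}/{path}/?{keyword}"
print(query_url)
```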
## 1. Train your own word2vec representations as we did in the first example in the checkpoint, but experiment with the hyperparameters of the vectorization step. Modify the hyperparameters and rerun the classification models. Can you wrangle any improvements?
```
import numpy as np
import pandas as pd
import sklearn
import spacy
import re
import nltk
from nltk.corpus import gutenberg
import gensim
import warnings
warnings.filterwarnings("ignore")
nltk.download('gutenberg')
!python -m spacy download en_core_web_sm
# utility function for standard text cleaning
def text_cleaner(text):
    # visual inspection identifies a form of punctuation spaCy does not
    # recognize: the double dash '--'. Better get rid of it now!
    text = re.sub(r'--', ' ', text)
    text = re.sub(r"[\[].*?[\]]", "", text)
    text = re.sub(r"(\b|\s+\-?|^\-?)(\d+|\d*\.\d+)\b", " ", text)
    text = ' '.join(text.split())
    return text
# load and clean the data
persuasion = gutenberg.raw('austen-persuasion.txt')
alice = gutenberg.raw('carroll-alice.txt')
# the chapter indicator is idiosyncratic
persuasion = re.sub(r'Chapter \d+', '', persuasion)
alice = re.sub(r'CHAPTER .*', '', alice)
alice = text_cleaner(alice)
persuasion = text_cleaner(persuasion)
# parse the cleaned novels. This can take a bit.
nlp = spacy.load('en_core_web_sm')
alice_doc = nlp(alice)
persuasion_doc = nlp(persuasion)
# group into sentences
alice_sents = [[sent, "Carroll"] for sent in alice_doc.sents]
persuasion_sents = [[sent, "Austen"] for sent in persuasion_doc.sents]
# combine the sentences from the two novels into one data frame
sentences = pd.DataFrame(alice_sents + persuasion_sents, columns = ["text", "author"])
sentences.head()
# get rid of stop words and punctuation
# and lemmatize the tokens
for i, sentence in enumerate(sentences["text"]):
    sentences.loc[i, "text"] = [token.lemma_ for token in sentence if not token.is_punct and not token.is_stop]
```
Below, we train several word2vec models. In particular, models 1 through 3 try window sizes of 4, 6, and 8 with a vector size of 100, and models 4 through 6 repeat those window sizes with a vector size of 200:
```
# train word2vec on the sentences
model1 = gensim.models.Word2Vec(
sentences["text"],
workers=4,
min_count=1,
window=4,
sg=0,
sample=1e-3,
size=100,
hs=1
)
model2 = gensim.models.Word2Vec(
sentences["text"],
workers=4,
min_count=1,
window=6,
sg=0,
sample=1e-3,
size=100,
hs=1
)
model3 = gensim.models.Word2Vec(
sentences["text"],
workers=4,
min_count=1,
window=8,
sg=0,
sample=1e-3,
size=100,
hs=1
)
model4 = gensim.models.Word2Vec(
sentences["text"],
workers=4,
min_count=1,
window=4,
sg=0,
sample=1e-3,
size=200,
hs=1
)
model5 = gensim.models.Word2Vec(
sentences["text"],
workers=4,
min_count=1,
window=6,
sg=0,
sample=1e-3,
size=200,
hs=1
)
model6 = gensim.models.Word2Vec(
sentences["text"],
workers=4,
min_count=1,
window=8,
sg=0,
sample=1e-3,
size=200,
hs=1
)
word2vec_arr1 = np.zeros((sentences.shape[0],100))
word2vec_arr2 = np.zeros((sentences.shape[0],100))
word2vec_arr3 = np.zeros((sentences.shape[0],100))
word2vec_arr4 = np.zeros((sentences.shape[0],200))
word2vec_arr5 = np.zeros((sentences.shape[0],200))
word2vec_arr6 = np.zeros((sentences.shape[0],200))
for i, sentence in enumerate(sentences["text"]):
    word2vec_arr1[i,:] = np.mean([model1[lemma] for lemma in sentence], axis=0)
    word2vec_arr2[i,:] = np.mean([model2[lemma] for lemma in sentence], axis=0)
    word2vec_arr3[i,:] = np.mean([model3[lemma] for lemma in sentence], axis=0)
    word2vec_arr4[i,:] = np.mean([model4[lemma] for lemma in sentence], axis=0)
    word2vec_arr5[i,:] = np.mean([model5[lemma] for lemma in sentence], axis=0)
    word2vec_arr6[i,:] = np.mean([model6[lemma] for lemma in sentence], axis=0)
word2vec_arr1 = pd.DataFrame(word2vec_arr1)
word2vec_arr2 = pd.DataFrame(word2vec_arr2)
word2vec_arr3 = pd.DataFrame(word2vec_arr3)
word2vec_arr4 = pd.DataFrame(word2vec_arr4)
word2vec_arr5 = pd.DataFrame(word2vec_arr5)
word2vec_arr6 = pd.DataFrame(word2vec_arr6)
sentences1 = pd.concat([sentences[["author", "text"]],word2vec_arr1], axis=1)
sentences1.dropna(inplace=True)
sentences2 = pd.concat([sentences[["author", "text"]],word2vec_arr2], axis=1)
sentences2.dropna(inplace=True)
sentences3 = pd.concat([sentences[["author", "text"]],word2vec_arr3], axis=1)
sentences3.dropna(inplace=True)
sentences4 = pd.concat([sentences[["author", "text"]],word2vec_arr4], axis=1)
sentences4.dropna(inplace=True)
sentences5 = pd.concat([sentences[["author", "text"]],word2vec_arr5], axis=1)
sentences5.dropna(inplace=True)
sentences6 = pd.concat([sentences[["author", "text"]],word2vec_arr6], axis=1)
sentences6.dropna(inplace=True)
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
Y1 = sentences1['author']
Y2 = sentences2['author']
Y3 = sentences3['author']
Y4 = sentences4['author']
Y5 = sentences5['author']
Y6 = sentences6['author']
X1 = np.array(sentences1.drop(['text','author'], axis=1))
X2 = np.array(sentences2.drop(['text','author'], axis=1))
X3 = np.array(sentences3.drop(['text','author'], axis=1))
X4 = np.array(sentences4.drop(['text','author'], axis=1))
X5 = np.array(sentences5.drop(['text','author'], axis=1))
X6 = np.array(sentences6.drop(['text','author'], axis=1))
# We split the dataset into train and test sets
X_train1, X_test1, y_train1, y_test1 = train_test_split(X1, Y1, test_size=0.4, random_state=123)
X_train2, X_test2, y_train2, y_test2 = train_test_split(X2, Y2, test_size=0.4, random_state=123)
X_train3, X_test3, y_train3, y_test3 = train_test_split(X3, Y3, test_size=0.4, random_state=123)
X_train4, X_test4, y_train4, y_test4 = train_test_split(X4, Y4, test_size=0.4, random_state=123)
X_train5, X_test5, y_train5, y_test5 = train_test_split(X5, Y5, test_size=0.4, random_state=123)
X_train6, X_test6, y_train6, y_test6 = train_test_split(X6, Y6, test_size=0.4, random_state=123)
# Models
lr = LogisticRegression()
rfc = RandomForestClassifier()
gbc = GradientBoostingClassifier()
print("-----------------------Word2vec Model 1------------------------------")
lr.fit(X_train1, y_train1)
rfc.fit(X_train1, y_train1)
gbc.fit(X_train1, y_train1)
print("----------------------Logistic Regression Scores----------------------")
print('Training set score:', lr.score(X_train1, y_train1))
print('\nTest set score:', lr.score(X_test1, y_test1))
print("----------------------Random Forest Scores----------------------")
print('Training set score:', rfc.score(X_train1, y_train1))
print('\nTest set score:', rfc.score(X_test1, y_test1))
print("----------------------Gradient Boosting Scores----------------------")
print('Training set score:', gbc.score(X_train1, y_train1))
print('\nTest set score:', gbc.score(X_test1, y_test1))
print("-----------------------Word2vec Model 2------------------------------")
lr.fit(X_train2, y_train2)
rfc.fit(X_train2, y_train2)
gbc.fit(X_train2, y_train2)
print("----------------------Logistic Regression Scores----------------------")
print('Training set score:', lr.score(X_train2, y_train2))
print('\nTest set score:', lr.score(X_test2, y_test2))
print("----------------------Random Forest Scores----------------------")
print('Training set score:', rfc.score(X_train2, y_train2))
print('\nTest set score:', rfc.score(X_test2, y_test2))
print("----------------------Gradient Boosting Scores----------------------")
print('Training set score:', gbc.score(X_train2, y_train2))
print('\nTest set score:', gbc.score(X_test2, y_test2))
print("-----------------------Word2vec Model 3------------------------------")
lr.fit(X_train3, y_train3)
rfc.fit(X_train3, y_train3)
gbc.fit(X_train3, y_train3)
print("----------------------Logistic Regression Scores----------------------")
print('Training set score:', lr.score(X_train3, y_train3))
print('\nTest set score:', lr.score(X_test3, y_test3))
print("----------------------Random Forest Scores----------------------")
print('Training set score:', rfc.score(X_train3, y_train3))
print('\nTest set score:', rfc.score(X_test3, y_test3))
print("----------------------Gradient Boosting Scores----------------------")
print('Training set score:', gbc.score(X_train3, y_train3))
print('\nTest set score:', gbc.score(X_test3, y_test3))
print("-----------------------Word2vec Model 4------------------------------")
lr.fit(X_train4, y_train4)
rfc.fit(X_train4, y_train4)
gbc.fit(X_train4, y_train4)
print("----------------------Logistic Regression Scores----------------------")
print('Training set score:', lr.score(X_train4, y_train4))
print('\nTest set score:', lr.score(X_test4, y_test4))
print("----------------------Random Forest Scores----------------------")
print('Training set score:', rfc.score(X_train4, y_train4))
print('\nTest set score:', rfc.score(X_test4, y_test4))
print("----------------------Gradient Boosting Scores----------------------")
print('Training set score:', gbc.score(X_train4, y_train4))
print('\nTest set score:', gbc.score(X_test4, y_test4))
print("-----------------------Word2vec Model 5------------------------------")
lr.fit(X_train5, y_train5)
rfc.fit(X_train5, y_train5)
gbc.fit(X_train5, y_train5)
print("----------------------Logistic Regression Scores----------------------")
print('Training set score:', lr.score(X_train5, y_train5))
print('\nTest set score:', lr.score(X_test5, y_test5))
print("----------------------Random Forest Scores----------------------")
print('Training set score:', rfc.score(X_train5, y_train5))
print('\nTest set score:', rfc.score(X_test5, y_test5))
print("----------------------Gradient Boosting Scores----------------------")
print('Training set score:', gbc.score(X_train5, y_train5))
print('\nTest set score:', gbc.score(X_test5, y_test5))
print("-----------------------Word2vec Model 6------------------------------")
lr.fit(X_train6, y_train6)
rfc.fit(X_train6, y_train6)
gbc.fit(X_train6, y_train6)
print("----------------------Logistic Regression Scores----------------------")
print('Training set score:', lr.score(X_train6, y_train6))
print('\nTest set score:', lr.score(X_test6, y_test6))
print("----------------------Random Forest Scores----------------------")
print('Training set score:', rfc.score(X_train6, y_train6))
print('\nTest set score:', rfc.score(X_test6, y_test6))
print("----------------------Gradient Boosting Scores----------------------")
print('Training set score:', gbc.score(X_train6, y_train6))
print('\nTest set score:', gbc.score(X_test6, y_test6))
```
Model 6 performs best overall. In particular, the highest test score is achieved by combining model 6's vectors with gradient boosting, and the random forest classifier also achieves its highest score when trained on model 6's vectors.
Moreover, model 6's performance is superior to that of the model in the checkpoint.
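The six nearly identical training calls above differ only in `window` and `size`; a more maintainable pattern is to sweep a small parameter grid in a loop. A minimal sketch of that refactor (the `gensim` call is shown commented out, and assumes gensim < 4.0, where the vector dimensionality argument is named `size`):

```python
from itertools import product

# (window, size) pairs matching models 1-6 above
configs = list(product([4, 6, 8], [100, 200]))

# models = {
#     (w, s): gensim.models.Word2Vec(
#         sentences["text"], workers=4, min_count=1,
#         window=w, sg=0, sample=1e-3, size=s, hs=1)
#     for (w, s) in configs
# }

print(len(configs))  # 6 configurations
```

Looping over a config list also makes it trivial to add more window or size values later without copy-pasting another model block.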
| github_jupyter |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
from keras.datasets import mnist
# Digit recognition when data is in 'pixel form'
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Shape of the pictures
X_test[4,:,:].shape
df = pd.DataFrame(X_train[0,:,:])
df
img = X_test[30,:,:]
plt.imshow(img, cmap = 'gray')
plt.title(y_test[30])
plt.axis('off')
from keras.utils import to_categorical
X_train_new = X_train[:10000,:,:]
y_train_new = y_train[:10000]
X_test_new = X_test[:2500,:,:]
y_test_new = y_test[:2500]
y_train_new = to_categorical(y_train_new, 10)
y_test_new = to_categorical(y_test_new, 10)
X_train_new = X_train_new.reshape(10000, 28, 28, 1)
X_test_new = X_test_new.reshape(2500, 28, 28, 1)
y_test_new.shape
# Convolutional Neural Network for identifying the digits
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Dropout, Flatten, MaxPooling2D
model = Sequential()
model.add(Conv2D(32, kernel_size = (3, 3), input_shape = (28,28,1), activation = 'relu'))
model.add(MaxPooling2D((2,2)))
model.add(Conv2D(32, kernel_size = (3, 3), activation = 'relu'))
model.add(MaxPooling2D((2,2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(10, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(X_train_new, y_train_new, epochs = 15, batch_size = 128, validation_data = (X_test_new, y_test_new), verbose = 1)
score = model.evaluate(X_test_new, y_test_new, verbose = 0)
print('Test loss: ', score[0])
print('Accuracy: ', score[1])
# Plotting the training loss
history = model.history.history
plt.figure(figsize = (9,7))
plt.plot(history['loss'], lw = 2)
plt.title('Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
from sklearn.metrics import confusion_matrix, classification_report
prediction = model.predict(X_test_new)
prediction_classes = np.argmax(prediction, axis = 1)
y_true = np.argmax(y_test_new, axis = 1)
cm = confusion_matrix(y_true, prediction_classes)
print(classification_report(y_true, prediction_classes))
import seaborn as sns
plt.figure(figsize = (10,8))
sns.heatmap(cm, annot = True, cmap = 'viridis')
#b, t = plt.ylim()
#plt.ylim(b + 0.5, t - 0.5)
plt.title('Confusion Matrix')
# New prediction
new_sample = X_test_new[11:12,:,:,:]
new_sample.shape
new_pred = model.predict(new_sample)
new_pred = new_pred.ravel()
np.argmax(new_pred, axis = 0)
# Saving model for reproduction
# model.save('conv_model.h5')
from keras.models import load_model
reconstructed_model = load_model('conv_model.h5')
# Let's check:
# np.testing.assert_allclose(model.predict(new_sample), reconstructed_model.predict(new_sample))
# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
# reconstructed_model.fit(test_input, test_target)
# Creating my own digit picture using Paint
# Let's import them with the Pillow library
from PIL import Image
import matplotlib.image as mpimg
image = Image.open('numbers/number_eight.jpg')
image = image.resize((28, 28))
#image.save('numbers28/28X28number_eight.jpg')
image = mpimg.imread('numbers28/number00.jpg')
plt.imshow(image)
# Converting from RGB to grayscale and making prediction
def rgb2gray(rgb):
    return np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140])
gray = rgb2gray(image)
gray = gray.reshape(1, 28, 28, 1)
gray_pred = reconstructed_model.predict(gray)
print('Predicted value:', np.argmax(gray_pred))
import matplotlib.image as mpimg
def rgb2gray(rgb):
    return np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140])
def image():
    files = [f for f in os.listdir('numbers/') if f.endswith('.jpg')]
    predictions = []
    for i in range(len(files)):
        image = Image.open('numbers/' + files[i])
        image = image.resize((28, 28))
        image.save('numbers28/number0' + str(i) + '.jpg')
        image = mpimg.imread('numbers28/number0' + str(i) + '.jpg')
        gray = rgb2gray(image)
        gray = gray.reshape(1, 28, 28, 1)
        gray_pred = reconstructed_model.predict(gray)
        predictions.append(gray_pred.argmax())
    return predictions, image
def plot_images(predictions, images):
    truth = [8, 5, 4, 9, 1, 7, 6, 3, 2, 0]
    plt.figure(figsize = (12, 6))
    for i in range(len(truth)):
        plt.subplot(2, 5, i+1)
        plt.axis('off')
        image = mpimg.imread('numbers28/number0' + str(i) + '.jpg')
        color = 'green' if truth[i] == predictions[i] else 'red'
        plt.imshow(image)
        plt.title('Predicted value:\n' + str(predictions[i]), size = 12, color = color)
    plt.subplots_adjust(wspace = 0.2)
    return plt.show()
predictions, images = image()
plot_images(predictions, images)
```
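The `rgb2gray` helper above collapses RGB to a single channel with the ITU-R BT.601 luma weights (0.2989, 0.5870, 0.1140), matching the single-channel input the MNIST-trained network expects. A standalone sanity check with a made-up pixel:

```python
import numpy as np

def rgb2gray(rgb):
    # ITU-R BT.601 luma weights, same as in the notebook above
    return np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])

white = np.array([[[255.0, 255.0, 255.0]]])  # a single pure-white RGB pixel
print(rgb2gray(white)[0, 0])  # close to 255, since the weights sum to ~1
```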
| github_jupyter |
# SageMaker/DeepAR demo on electricity dataset
This notebook complements the [DeepAR introduction notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/deepar_synthetic/deepar_synthetic.ipynb).
Here, we will consider a real use case and show how to use DeepAR on SageMaker for predicting energy consumption of 370 customers over time, based on a [dataset](https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014) that was used in the academic papers [[1](https://media.nips.cc/nipsbooks/nipspapers/paper_files/nips29/reviews/526.html)] and [[2](https://arxiv.org/abs/1704.04110)].
In particular, we will see how to:
* Prepare the dataset
* Use the SageMaker Python SDK to train a DeepAR model and deploy it
* Make requests to the deployed model to obtain forecasts interactively
* Illustrate advanced features of DeepAR: missing values, additional time features, non-regular frequencies and category information
Running this notebook takes around 40 min on a ml.c4.2xlarge for the training, and inference is done on a ml.m4.xlarge (the usage time will depend on how long you leave your served model running).
For more information see the DeepAR [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html) or [paper](https://arxiv.org/abs/1704.04110).
```
%matplotlib inline
import sys
from urllib.request import urlretrieve
import zipfile
from dateutil.parser import parse
import json
from random import shuffle
import random
import datetime
import os
import boto3
import s3fs
import sagemaker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import IntSlider, FloatSlider, Checkbox
# set random seeds for reproducibility
np.random.seed(42)
random.seed(42)
sagemaker_session = sagemaker.Session()
```
Before starting, we can override the default values for the following:
- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these.
```
s3_bucket = sagemaker.Session().default_bucket() # replace with an existing bucket if needed
s3_prefix = 'deepar-electricity-demo-notebook' # prefix used for all data stored within the bucket
role = sagemaker.get_execution_role() # IAM role to use by SageMaker
region = sagemaker_session.boto_region_name
s3_data_path = "s3://{}/{}/data".format(s3_bucket, s3_prefix)
s3_output_path = "s3://{}/{}/output".format(s3_bucket, s3_prefix)
```
Next, we configure the container image to be used for the region that we are running in.
```
image_name = sagemaker.amazon.amazon_estimator.get_image_uri(region, "forecasting-deepar", "latest")
```
### Import the electricity dataset and upload it to S3 to make it available for SageMaker
As a first step, we need to download the original dataset from the UCI dataset repository.
```
DATA_HOST = "https://archive.ics.uci.edu"
DATA_PATH = "/ml/machine-learning-databases/00321/"
ARCHIVE_NAME = "LD2011_2014.txt.zip"
FILE_NAME = ARCHIVE_NAME[:-4]
def progress_report_hook(count, block_size, total_size):
    mb = int(count * block_size // 1e6)
    if count % 500 == 0:
        sys.stdout.write("\r{} MB downloaded".format(mb))
        sys.stdout.flush()
if not os.path.isfile(FILE_NAME):
    print("downloading dataset (258MB), can take a few minutes depending on your connection")
    urlretrieve(DATA_HOST + DATA_PATH + ARCHIVE_NAME, ARCHIVE_NAME, reporthook=progress_report_hook)
    print("\nextracting data archive")
    zip_ref = zipfile.ZipFile(ARCHIVE_NAME, 'r')
    zip_ref.extractall("./")
    zip_ref.close()
else:
    print("File found, skipping download")
```
Then, we load and parse the dataset and convert it to a collection of Pandas time series, which makes common time series operations such as indexing by time periods or resampling much easier. The data is originally recorded at a 15-minute interval, which we could use directly. Since we want to forecast longer periods (one week), we resample the data to a granularity of 2 hours.
```
data = pd.read_csv(FILE_NAME, sep=";", index_col=0, parse_dates=True, decimal=',')
num_timeseries = data.shape[1]
data_kw = data.resample('2H').sum() / 8
timeseries = []
for i in range(num_timeseries):
    timeseries.append(np.trim_zeros(data_kw.iloc[:,i], trim='f'))
```
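The division by 8 in the cell above is worth spelling out: the raw file holds one kW reading every 15 minutes, so each 2-hour resampling window contains 8 readings; `sum()` adds them up, and dividing by 8 recovers the average power over the window. A toy check with made-up readings:

```python
import numpy as np

readings_kw = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0])  # 8 x 15-min readings
print(readings_kw.sum() / 8)  # average kW over the 2-hour window -> 2.5
```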
Let us plot the resulting time series for the first ten customers for the time period spanning the first two weeks of 2014.
```
fig, axs = plt.subplots(5, 2, figsize=(20, 20), sharex=True)
axx = axs.ravel()
for i in range(0, 10):
    timeseries[i].loc["2014-01-01":"2014-01-14"].plot(ax=axx[i])
    axx[i].set_xlabel("date")
    axx[i].set_ylabel("kW consumption")
    axx[i].grid(which='minor', axis='x')
```
### Train and Test splits
Oftentimes one is interested in evaluating the model or tuning its hyperparameters by looking at error metrics on a hold-out test set. Here we split the available data into train and test sets for evaluating the trained model. For standard machine learning tasks such as classification and regression, one typically obtains this split by randomly separating examples into train and test sets. In forecasting, however, it is important to make this train/test split based on time rather than across series.
In this example, we will reserve the last section of each time series for evaluation purposes and use only the first part as training data.
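To make the time-based split concrete, here is a toy sketch with made-up values: the training series is the prefix up to a cutoff timestamp, while the test series is the same series extended beyond it (note that pandas label slicing includes the cutoff):

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2014-01-01", periods=10, freq="2H")
ts = pd.Series(np.arange(10.0), index=idx)

cutoff = idx[7]
train = ts[:cutoff]   # everything up to and including the cutoff
test = ts             # extends past the training range for evaluation

print(len(train), len(test))  # 8 10
```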
```
# we use 2 hour frequency for the time series
freq = '2H'
# we predict for 7 days
prediction_length = 7 * 12
# we also use 7 days as context length, this is the number of state updates accomplished before making predictions
context_length = 7 * 12
```
We specify here the portion of the data that is used for training: the model sees data from 2014-01-01 to 2014-09-01 for training.
```
start_dataset = pd.Timestamp("2014-01-01 00:00:00", freq=freq)
end_training = pd.Timestamp("2014-09-01 00:00:00", freq=freq)
```
The DeepAR JSON input format represents each time series as a JSON object. In the simplest case each time series just consists of a start time stamp (``start``) and a list of values (``target``). For more complex cases, DeepAR also supports the fields ``dynamic_feat`` for time-series features and ``cat`` for categorical features, which we will use later.
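For instance, a single training instance in this format is just a dictionary like the following (the values are made up); serialized with `json.dumps`, one such object per line yields the JSON Lines file DeepAR consumes:

```python
import json

instance = {
    "start": "2014-01-01 00:00:00",      # timestamp of the first observation
    "target": [2.0, 1.2, 0.7, 3.1],      # observed values at the chosen frequency
    # optional fields:
    # "cat": [0],                        # categorical features
    # "dynamic_feat": [[1.1, 1.2, 1.3, 1.4]],
}
print(json.dumps(instance))
```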
```
training_data = [
    {
        "start": str(start_dataset),
        "target": ts[start_dataset:end_training - 1].tolist()  # -1, because pandas indexing includes the upper bound
    }
    for ts in timeseries
]
print(len(training_data))
clean_training_data = []
for i in range(370):
    if len(training_data[i]['target']) != 2916:
        print(i, len(training_data[i]['target']))
    else:
        clean_training_data.append(training_data[i])
```
As test data, we will consider time series extending beyond the training range: these will be used for computing test scores, by using the trained model to forecast their trailing 7 days, and comparing predictions with actual values.
To evaluate our model performance on more than one week, we generate test data that extends to 1, 2, 3, 4 weeks beyond the training range. This way we perform *rolling evaluation* of our model.
```
num_test_windows = 4
test_data = [
    {
        "start": str(start_dataset),
        "target": ts[start_dataset:end_training + k * prediction_length].tolist()
    }
    for k in range(1, num_test_windows + 1)
    for ts in timeseries
]
print(len(test_data))
clean_test_data = []
for i in range(370):
    if len(test_data[i]['target']) != 3001 or len(test_data[i+370]['target']) != 3001+84 or len(test_data[i+370*2]['target']) != 3001+84*2 or len(test_data[i+370*3]['target']) != 3001+84*3:
        print(i, len(test_data[i]['target']), len(test_data[i+370]['target']), len(test_data[i+370*2]['target']), len(test_data[i+370*3]['target']))
    else:
        clean_test_data.append(test_data[i])
        clean_test_data.append(test_data[i+370])
        clean_test_data.append(test_data[i+370*2])
        clean_test_data.append(test_data[i+370*3])
```
Let's now write the dictionary to the `jsonlines` file format that DeepAR understands (it also supports gzipped jsonlines and parquet).
```
def write_dicts_to_file(path, data):
    with open(path, 'wb') as fp:
        for d in data:
            fp.write(json.dumps(d).encode("utf-8"))
            fp.write("\n".encode('utf-8'))
%%time
write_dicts_to_file("clean_train.json", clean_training_data)
write_dicts_to_file("clean_test.json", clean_test_data)
```
Now that we have the data files locally, let us copy them to S3 where DeepAR can access them. Depending on your connection, this may take a couple of minutes.
```
s3 = boto3.resource('s3')
def copy_to_s3(local_file, s3_path, override=False):
    assert s3_path.startswith('s3://')
    split = s3_path.split('/')
    bucket = split[2]
    path = '/'.join(split[3:])
    buk = s3.Bucket(bucket)
    if len(list(buk.objects.filter(Prefix=path))) > 0:
        if not override:
            print('File s3://{}/{} already exists.\nSet override to upload anyway.\n'.format(s3_bucket, s3_path))
            return
        else:
            print('Overwriting existing file')
    with open(local_file, 'rb') as data:
        print('Uploading file to {}'.format(s3_path))
        buk.put_object(Key=path, Body=data)
%%time
copy_to_s3("clean_train.json", s3_data_path + "/train/clean_train.json")
copy_to_s3("clean_test.json", s3_data_path + "/test/clean_test.json")
```
Let's have a look at what we just wrote to S3.
```
s3filesystem = s3fs.S3FileSystem()
with s3filesystem.open(s3_data_path + "/train/clean_train.json", 'rb') as fp:
    print(fp.readline().decode("utf-8")[:100] + "...")
```
We are all set with our dataset processing; we can now call DeepAR to train a model and generate predictions.
### Train a model
Here we define the estimator that will launch the training job.
```
estimator = sagemaker.estimator.Estimator(
sagemaker_session=sagemaker_session,
image_name=image_name,
role=role,
train_instance_count=1,
train_instance_type='ml.c4.2xlarge',
base_job_name='deepar-electricity-clean-demo',
output_path=s3_output_path
)
```
Next we need to set the hyperparameters for the training job: for example, the frequency of the time series, the number of past data points the model will look at, and the number of predicted data points. The other hyperparameters concern the model to train (number of layers, number of cells per layer, likelihood function) and the training options (number of epochs, batch size, learning rate...). We use default values for every optional parameter in this case (you can always use [SageMaker Automated Model Tuning](https://aws.amazon.com/blogs/aws/sagemaker-automatic-model-tuning/) to tune them).
```
hyperparameters = {
"time_freq": freq,
"epochs": "400",
"early_stopping_patience": "40",
"mini_batch_size": "64",
"learning_rate": "5E-4",
"context_length": str(context_length),
"prediction_length": str(prediction_length)
}
estimator.set_hyperparameters(**hyperparameters)
```
We are ready to launch the training job. SageMaker will start an EC2 instance, download the data from S3, start training the model and save the trained model.
If you provide the `test` data channel, as we do in this example, DeepAR will also calculate accuracy metrics for the trained model on this test set. This is done by predicting the last `prediction_length` points of each time series in the test set and comparing them to the actual values.
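The backtest DeepAR performs on the test channel can be pictured as follows: hold out the last `prediction_length` points of a series, forecast them, and compare forecast to truth. A toy sketch with made-up numbers, using RMSE as a stand-in metric (DeepAR reports its own set of metrics in the training log):

```python
import numpy as np

prediction_length = 4
actual = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0])  # full test series
forecast = np.array([11.5, 13.5, 11.5, 14.5])            # made-up model output

truth = actual[-prediction_length:]
rmse = np.sqrt(np.mean((forecast - truth) ** 2))
print(rmse)  # 0.5
```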
**Note:** the next cell may take a few minutes to complete, depending on data size, model complexity, training options.
```
%%time
data_channels = {
"train": "{}/train/clean_train.json".format(s3_data_path),
"test": "{}/test/clean_test.json".format(s3_data_path)
}
estimator.fit(inputs=data_channels, wait=True)
```
Since we passed a test set in this example, accuracy metrics for the forecast are computed and logged (see the bottom of the training log).
You can find the definitions of these metrics in [our documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html). You can use them to optimize the parameters and tune your model, or use SageMaker's [Automated Model Tuning service](https://aws.amazon.com/blogs/aws/sagemaker-automatic-model-tuning/) to tune the model for you.
### Create endpoint and predictor
Now that we have a trained model, we can use it to perform predictions by deploying it to an endpoint.
**Note: Remember to delete the endpoint after running this experiment. A cell at the very bottom of this notebook will do that: make sure you run it at the end.**
To query the endpoint and perform predictions, we can define the following utility class: this allows making requests using `pandas.Series` objects rather than raw JSON strings.
```
class DeepARPredictor(sagemaker.predictor.RealTimePredictor):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, content_type=sagemaker.content_types.CONTENT_TYPE_JSON, **kwargs)

    def predict(self, ts, cat=None, dynamic_feat=None,
                num_samples=100, return_samples=False, quantiles=["0.1", "0.5", "0.9"]):
        """Requests predictions for the time series listed in `ts`, each with the (optional)
        corresponding category listed in `cat`.

        ts -- `pandas.Series` object, the time series to predict
        cat -- integer, the group associated to the time series (default: None)
        num_samples -- integer, number of samples to compute at prediction time (default: 100)
        return_samples -- boolean indicating whether to include samples in the response (default: False)
        quantiles -- list of strings specifying the quantiles to compute (default: ["0.1", "0.5", "0.9"])

        Return value: list of `pandas.DataFrame` objects, each containing the predictions
        """
        prediction_time = ts.index[-1] + 1
        quantiles = [str(q) for q in quantiles]
        req = self.__encode_request(ts, cat, dynamic_feat, num_samples, return_samples, quantiles)
        res = super(DeepARPredictor, self).predict(req)
        return self.__decode_response(res, ts.index.freq, prediction_time, return_samples)

    def __encode_request(self, ts, cat, dynamic_feat, num_samples, return_samples, quantiles):
        instance = series_to_dict(ts, cat if cat is not None else None, dynamic_feat if dynamic_feat else None)
        configuration = {
            "num_samples": num_samples,
            "output_types": ["quantiles", "samples"] if return_samples else ["quantiles"],
            "quantiles": quantiles
        }
        http_request_data = {
            "instances": [instance],
            "configuration": configuration
        }
        return json.dumps(http_request_data).encode('utf-8')

    def __decode_response(self, response, freq, prediction_time, return_samples):
        # we only sent one time series, so we only receive one in return;
        # when possible, pass multiple time series per request, as batched predictions are faster
        predictions = json.loads(response.decode('utf-8'))['predictions'][0]
        prediction_length = len(next(iter(predictions['quantiles'].values())))
        prediction_index = pd.DatetimeIndex(start=prediction_time, freq=freq, periods=prediction_length)
        if return_samples:
            dict_of_samples = {'sample_' + str(i): s for i, s in enumerate(predictions['samples'])}
        else:
            dict_of_samples = {}
        return pd.DataFrame(data={**predictions['quantiles'], **dict_of_samples}, index=prediction_index)

    def set_frequency(self, freq):
        self.freq = freq

def encode_target(ts):
    return [x if np.isfinite(x) else "NaN" for x in ts]

def series_to_dict(ts, cat=None, dynamic_feat=None):
    """Given a pandas.Series object, returns a dictionary encoding the time series.

    ts -- a pandas.Series object with the target time series
    cat -- an integer indicating the time series category

    Return value: a dictionary
    """
    obj = {"start": str(ts.index[0]), "target": encode_target(ts)}
    if cat is not None:
        obj["cat"] = cat
    if dynamic_feat is not None:
        obj["dynamic_feat"] = dynamic_feat
    return obj
```
Now we can deploy the model and create an endpoint that can be queried using our custom DeepARPredictor class.
```
predictor = estimator.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge',
predictor_cls=DeepARPredictor)
```
### Make predictions and plot results
Now we can use the `predictor` object to generate predictions.
```
predictor.predict(ts=timeseries[120], quantiles=[0.10, 0.5, 0.90]).head()
```
Below we define a plotting function that queries the model and displays the forecast.
```
def plot(
predictor,
target_ts,
cat=None,
dynamic_feat=None,
forecast_date=end_training,
show_samples=False,
plot_history=7 * 12,
confidence=80
):
print("calling served model to generate predictions starting from {}".format(str(forecast_date)))
assert(confidence > 50 and confidence < 100)
low_quantile = 0.5 - confidence * 0.005
up_quantile = confidence * 0.005 + 0.5
# we first construct the argument to call our model
args = {
"ts": target_ts[:forecast_date],
"return_samples": show_samples,
"quantiles": [low_quantile, 0.5, up_quantile],
"num_samples": 100
}
if dynamic_feat is not None:
args["dynamic_feat"] = dynamic_feat
fig = plt.figure(figsize=(20, 6))
ax = plt.subplot(2, 1, 1)
else:
fig = plt.figure(figsize=(20, 3))
ax = plt.subplot(1,1,1)
if cat is not None:
args["cat"] = cat
ax.text(0.9, 0.9, 'cat = {}'.format(cat), transform=ax.transAxes)
# call the end point to get the prediction
prediction = predictor.predict(**args)
# plot the samples
if show_samples:
for key in prediction.keys():
if "sample" in key:
prediction[key].plot(color='lightskyblue', alpha=0.2, label='_nolegend_')
# plot the target
target_section = target_ts[forecast_date-plot_history:forecast_date+prediction_length]
target_section.plot(color="black", label='target')
# plot the confidence interval and the median predicted
ax.fill_between(
prediction[str(low_quantile)].index,
prediction[str(low_quantile)].values,
prediction[str(up_quantile)].values,
color="b", alpha=0.3, label='{}% confidence interval'.format(confidence)
)
prediction["0.5"].plot(color="b", label='P50')
ax.legend(loc=2)
# fix the scale as the samples may change it
ax.set_ylim(target_section.min() * 0.5, target_section.max() * 1.5)
if dynamic_feat is not None:
for i, f in enumerate(dynamic_feat, start=1):
ax = plt.subplot(len(dynamic_feat) * 2, 1, len(dynamic_feat) + i, sharex=ax)
feat_ts = pd.Series(
            index=pd.date_range(start=target_ts.index[0], freq=target_ts.index.freq, periods=len(f)),
data=f
)
feat_ts[forecast_date-plot_history:forecast_date+prediction_length].plot(ax=ax, color='g')
```
We can interact with the function defined above to look at the forecast for any customer at any point in (future) time.
For each request, the predictions are obtained by calling our served model on the fly.
Here we forecast the consumption of an office after the weekend (note the lower weekend consumption).
You can select any time series and any forecast date; just click `Run Interact` to generate the predictions from our served endpoint and see the plot.
```
style = {'description_width': 'initial'}
@interact_manual(
customer_id=IntSlider(min=0, max=369, value=91, style=style),
forecast_day=IntSlider(min=0, max=100, value=51, style=style),
confidence=IntSlider(min=60, max=95, value=80, step=5, style=style),
history_weeks_plot=IntSlider(min=1, max=20, value=1, style=style),
show_samples=Checkbox(value=False),
continuous_update=False
)
def plot_interact(customer_id, forecast_day, confidence, history_weeks_plot, show_samples):
plot(
predictor,
target_ts=timeseries[customer_id],
forecast_date=end_training + datetime.timedelta(days=forecast_day),
show_samples=show_samples,
plot_history=history_weeks_plot * 12 * 7,
confidence=confidence
)
```
### Delete endpoints
```
predictor.delete_endpoint()
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.font_manager
from sklearn import svm
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from pyod.utils.data import generate_data, get_outliers_inliers
import warnings
warnings.filterwarnings('ignore')
```
## Generating sample data
OCSVM (One-Class SVM) is an unsupervised learning method commonly used for novelty detection.
Accordingly, the model must be trained under the assumption that all training data is normal.
The sample data is generated as follows:
- Generate sample data with the PyOD library, setting the true outlier ratio to 5% of the data.
- Split the data into training and test sets.
```
X, y = generate_data(random_state = 42, train_only = True, contamination = 0.05)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
```
## Fitting the model
As noted above, OCSVM does not require labels, so the model is fit using only the feature data.
```
clf = svm.OneClassSVM(nu = 0.1, kernel = 'rbf', gamma = 0.1)
clf.fit(X_train) # Unsupervised Learning Method
```
## Classifying labels with the fitted model
```
class OCSVM:
def __init__(self, nu, kernel, gamma):
self.nu = nu
self.kernel = kernel
self.gamma = gamma
self.result_df = pd.DataFrame()
self.clf = svm.OneClassSVM(nu = self.nu, kernel = self.kernel, gamma = self.gamma)
def fit(self, X_train, ground_truth):
self.X_train = X_train
self.y_train = ground_truth
self.clf.fit(self.X_train)
return self.clf
def predict(self, X_test, is_return = False):
self.X_test = X_test
self.prediction = self.clf.predict(self.X_test)
if is_return:
return self.prediction
def visualization(self):
self.result_df['X1'] = self.X_train[:, 0]
self.result_df['X2'] = self.X_train[:, 1]
self.result_df['Prediction'] = pd.Series(self.prediction).apply(lambda x: 0 if x == 1 else 1)
self.result_df['Actual'] = self.y_train
xx, yy = np.meshgrid(np.linspace(self.result_df['X1'].min() - 1, self.result_df['X1'].max() + 1, 500),
np.linspace(self.result_df['X2'].min() - 1, self.result_df['X2'].max() + 1, 500))
        z = self.clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
z = z.reshape(xx.shape)
plt.title("Novelty Detection\nNu = {}, Kernel = {}, Gamma = {}".format(self.nu, self.kernel, self.gamma))
        plt.contourf(xx, yy, z, levels = np.linspace(z.min(), 0, 7), cmap = plt.cm.PuBu)
        a = plt.contour(xx, yy, z, levels = [0], linewidths = 2, colors = 'darkred')
        plt.contourf(xx, yy, z, levels = [0, z.max()], colors = 'palevioletred')
s = 40
b1 = plt.scatter(self.X_train[:, 0], self.X_train[:, 1], c = 'white', s = s, edgecolors = 'k')
outlier = plt.scatter(self.result_df.loc[self.result_df['Prediction'] == 1]['X1'], self.result_df.loc[self.result_df['Prediction'] == 1]['X2'],
c = 'red', edgecolor = 'k')
actual = plt.scatter(self.result_df.loc[self.result_df['Actual'] == 1]['X1'], self.result_df.loc[self.result_df['Actual'] == 1]['X2'],
c = 'gold', edgecolor = 'k', alpha = 0.8)
plt.axis('tight')
plt.xlim((self.result_df['X1'].min() - 1, self.result_df['X1'].max() + 1))
plt.ylim((self.result_df['X2'].min() - 1, self.result_df['X2'].max() + 1))
plt.show()
nu = 0.1
kernel = 'rbf'
gamma = 0.007
model = OCSVM(nu = nu, kernel = kernel, gamma = gamma)
model.fit(X_train, y_train)
model.predict(X_train)
```
## Visualization
```
model.visualization()
```
As the plot shows, the OCSVM hyperparameter Nu plays a role similar to C in a standard SVM. Put differently, it is an upper bound on the fraction of training errors. For example, setting Nu = 0.05 means that at most about 5% of the training data will be misclassified.
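A minimal sketch (assuming scikit-learn and NumPy are available) illustrating that the fraction of training points flagged as outliers stays close to Nu:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(42)
X = rng.randn(1000, 2)  # all training points are assumed normal

clf = OneClassSVM(nu=0.05, kernel='rbf', gamma=0.1).fit(X)
# fraction of training points the model labels as outliers (-1)
outlier_frac = float(np.mean(clf.predict(X) == -1))
print(outlier_frac)  # approximately nu = 0.05, up to numerical slack
```

The exact value varies with the kernel and gamma, but the training-error fraction is controlled by nu.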
# The Dataset for Pretraining BERT
:label:`sec_bert-dataset`
To pretrain the BERT model as implemented in :numref:`sec_bert`,
we need to generate the dataset in the ideal format to facilitate
the two pretraining tasks:
masked language modeling and next sentence prediction.
On one hand,
the original BERT model is pretrained on the concatenation of
two huge corpora BookCorpus and English Wikipedia (see :numref:`subsec_bert_pretraining_tasks`),
making it hard to run for most readers of this book.
On the other hand,
the off-the-shelf pretrained BERT model
may not fit for applications from specific domains like medicine.
Thus, it is getting popular to pretrain BERT on a customized dataset.
To facilitate the demonstration of BERT pretraining,
we use a smaller corpus WikiText-2 :cite:`Merity.Xiong.Bradbury.ea.2016`.
Compared with the PTB dataset used for pretraining word2vec in :numref:`sec_word2vec_data`,
WikiText-2 i) retains the original punctuation, making it suitable for next sentence prediction; ii) retains the original case and numbers; iii) is more than twice as large.
```
import os
import random
import torch
from d2l import torch as d2l
```
In the WikiText-2 dataset,
each line represents a paragraph where
space is inserted between any punctuation and its preceding token.
Paragraphs with at least two sentences are retained.
To split sentences, we only use the period as the delimiter for simplicity.
We leave discussions of more complex sentence splitting techniques in the exercises
at the end of this section.
```
#@save
d2l.DATA_HUB['wikitext-2'] = (
'https://s3.amazonaws.com/research.metamind.io/wikitext/'
'wikitext-2-v1.zip', '3c914d17d80b1459be871a5039ac23e752a53cbe')
#@save
def _read_wiki(data_dir):
file_name = os.path.join(data_dir, 'wiki.train.tokens')
with open(file_name, 'r') as f:
lines = f.readlines()
# Uppercase letters are converted to lowercase ones
paragraphs = [
line.strip().lower().split(' . ') for line in lines
if len(line.split(' . ')) >= 2]
random.shuffle(paragraphs)
return paragraphs
```
## Defining Helper Functions for Pretraining Tasks
In the following,
we begin by implementing helper functions for the two BERT pretraining tasks:
next sentence prediction and masked language modeling.
These helper functions will be invoked later
when transforming the raw text corpus
into the dataset of the ideal format to pretrain BERT.
### Generating the Next Sentence Prediction Task
According to descriptions of :numref:`subsec_nsp`,
the `_get_next_sentence` function generates a training example
for the binary classification task.
```
#@save
def _get_next_sentence(sentence, next_sentence, paragraphs):
if random.random() < 0.5:
is_next = True
else:
# `paragraphs` is a list of lists of lists
next_sentence = random.choice(random.choice(paragraphs))
is_next = False
return sentence, next_sentence, is_next
```
The following function generates training examples for next sentence prediction
from the input `paragraph` by invoking the `_get_next_sentence` function.
Here `paragraph` is a list of sentences, where each sentence is a list of tokens.
The argument `max_len` specifies the maximum length of a BERT input sequence during pretraining.
```
#@save
def _get_nsp_data_from_paragraph(paragraph, paragraphs, vocab, max_len):
nsp_data_from_paragraph = []
for i in range(len(paragraph) - 1):
tokens_a, tokens_b, is_next = _get_next_sentence(
paragraph[i], paragraph[i + 1], paragraphs)
# Consider 1 '<cls>' token and 2 '<sep>' tokens
if len(tokens_a) + len(tokens_b) + 3 > max_len:
continue
tokens, segments = d2l.get_tokens_and_segments(tokens_a, tokens_b)
nsp_data_from_paragraph.append((tokens, segments, is_next))
return nsp_data_from_paragraph
```
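For reference, `d2l.get_tokens_and_segments` concatenates the two sentences with the special tokens and builds the segment ids; a small re-statement for illustration (mirroring the layout described above) makes the format concrete:

```python
def get_tokens_and_segments(tokens_a, tokens_b=None):
    # '<cls>' + A + '<sep>' (+ B + '<sep>'); segment id 0 marks the first
    # sentence part and 1 marks the second
    tokens = ['<cls>'] + tokens_a + ['<sep>']
    segments = [0] * (len(tokens_a) + 2)
    if tokens_b is not None:
        tokens += tokens_b + ['<sep>']
        segments += [1] * (len(tokens_b) + 1)
    return tokens, segments

tokens, segments = get_tokens_and_segments(['a', 'b'], ['c'])
print(tokens)    # ['<cls>', 'a', 'b', '<sep>', 'c', '<sep>']
print(segments)  # [0, 0, 0, 0, 1, 1]
```

This also explains the `+ 3` in the length check above: one '&lt;cls&gt;' token and two '&lt;sep&gt;' tokens.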
### Generating the Masked Language Modeling Task
:label:`subsec_prepare_mlm_data`
In order to generate training examples
for the masked language modeling task
from a BERT input sequence,
we define the following `_replace_mlm_tokens` function.
In its inputs, `tokens` is a list of tokens representing a BERT input sequence,
`candidate_pred_positions` is a list of token indices of the BERT input sequence
excluding those of special tokens (special tokens are not predicted in the masked language modeling task),
and `num_mlm_preds` indicates the number of predictions (recall 15% random tokens to predict).
Following the definition of the masked language modeling task in :numref:`subsec_mlm`,
at each prediction position, the input may be replaced by
a special “<mask>” token or a random token, or remain unchanged.
In the end, the function returns the input tokens after possible replacement,
the token indices where predictions take place and labels for these predictions.
```
#@save
def _replace_mlm_tokens(tokens, candidate_pred_positions, num_mlm_preds,
vocab):
# Make a new copy of tokens for the input of a masked language model,
# where the input may contain replaced '<mask>' or random tokens
mlm_input_tokens = [token for token in tokens]
pred_positions_and_labels = []
# Shuffle for getting 15% random tokens for prediction in the masked
# language modeling task
random.shuffle(candidate_pred_positions)
for mlm_pred_position in candidate_pred_positions:
if len(pred_positions_and_labels) >= num_mlm_preds:
break
masked_token = None
# 80% of the time: replace the word with the '<mask>' token
if random.random() < 0.8:
masked_token = '<mask>'
else:
# 10% of the time: keep the word unchanged
if random.random() < 0.5:
masked_token = tokens[mlm_pred_position]
# 10% of the time: replace the word with a random word
else:
masked_token = random.randint(0, len(vocab) - 1)
mlm_input_tokens[mlm_pred_position] = masked_token
pred_positions_and_labels.append(
(mlm_pred_position, tokens[mlm_pred_position]))
return mlm_input_tokens, pred_positions_and_labels
```
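The nested coin flips above implement the 80%/10%/10% replacement scheme. A quick stdlib simulation of just the branching logic (not the actual token replacement) confirms the proportions:

```python
import random

random.seed(0)
counts = {"mask": 0, "keep": 0, "random": 0}
for _ in range(10000):
    if random.random() < 0.8:
        counts["mask"] += 1      # 80%: replace with '<mask>'
    elif random.random() < 0.5:
        counts["keep"] += 1      # 10%: keep the original token
    else:
        counts["random"] += 1    # 10%: replace with a random token
print(counts)  # roughly {'mask': 8000, 'keep': 1000, 'random': 1000}
```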
By invoking the aforementioned `_replace_mlm_tokens` function,
the following function takes a BERT input sequence (`tokens`)
as an input and returns indices of the input tokens
(after possible token replacement as described in :numref:`subsec_mlm`),
the token indices where predictions take place,
and label indices for these predictions.
```
#@save
def _get_mlm_data_from_tokens(tokens, vocab):
candidate_pred_positions = []
# `tokens` is a list of strings
for i, token in enumerate(tokens):
# Special tokens are not predicted in the masked language modeling
# task
if token in ['<cls>', '<sep>']:
continue
candidate_pred_positions.append(i)
# 15% of random tokens are predicted in the masked language modeling task
num_mlm_preds = max(1, round(len(tokens) * 0.15))
mlm_input_tokens, pred_positions_and_labels = _replace_mlm_tokens(
tokens, candidate_pred_positions, num_mlm_preds, vocab)
pred_positions_and_labels = sorted(pred_positions_and_labels,
key=lambda x: x[0])
pred_positions = [v[0] for v in pred_positions_and_labels]
mlm_pred_labels = [v[1] for v in pred_positions_and_labels]
return vocab[mlm_input_tokens], pred_positions, vocab[mlm_pred_labels]
```
## Transforming Text into the Pretraining Dataset
Now we are almost ready to customize a `Dataset` class for pretraining BERT.
Before that,
we still need to define a helper function `_pad_bert_inputs`
to append the special “&lt;pad&gt;” tokens to the inputs.
Its argument `examples` contains the outputs from the helper functions `_get_nsp_data_from_paragraph` and `_get_mlm_data_from_tokens` for the two pretraining tasks.
```
#@save
def _pad_bert_inputs(examples, max_len, vocab):
max_num_mlm_preds = round(max_len * 0.15)
all_token_ids, all_segments, valid_lens, = [], [], []
all_pred_positions, all_mlm_weights, all_mlm_labels = [], [], []
nsp_labels = []
for (token_ids, pred_positions, mlm_pred_label_ids, segments,
is_next) in examples:
all_token_ids.append(
torch.tensor(
token_ids + [vocab['<pad>']] * (max_len - len(token_ids)),
dtype=torch.long))
all_segments.append(
torch.tensor(segments + [0] * (max_len - len(segments)),
dtype=torch.long))
# `valid_lens` excludes count of '<pad>' tokens
valid_lens.append(torch.tensor(len(token_ids), dtype=torch.float32))
all_pred_positions.append(
torch.tensor(
pred_positions + [0] *
(max_num_mlm_preds - len(pred_positions)), dtype=torch.long))
# Predictions of padded tokens will be filtered out in the loss via
# multiplication of 0 weights
all_mlm_weights.append(
torch.tensor([1.0] * len(mlm_pred_label_ids) + [0.0] *
(max_num_mlm_preds - len(pred_positions)),
dtype=torch.float32))
all_mlm_labels.append(
torch.tensor(
mlm_pred_label_ids + [0] *
(max_num_mlm_preds - len(mlm_pred_label_ids)),
dtype=torch.long))
nsp_labels.append(torch.tensor(is_next, dtype=torch.long))
return (all_token_ids, all_segments, valid_lens, all_pred_positions,
all_mlm_weights, all_mlm_labels, nsp_labels)
```
Putting together the helper functions for
generating training examples of the two pretraining tasks
and the helper function for padding inputs,
we customize the following `_WikiTextDataset` class as the WikiText-2 dataset for pretraining BERT.
By implementing the `__getitem__` function,
we can arbitrarily access the pretraining (masked language modeling and next sentence prediction) examples
generated from a pair of sentences from the WikiText-2 corpus.
The original BERT model uses WordPiece embeddings whose vocabulary size is 30,000 :cite:`Wu.Schuster.Chen.ea.2016`.
The tokenization method of WordPiece is a slight modification of
the original byte pair encoding algorithm in :numref:`subsec_Byte_Pair_Encoding`.
For simplicity, we use the `d2l.tokenize` function for tokenization.
Infrequent tokens that appear less than five times are filtered out.
```
#@save
class _WikiTextDataset(torch.utils.data.Dataset):
def __init__(self, paragraphs, max_len):
# Input `paragraphs[i]` is a list of sentence strings representing a
# paragraph; while output `paragraphs[i]` is a list of sentences
# representing a paragraph, where each sentence is a list of tokens
paragraphs = [
d2l.tokenize(paragraph, token='word') for paragraph in paragraphs]
sentences = [
sentence for paragraph in paragraphs for sentence in paragraph]
self.vocab = d2l.Vocab(
sentences, min_freq=5,
reserved_tokens=['<pad>', '<mask>', '<cls>', '<sep>'])
# Get data for the next sentence prediction task
examples = []
for paragraph in paragraphs:
examples.extend(
_get_nsp_data_from_paragraph(paragraph, paragraphs,
self.vocab, max_len))
# Get data for the masked language model task
examples = [(_get_mlm_data_from_tokens(tokens, self.vocab) +
(segments, is_next))
for tokens, segments, is_next in examples]
# Pad inputs
(self.all_token_ids, self.all_segments, self.valid_lens,
self.all_pred_positions, self.all_mlm_weights, self.all_mlm_labels,
self.nsp_labels) = _pad_bert_inputs(examples, max_len, self.vocab)
def __getitem__(self, idx):
return (self.all_token_ids[idx], self.all_segments[idx],
self.valid_lens[idx], self.all_pred_positions[idx],
self.all_mlm_weights[idx], self.all_mlm_labels[idx],
self.nsp_labels[idx])
def __len__(self):
return len(self.all_token_ids)
```
By using the `_read_wiki` function and the `_WikiTextDataset` class,
we define the following `load_data_wiki` function to download the WikiText-2 dataset
and generate pretraining examples from it.
```
#@save
def load_data_wiki(batch_size, max_len):
num_workers = d2l.get_dataloader_workers()
data_dir = d2l.download_extract('wikitext-2', 'wikitext-2')
paragraphs = _read_wiki(data_dir)
train_set = _WikiTextDataset(paragraphs, max_len)
train_iter = torch.utils.data.DataLoader(train_set, batch_size,
shuffle=True,
num_workers=num_workers)
return train_iter, train_set.vocab
```
Setting the batch size to 512 and the maximum length of a BERT input sequence to be 64,
we print out the shapes of a minibatch of BERT pretraining examples.
Note that in each BERT input sequence,
$10$ ($64 \times 0.15$) positions are predicted for the masked language modeling task.
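This count follows directly from rounding 15% of the maximum sequence length, as in `_pad_bert_inputs` above:

```python
max_len = 64
max_num_mlm_preds = round(max_len * 0.15)
print(max_num_mlm_preds)  # round(9.6) -> 10
```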
```
batch_size, max_len = 512, 64
train_iter, vocab = load_data_wiki(batch_size, max_len)
for (tokens_X, segments_X, valid_lens_x, pred_positions_X, mlm_weights_X,
mlm_Y, nsp_y) in train_iter:
print(tokens_X.shape, segments_X.shape, valid_lens_x.shape,
pred_positions_X.shape, mlm_weights_X.shape, mlm_Y.shape,
nsp_y.shape)
break
```
In the end, let us take a look at the vocabulary size.
Even after filtering out infrequent tokens,
it is still more than twice as large as that of the PTB dataset.
```
len(vocab)
```
## Summary
* Compared with the PTB dataset, the WikiText-2 dataset retains the original punctuation, case and numbers, and is more than twice as large.
* We can arbitrarily access the pretraining (masked language modeling and next sentence prediction) examples generated from a pair of sentences from the WikiText-2 corpus.
## Exercises
1. For simplicity, the period is used as the only delimiter for splitting sentences. Try other sentence splitting techniques, such as spaCy and NLTK. Take NLTK as an example. You need to install NLTK first: `pip install nltk`. In the code, first `import nltk`. Then, download the Punkt sentence tokenizer: `nltk.download('punkt')`. To split sentences such as `sentences = 'This is great ! Why not ?'`, invoking `nltk.tokenize.sent_tokenize(sentences)` will return a list of two sentence strings: `['This is great !', 'Why not ?']`.
1. What is the vocabulary size if we do not filter out any infrequent token?
[Discussions](https://discuss.d2l.ai/t/1496)
```
from regular_expression_visualization.visualize_reg import search_pattern
```
`search_pattern` is a helper function that cross-matches several regular expressions against several strings. It visualizes the result by surrounding each matched substring with a red border. Only the first matched substring is bordered.
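If you don't have the visualization helper installed, the same cross-matching can be sketched with the standard `re` module (this is a hypothetical plain-text stand-in, not the actual helper):

```python
import re

def search_pattern(patterns, strings):
    # record the first match (or None) for every pattern/string pair,
    # mimicking what the visual helper highlights
    results = {}
    for p in patterns:
        for s in strings:
            m = re.search(p, s)
            results[(p, s)] = m.group(0) if m else None
    return results

print(search_pattern(['ee', 'ea'], ['tee', 'tea']))
```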
## Simple pattern
```
patterns = [
'ee', # exactly ee
'ea', # exactly ea
'ai',
'aa'
]
strings = ['tee', 'tea', 'bail']
search_pattern(patterns, strings)
```
## One of the pattern
Use ```|``` to separate alternative patterns
```
patterns = [
'ee|ea|ai', # ee or ea or ai
]
strings = ['tee', 'tea', 'bail']
search_pattern(patterns, strings)
```
Pattern order matters
```
patterns = [
'oo|ooo', # oo is tried first
'ooo|oo', # ooo is tried first
]
strings = ['loong', 'looong', 'long']
search_pattern(patterns, strings)
```
When an alternation is followed by or follows other regular expressions, use () to separate it from them
```
patterns = [
'b(ea|ee)', # b + (ea or ee)
'bea|ee' # bea or ee
]
strings = ['bead', 'bee']
search_pattern(patterns, strings)
```
## Qualifiers
### appear m to n times
Use ```{m,n}```
```
patterns = [
'ooo', # o, three times
'o{3}', # o, three times
'o{2,3}', # o, 2~3 time
    'o{2, 3}', # Not working! Don't put a space after the comma!
'o{2,}', # o, more than 2 times
'lo{,3}', # l + o, o appears 0 to 3 times
    'o{,3}', # alone, this matches zero o's (an empty string) first, so it seems not to work
]
strings = ['looong', 'long', 'loong']
search_pattern(patterns, strings)
```
### appear at least once
```
patterns = [
'o+n', # o, at least 1 time
'o{1,}n'# same as above
]
strings = ['looong', 'long', 'bug']
search_pattern(patterns, strings)
```
### appear zero or more times
```
patterns = [
'lo*ng', # long, o appears zero or more time
'lo{0,}ng' # same as above
]
strings = ['long', 'lng', 'loong', 'leong']
search_pattern(patterns, strings)
```
### appear zero or one time
```
patterns = [
'apples?', # apple, ending s may not appear
'apples{0,1}' # same as above
]
strings = ['apple', 'apples']
search_pattern(patterns, strings)
```
## Sub expression
use ```()```
```
patterns = [
'ba(na){2}', # b + na, na appears two times
'banana', # same as above
'bana{2}', # ban + a, a appear 2 times,
'banaa', # same as above
]
strings = ['banana', 'banaa']
search_pattern(patterns, strings)
patterns = [
'(a+_+){2}', # two consecutive pattern which match a+_+, they are not necessarily the same string
'a+_+a+_+', # same as above
'a+_+'
]
strings = ['aa_a__', 'a_', 'a__a_a_']
search_pattern(patterns, strings)
```
## Character Set
### Any character
```.``` stands for any character
```
patterns = [
'b.d', # b + any character + d
'be..' # b + e + any character + any character
]
strings = ['bed', 'bid','bee', 'benign', 'beed']
search_pattern(patterns, strings)
```
### Any character in a set
Use ```[...]```
```
patterns = [
'b[ei]d', # b + e or i + d
'bed|bid' # same as above
]
strings = ['bed', 'bid', 'bee', 'bud']
search_pattern(patterns, strings)
```
Use ```-``` for character range
```
patterns = [
'id_[0-5]', # id_ + any number in 0 to 5
'id_[012345]' # same as above
]
strings = ['id_1', 'id_6']
search_pattern(patterns, strings)
patterns = [
'type_[a-ex]', # type_ + any character in range a to e and x,
'type_[abcdex]', # same as above
'type_[a-zA-Z]' # any letter
]
strings = ['type_a', 'type_b', 'type_x', 'type_Z']
search_pattern(patterns, strings)
```
### Any character not in set
Use ```[^...]```
```
patterns = [
'type_[^a-z]' # type_ + any character not in a to z
]
strings = ['type_1', 'type_a', 'type_c']
search_pattern(patterns, strings)
```
### Any number
Use ```\d```
```
patterns = [
'id_\d\d', # id_ + any number character + any number character
'id_[0-9][0-9]' # same as above
]
strings = ['id_12', 'id_0', 'id']
search_pattern(patterns, strings)
```
### Any non-number character
Use ```\D```
```
patterns = [
'e\D', # e + any character which is not number character
'e[^0-9]' # same as above
]
strings = ['bee', 'tel', 'te1']
search_pattern(patterns, strings)
```
### Any word characters
Use ```\w```, word character means a-z, A-Z, 0-9 and _
```
patterns = [
'\w+', # any word character, more than one time
'[a-zA-Z0-9_]+' # same as above
]
strings = [':id_1.']
search_pattern(patterns, strings)
```
### Any non-word characters
Use ```\W```
```
patterns = [
'\W+', # any non-word character, more than one time
'[^a-zA-Z0-9_]+'# same as above
]
strings = ['id_1 + id_2']
search_pattern(patterns, strings)
```
### Any space
Use ```\s```
```
patterns = [
'\s.*\s', # blank + any string + blank
'[\t\n\f\r ].*[\t\n\f\r ]' # same as above
]
strings = ['Monkey D Luffy']
search_pattern(patterns, strings)
```
### Any Non Space
```
patterns = [
'\S.+\S', # any character except space + any string + any character except space
'[^\t\n\f\r ].*[^\t\n\f\r ]' # same as above
]
strings = ['on the\ntree']
search_pattern(patterns, strings)
```
## Escaping
As you see, many characters like ```(```,```.```,```+``` have special means in regular expression. If you want to disable these and search for these characters, add ```\``` before them
```
patterns = [
'($\d+.\d+)', # $ . + are not treated as characters
'\(\$\d+\.\d+\)' # $ . + are treated as characters
]
strings = ['apple ($3.25)']
search_pattern(patterns, strings)
```
## Anchor
Anchor are searched but won't be part be of the matching result
### followed by
Use ```(?=...)```
```
patterns = [
    '\w+(?=\.)', # word character string, followed by a period; the period is not returned in the matching result
    '\w+\.' # the period is returned in the matching result
]
strings = ['Apple juice.']
search_pattern(patterns, strings)
```
### Not followed by
Use ```(?!...)```
```
patterns = [
    '\w+(?!\.)', # word character string, not followed by a period
    '\w+[^\.]' # word character string, followed by any character which is not a period
]
strings = ['Apple juice.']
search_pattern(patterns, strings)
```
### Following
Use ```(?<=...)```
```
patterns = [
'(?<=:)\d+', # number character string, following :
':\d+' # : + number character string
]
strings = ['apple:10']
search_pattern(patterns, strings)
```
### not following
Use ```(?<!...)```
```
patterns = [
    '(?<!A)\d+', # number character string, not preceded by A
    '[^A]\d+' # any character except A + number character string
]
strings = ['A123 123']
search_pattern(patterns, strings)
```
### border of word
```
patterns = [
r'\beat\b', # eat surrounded by border of word, (whole word searching)
'eat' #
]
strings = ['I eat food', 'beat']
search_pattern(patterns, strings)
```
Why use ```r``` in ```r'\beat\b'```? ```\b``` has a special meaning in Python string literals (just as ```+``` has a special meaning in regular expressions): it represents a backspace character [(see here)](https://stackoverflow.com/questions/25065608/what-does-backward-slash-b-do-in-python).
To disable this behaviour, add ```r``` in front of the string (just as we add ```\``` before ```+``` in a regular expression).
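A quick check makes the difference concrete: without the raw-string prefix, `'\b'` is a single backspace character, so the pattern never reaches `re` as a word-boundary anchor.

```python
import re

# '\b' in a normal string literal is one character: the backspace (code 8)
assert len('\b') == 1 and ord('\b') == 8
# in a raw string, r'\b' stays two characters: backslash + 'b'
assert len(r'\b') == 2

# only the raw-string pattern does whole-word matching
assert re.search(r'\beat\b', 'I eat food') is not None
assert re.search(r'\beat\b', 'beat') is None
```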
### not border of word
```
patterns = [
r'\Beat\B', # eat, not following or followed by word border, (appear within a word)
'eat'
]
strings = ['I eat food', 'beats']
search_pattern(patterns, strings)
```
## Exercises
Try to match valid email addresses
```
patterns = [
'^[^\d@][\w+\.]+@\w+(\.com)?\.(cn|com|org)',
]
strings = [
'frednes7@hotmail.com',
'fredness7@@hotmail.com',
'frendess7@htcom',
'frendess7@ht.com.cn',
'@ht.com.cn',
'7frendess@ht.com.cn',
'@frendess@ht.com.cn',
]
search_pattern(patterns, strings)
```
```
from requests import get
from requests.exceptions import RequestException
from contextlib import closing
from bs4 import BeautifulSoup
from datetime import timedelta, date
import sys
```
## Global variables
```
base_link = "https://thegreyhoundrecorder.com.au/results/search/"
#html_file_name = link_date + ".html"
links=list()
```
## URL connection functions
```
def simple_get(url):
"""
Attempts to get the content at `url` by making an HTTP GET request.
If the content-type of response is some kind of HTML/XML, return the
text content, otherwise return None.
"""
try:
with closing(get(url, stream=True)) as resp:
if is_good_response(resp):
return resp.content
else:
return None
except RequestException as e:
log_error('Error during requests to {0} : {1}'.format(url, str(e)))
return None
def is_good_response(resp):
"""
Returns True if the response seems to be HTML, False otherwise.
"""
content_type = resp.headers['Content-Type'].lower()
return (resp.status_code == 200
and content_type is not None
and content_type.find('html') > -1)
def log_error(e):
"""
It is always a good idea to log errors.
This function just prints them, but you can
make it do anything.
"""
print(e)
```
## Function for each date (`link_date` is a global variable)
```
def date_execute():
link = base_link+link_date+'/' #Creation of link
raw_html = simple_get(link) #Connection complete
html = BeautifulSoup(raw_html, 'html.parser')
race_date = html.find('h2').decode_contents()
print(race_date) #Progress tracking output statements
print(len(raw_html))
print(len(html))
result_div = html.findAll("div", {"class": "resultsTblWrap"})
#print(result_div)
anchors = result_div[0].findAll('a') #selecting all the race hyperlinks
    links.append(link_date)
    for anchor in anchors:
        links.append(anchor['href']) # adding them to a global list
```
## Generator function to traverse all dates in a range
```
def daterange(start_date, end_date):
for n in range(int ((end_date - start_date).days)):
yield start_date + timedelta(n)
start_date = date(2016, 1, 1)
end_date = date(2016, 2, 1) # Feb 1 2016 is the stop date (excluded)
for single_date in daterange(start_date, end_date):
link_date=single_date.strftime("%Y-%m-%d")
#print(link_date)
date_execute() #Calling day-wise function to execute
```
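Note that `daterange` excludes `end_date`, so with a stop date of Feb 1 the last day actually scraped is Jan 31. A quick stdlib check of the generator:

```python
from datetime import date, timedelta

def daterange(start_date, end_date):
    # same generator as above: yields start_date, ..., end_date - 1 day
    for n in range(int((end_date - start_date).days)):
        yield start_date + timedelta(n)

days = [d.strftime("%Y-%m-%d") for d in daterange(date(2016, 1, 30), date(2016, 2, 1))]
print(days)  # ['2016-01-30', '2016-01-31'] -- Feb 1 itself is excluded
```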
## Check out the outcome
```
print("\n".join(links))
```
## storing the entire list into a file
```
with open('datefile.txt', 'w') as filehandle:
filehandle.writelines("%s\n" % place for place in links)
```
### ignore | code snippet to redirect system-output to text file
```
# !/usr/bin/python3
### 1. Data import
import sys  ## system
import numpy as np  ## matrix computations
import glob  ## filename pattern matching
import random  ## random sampling
from math import exp, gamma, log, sqrt  ## exp, log, sqrt
import scipy.stats as ss
import time
from copy import deepcopy as dc
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import logging
from Utils.Utils import *
import csv
import os
import pandas as pd
from scipy.linalg import cholesky
# path_psig, path_gamg, path_fg, path_gg, path_bhg, path_ghg, path_deltag, path_llg
###### Important Options
np.random.seed(1028)
UPDATE = False # <- update from last MCMC
FIX = True # <- if FIX, then fix the sigma.a, omega.b (sigma, delta) as 1.
VV = 1
# Total Iteration = burnin + mcmc
burnin = 0
mcmc = 200000
target_mh = 0.25
SIG = 1
if sys.argv[1] == "soda":
PATH = "carb/"
else:
PATH = "yogurt/"
ff = sys.argv[1]
MODEL = "SSDF_Simu"
if sys.argv[2] == '0': MODEL = MODEL + '_FIX'
PATH = "sims6/s%i/" % int(sys.argv[2]) # <- Simulation
ff = "sims6"
if UPDATE:
FN = PATH + MODEL + "_update.npz"
LOGFILE = "Logs/%s_%s_update.log" % (MODEL, ff)
else:
FN = PATH + MODEL + ".npz"
LOGFILE = "Logs/sims6/%s_%s_%s.log" % (MODEL, ff, sys.argv[2])
logging.basicConfig(level=logging.INFO,
filename=LOGFILE,
format="%(asctime)s %(levelname)-7s %(message)s")
with open(LOGFILE, "w"):
pass
if not os.path.isdir(PATH + MODEL):
os.mkdir(PATH + MODEL)
if not os.path.isdir(PATH + MODEL + '/MCMC'):
os.mkdir(PATH + MODEL + '/MCMC')
if not os.path.isdir('Logs'):
os.mkdir('Logs')
PATH_PSIG = PATH + MODEL + '/MCMC/psig.csv'
PATH_GAMG = PATH + MODEL + '/MCMC/gamg.csv'
PATH_FG = PATH + MODEL + '/MCMC/fg.csv'
PATH_GG = PATH + MODEL + '/MCMC/gg.csv'
PATH_BHG = PATH + MODEL + '/MCMC/bhg.csv'
PATH_GHG = PATH + MODEL + '/MCMC/ghg.csv'
PATH_DELTAG = PATH + MODEL + '/MCMC/deltag.csv'
PATH_SIGMAG = PATH + MODEL + '/MCMC/sigmag.csv'
PATH_LLG = PATH + MODEL + '/MCMC/llg.csv'
PATH_PHIG = PATH + MODEL + '/MCMC/phig.csv'
PATH_VBG = PATH + MODEL + '/MCMC/vbg.csv'
PATH_VGAM = PATH + MODEL + '/MCMC/vgam.csv'
PATH_GAMMU = PATH + MODEL + '/MCMC/gammu.csv'
PATH_GAMMUH = PATH + MODEL + '/MCMC/gammuh.csv'
PATH_VPSI = PATH + MODEL + '/MCMC/vpsi.csv'
PATH_PSIMU = PATH + MODEL + '/MCMC/psimu.csv'
PATH_PSIMUH = PATH + MODEL + '/MCMC/psimuh.csv'
PATH_BETAG = PATH + MODEL + '/MCMC/betag.csv'
PATH_KG = PATH + MODEL + '/MCMC/kg.csv'
# truncate all MCMC output files
for p in (PATH_PSIG, PATH_GAMG, PATH_FG, PATH_GG, PATH_BHG, PATH_GHG,
          PATH_DELTAG, PATH_SIGMAG, PATH_LLG, PATH_PHIG, PATH_VBG, PATH_VGAM,
          PATH_GAMMU, PATH_GAMMUH, PATH_VPSI, PATH_PSIMU, PATH_PSIMUH,
          PATH_KG, PATH_BETAG):
    with open(p, 'w'):
        pass
list_of_files = sorted(glob.glob(PATH + 'demand/*.csv'))
list_of_files2 = sorted(glob.glob(PATH + 'prices/*.csv'))
# C = pd.read_csv(PATH + "c.csv").iloc[:, 2:]
## Remove last one day
demand = [np.loadtxt(f, skiprows=1, delimiter=",") for f in list_of_files]
prices = [np.loadtxt(f, skiprows=1, delimiter=",") for f in list_of_files2]
# demand = [np.loadtxt(f, skiprows=1, delimiter=",") for f in list_of_files[:50]]
# prices = [np.loadtxt(f, skiprows=1, delimiter=",") for f in list_of_files2[:50]]
demand = [d[:, :] for d in demand]
prices = [p[:, :] for p in prices]
# demand = [d[:-1, :] for d in demand]
# prices = [p[:-1, :] for p in prices]
##################### 2. Functions
###prior : 0 = pu0 , 1 = pv0, 2 = gu0 , 3 = gv0 , 4 = tpp , 5 = tpg
def rwmh(de, pr, x, ll, alpha, f, psi, g, kt, r, betai, zphi, bh, gh, abh, sigma, delta, vbi, tpa, tpp,
llb1, sdm, gammu, vgam, gamm, psimu, vpsi, psim, household):
## alpha <- gamma ## a <- alpha
k = de.shape[1] ###
kp = k - 1
af = f.dot(gh.T) # <- gh.T like t(gh) in R
bg = g.dot(bh.T)
af2 = af + gammu
bg2 = bg + psimu
n = psi.shape[0]
ite = np.zeros((n, 2)) ### num of mcmc iteration
## sampling psi,gamma
#"""
for i in range(n):
## -------------
## M-H Sampling
rpsi = psi[i:(i + 1)].dot(C.T).reshape(-1)
rpsi[-1] = 0
rnalpha = alpha[i:(i + 1)].dot(C.T).reshape(-1)
llb = ll[i] + logpdf(alpha[i, :], af2[i, :], sigma)
nalpha = alpha[i] + mvn(np.identity(ck) * tpa[i])
rnalpha = nalpha.reshape(1, -1).dot(C.T).reshape(-1)
nll = mdcev(rpsi, rnalpha, de[i, :], pr[i, :], SIG)
nllb = nll + logpdf(nalpha, af2[i, :], sigma )
if nllb - llb > np.log(random.uniform(0, 1)):
alpha[i] = nalpha
ll[i] = nll
llb1[i] = nllb
ite[i, 0] += 1
else:
rnalpha = alpha[i:(i + 1)].dot(C.T).reshape(-1)
llb = ll[i] + logpdf(psi[i], bg2[i], delta)
npsi = psi[i] + mvn(np.identity(ck) * tpp[i])
rnpsi = npsi.reshape(1, -1).dot(C.T).reshape(-1)
rnpsi[-1] = 0
nll = mdcev(rnpsi, rnalpha, de[i, :], pr[i, :], SIG)
nllb = nll + logpdf(npsi, bg2[i, :], delta)
if nllb - llb > np.log(random.uniform(0, 1)):
psi[i] = npsi.transpose()
ll[i] = nll
llb1[i] = nllb
ite[i, 1] += 1
#"""
# gammu
y = alpha - af
y2 = psi - bg
for i in range(1, ck):
gammu[i] = bmreg1(y[:, i:(i + 1)], np.ones((n, 1)), None, 1, gamm[i], vgam[i, i], e=False)
for i in range(ck):
psimu[i] = bmreg1(y2[:, i:(i + 1)], np.ones((n, 1)), None, 1, psim[i], vpsi[i, i], e=False)
#psimu[:] = 0
#gammu[:] = 0
### f & g
gmat = np.zeros((kl, kl))
gmat[0, 0] = 1
gmat[1:, 0] = betai
#gmat = np.eye(3)
w = np.identity(3) #* (0.01 ** 2)
r = 0
if ck == 12:
theta, kt = FFBS(np.concatenate([alpha - gammu, psi - psimu], axis=1), abh, sdm , gmat, w, m0, c0, kt, r)
else:
theta, kt = FFBS(np.concatenate([alpha - gammu, (psi - psimu)[:,:-1]], axis=1), abh, sdm, gmat, w, m0, c0, kt, r)
f = theta[:, 0:(1)]
g = theta[:, 1:(1 + 2)]
## delta_ht
xf = f[:-1][kt[1:] == 1] # Collect f_h(t-1) when f_h(t-1) > 0
for j in range(2):
# yg = g[kt == 1, j]
yg = g[1:][kt[1:] == 1, j] # collect g_h(t) when f_h(t-1) > 0
if yg.shape[0] > 0:
bs = np.linalg.inv(vbi[j, j] + xf.T.dot(xf) / vg[j, j])
bm = bs.dot(vbi[j, j] * zphi[j] + (xf.T.dot(yg)) / vg[j, j])
else: # when all f_h(t) < 0, then sample from delta_bar
bs = [1 / (vbi[j, j])]
bm = zphi[j]
betai[j] = bm + mvn([bs])
return [alpha, f, psi, g, ll, ite, kt, betai, gammu, psimu, llb1]
def FFBS(data, F, V, G, W, m0, c0, kt, r):
# W: system (state) noise covariance, V: observation noise covariance
# State evolution: theta_t = G * theta_{t-1} + w_t; observation: y_t = F * theta_t + v_t
t = data.shape[0]
m = data.shape[1]
p = F.shape[1]
m = int(p * (p + 1) / 2)
mt = np.zeros((t, p))
ct = np.zeros((t, m))
m_t = m0.copy()
Ct = c0.copy()
k_t = 1
kt = kt.reshape(kt.shape[0])
Ft = F.copy()
atrow = np.zeros((t, p, 1))
rtrow = np.zeros((t, p, p))
gtrow = np.zeros((t, p, p))
ctrow = np.zeros((t, p, p))
wtrow = np.zeros((t, p, p))
rtmat = np.zeros((t, p, p))
theta = np.zeros((t, p))
## Forward Filtering
for i in range(t):
kt[i] = k_t
if k_t == 1:
Gt = G.copy()
Wt = W.copy()
rtrow[i] = np.eye(3)
else:
Gt = np.identity(3)
#Gt[0, 0] = 0
Wt = W.copy()
#Gt = G.copy()
Wt[1:,1:] = np.eye(2) * 0
if i == 0:
Gt = np.identity(3) #* 0
######################################################
### predict theta t|y-1
at = Gt.dot(m_t)
Rt = Gt.dot(Ct).dot(Gt.T) + Wt
sft = Ft.dot(at)
Qt = Ft.dot(Rt).dot(Ft.T) + V
Qti = np.linalg.inv(Qt)
KGt = Rt.dot(Ft.T).dot(Qti)
m_t = at + KGt.dot(data[i].reshape((cmn, 1)) - sft)
atrow[i] = at.copy()
rtrow[i] = Rt.copy()
gtrow[i] = Gt.copy()
# Ct = Rt - KGt.dot(Ft).dot(Rt)
Ct = np.round(Rt - KGt.dot(Ft).dot(Rt), 10)
mt[i] = m_t.transpose()
ctrow[i] = Ct.copy()
wtrow[i] = Wt.copy()
if m_t[0] > r:
k_t = 1
else:
k_t = 0
"""
try:
if Ct[1, 1] == 0:
theta[i] = m_t.reshape(-1)
theta[i, 0] += mvn([[Ct[0, 0]]]) # cholesky([H_t[0,0]], lower=True).dot(np.random.rand(1))
else:
theta[i] = m_t.reshape(-1) + mvn(
Ct) # + cholesky(H_t, lower=True).dot(np.random.rand(p))#rtrow[i].dot()
except:
print(H_t, kt[i], i)
1 + "a"
"""
## Back_Sampling
H_t = ctrow[-1]
h_t = mt[-1]
for i in range(t - 1, -1, -1):
try:
if H_t[1, 1] == 0:
theta[i] = h_t.reshape(-1)
theta[i, 0] += mvn([[H_t[0, 0]]]) # cholesky([H_t[0,0]], lower=True).dot(np.random.rand(1))
else:
theta[i] = h_t.reshape(-1) + mvn(
H_t) # + cholesky(H_t, lower=True).dot(np.random.rand(p))#rtrow[i].dot()
except:
print(H_t, kt[i], i)
raise
if i != 0:
Gt = gtrow[i]
a_t = atrow[i]
m_t = mt[i - 1]
C_t = ctrow[i - 1]
R_tp1 = rtrow[(i)]
CGRi = C_t.dot(Gt.T).dot(np.linalg.inv(R_tp1))
h_t = m_t.reshape(p, 1) + CGRi.dot(theta[i].reshape(p, 1) - a_t.reshape(p, 1))
H_t = C_t - CGRi.dot(Gt).dot(C_t)
H_t = np.round(H_t, 10)
return theta, kt
# Form a shared array and a lock, to protect access to shared memory.
k = demand[1].shape[1]
kp = k - 1
# ck = C.shape[1]
h = len(demand)
if k == 35:
try:
C = np.array(pd.read_csv(PATH + "c.csv").iloc[:, 2:])
except:
C = np.array(pd.read_csv("sims6/c.csv").iloc[:, 2:])
else:
C = np.eye(k)
ck = C.shape[1]
ckp = ck - 1
psia = np.zeros((h, ck))
psid = {}
glist = list()
start = 0
ind = np.zeros((h, 2)) # np.array([0] * h * 2).reshape((h, 2))
psih = []
gamh = []
for i in range(h):
# psid[i] = np.zeros((demand[i].shape[0], ck))
ind[i, 0] = start
ind[i, 1] = start + demand[i].shape[0]
glist.append(range(start, start + demand[i].shape[0]))
start = start + demand[i].shape[0]
# print(glist)
# print(ind)
# psi = shared_array((t,k))
# psi = np.zeros((t,k))#np.concatenate([psid[x] for x in sorted(psid)], 0) #+ 0.01
dems = demand[0]
pris = prices[0]
cg = np.eye(ck)
for i in range(1, h):
dems = np.vstack((dems, demand[i]))
pris = np.vstack((pris, prices[i]))
dems = np.array(dems).reshape((-1, k))
pris = np.array(pris).reshape((-1, k))
t = dems.shape[0]
psi = np.zeros((t, ck)) # np.concatenate([psid[x] for x in sorted(psid)], 0) #+ 0.01
if len(sys.argv) > 2:
c = int(sys.argv[2])
else:
c = 4
thin = 1
nmcmc = burnin + thin * mcmc
### 3. Prior Setting
m_psi = 0
tau_psi = 1
m_gamma = 0
tau_gamma = 1
### 4. Initial Value
zdata = np.array([1.] * h).reshape((h, 1))
rankz = zdata.shape[1]
gu0 = np.zeros((ck, 1)) + 0.01
for i0 in range(1):
gu0[i0, i0] = 1
gv0 = np.diag([100.] * 1)
gv0i = np.linalg.inv(gv0)
ss0 = 2
sr0 = 2
srn = sr0 + t
bu0 = np.zeros((ck, 2)) + 0.01
for i in range(2):
bu0[i, i] = 1
bv0 = np.diag([100.] * 2)
bv0i = np.linalg.inv(bv0)
ds0 = 2
dr0 = 2
drn = dr0 + t
pu0 = np.zeros((1, 1))
pv0 = np.diag([100.] * 1)
pv0i = np.linalg.inv(pv0)
pv0iu0 = pv0i.dot(pu0)
##############vb
f0 = 1 + 2
f0n = f0 + h
g0 = np.diag([f0]) * 100
tpa = np.zeros(t) + 0.05
tpp = np.zeros(t) + 0.05
# tpa = np.diag([0.1] * k)
# tpp = np.diag([0.1] * kp)
gama = np.zeros((t, ck))
gamm = np.zeros((t, ck))
gams = np.zeros((t, ck))
gh = dc(gu0)
fa = np.zeros((t, 1))
fm = np.zeros((t, 1))
fs = np.zeros((t, 1))
sigma = np.diag([1.] * ck)
sigmai = np.linalg.inv(sigma)
### baseline
psia = np.zeros((t, ck))
psim = np.zeros((t, ck))
psis = np.zeros((t, ck))
bh = dc(bu0)
ga = np.zeros((t, 2))
gm = np.zeros((t, 2))
gs = np.zeros((t, 2))
vg = np.identity(2) #* 0.01
delta = np.diag([1.] * ck)
deltai = np.linalg.inv(delta)
betaa = np.zeros((h, 2)) + 0.01
betam = np.zeros((h, 2))
betas = np.zeros((h, 2))
phi = np.zeros((1, 2))
vb = np.diag([1.] * 2) #* 0.01
vbi = np.linalg.inv(vb)
m = prices[0].shape[0]
mn = m - 1
if ck != 12:
cmn = ck * 2 - 1
else:
cmn = ck * 2
kl = 1 + 2
m0 = np.zeros((kl, 1))
c0 = np.identity(kl) # * 100
# print("c0",c0)
abh = np.zeros((cmn, kl))
sdm = np.diag([1.] * cmn)
kt = np.ones(t)
# kt = np.array([1.]*t).reshape((t,1))
ktm = np.zeros(t)
kts = dc(kt)
r = np.zeros((h, 1))
lla = np.zeros((t, 1))
llb = np.zeros((t, 1))
rpsi = psi.dot(C.T)
rgama = gama.dot(C.T)
for i in range(t):
lla[i] = mdcev(rpsi[i], rgama[i], dems[i], pris[i], SIG)
art = np.zeros((t, 2))
itea = np.array([0] * h * 2).reshape((h, 2))
# v = np.diag([0.1] * ck)
# iv = np.linalg.inv(v)
## psi_mu, gam_mu
gammu = np.zeros((h, ck))
gammum = np.zeros((h, ck))
gammus = np.zeros((h, ck))
gamhmu = np.zeros(ck)
gamhmum = np.zeros(ck)
gamhmus = np.zeros(ck)
psimu = np.zeros((h, ck))
psimum = np.zeros((h, ck))
psimus = np.zeros((h, ck))
psihmu = np.zeros(ck)
psihmum = np.zeros(ck)
psihmus = np.zeros(ck)
vgam = np.identity(ck)
vpsi = np.identity(ck)
executor = ProcessPoolExecutor()
start = time.time()
psilogm = np.zeros((t, ck))
gamlogm = np.zeros((t, ck))
psilogs = np.zeros((t, ck))
gamlogs = np.zeros((t, ck))
if __name__ == '__main__':
# betaa = np.array(pd.read_csv(PATH + "betas.csv"))[:, 1:]
init = True
#psi = np.array(pd.read_csv(PATH + "psis.csv"))[:, 1:] # + np.random.normal(()) # Remove Index
#gama = np.array(pd.read_csv(PATH + "gams.csv"))[:, 1:]
# init = False
if init:
try:
psi = np.array(pd.read_csv(PATH + "psis.csv"))[:, 1:] # + np.random.normal(()) # Remove Index
gama = np.array(pd.read_csv(PATH + "gams.csv"))[:, 1:]
betaa = np.array(pd.read_csv(PATH + "betas.csv"))[:, 1:]
fa = np.array(pd.read_csv(PATH + "f.csv"))
ga = np.array(pd.read_csv(PATH + "g.csv"))
gh = np.array(pd.read_csv(PATH + "a.csv"))[:, 1:] # Remove Index
bh = np.array(pd.read_csv(PATH + "b.csv"))[:, 1:] # Remove Index
phi = np.array([[-0.5, 0.5]])
except:
psi = np.array(pd.read_csv('sims6/' + "psis.csv"))[:, 1:] # Remove Index
gama = np.array(pd.read_csv('sims6/' + "gams.csv"))[:, 1:]
betaa = np.array(pd.read_csv('sims6/' + "betas.csv"))[:, 1:]
fa = np.array(pd.read_csv('sims6/' + "f.csv"))
ga = np.array(pd.read_csv('sims6/' + "g.csv"))
gh = np.array(pd.read_csv('sims6/' + "a.csv"))[:, 1:] # Remove Index
bh = np.array(pd.read_csv('sims6/' + "b.csv"))[:, 1:] # Remove Index
tgh = gh.copy()
tbh = bh.copy()
for imcmc in range(nmcmc):
if imcmc % 10 == 0: # print(imcmc); print(bh); print(phi); print(vb)
print("Iter: %i, LML: %.1f" % (imcmc, lla.sum()))
print("spend time:", time.time() - start)
print('abh', abh)
print('fa', np.concatenate([fa, ga], 1)[:40])
# print("psim",phi,"\n",betaa)
# print("simga",np.diag(sigma), "delta", np.diag(delta))
# print("Iter: %i, POS: %.1f" % (imcmc, llb.sum()))
if imcmc % 100 == 0:
logging.info("Iter: %i" % imcmc)
logging.info("spend time: %.3f" % (time.time() - start))
logging.info("lml:%.3f" % lla.sum())
logging.info(phi)
logging.info(abh)
logging.info(psihmu)
logging.info(gamhmu)
start = time.time()
itea = np.zeros((t, 2))
zphi = zdata.dot(phi)
futures = []
for i in range(h):
futures.append(
executor.submit(rwmh, demand[i], prices[i], 1, lla[glist[i]], gama[glist[i], :], fa[glist[i], :],
psi[glist[i], :], ga[glist[i], :], kt[glist[i]], r[i], betaa[i, :].transpose(),
zphi[i, :].transpose(), bh, gh, abh, sigma, delta, vbi, tpa[glist[i]], tpp[glist[i]],
llb[glist[i]], sdm, gammu[i, :], vgam, gamhmu, psimu[i, :], vpsi, psihmu, i))
for i in range(h):
# alpha,f,psi,g,ll,ite,kt,betai.T,p,a
gama[glist[i], :], fa[glist[i], :], psi[glist[i], :], ga[glist[i], :], lla[glist[i]], itea[glist[i], :], kt[
glist[i]], betaa[i, :], gammu[i, :], psimu[i, :], llb[glist[i]] = futures[i].result()
art += itea
gamab = gama.copy()
psib = psi.copy()
for i in range(h):
gamab[glist[i]] -= gammu[i]
psib[glist[i]] -= psimu[i]
# gamab = gama - np.vstack( [np.repeat(gammu[i], len(gg), 0) for i,gg in enumerate(glist)])
# psib = psi - np.vstack( [ [np.repeat(psimu[i], len(gg), 0) for i,gg in enumerate(glist)], np.zeros((t, 1))])
for i in range(ck):
if i == 0:
res1 = psib[:, i:(i + 1)] - ga.dot(bh[i:(i + 1), :].T)
delta[i, i] = ss.invgamma(a=(0.5 + t) / 2, scale=(0.5 + res1.T.dot(res1)) / 2).rvs()
res2 = gamab[:, i:(i + 1)] - fa.dot(gh[i:(i + 1), :].T)
sigma[i, i] = ss.invgamma(a=(0.5 + t) / 2, scale=(0.5 + res2.T.dot(res2)) / 2).rvs()
elif i == 1:
bh[i, 0], delta[i, i] = bmreg1(psib[:, i:(i + 1)] - ga[:, 1:2], ga[:, 0:1], None,
delta[i, i])
gh[i], sigma[i, i] = bmreg1(gamab[:, i:(i + 1)], fa, None, sigma[i, i])
else:
# if i != k - 1:
# bh[i, 0], delta[i, i] = bmreg1(psib[:, i:(i + 1)] - ga[:, 1:2], ga[:, 0:1], None,delta[i, i])
if i != k - 1:
bh[i], delta[i, i] = bmreg1(psib[:, i:(i + 1)], ga, None, delta[i, i],)
gh[i], sigma[i, i] = bmreg1(gamab[:, i:(i + 1)], fa, None, sigma[i, i])
#bh[:,0] = tbh[:,0].copy()
""" Fix to 0.01
for i in range(ck):
if i == 0:
res1 = psib[:, i:(i + 1)] - ga.dot(bh[i:(i + 1), :].T)
delta[i, i] = ss.invgamma(a=(0.5 + t) / 2, scale=(0.5 + res1.T.dot(res1)) / 2).rvs()
res2 = gamab[:, i:(i + 1)] - fa.dot(gh[i:(i + 1), :].T)
sigma[i, i] = ss.invgamma(a=(0.5 + t) / 2, scale=(0.5 + res2.T.dot(res2)) / 2).rvs()
elif i == 1:
bh[i, 0], delta[i, i] = bmreg1(psib[:, i:(i + 1)] - ga[:, 1:2], ga[:, 0:1], None,
0.01)
gh[i], sigma[i, i] = bmreg1(gamab[:, i:(i + 1)], fa, None, 0.01)
else:
if i != k - 1:
bh[i], delta[i, i] = bmreg(psib[:, i:(i + 1)], ga, None, 0.01)
gh[i], sigma[i, i] = bmreg(gamab[:, i:(i + 1)], fa, None, 0.01)
"""
""" Fix prior to True Value
for i in range(ck):
if i == 0:
res1 = psib[:, i:(i + 1)] - ga.dot(bh[i:(i + 1), :].T)
delta[i, i] = ss.invgamma(a=(0.5 + t) / 2, scale=(0.5 + res1.T.dot(res1)) / 2).rvs()
res2 = gamab[:, i:(i + 1)] - fa.dot(gh[i:(i + 1), :].T)
sigma[i, i] = ss.invgamma(a=(0.5 + t) / 2, scale=(0.5 + res2.T.dot(res2)) / 2).rvs()
elif i == 1:
bh[i, 0], delta[i, i] = bmreg1(psib[:, i:(i + 1)] - ga[:, 1:2], ga[:, 0:1], None,
delta[i, i], tbh[i, 0].reshape(1,1), np.eye(1) *0.01)
gh[i], sigma[i, i] = bmreg1(gamab[:, i:(i + 1)], fa, None, sigma[i, i], tgh[i].reshape(1,1), np.eye(1) *0.01 )
else:
# if i != k - 1:
bh[i], delta[i, i] = bmreg(psib[:, i:(i + 1)], ga, None, delta[i, i], tbh[i].reshape(2,1), np.eye(2) * 0.01)
gh[i], sigma[i, i] = bmreg(gamab[:, i:(i + 1)], fa, None, sigma[i, i],tgh[i].reshape(1,1), np.eye(1) *0.01)
"""
if FIX:
sigma = np.identity(ck) * VV # * 0.1
delta = np.identity(ck) * VV # * 0.1
for i in range(1, ck):
gamhmu[i], vgam[i, i] = bmreg1(gammu[:, i:(i + 1)], np.ones((h, 1)), None, vgam[i, i])
# ************
#for i in range(ck ):
for i in range(ck-1):
psihmu[i], vpsi[i, i] = bmreg1(psimu[:, i:(i + 1)], np.ones((h, 1)), None, vpsi[i, i])
#gamhmu[:] = 0
#psihmu[:] = 0
# print(np.diag(sigma))
# print(np.diag(delta))
if ck == 12:
sdm = np.zeros((ck * 2, ck * 2))
sdm[:ck, :ck] = sigma.copy()
sdm[ck:, ck:] = delta.copy()
abh[0:(ck), 0] = gh.reshape((ck))
abh[ck:(ck * 2), 1:(1 + 2)] = bh[0:ck, :]
else:
sdm = np.zeros((ck * 2 - 1, ck * 2 - 1))
sdm[:(ck-1), :(ck-1)] = sigma[:-1,:-1].copy()
sdm[(ck-1):, (ck-1):] = delta.copy()
abh[0:(ck), 0] = gh.reshape((ck))
abh[ck:, 1:] = bh[0:(ck-1), :]
## phi & vb
for j in range(2):
phi[0, j], vb[j, j] = bmreg1(betaa[:, j].reshape((h, 1)), zdata, phi, vb[j,j])#vb[j,j])
#phi = np.array([[-0.5, 0.5]])
#vb = np.identity(2) * 0.1
vbi = np.linalg.inv(vb)
# Update Shock
# Adapt random-walk proposal scales toward the target M-H acceptance
# rate (target_mh), using acceptance counts from the last 10 iterations
if imcmc < burnin and imcmc % 10 == 0:
t_a = art[:, 0] / 10
t_p = art[:, 1] / 10
tpa = tpa * np.sqrt(1 + (t_a - target_mh))
tpp = tpp * np.sqrt(1 + (t_p - target_mh))
art = np.zeros((t, 2))
# print(tpp,tpa)
if imcmc >= burnin: #Save
jmcmc = int((imcmc - burnin) / thin)
lmcmc = jmcmc - int((mcmc / 2))
if lmcmc >= 0:
psilogm += psi.copy()
psilogs += psi ** 2
gamlogm += gama.copy()
gamlogs += gama ** 2
gamm += gama.copy()
gams += gama ** 2
fm += fa.copy()
fs += fa ** 2
## baseline
psim += psi.copy()
psis += psi ** 2
gm += ga
gs += ga ** 2
betam += betaa.copy()
betas += betaa ** 2
ktm += kt
gammum += gammu.copy()
gammus += gammu ** 2
psimum += psimu
psimus += psimu ** 2
with open(PATH_PSIG, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(np.mean(psi, axis=0))
with open(PATH_GAMG, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(np.mean(gama, axis=0))
# with open(PATH_FG, 'a', newline='') as f:
# writer = csv.writer(f)
# writer.writerow(fa.reshape(-1))
# with open(PATH_GG, 'a', newline='') as f:
# writer = csv.writer(f)
# writer.writerow(ga.reshape(-1))
with open(PATH_BHG, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(bh.reshape(-1))
with open(PATH_GHG, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(gh.reshape(-1))
with open(PATH_SIGMAG, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(np.diag(sigma))
with open(PATH_DELTAG, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(np.diag(delta))
with open(PATH_VBG, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(vb.reshape(-1))
# with open(PATH_LLG, 'a', newline='') as f:
# writer = csv.writer(f)
# writer.writerow(lla.reshape(-1))
with open(PATH_PHIG, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(phi.reshape(-1))
with open(PATH_VGAM, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(np.diag(vgam))
with open(PATH_GAMMU, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(gamhmu)
# with open(PATH_GAMMUH, 'a', newline='') as f:
# writer = csv.writer(f)
# writer.writerow(gammu.reshape(-1))
with open(PATH_VPSI, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(np.diag(vpsi))
with open(PATH_PSIMU, 'a', newline='') as f:
writer = csv.writer(f)
writer.writerow(psihmu)
# with open(PATH_PSIMUH, 'a', newline='') as f:
# writer = csv.writer(f)
# writer.writerow(psimu.reshape(-1))
# with open(PATH_BETAG, 'a', newline='') as f:
# writer = csv.writer(f)
# writer.writerow(betaa.reshape(-1))
# with open(PATH_KG, 'a', newline='') as f:
# writer = csv.writer(f)
# writer.writerow(kt.reshape(-1))
if jmcmc % 100 == 0:
np.savez(FN, psim=psim, psis=psis, ktm=ktm, fm=fm, fs=fs, gm=gm, gs=gs,
gamm=gamm, gams=gams, mcmc=mcmc,
betam=betam, betas=betas, glist=glist, dems=dems, pris=pris, ind=ind, gammum=gammum,
gammus=gammus, psimum=psimum, psimus=psimus, fa=fa, ga=ga, kt=kt, psi=psi, gamma=gama,
betaa=betaa,
bh=bh, gh=gh)
# print("spend time:", time.time() - start)
np.savez(FN, psim=psim, psis=psis, ktm=ktm, fm=fm, fs=fs, gm=gm, gs=gs,
gamm=gamm, gams=gams, mcmc=mcmc,
betam=betam, betas=betas, glist=glist, dems=dems, pris=pris, ind=ind, gammum=gammum, gammus=gammus,
psimum=psimum, psimus=psimus, psilogm=psilogm, psilogs=psilogs, gamlogm=gamlogm, gamlogs=gamlogs, fa=fa,
ga=ga)
'''
res = rmultireg(y=psia[:,0:kp],x=zdata,bbar=tu0,a=tv0,nu=pf0,v=pg0,n=1)
print("B:",res["B"])
print("Sigma:", res["Sigma"][0])
print( xpnd(np.array([1,2,3,4,5,6]) ) )
print(rwishart(nu=pf0,v=pg0))
'''
# Functions
import numpy as np
from math import gamma, sqrt
from scipy.stats import invgamma
## Common Functions
# Likelihood
def mdcev(psid, gamd, dem, pri, sigma=1):
x = np.array(dem)
p = np.array(pri)
k = x.shape[0]
n = 1
expgam = np.array(np.exp(gamd))
temp = psid
temp2 = expgam
v = temp - np.log(x * temp2 + 1) - np.log(p)
xp = x > 0
vi = np.copy(v)
xi = np.copy(x)
mv = xp.sum() ## 0 = col 1 = row
lli = (vi[xp]/sigma).sum() - mv * np.log(np.exp(vi/sigma).sum()) + np.log(gamma(mv)) - np.log(sigma ** (mv-1))
gamj = expgam[xp]
pij = p[xp]
xij = xi[xp]
jacv = 1 / (xij + 1/gamj) ##jacobian
ll = lli + np.log(jacv).sum() + np.log((pij / jacv).sum())
# print("sumll",sumll)
return ll
def h2t(x, l):
res = None
for h, i in zip(x, l):
if res is None:
res = np.tile(h, (i, 1))
else:
res = np.concatenate([res, np.tile(h, (i, 1))], axis=0)
return res
# Fast sampling random value from multivariate-normal distribution
def mvn(cov):
L = np.linalg.cholesky(cov)
z = np.random.standard_normal(len(cov))
return np.dot(L, z)
# Fast Logpdf from MVN
def logpdf(x, mean, cov):
# `eigh` assumes the matrix is Hermitian.
vals, vecs = np.linalg.eigh(cov)
logdet = np.sum(np.log(vals))
valsinv = np.array([1./v for v in vals])
# `vecs` is R times D while `vals` is a R-vector where R is the matrix
# rank. The asterisk performs element-wise multiplication.
U = vecs * np.sqrt(valsinv)
rank = len(vals)
dev = x - mean
# "maha" for "Mahalanobis distance".
maha = np.square(np.dot(dev, U)).sum()
log2pi = np.log(2 * np.pi)
return -0.5 * (rank * log2pi + maha + logdet)
#cholesky
def chol(x):
if (x.shape[1] == 1 and x.shape[0] == 1):
x = np.array(np.sqrt(x)).reshape((1, 1))
else:
x = np.linalg.cholesky(x)
return x
# xpnd function in R
def xpnd(x):
dim = int((-1 + sqrt(1 + 8 * len(x))) / 2)
new = np.zeros((dim, dim))
inds = np.tril_indices_from(new)
new[inds] = x
new[(inds[1], inds[0])] = x
return new
# Bayes Reg
def bmreg(y, x, theta=None, Lambda=None, u0=None, v0=None, f0=None, g0=None):
n = y.shape[0]
m = y.shape[1]
k = x.shape[1]
l = m * k
if u0 is None: u0 = np.zeros((k, m))
if v0 is None: v0 = np.eye(k) * 100
if f0 is None: f0 = k
if g0 is None: g0 = np.eye(m)
try:
lambdai = np.linalg.inv(Lambda)
except:
lambdai = 1 / Lambda
v0i = np.linalg.inv(v0)
var = np.linalg.inv(x.transpose().dot(x) * lambdai + v0i)
mean = var.dot((x.transpose() * lambdai).dot(y) + v0i.dot(u0)).reshape(k)
s_theta = (mean + mvn(var)).reshape(k,m)
## generate lambda
res = y - x.dot(s_theta)
s_lambda = invgamma(a=1 + n/2, scale=1 + (res.T.dot(res)) / 2).rvs()
return [s_theta.reshape(k), s_lambda]
#
def bmreg1(y, x, theta=None, Lambda=None, u0=None, v0=None, e=True):
n = y.shape[0]
m = y.shape[1]
k = x.shape[1]
l = m * k
if u0 is None: u0 = np.zeros((k, m))
if v0 is None: v0 = 100
lambdai = 1 / Lambda
v0i = 1/ v0
var = np.linalg.inv(x.T.dot(x) * lambdai + v0i)
mean = var.dot((x.T * lambdai).dot(y) + u0*v0i).reshape(k)
## generate lambda
if e:
s_theta = (mean + mvn(var)).reshape(k, m)
res = y - x.dot(s_theta)
s_lambda = invgamma(a=0.5 + n/2, scale=0.5 + (res.T.dot(res)) / 2).rvs()
return [s_theta.reshape(k), s_lambda]
else:
return mean + mvn(var)
# For multiple core
def divide_work(h, coren):
part = []
rest = h % coren
o = int((h - rest) / coren)
temp = 0
for i in range(coren):
if i == (coren - 1):
temp = rest
part.append(range(i * o, i * o + o + temp))
return part
```
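The `logpdf` helper above computes a multivariate-normal log-density via an eigendecomposition of the covariance. As a sanity check, with an identity covariance it should reduce to the standard-normal closed form. A self-contained sketch (the function below mirrors the helper in this script and is illustrative, not part of the original pipeline):

```python
import numpy as np

def logpdf(x, mean, cov):
    # Eigendecomposition of the (symmetric) covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    logdet = np.sum(np.log(vals))
    U = vecs * np.sqrt(1.0 / vals)          # whitening transform
    dev = x - mean
    maha = np.square(np.dot(dev, U)).sum()  # squared Mahalanobis distance
    return -0.5 * (len(vals) * np.log(2 * np.pi) + maha + logdet)

# With cov = I the density is the product of univariate standard normals:
x = np.array([0.5, -1.0, 2.0])
expected = -0.5 * (3 * np.log(2 * np.pi) + np.dot(x, x))
assert np.isclose(logpdf(x, np.zeros(3), np.eye(3)), expected)
```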
<a href="https://colab.research.google.com/github/Pradyumna1312/ML_SelfStudy/blob/main/ML_SelfStudy_RF.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Using the iris flower dataset, create a Random Forest model in Python to classify the species of a flower. The dataset includes the sepal length, sepal width, petal length, petal width, and species. The three species (classes) are setosa, versicolor, and virginica. You can find the dataset in the scikit-learn package or download it from the UCI Machine Learning Repository.
Implement **Random Forest Algorithm** in Python
* Import the datasets
```
import numpy as np
import pandas as pd
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_iris
import sklearn.metrics as metrics
```
* Print the labels and feature names
```
iris_data=load_iris()
iris=pd.DataFrame(iris_data.data)
print("IRIS Target names", iris_data.target_names)
print("IRIS Features name", iris_data.feature_names)
```
* Separate the columns into dependent and independent variables
```
X=iris.values
Y=iris_data.target
```
* Split those variables into a training and test set (70% and 30% respectively)
* Train the model on the training set.
* Perform predictions on the test set.
* Predict the accuracy of the model
* Make a prediction for the input sample: sepal length = 4, sepal width = 3, petal length =
5, petal width = 1.5
```
X_train, X_test, Y_train, Y_test= train_test_split(X,Y,test_size=0.3, random_state=0) # Splitting
clf=RandomForestClassifier(random_state=0)
clf.fit(X_train, Y_train) # Training
Y_pred=clf.predict(X_test) # Predicting
print("Accuracy of the model: ",metrics.accuracy_score(Y_test, Y_pred)) # Accuracy
print("Prediction through Random Forest =",clf.predict([[4,3,5,1.5]])) # Prediction for the given
```
Repeat the above steps by reducing each iris species to 25 samples and comment on the class
prediction accuracy.
```
# Keep the first 25 samples of each species (the classes are stored in blocks of 50)
X2 = np.concatenate([X[0:25], X[50:75], X[100:125]])
Y2 = np.concatenate([Y[0:25], Y[50:75], Y[100:125]])
X_train, X_test, Y_train, Y_test= train_test_split(X2,Y2,test_size=0.3, random_state=0) # Splitting
clf=RandomForestClassifier(random_state=0)
clf.fit(X_train, Y_train) # Training
Y_pred=clf.predict(X_test) # Predicting
print("Accuracy of the model: ",metrics.accuracy_score(Y_test, Y_pred)) # Accuracy
print("Prediction through Random Forest =",clf.predict([[4,3,5,1.5]])) # Prediction for the given
```
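When commenting on class prediction accuracy with only 25 samples per species, the overall accuracy can hide which class suffers most. A minimal NumPy-only sketch of per-class accuracy (the helper and labels below are illustrative, not taken from the code above):

```python
import numpy as np

def per_class_accuracy(y_true, y_pred):
    """Fraction of correctly predicted samples within each true class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return {int(c): float(np.mean(y_pred[y_true == c] == c))
            for c in np.unique(y_true)}

# Illustrative labels (0 = setosa, 1 = versicolor, 2 = virginica)
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]
print(per_class_accuracy(y_true, y_pred))  # {0: 1.0, 1: 0.5, 2: 1.0}
```

The same helper can be applied to `Y_test` and `Y_pred` from the cells above to see which species the reduced model confuses.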
## Spam Classification
In this notebook we demonstrate how to classify whether an SMS message is spam or ham using the SMS Spam Collection Dataset, which can be found [here](https://www.kaggle.com/uciml/sms-spam-collection-dataset#spam.csv)
```
!pip install fastai==1.0.60
!pip install wget
import pandas as pd
import wget
import os
from zipfile import ZipFile
try:
from google.colab import files
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip
!unzip smsspamcollection.zip
df = pd.read_csv('SMSSpamCollection', sep='\t', header=None, names=['target', 'text'])
except ModuleNotFoundError:
    url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
    path = os.path.join(os.getcwd(), 'Data')
    wget.download(url, path)
    temp = os.path.join(path, 'smsspamcollection.zip')
    file = ZipFile(temp)
    file.extractall(path)
    file.close()
    df = pd.read_csv(os.path.join(path, 'SMSSpamCollection'), sep='\t', header=None, names=['target', 'text'])
import fastai
from fastai import *
from fastai.text import *
import pandas as pd
import numpy as np
from functools import partial
import io
import os
df.head()
display(df.shape) #Number of rows (instances) and columns in the dataset
df["target"].value_counts()/df.shape[0] #Class distribution in the dataset
from sklearn.model_selection import train_test_split
# split data into training and validation set
df_train, df_test = train_test_split(df,stratify = df['target'], test_size = 0.2, random_state = 2020)
df_train.shape,df_test.shape
# Language model data
data_lm = TextLMDataBunch.from_df(train_df = df_train, valid_df = df_test, path = "")
# Classifier model data
data_class = TextClasDataBunch.from_df(path = "", train_df = df_train, valid_df = df_test, vocab=data_lm.train_ds.vocab, bs=32)
df_test
```
`TextLMDataBunch` applies some text preprocessing to help the algorithm perform better. Although stopwords and punctuation are commonly removed, we do not do so here: this model can handle semantics, and deleting such information might do more harm than good with respect to accuracy.
Now lets look at our training data
```
data_lm.show_batch()
```
Those 'xxmaj', 'xxbos', 'xxup' etc. are special tokens for the NN: xxbos marks the beginning of a sentence, xxmaj indicates that the first letter of the next word is capitalised, and xxup indicates that the entire next word is in capital letters. You can view the entire set of tokens [here.](https://docs.fast.ai/text.transform.html)
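The rules behind these tokens can be illustrated with a toy tokenizer. This is a simplified sketch of the idea only, not fastai's actual implementation (which handles many more cases):

```python
def toy_tokenize(text):
    """Illustrative sketch of fastai-style special tokens (not the real tokenizer)."""
    tokens = ["xxbos"]  # begin-of-sentence marker
    for word in text.split():
        if word.isupper() and len(word) > 1:
            tokens += ["xxup", word.lower()]    # whole word in capitals
        elif word[:1].isupper() and word[1:].islower():
            tokens += ["xxmaj", word.lower()]   # first letter capitalised
        else:
            tokens.append(word.lower())
    return tokens

print(toy_tokenize("WIN a Free prize"))
# ['xxbos', 'xxup', 'win', 'a', 'xxmaj', 'free', 'prize']
```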
```
model = language_model_learner(data_lm, arch = AWD_LSTM, pretrained = True, drop_mult=0.5)
```
We will use a pretrained model. You can learn more about it [here.](https://docs.fast.ai/text.models.html#Language-model-modules)
Now let's test our language model. It gives sensible outputs because it is pre-trained on a Wikipedia corpus.
```
for i in range(10):
print(model.predict("The food is", n_words=15))
```
We will now need to fine-tune the model for our particular task. <br>
```
model.lr_find() # you can find more details about this at https://docs.fast.ai/basic_train.html
model.recorder.plot(suggestion=True)
model.fit_one_cycle(4, max_lr= 5e-02)#you can freeze and unfreeze different layers and by doing so we can have different lr for each layer
#for freezing and unfreezing code you can refer https://docs.fast.ai/text.html
for i in range(10):
print(model.predict("The food is", n_words=15))
```
Note that the model now predicts ':)' and other characters commonly seen in SMS messages. With further fine-tuning and more training cycles, you can get the model to produce even more of the characters and phrasing found in SMS messages.
```
%load_ext autoreload
%autoreload 2
from __future__ import division
import pickle
import os
from collections import defaultdict
import types
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
import statsmodels.api as sm
from sensei.envs import GridWorldNavEnv, GuideEnv
from sensei import utils
from sensei import ase
from sensei.gw_user_study import HumanGridWorldUser
from sensei.guide_models import GridWorldGuide
from matplotlib import pyplot as plt
import matplotlib as mpl
%matplotlib inline
mpl.rcParams.update({'font.size': 18})
data_dir = utils.gw_human_data_dir
fig_dir = os.path.join(data_dir, 'figures')
if not os.path.exists(fig_dir):
os.makedirs(fig_dir)
user_ids = [str(i) for i in range(12) if str(i) in os.listdir(data_dir)]
baseline_guide_evals_of_user = {}
train_logs_of_user = {}
for user_id in user_ids:
user_data_dir = os.path.join(data_dir, user_id)
baselines_eval_path = os.path.join(user_data_dir, 'guide_evals.pkl')
with open(baselines_eval_path, 'rb') as f:
baseline_guide_evals = pickle.load(f)
train_logs_path = os.path.join(user_data_dir, 'train_logs.pkl')
with open(train_logs_path, 'rb') as f:
train_logs = pickle.load(f)
baseline_guide_evals_of_user[user_id] = baseline_guide_evals
train_logs_of_user[user_id] = train_logs
perf_of_guide = {}
rollouts_of_guide = defaultdict(list)
for user_id, baseline_guide_evals in baseline_guide_evals_of_user.items():
for guide_name, guide_eval in baseline_guide_evals.items():
rollouts = guide_eval['rollouts']
rollouts_of_guide[guide_name].extend(rollouts)
for guide_name, guide_eval_rollouts in rollouts_of_guide.items():
perf = utils.compute_perf_metrics(guide_eval_rollouts, None, max_ep_len=25)
perf_of_guide[guide_name] = perf
plt.xlabel('Time')
plt.ylabel('Distance to Goal')
plt.title('2D Navigation')
for guide_name in ['iden', 'naive', 'learned']:
perf = perf_of_guide[guide_name]
tilts = perf['dist_to_goal_t']
tilt_stderrs = perf['dist_to_goal_stderr_t']
label = utils.label_of_guide[guide_name]
color = utils.color_of_guide[guide_name]
xs = np.arange(0, len(tilts), 1)
ys = np.array(tilts)
yerrs = np.array(tilt_stderrs)
y_mins = ys - yerrs
y_maxs = ys + yerrs
plt.fill_between(
xs,
y_mins,
y_maxs,
where=y_maxs >= y_mins,
interpolate=False,
label=label,
color=color,
alpha=0.5)
plt.plot(xs, ys, color=color)
plt.legend(loc='upper right', prop={'size': 18})
plt.savefig(os.path.join(fig_dir, 'gw-user-study.pdf'), bbox_inches='tight')
plt.show()
n_users = len(baseline_guide_evals_of_user)
depvar = 'response'
subject = 'user_id'
within = 'condition'
metrics = ['rollout_len']
for metric in metrics:
rows = []
for user_id, baseline_guide_evals in baseline_guide_evals_of_user.items():
rows.append({subject: user_id, depvar: baseline_guide_evals['iden']['perf'][metric], within: 'unassisted'})
rows.append({subject: user_id, depvar: baseline_guide_evals['learned']['perf'][metric], within: 'assisted'})
data = pd.DataFrame(rows)
aovrm = AnovaRM(data=data, depvar=depvar, subject=subject, within=[within])
res = aovrm.fit()
print(res)
questions = [
'I was often able to infer my current position and orientation',
'I was often able to move toward the goal',
'I often found the guidance helpful',
'I relied primarily on the most recent guidance to infer my current position and orientation',
'I relied primarily on past guidance and recent movements to infer my current position and orientation',
    'I often forgot which position and orientation I believed I was in'
]
responses = [
[[6, 5, 6, 4, 7, 5], [6, 6, 6, 7, 4, 7], [7, 7, 7, 7, 3, 1]],
[[7, 6, 7, 7, 3, 2], [5, 5, 4, 3, 6, 5], [7, 7, 7, 7, 3, 1]],
[[5, 6, 6, 6, 4, 4], [6, 6, 6, 6, 5, 3], [7, 7, 7, 6, 5, 1]],
[[6, 6, 6, 6, 2, 4], [6, 6, 6, 6, 3, 4], [7, 7, 7, 7, 2, 1]],
[[2, 3, 6, 5, 6, 5], [6, 6, 6, 5, 6, 2], [7, 7, 7, 5, 7, 1]],
[[5, 5, 7, 6, 6, 3], [6, 6, 6, 6, 6, 1], [7, 7, 7, 7, 6, 1]],
[[6, 6, 6, 1, 6, 1], [6, 6, 6, 1, 6, 2], [7, 7, 7, 1, 6, 1]],
[[6, 6, 6, 2, 6, 2], [5, 6, 5, 4, 6, 3], [7, 7, 7, 7, 5, 2]],
[[5, 4, 4, 3, 6, 3], [4, 4, 3, 2, 6, 3], [6, 6, 7, 4, 6, 2]],
[[6, 7, 6, 5, 5, 5], [6, 7, 6, 5, 5, 4], [7, 7, 6, 6, 4, 4]],
[[7, 7, 7, 4, 4, 1], [7, 4, 7, 6, 6, 2], [7, 7, 7, 7, 2, 1]],
[[5, 5, 5, 4, 4, 3], [5, 5, 5, 4, 5, 3], [6, 6, 7, 6, 3, 1]],
]
n_users = len(responses)
n_phases = len(responses[0])
responses_of_q = [[[np.nan for _ in range(n_users)] for _ in questions] for _ in range(n_phases)]
for phase_idx in range(n_phases):
for user_idx, user_responses in enumerate(responses):
for q_idx, response in enumerate(responses[user_idx][phase_idx]):
responses_of_q[phase_idx][q_idx][user_idx] = response
# one-way repeated measures ANOVA with the presence of assistance as a factor influencing responses
n_users = len(responses)
depvar = 'response'
subject = 'user_id'
within = 'condition'
assistant_labels = [
'\\multirow{4}{*}{\\rotatebox[origin=c]{90}{Naive ASE}}',
'\\multirow{4}{*}{\\rotatebox[origin=c]{90}{ASE}}'
]
for assisted_phase in [1, 2]:
for i, q in enumerate(questions):
if i == 0:
assistant_label = assistant_labels[assisted_phase-1]
else:
assistant_label = ''
rows = []
for user_id in user_ids:
user_id = int(user_id)
rows.append({subject: user_id, depvar: responses_of_q[0][i][user_id], within: 'unassisted'})
rows.append({subject: user_id, depvar: responses_of_q[assisted_phase][i][user_id], within: 'assisted'})
data = pd.DataFrame(rows)
aovrm = AnovaRM(data=data, depvar=depvar, subject=subject, within=[within])
res = aovrm.fit()
p = res.anova_table['Pr > F'].values[0]
print('%s & %s & $%s%s%s$ & %0.2f & %s%0.2f%s \\\\' % (assistant_label, q, '\\mathbf{' if p < 0.05 else '', utils.discretize_p_value(p), '}' if p < 0.05 else '', np.nanmean(responses_of_q[0][i]), '\\textbf{' if p < 0.05 else '', np.nanmean(responses_of_q[assisted_phase][i]), '}' if p < 0.05 else ''))
if assisted_phase == 1:
        print('\\midrule')
guide_names = ['prac', 'iden', 'learned']
n_rollouts_of_guide = {
'prac': 3,
'iden': 5,
'learned': 5
}
perfs_of_guide = {guide_name: [[] for _ in range(n_rollouts)] for guide_name, n_rollouts in n_rollouts_of_guide.items()}
for guide_name, n_rollouts in n_rollouts_of_guide.items():
for i in range(n_rollouts):
for baseline_guide_evals in baseline_guide_evals_of_user.values():
rollouts = [baseline_guide_evals[guide_name]['rollouts'][i]]
if guide_name == 'iden':
rollouts.append(baseline_guide_evals['naive']['rollouts'][i])
for rollout in rollouts:
                perf = utils.compute_perf_metrics([rollout], None, max_ep_len=25)  # metrics for this single rollout
perfs_of_guide[guide_name][i].append(perf)
metric = 'rollout_len'
plt.xlabel('Episode Number')
plt.ylabel(utils.label_of_perf_met[metric])
plt.title('2D Navigation')
guide_names = ['iden', 'learned']
for i, guide_name in enumerate(guide_names):
perfs = perfs_of_guide[guide_name]
all_perfs = [user_perf[metric] for perf in perfs for user_perf in perf]
if guide_name == 'learned':
label = 'ASE (Our Method)'
elif guide_name == 'iden':
label = 'Unassisted + Naive ASE (counterbalanced)'
else:
label = utils.label_of_guide[guide_name]
color = utils.color_of_guide[guide_name]
shift = sum(len(perfs_of_guide[guide_names[j]]) for j in range(i))
n_users = len(perfs[0])
xs = np.tile(np.arange(1 + shift, 1 + len(perfs) + shift, 1), n_users)
ys = np.array(all_perfs)
plt.scatter(xs, ys, color=color, alpha=0.25)
results = sm.OLS(ys,sm.add_constant(xs - shift - 1)).fit()
X_plot = np.linspace(1, len(perfs), 100)
plt.plot(X_plot + shift, X_plot*results.params[1] + results.params[0], label=label, color=color, linestyle='--', linewidth=2)
xs = np.arange(1 + shift, 1 + len(perfs) + shift, 1)
ys = np.array([np.mean([user_perf[metric] for user_perf in perf]) for perf in perfs])
stderr = lambda x: np.std(x) / np.sqrt(len(x))
yerrs = np.array([stderr([user_perf[metric] for user_perf in perf]) for perf in perfs])
plt.legend(loc='upper left', prop={'size': 12}, bbox_to_anchor=(0.025, -0.2))
plt.savefig(os.path.join(fig_dir, 'gw-user-study-learning-effect.pdf'), bbox_inches='tight')
plt.show()
gw_size = 5
n_goals = gw_size**2
n_states = 4*gw_size**2
n_objes_per_set = gw_size**2
n_obj_instances_of_set = [1, 2, 1]
n_obj_sets = len(n_obj_instances_of_set)
n_objes = n_objes_per_set*n_obj_sets
n_obses = n_objes + n_obj_sets
ground_truth = np.zeros((n_obses, n_states))
ticks = np.arange(0, gw_size, 1)
poses = utils.enumerate_gw_poses(ticks, ticks)
poses_of_obs = [[] for _ in range(n_obses)]
for obj_set in range(n_obj_sets):
for obj in range(n_objes_per_set):
obs = obj_set*(n_objes_per_set+1)+obj
obj_poses = [poses[obj*4]]
for i in range(1, n_obj_instances_of_set[obj_set]):
obj_poses.append(poses[np.random.choice(list(range(n_objes_per_set)))*4])
poses_of_obs[obs] = obj_poses
for obj_pos in obj_poses:
for state, user_pos in enumerate(poses):
conds = []
conds.append(obj_pos[0] == user_pos[0] and obj_pos[1] == user_pos[1] + 1 and user_pos[2] == 2)
conds.append(obj_pos[0] == user_pos[0] and obj_pos[1] == user_pos[1] - 1 and user_pos[2] == 0)
conds.append(obj_pos[1] == user_pos[1] and obj_pos[0] == user_pos[0] + 1 and user_pos[2] == 3)
conds.append(obj_pos[1] == user_pos[1] and obj_pos[0] == user_pos[0] - 1 and user_pos[2] == 1)
if any(conds):
ground_truth[obs, state] = 1
for obj_set in range(n_obj_sets):
obs = obj_set*(n_objes_per_set+1)+n_objes_per_set
for state, user_pos in enumerate(poses):
conds = []
conds.append(user_pos[0] == 0 and user_pos[2] == 1)
conds.append(user_pos[0] == gw_size - 1 and user_pos[2] == 3)
conds.append(user_pos[1] == 0 and user_pos[2] == 0)
conds.append(user_pos[1] == gw_size - 1 and user_pos[2] == 2)
if any(conds):
ground_truth[obs, state] = 1
ground_truth = utils.smooth_matrix(ground_truth, n_states, eps=1e-6)
ground_truth_obs_model = np.log(ground_truth)
max_ep_len = gw_size**2
env = GridWorldNavEnv(
gw_size=gw_size,
n_goals=n_goals,
max_ep_len=max_ep_len,
ground_truth_obs_model=ground_truth_obs_model
)
env.n_objes_per_set = n_objes_per_set
env.n_obj_sets = n_obj_sets
def is_obs_informative(self, obs):
n_uninf_obses = self.n_obses // self.n_obj_sets
return obs >= n_uninf_obses
env.is_obs_informative = types.MethodType(is_obs_informative, env)
env.practice = False
def set_practice_mode(self, mode):
self.practice = mode
env.set_practice_mode = types.MethodType(set_practice_mode, env)
sess = utils.make_tf_session(gpu_mode=False)
masked_obses = np.arange(0, env.n_obses // env.n_obj_sets, 1)
internal = np.exp(env.ground_truth_obs_model)
obs_weights = np.ones(env.n_obses)
for obs in masked_obses:
obs_weights[obs] = 1e-6
internal = utils.smooth_matrix(internal, env.n_obses, eps=(1-obs_weights[:, np.newaxis]))
internal = np.log(internal)
internal_obs_model = internal
user_init_belief_conf = 1e-9
user_model = HumanGridWorldUser(
env,
internal_obs_model,
env.make_dynamics_model(eps=1e-6),
q_func=env.Q,
init_belief_conf=user_init_belief_conf
)
guide_env = GuideEnv(env, user_model, n_obs_per_act=1)
def get_theta_of_user(user_id):
user_data_dir = os.path.join(utils.gw_human_data_dir, user_id)
init_belief_conf = 1-1e-9
dynamics_model = env.make_dynamics_model(eps=1e-9)
internal_dynamics_model = env.make_dynamics_model(eps=0.1)
tabular_obs_model_kwargs = {
'scope_file': os.path.join(user_data_dir, 'guide_scope.pkl'),
'tf_file': os.path.join(user_data_dir, 'guide.tf'),
'user_init_belief_conf': user_init_belief_conf,
'obs_params_only': True,
'prior_coeff': 0.,
'warm_start': False
}
guide_train_kwargs = {
'iterations': 1000,
'ftol': 1e-6,
'batch_size': 32,
'learning_rate': 1e-2,
'val_update_freq': 100,
'verbose': True,
'show_plots': False
}
guide_model = GridWorldGuide(
sess,
env,
env.ground_truth_obs_model,
dynamics_model,
env.Q,
n_obs_per_act=guide_env.n_obs_per_act,
prior_internal_obs_model=env.ground_truth_obs_model,
internal_dynamics_model=internal_dynamics_model,
tabular_obs_model_kwargs=tabular_obs_model_kwargs,
learn_internal_obs_model=True,
init_belief_conf=init_belief_conf,
user_init_belief_conf=user_init_belief_conf
)
guide_evals = baseline_guide_evals_of_user[user_id]
init_train_rollouts = guide_evals['iden']['rollouts']
guide_optimizer = ase.InteractiveGuideOptimizer(sess, env, guide_env)
guide_optimizer.run(
guide_model,
n_train_batches=0,
n_rollouts_per_batch=0,
guide_train_kwargs={'iterations': 0, 'verbose': False},
verbose=True,
init_train_rollouts=init_train_rollouts,
n_eval_rollouts=None
)
guide_model.load()
theta = sess.run(guide_model.internal_obs_model.obs_weights)[0, 0, 0]
return theta
thetas = [get_theta_of_user(user_id) for user_id in user_ids]
thetas
plt.title('2D Navigation')
plt.xlabel(r'Learned Model of User Bias $\hat{\theta}$')
plt.ylabel('Number of Users')
plt.hist(thetas, bins=20, color='orange', label='ASE (Our Method)', align='left')
plt.hist(np.ones(len(thetas)), bins=20, color='teal', label='Naive ASE (Baseline)', align='left')
plt.axvline(x=0, linestyle='--', color='black', label='Ground Truth')
plt.xlim([-0.1, 1.1])
plt.yticks(range(0, 14, 2))
plt.legend(loc='upper center')
plt.savefig(os.path.join(fig_dir, 'gw-learned-theta.pdf'), bbox_inches='tight', dpi=500)
plt.show()
```
# Hugging Face Transformers with `Pytorch`
### Text Classification Example using vanilla `Pytorch`, `Transformers`, `Datasets`
# Introduction
Welcome to this end-to-end multilingual text-classification example using PyTorch. In this demo, we will use the Hugging Face `transformers` and `datasets` libraries together with PyTorch to fine-tune a multilingual transformer for text classification. This example is a derived version of the [text-classification.ipynb](https://github.com/philschmid/transformers-pytorch-text-classification/blob/main/text-classification.ipynb) notebook and uses Amazon SageMaker for distributed training. In [text-classification.ipynb](https://github.com/philschmid/transformers-pytorch-text-classification/blob/main/text-classification.ipynb) we showed how to fine-tune `distilbert-base-multilingual-cased` on the `amazon_reviews_multi` dataset for `sentiment-analysis`. This dataset has over 1.2 million data points, which is huge. Training on a single NVIDIA V100 takes around 6.5 hours with a `batch_size` of 16, which is quite long.
To scale and accelerate our training we will use [Amazon SageMaker](https://aws.amazon.com/de/sagemaker/), which provides two strategies for [distributed training](https://huggingface.co/docs/sagemaker/train#distributed-training): [data parallelism](https://huggingface.co/docs/sagemaker/train#data-parallelism) and model parallelism. Data parallelism splits a training set across several GPUs, while [model parallelism](https://huggingface.co/docs/sagemaker/train#model-parallelism) splits a model across several GPUs. We are going to use [SageMaker Data Parallelism](https://aws.amazon.com/blogs/aws/managed-data-parallelism-in-amazon-sagemaker-simplifies-training-on-large-datasets/), which has been built into the [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) API. To be able to use data parallelism we only have to define the `distribution` parameter in our `HuggingFace` estimator.
I moved the "training" part of the [text-classification.ipynb](https://github.com/philschmid/transformers-pytorch-text-classification/blob/main/text-classification.ipynb) notebook into a separate training script, [train.py](./scripts/train.py), which accepts the same hyperparameters and can be run on Amazon SageMaker using the `HuggingFace` estimator.
Our goal is to decrease the training duration by scaling our global/effective batch size from 16 up to 128, which is 8x bigger than before. To monitor our training we will use the new Training Metrics support on the [Hugging Face Hub](https://hf.co/models).
### Installation
```
#!pip install sagemaker
!pip install transformers datasets tensorboard datasets[s3] --upgrade
```
This example will use the [Hugging Face Hub](https://huggingface.co/models) as a remote model versioning service. To be able to push our model to the Hub, you need to register on [Hugging Face](https://huggingface.co/join).
If you already have an account, you can skip this step.
After you have an account, we will use the `notebook_login` util from the `huggingface_hub` package to log into our account and store our token (access key) on disk.
```
from huggingface_hub import notebook_login
notebook_login()
```
## Setup & Configuration
In this step we will define global configurations and parameters that are used across the whole end-to-end fine-tuning process, e.g. the `tokenizer` and `model` we will use.
```
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
_Note: The execution role is only available when running a notebook within SageMaker (SageMaker Notebook Instances or Studio). If you run `get_execution_role` in a notebook outside of SageMaker, expect a region error._
You can uncomment the cell below and provide an IAM role name with SageMaker permissions to set up your environment outside of SageMaker.
```
# import sagemaker
# import boto3
# import os
# os.environ["AWS_DEFAULT_REGION"]="your-region"
# # This ROLE needs to exist with your associated AWS Credentials and needs permission for SageMaker
# ROLE_NAME='role-name-of-your-iam-role-with-right-permissions'
# iam_client = boto3.client('iam')
# role = iam_client.get_role(RoleName=ROLE_NAME)['Role']['Arn']
# sess = sagemaker.Session()
# print(f"sagemaker role arn: {role}")
# print(f"sagemaker bucket: {sess.default_bucket()}")
# print(f"sagemaker session region: {sess.boto_region_name}")
```
In this example we are going to fine-tune [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), a multilingual DistilBERT model.
```
model_id = "distilbert-base-multilingual-cased"
# name for our repository on the hub
model_name = model_id.split("/")[-1] if "/" in model_id else model_id
repo_name = f"{model_name}-sentiment"
```
## Dataset & Pre-processing
As our dataset we will use [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi), a multilingual text-classification dataset. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.). The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language.
```
dataset_id="amazon_reviews_multi"
dataset_config="all_languages"
seed=33
```
To load the `amazon_reviews_multi` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
```
from datasets import load_dataset
dataset = load_dataset(dataset_id,dataset_config)
```
### Pre-processing & Tokenization
The [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) dataset has 5 classes (`stars`). To match those to a `sentiment-analysis` task, we will map the star ratings to the following `labels`:
* `[1-2]`: `Negative`
* `[3]`: `Neutral`
* `[4-5]`: `Positive`
These `labels` can later be used to create a user-friendly output after we have fine-tuned our model.
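As a quick sanity check (a plain-Python sketch, independent of the 🤗 Datasets code below; `star_to_label` and `id2label` are names invented here, not part of the notebook): because the corpus is balanced across the five star ratings, this mapping yields roughly a 40/20/40 class split.

```python
# the star -> sentiment mapping used in this notebook: 1-2 -> 0, 3 -> 1, 4-5 -> 2
def star_to_label(stars):
    if stars < 3:
        return 0  # negative
    elif stars == 3:
        return 1  # neutral
    return 2      # positive

# a mapping like this turns predicted class ids back into readable labels later
id2label = {0: "negative", 1: "neutral", 2: "positive"}

# the corpus is balanced across stars (20% each), so the mapped split is 40/20/40
from collections import Counter
counts = Counter(star_to_label(s) for s in [1, 2, 3, 4, 5])
fractions = {id2label[k]: v / 5 for k, v in counts.items()}
print(fractions)  # {'negative': 0.4, 'neutral': 0.2, 'positive': 0.4}
```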
```
from datasets import ClassLabel
def map_star_to_label(review):
    if review["stars"] < 3:
        review["stars"] = 0
    elif review["stars"] == 3:
        review["stars"] = 1
    else:
        review["stars"] = 2
    return review
# convert 1-5 star reviews to 0,1,2
dataset = dataset.map(map_star_to_label)
# convert feature from Value to ClassLabel
class_feature = ClassLabel(names=['negative','neutral', 'positive'])
dataset = dataset.cast_column("stars", class_feature)
# rename our target column to labels
dataset = dataset.rename_column("stars","labels")
# drop columns that are not needed
dataset = dataset.remove_columns(['review_id', 'product_id', 'reviewer_id', 'review_title', 'language', 'product_category'])
dataset["train"].features
```
Before we prepare the dataset for training, let's take a quick look at the class distribution of the dataset.
```
import pandas as pd
df = dataset["train"].to_pandas()
df.hist()
```
The distribution is not perfect, but let's give it a try and improve on it later.
To train our model we need to convert our "Natural Language" to token IDs. This is done by a 🤗 Transformers Tokenizer which will tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary). If you are not sure what this means check out [chapter 6](https://huggingface.co/course/chapter6/1?fw=tf) of the Hugging Face Course.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
Additionally we set `truncation=True` and `max_length=512` to truncate any texts that are longer than the maximum length the model can handle.
```
def process(examples):
tokenized_inputs = tokenizer(
examples["review_body"], truncation=True, max_length=512
)
return tokenized_inputs
tokenized_datasets = dataset.map(process, batched=True)
tokenized_datasets["train"].features
```
Before we can start our distributed training, we need to upload our already pre-processed dataset to Amazon S3. For this we will use the built-in utilities of `datasets`.
```
import botocore
from datasets.filesystems import S3FileSystem
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{dataset_id}/train'
tokenized_datasets["train"].save_to_disk(training_input_path, fs=s3)
# save validation_dataset to s3
eval_input_path = f's3://{sess.default_bucket()}/{dataset_id}/test'
tokenized_datasets["validation"].save_to_disk(eval_input_path, fs=s3)
```
## Creating an Estimator and start a training job
The last step before we can start our managed training is to define our hyperparameters, create our SageMaker `HuggingFace` estimator, and configure distributed training.
```
from sagemaker.huggingface import HuggingFace
from huggingface_hub import HfFolder
# hyperparameters, which are passed into the training job
hyperparameters={
'model_id':'distilbert-base-multilingual-cased',
'epochs': 3,
'per_device_train_batch_size': 16,
'per_device_eval_batch_size': 16,
'learning_rate': 3e-5*8,
'fp16': True,
    # logging & evaluation strategies
'strategy':'steps',
'steps':5_000,
'save_total_limit':2,
'load_best_model_at_end':True,
'metric_for_best_model':"f1",
# push to hub config
'push_to_hub': True,
'hub_model_id': 'distilbert-base-multilingual-cased-sentiment-2',
'hub_token': HfFolder.get_token()
}
# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point = 'train.py',
source_dir = './scripts',
instance_type = 'ml.p3.16xlarge',
instance_count = 1,
role = role,
transformers_version = '4.12',
pytorch_version = '1.9',
py_version = 'py38',
hyperparameters = hyperparameters,
distribution = distribution
)
```
Since we are using SageMaker Data Parallelism, our `total_batch_size` will be `per_device_train_batch_size * n_gpus`.
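A quick sketch of what that means for this run (assuming the `ml.p3.16xlarge` above, which has 8 V100 GPUs). The learning rate in the hyperparameters is scaled by the same factor, following the common linear-scaling heuristic:

```python
# Assumed values: ml.p3.16xlarge provides 8 GPUs.
n_gpus = 8
per_device_train_batch_size = 16

# effective (global) batch size under data parallelism
total_batch_size = per_device_train_batch_size * n_gpus
print(total_batch_size)  # 128

# linear learning-rate scaling heuristic: scale the base LR by the same factor,
# which is where the `3e-5*8` in the hyperparameters above comes from
base_lr = 3e-5
scaled_lr = base_lr * n_gpus
```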
```
# define a data input dictionary with our uploaded s3 uris
data = {
'train': training_input_path,
'eval': eval_input_path
}
# starting the train job with our uploaded datasets as input
# setting wait to False to not expose the HF Token
huggingface_estimator.fit(data,wait=False)
```
Since we are using the Hugging Face Hub integration with TensorBoard, we can inspect our training progress directly on the Hub, as well as test checkpoints during training.
```
from huggingface_hub import HfApi
whoami = HfApi().whoami()
username = whoami['name']
print(f"https://huggingface.co/{username}/{hyperparameters['hub_model_id']}")
```

# Quantum teleportation
By the end of this post, we will teleport the quantum state
$$\sqrt{0.70}\vert0\rangle + \sqrt{0.30}\vert1\rangle$$ from Alice's qubit to Bob's qubit.
Recall that the teleportation algorithm consists of four major components:
1. Initializing the state to be teleported. We will do this on Alice's qubit `q0`.
2. Creating entanglement between two qubits. We will use qubits `q1` and `q2` for this. Recall that Alice owns `q1`, and Bob owns `q2`.
3. Applying a Bell measurement on Alice's qubits `q0` and `q1`.
4. Applying classically controlled operations on Bob's qubit `q2` depending on the outcomes of the Bell measurement on Alice's qubits.
This exercise guides you through each of these steps.
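Before writing any code, it is worth seeing why these steps work. Writing the state to be teleported as $$\alpha\vert0\rangle + \beta\vert1\rangle$$ on `q0`, and the entangled pair $$\frac{1}{\sqrt{2}}(\vert00\rangle + \vert11\rangle)$$ on `q1` and `q2`, the full three-qubit state can be regrouped in the Bell basis of Alice's qubits `q0` and `q1`:
$$(\alpha\vert0\rangle + \beta\vert1\rangle)\otimes\frac{\vert00\rangle + \vert11\rangle}{\sqrt{2}} = \frac{1}{2}\Big[\vert\Phi^+\rangle(\alpha\vert0\rangle + \beta\vert1\rangle) + \vert\Phi^-\rangle(\alpha\vert0\rangle - \beta\vert1\rangle) + \vert\Psi^+\rangle(\alpha\vert1\rangle + \beta\vert0\rangle) + \vert\Psi^-\rangle(\alpha\vert1\rangle - \beta\vert0\rangle)\Big]$$
Whichever Bell state Alice measures, Bob's qubit is left holding $$\alpha\vert0\rangle + \beta\vert1\rangle$$ up to an $X$ and/or $Z$ correction, which is exactly what steps 3 and 4 implement.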
### Initializing the state to be teleported
First, we create a quantum circuit that prepares the state $$\sqrt{0.70}\vert0\rangle + \sqrt{0.30}\vert1\rangle$$ We can do this using `Qiskit`'s `initialize` function.
```
import numpy as np
import math
def initialize_qubit(given_circuit, qubit_index):
### WRITE YOUR CODE BETWEEN THESE LINES - START
desired_vector = [math.sqrt(0.70),math.sqrt(0.30)]
    given_circuit.initialize(desired_vector, [qubit_index])  # index into given_circuit's qubits rather than a global register
### WRITE YOUR CODE BETWEEN THESE LINES - END
return given_circuit
```
Next, we need to create entanglement between Alice's and Bob's qubits.
```
def entangle_qubits(given_circuit, qubit_Alice, qubit_Bob):
### WRITE YOUR CODE BETWEEN THESE LINES - START
    given_circuit.h(qubit_Alice)
    given_circuit.cx(qubit_Alice, qubit_Bob)
    given_circuit.barrier()  # use the passed-in circuit, not the global `mycircuit`
    # Bell-basis rotation of Alice's qubits (q0, q1), applied here before measurement
    given_circuit.cx(0, 1)
    given_circuit.h(0)
### WRITE YOUR CODE BETWEEN THESE LINES - END
return given_circuit
```
Next, we need to do a Bell measurement of Alice's qubits.
```
def bell_meas_Alice_qubits(given_circuit, qubit1_Alice, qubit2_Alice, clbit1_Alice, clbit2_Alice):
### WRITE YOUR CODE BETWEEN THESE LINES - START
given_circuit.measure([0,1], [0,1])
### WRITE YOUR CODE BETWEEN THESE LINES - END
return given_circuit
```
Finally, we apply controlled operations on Bob's qubit. Recall that the controlled operations are applied in this order:
- an $X$ gate is applied on Bob's qubit if the measurement outcome of Alice's second qubit, `clbit2_Alice`, is `1`.
- a $Z$ gate is applied on Bob's qubit if the measurement outcome of Alice's first qubit, `clbit1_Alice`, is `1`.
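Concretely, for the teleported state $$\alpha\vert0\rangle + \beta\vert1\rangle$$ the four possible measurement outcomes and their corrections are:

| `clbit1_Alice` | `clbit2_Alice` | Bob's qubit before correction | Correction |
|---|---|---|---|
| 0 | 0 | $\alpha\vert0\rangle+\beta\vert1\rangle$ | none |
| 0 | 1 | $\alpha\vert1\rangle+\beta\vert0\rangle$ | $X$ |
| 1 | 0 | $\alpha\vert0\rangle-\beta\vert1\rangle$ | $Z$ |
| 1 | 1 | $\alpha\vert1\rangle-\beta\vert0\rangle$ | $X$, then $Z$ |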
```
def controlled_ops_Bob_qubit(given_circuit, qubit_Bob, clbit1_Alice, clbit2_Alice):
### WRITE YOUR CODE BETWEEN THESE LINES - START
given_circuit.x(qubit_Bob).c_if(clbit2_Alice, 1)
given_circuit.z(qubit_Bob).c_if(clbit1_Alice, 1)
### WRITE YOUR CODE BETWEEN THESE LINES - END
return given_circuit
```
The next lines of code put everything together.
```
### imports
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute
### set up the qubits and classical bits
all_qubits_Alice = QuantumRegister(2)
all_qubits_Bob = QuantumRegister(1)
creg1_Alice = ClassicalRegister(1)
creg2_Alice = ClassicalRegister(1)
### quantum teleportation circuit here
# Initialize
mycircuit = QuantumCircuit(all_qubits_Alice, all_qubits_Bob, creg1_Alice, creg2_Alice)
initialize_qubit(mycircuit, 0)
mycircuit.barrier()
# Entangle
entangle_qubits(mycircuit, 1, 2)
mycircuit.barrier()
# Do a Bell measurement
bell_meas_Alice_qubits(mycircuit, all_qubits_Alice[0], all_qubits_Alice[1], creg1_Alice, creg2_Alice)
mycircuit.barrier()
# Apply classically controlled quantum gates
controlled_ops_Bob_qubit(mycircuit, all_qubits_Bob[0], creg1_Alice, creg2_Alice)
### Look at the complete circuit
mycircuit.draw()
from qiskit import BasicAer
from qiskit.visualization import plot_histogram, plot_bloch_multivector
backend = BasicAer.get_backend('statevector_simulator')
out_vector = execute(mycircuit, backend).result().get_statevector()
plot_bloch_multivector(out_vector)
```
As you can see, the state initialized on Alice's qubit `q0` has been teleported to Bob's qubit `q2`. Note, however, that in the teleportation process the original qubit's state was destroyed when we measured Alice's qubits: quantum information is moved, not copied.
## References:
The original lab can be found at: https://qiskit.org/learn/intro-qc-qh/
This post presents my solution to the original lab file, with some modifications to fit the style of the blog.
```
import time
import pandas as pd
import numpy as np
import nltk
nltk.download('gutenberg')
import tensorflow as tf
keras = tf.keras
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from tqdm import tqdm
import matplotlib.pyplot as plt
plt.style.use('ggplot')
```
# NLP Concepts #6 - RNNs in practice
## Define helpers
```
def get_slices(text, slice_len=100):
text_split = text.split(' ')
n_chunks = int(len(text_split) / slice_len)
current_start_id = 0
slices = []
for i in range(n_chunks + 1):
current_slice = text_split[current_start_id:current_start_id + slice_len]
if len(current_slice) > 0:
slices.append(' '.join(current_slice))
current_start_id += slice_len
return slices
```
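As a quick sanity check of the helper above (the function is re-stated so the snippet is self-contained): a 10-word text with `slice_len=4` yields three slices whose concatenation restores the original text.

```python
def get_slices(text, slice_len=100):
    # split on spaces and emit consecutive chunks of at most slice_len words
    text_split = text.split(' ')
    n_chunks = int(len(text_split) / slice_len)
    current_start_id = 0
    slices = []
    for i in range(n_chunks + 1):
        current_slice = text_split[current_start_id:current_start_id + slice_len]
        if len(current_slice) > 0:
            slices.append(' '.join(current_slice))
        current_start_id += slice_len
    return slices

text = 'one two three four five six seven eight nine ten'
slices = get_slices(text, slice_len=4)
print(slices)  # ['one two three four', 'five six seven eight', 'nine ten']
assert ' '.join(slices) == text  # no words lost at the chunk boundaries
```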
## Get and prepare data
```
# Print corpora and their lengths
for i in nltk.corpus.gutenberg.fileids():
src = nltk.corpus.gutenberg.words(i)
print(i, len(src))
```
### Join and check lengths
```
# Shakespeare's "Macbeth"
shkspr = nltk.corpus.gutenberg.words('shakespeare-macbeth.txt')
shkspr_join = ' '.join(shkspr)
len(shkspr)
# Carroll's "Alice's adventures (...)"
carroll = nltk.corpus.gutenberg.words('carroll-alice.txt')[:23140]
carroll_join = ' '.join(carroll)
len(carroll)
```
### Get slices and generate labels
```
# Get slices
shkspr_slices = get_slices(shkspr_join, 250)
carroll_slices = get_slices(carroll_join, 250)
len(shkspr_slices), len(carroll_slices)
# Create X
X = shkspr_slices + carroll_slices
# Create y
y = np.array([0] * len(shkspr_slices) + [1] * len(carroll_slices))
```
### Train test split
```
# Train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2)
```
### Tokenize texts
```
# Initialize a tokenizer
VOCAB_SIZE = 20000
tokenizer = tf.keras.preprocessing.text.Tokenizer(
num_words=VOCAB_SIZE,
lower=True,
oov_token=1
)
# Fit the tokenizer
tokenizer.fit_on_texts(X_train)
# Tokenize
X_train_tok = tokenizer.texts_to_sequences(X_train)
X_test_tok = tokenizer.texts_to_sequences(X_test)
# Plot seq lens
seq_lens_train = [len(seq) for seq in X_train_tok]
seq_lens_test = [len(seq) for seq in X_test_tok]
plt.hist(seq_lens_train, density=True, alpha=.7, label='Train')
plt.hist(seq_lens_test, density=True, alpha=.7, label='Test')
plt.legend()
plt.show()
# Find maxlen
MAXLEN = max([len(x.split(' ')) for x in X_train])
# Pad sequences
X_train_tok_pad = pad_sequences(X_train_tok, maxlen=MAXLEN, padding='post')
X_test_tok_pad = pad_sequences(X_test_tok, maxlen=MAXLEN, padding='post')
```
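A pure-Python sketch of what `pad_sequences(..., maxlen=MAXLEN, padding='post')` does (illustrative only, not the Keras implementation; note that Keras truncates from the front by default, since `truncating='pre'`):

```python
def pad_post(sequences, maxlen, value=0):
    padded = []
    for seq in sequences:
        seq = list(seq)[-maxlen:]                  # truncate from the front (Keras default)
        seq = seq + [value] * (maxlen - len(seq))  # 'post' padding: zeros go after the tokens
        padded.append(seq)
    return padded

print(pad_post([[5, 7], [1, 2, 3, 4, 5]], maxlen=4))
# [[5, 7, 0, 0], [2, 3, 4, 5]]
```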
## Classification example
```
def train_and_evaluate(model, X_train, y_train, X_val, y_val, epochs=30, lr=1e-4, verbose=2):
# Compile
model.compile(loss = 'binary_crossentropy',
optimizer = tf.keras.optimizers.Adam(lr),
metrics = ['accuracy'])
# Callbacks
early = keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
# Time it
start = time.time()
# Fit
history = model.fit(X_train, y_train,
validation_data = (X_val, y_val),
callbacks = [early],
epochs = epochs,
verbose = verbose)
# Time it
training_time = time.time() - start
# Plot learning curve
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='val')
plt.legend()
plt.title('Loss')
plt.subplot(122)
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='val')
plt.legend()
plt.title('Accuracy')
plt.show()
# Evaluate
loss, acc = model.evaluate(X_val, y_val, verbose=0)
print(f'Val. accuracy: {acc}')
print(f'Training time: {training_time:.02f} seconds')
```
### Build a simple model
```
model = tf.keras.Sequential([
tf.keras.layers.Embedding(
input_dim = VOCAB_SIZE,
output_dim = 100,
mask_zero = True,
input_length = MAXLEN),
tf.keras.layers.LSTM(64),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
train_and_evaluate(model, X_train_tok_pad, y_train, X_test_tok_pad, y_test, verbose=0, epochs=30)
```
### Build a deeper model
```
model_2 = tf.keras.Sequential([
tf.keras.layers.Embedding(
input_dim = VOCAB_SIZE,
output_dim = 100,
mask_zero = True,
input_length = MAXLEN),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.LSTM(128),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model_2.summary()
train_and_evaluate(model_2, X_train_tok_pad, y_train, X_test_tok_pad, y_test, verbose=0, epochs=30)
```
## Build a bi-directional model
```
model_3 = tf.keras.Sequential([
tf.keras.layers.Embedding(
input_dim = VOCAB_SIZE,
output_dim = 100,
mask_zero = True,
input_length = MAXLEN),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model_3.summary()
train_and_evaluate(model_3, X_train_tok_pad, y_train, X_test_tok_pad, y_test, verbose=0, epochs=30)
```
## Build a deep bi-directional model
```
model_4 = tf.keras.Sequential([
tf.keras.layers.Embedding(
input_dim = VOCAB_SIZE,
output_dim = 100,
mask_zero = True,
input_length = MAXLEN),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model_4.summary()
train_and_evaluate(model_4, X_train_tok_pad, y_train, X_test_tok_pad, y_test, verbose=0, epochs=30)
```
## Long distance dependencies - `SimpleRNN()` and `LSTM()`
#### Experiment:
We will put a negative number at the beginning of a random sequence of positive numbers. We'll vary the sequence length and check how it affects the performance of `SimpleRNN` and `LSTM` in a classification task.
<br>
<img src="https://www.hackingwithswift.com/uploads/matrix.jpg" alt="Numbers" style="width: 400px;"/>
```
LENGTHS = [10, 20, 50, 200]
def build_dataset(length, n_examples):
X = []
y = []
for i in range(n_examples):
class_ = np.random.choice([0, 1])
if class_ == 1:
row = np.array([-1] + list(np.random.choice(np.arange(0, 1, .01), length - 1)))
elif class_ == 0:
row = np.random.choice(np.arange(0, 1, .01), length)
X.append(row)
y.append(class_)
return np.array(X)[:, :, np.newaxis], np.array(y)
def build_model(rnn_type, len_):
if rnn_type == 'rnn':
rnn_layer = tf.keras.layers.SimpleRNN
elif rnn_type == 'lstm':
rnn_layer = tf.keras.layers.LSTM
model = tf.keras.Sequential([
rnn_layer(64, input_shape=(len_, 1), return_sequences=True),
rnn_layer(128),
tf.keras.layers.Dense(32, activation='tanh'),
tf.keras.layers.Dropout(.2),
tf.keras.layers.Dense(1, activation='sigmoid')
])
return model
for len_ in LENGTHS:
# Prep data
    print(f'Building dataset of length {len_}')
X, y = build_dataset(len_, 200)
# Train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2)
# Build models
rnn_model = build_model('rnn', len_)
lstm_model = build_model('lstm', len_)
# Train and evaluate
print(f'\nRNN for {len_}')
train_and_evaluate(rnn_model, X_train, y_train, X_test, y_test, verbose=0, epochs=30)
print(f'\nLSTM for {len_}')
train_and_evaluate(lstm_model, X_train, y_train, X_test, y_test, verbose=0, epochs=30)
```
```
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler, LabelEncoder
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
train_df.shape
test_df.shape
# Drop the label to create the X data, and keep the label as y
train_clean = train_df.drop('target', axis=1)
test_clean = test_df.drop('target', axis=1)
y_train = train_df['target']
y_test = test_df['target']
# Convert categorical data to numeric
X_train_dummies = pd.get_dummies(train_clean)
X_test_dummies = pd.get_dummies(test_clean)
# Add the dummy column missing from the testing set
set(X_train_dummies) ^ set(X_test_dummies)
X_test_dummies['debt_settlement_flag_Y'] = 0
X_test_dummies
# Train the Logistic Regression model on the unscaled data and print the model score
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier
classifier.fit(X_train_dummies,y_train)
# Print the model scores on the unscaled data
print(f"Training Data Score: {classifier.score(X_train_dummies, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test_dummies, y_test)}")
# Predict on the test set and compare to the actual labels
predictions = classifier.predict(X_test_dummies)
pd.DataFrame({"Prediction": predictions, "Actual": y_test})
print(f'Actual:\t\t{list(y_test[:10])}')
print(f'Predicted:\t{list(classifier.predict(X_test_dummies[:10]))}')
# Train a Random Forest Classifier model and print the model score
# Import the Random Forest model
from sklearn.ensemble import RandomForestClassifier
# Create the classifier
clf = RandomForestClassifier(n_estimators=100)
# Train the model on the training set
clf.fit(X_train_dummies, y_train)
y_pred=clf.predict(X_test_dummies)
print(f'Training Score: {clf.score(X_train_dummies, y_train)}')
print(f'Testing Score: {clf.score(X_test_dummies, y_test)}')
from sklearn import metrics
# Model Accuracy, how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
# Scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train_dummies)
X_train_scaled = scaler.transform(X_train_dummies)
X_test_scaled = scaler.transform(X_test_dummies)
# Train the Logistic Regression model on the scaled data and print the model score
clf = LogisticRegression().fit(X_train_scaled, y_train)
print(f'Training Score: {clf.score(X_train_scaled, y_train)}')
print(f'Testing Score: {clf.score(X_test_scaled, y_test)}')
```
```
#hide
#default_exp export
#default_cls_lvl 3
from nbdev.showdoc import show_doc
#export
from nbdev.imports import *
from fastcore.script import *
from fastcore.foundation import *
from keyword import iskeyword
import nbformat
```
# Export to modules
> The functions that transform notebooks into a library
The most important function defined in this module is `notebooks2script`, so you may want to jump to it before scrolling through the rest, which explains the details behind the scenes of the conversion from notebooks to a library. The main things to remember are:
- put `# export` on each cell you want exported
- put `# exports` on each cell you want exported with the source code shown in the docs
- put `# exporti` on each cell you want exported without it being added to `__all__`, and without it showing up in the docs.
- one cell should contain `# default_exp` followed by the name of the module everything should be exported in (with dots for submodules and without the py extension); if one specific cell needs to be exported to a different module, just indicate it after `#export`: `#export special.module`
- all left-hand members of an assignment, functions and classes will be exported, and variables that are not private will be put in `__all__` automatically
- to add something to `__all__` if it's not picked automatically, write an exported cell with something like `#add2all "my_name"`
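For instance, a notebook using these flags might contain cells whose sources look like this (module and function names here are hypothetical illustrations):

```python
# Hypothetical cell sources illustrating the flag conventions above
cells = [
    "#default_exp core",                  # default module: lib_name/core.py
    "#export\ndef public_fn(): pass",     # exported to the default module
    "#exports\nCONST = 1",                # exported, source shown in docs
    "#exporti\ndef _helper(): pass",      # exported, hidden from docs and __all__
    "#export special.module\nX = 2",      # exported to lib_name/special/module.py
]
# The flag is the first line of each cell's source
flags = [c.split("\n")[0] for c in cells]
```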
## Basic foundations
For bootstrapping `nbdev` we have a few basic foundations defined in <code>imports</code>, which we test and show here. First, a simple config file class, `Config`, that reads the content of your `settings.ini` file and makes it accessible:
```
show_doc(Config, title_level=3)
create_config("github", "nbdev", user='fastai', path='..', tst_flags='tst', cfg_name='test_settings.ini')
cfg = Config(cfg_name='test_settings.ini')
test_eq(cfg.lib_name, 'nbdev')
test_eq(cfg.git_url, "https://github.com/fastai/nbdev/tree/master/")
# test_eq(cfg.path("lib_path"), Path.cwd().parent/'nbdev')
# test_eq(cfg.path("nbs_path"), Path.cwd())
# test_eq(cfg.path("doc_path"), Path.cwd().parent/'docs')
test_eq(cfg.custom_sidebar, 'False')
```
## Reading a notebook
### What's a notebook?
A jupyter notebook is a json file behind the scenes. We could read it with the json module, which would return a nested dictionary of dictionaries/lists of dictionaries, but there are some small differences between reading the raw json and using the tools from `nbformat`, so we'll use the latter.
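As a sketch of that raw structure (a hand-written minimal notebook dict rather than one read from disk):

```python
import json

# A minimal notebook as it appears on disk: nested dicts and lists
minimal_nb = {
    "cells": [{"cell_type": "code", "metadata": {}, "outputs": [],
               "execution_count": None, "source": "#export\ndef f(): pass"}],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 4,
}
# Round-trip through JSON text, as reading a .ipynb file would
nb = json.loads(json.dumps(minimal_nb))
```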
```
#export
def read_nb(fname):
"Read the notebook in `fname`."
with open(Path(fname),'r', encoding='utf8') as f: return nbformat.reads(f.read(), as_version=4)
```
`fname` can be a string or a pathlib object.
```
test_nb = read_nb('00_export.ipynb')
```
The root has four keys: `cells` contains the cells of the notebook, `metadata` holds information such as the Python version used to execute the notebook, and `nbformat` and `nbformat_minor` give the version of nbformat.
```
test_nb.keys()
test_nb['metadata']
f"{test_nb['nbformat']}.{test_nb['nbformat_minor']}"
```
The cells key then contains a list of cells. Each one is a new dictionary that contains entries like the type (code or markdown), the source (what is written in the cell) and the output (for code cells).
```
test_nb['cells'][0]
```
### Finding patterns
The following functions are used to catch the flags used in the code cells.
```
#export
def check_re(cell, pat, code_only=True):
"Check if `cell` contains a line with regex `pat`"
if code_only and cell['cell_type'] != 'code': return
if isinstance(pat, str): pat = re.compile(pat, re.IGNORECASE | re.MULTILINE)
return pat.search(cell['source'])
```
`pat` can be a string or a compiled regex. If `code_only=True`, this function ignores non-code cells, such as markdown.
```
cell = test_nb['cells'][1].copy()
assert check_re(cell, '#export') is not None
assert check_re(cell, re.compile('#export')) is not None
assert check_re(cell, '# bla') is None
cell['cell_type'] = 'markdown'
assert check_re(cell, '#export') is None
assert check_re(cell, '#export', code_only=False) is not None
#export
def check_re_multi(cell, pats, code_only=True):
"Check if `cell` contains a line matching any regex in `pats`, returning the first match found"
return L(pats).map_first(partial(check_re, cell, code_only=code_only))
cell = test_nb['cells'][0].copy()
cell['source'] = "a b c"
assert check_re(cell, 'a') is not None
assert check_re(cell, 'd') is None
# show that searching with patterns ['d','b','a'] will match 'b'
# i.e. 'd' is not found and we don't search for 'a'
assert check_re_multi(cell, ['d','b','a']).span() == (2,3)
#export
def _mk_flag_re(body, n_params, comment):
"Compiles a regex for finding nbdev flags"
assert body!=True, 'magics no longer supported'
prefix = r"\s*\#\s*"
param_group = ""
if n_params == -1: param_group = r"[ \t]+(.+)"
if n_params == 1: param_group = r"[ \t]+(\S+)"
if n_params == (0,1): param_group = r"(?:[ \t]+(\S+))?"
return re.compile(rf"""
# {comment}:
^ # beginning of line (since re.MULTILINE is passed)
{prefix}
{body}
{param_group}
[ \t]* # any number of spaces and/or tabs
$ # end of line (since re.MULTILINE is passed)
""", re.MULTILINE | re.VERBOSE)
```
This function returns a regex object that can be used to find nbdev flags in multiline text:
- `body` regex fragment to match one or more flags,
- `n_params` number of flag parameters to match and catch (`-1` for any number of params; `(0,1)` for 0 or 1 params),
- `comment` explains what the compiled regex should do.
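As a standalone sketch, here is a simplified version of the pattern `_mk_flag_re` builds for `n_params=(0,1)` (collapsed into a single-line regex mirroring the `prefix` and `param_group` pieces above; it is not the exact compiled pattern):

```python
import re

# Simplified flag regex: leading whitespace, '#', flag body, optional parameter
pat = re.compile(r"^\s*#\s*export[si]?(?:[ \t]+(\S+))?[ \t]*$", re.MULTILINE)

m1 = pat.search("#export")                  # flag with no parameter
m2 = pat.search("# exports special.module")  # flag with a module parameter
```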
```
#hide
re_blank_test = _mk_flag_re('export[si]?', 0, "test")
re_mod_test = _mk_flag_re('export[si]?', 1, "test")
re_opt_test = _mk_flag_re('export[si]?', (0,1), "test")
for f in ['export', 'exports', 'exporti']:
cell = nbformat.v4.new_code_cell(f'#{f} \n some code')
assert check_re(cell, re_blank_test) is not None
assert check_re(cell, re_mod_test) is None
assert check_re(cell, re_opt_test) is not None
test_eq(check_re(cell, re_opt_test).groups()[0], None)
cell.source = f'#{f} special.module \n some code'
assert check_re(cell, re_blank_test) is None
assert check_re(cell, re_mod_test) is not None
test_eq(check_re(cell, re_mod_test).groups()[0], 'special.module')
assert check_re(cell, re_opt_test) is not None
test_eq(check_re(cell, re_opt_test).groups()[0], 'special.module')
#export
_re_blank_export = _mk_flag_re("export[si]?", 0,
"Matches any line with #export, #exports or #exporti without any module name")
#export
_re_mod_export = _mk_flag_re("export[si]?", 1,
"Matches any line with #export, #exports or #exporti with a module name and catches it in group 1")
#export
_re_internal_export = _mk_flag_re("exporti", (0,1),
"Matches any line with #exporti with or without a module name")
#exporti
def _is_external_export(tst):
"Check if a cell is an external or internal export. `tst` is an re match"
return _re_internal_export.search(tst.string) is None
#export
def is_export(cell, default):
"Check if `cell` is to be exported and returns the name of the module to export it if provided"
tst = check_re(cell, _re_blank_export)
if tst:
if default is None:
print(f"No export destination, ignored:\n{cell['source']}")
return default, _is_external_export(tst)
tst = check_re(cell, _re_mod_export)
if tst: return os.path.sep.join(tst.groups()[0].split('.')), _is_external_export(tst)
else: return None
```
`is_export` returns:
- a tuple of ("module name", "external boolean" (`False` for an internal export)) if `cell` is to be exported or
- `None` if `cell` will not be exported.
The cells to export are marked with `#export`/`#exporti`/`#exports`, potentially with the name of the module where we want them exported. The default module is given in a cell of the form `#default_exp bla` inside the notebook (usually at the top), though in this function it needs to be passed in (the final script will read the whole notebook to find it).
- a cell marked with `#export`/`#exporti`/`#exports` will be exported to the default module
- an exported cell marked with `special.module` appended will be exported in `special.module` (located in `lib_name/special/module.py`)
- a cell marked with `#export` will have its signature added to the documentation
- a cell marked with `#exports` will additionally have its source code added to the documentation
- a cell marked with `#exporti` will not show up in the documentation, and will also not be added to `__all__`.
```
cell = test_nb['cells'][1].copy()
test_eq(is_export(cell, 'export'), ('export', True))
cell['source'] = "# exports"
test_eq(is_export(cell, 'export'), ('export', True))
cell['source'] = "# exporti"
test_eq(is_export(cell, 'export'), ('export', False))
cell['source'] = "# export mod"
test_eq(is_export(cell, 'export'), ('mod', True))
#hide
cell['source'] = "# export mod.file"
test_eq(is_export(cell, 'export'), (f'mod{os.path.sep}file', True))
cell['source'] = "# exporti mod.file"
test_eq(is_export(cell, 'export'), (f'mod{os.path.sep}file', False))
cell['source'] = "# expt mod.file"
assert is_export(cell, 'export') is None
cell['source'] = "# exportmod.file"
assert is_export(cell, 'export') is None
cell['source'] = "# exportsmod.file"
assert is_export(cell, 'export') is None
cell['source'] = "# exporti mod file"
assert is_export(cell, 'export') is None
#export
_re_default_exp = _mk_flag_re('default_exp', 1, "Matches any line with #default_exp with a module name")
#export
def find_default_export(cells):
"Find in `cells` the default export module."
res = L(cells).map_first(check_re, pat=_re_default_exp)
return res.groups()[0] if res else None
```
Stops at the first cell containing `# default_exp` (if there are several) and returns the value after it. Returns `None` if there is no cell with that code.
```
test_eq(find_default_export(test_nb['cells']), 'export')
assert find_default_export(test_nb['cells'][2:]) is None
#hide
mods = [f'mod{i}' for i in range(3)]
cells = [{'cell_type': 'code', 'source': f'#default_exp {mod}'} for mod in mods]
for i, mod in enumerate(mods): test_eq(mod, find_default_export(cells[i:]))
```
### Listing all exported objects
The following functions make a list of everything that is exported to prepare a proper `__all__` for our exported module.
```
#export
_re_patch_func = re.compile(r"""
# Catches any function decorated with @patch, its name in group 1 and the patched class in group 2
@patch # At any place in the cell, something that begins with @patch
(?:\s*@.*)* # Any other decorator applied to the function
\s*def # Any number of whitespace (including a new line probably) followed by def
\s+ # One whitespace or more
([^\(\s]+) # Catch a group composed of anything but whitespace or an opening parenthesis (name of the function)
\s*\( # Any number of whitespace followed by an opening parenthesis
[^:]* # Any number of character different of : (the name of the first arg that is type-annotated)
:\s* # A colon followed by any number of whitespace
(?: # Non-catching group with either
([^,\s\(\)]*) # a group composed of anything but a comma, a parenthesis or whitespace (name of the class)
| # or
(\([^\)]*\))) # a group composed of something between parenthesis (tuple of classes)
\s* # Any number of whitespace
(?:,|\)) # Non-catching group with either a comma or a closing parenthesis
""", re.VERBOSE)
tst = _re_patch_func.search("""
@patch
@log_args(a=1)
def func(obj:Class):""")
tst, tst.groups()
#hide
tst = _re_patch_func.search("""
@patch
def func(obj:Class):""")
test_eq(tst.groups(), ("func", "Class", None))
tst = _re_patch_func.search("""
@patch
def func (obj:Class, a)""")
test_eq(tst.groups(), ("func", "Class", None))
tst = _re_patch_func.search("""
@patch
def func (obj:(Class1, Class2), a)""")
test_eq(tst.groups(), ("func", None, "(Class1, Class2)"))
tst = _re_patch_func.search("""
@patch
def func (obj:(Class1, Class2), a:int)->int:""")
test_eq(tst.groups(), ("func", None, "(Class1, Class2)"))
tst = _re_patch_func.search("""
@patch
@log_args(but='a,b')
@funcs_kwargs
def func (obj:(Class1, Class2), a:int)->int:""")
test_eq(tst.groups(), ("func", None, "(Class1, Class2)"))
tst = _re_patch_func.search("""
@patch
@contextmanager
def func (obj:Class, a:int)->int:""")
test_eq(tst.groups(), ("func", "Class", None))
#export
_re_typedispatch_func = re.compile(r"""
# Catches any function decorated with @typedispatch
(@typedispatch # At any place in the cell, catch a group with something that begins with @typedispatch
\s*def # Any number of whitespace (including a new line probably) followed by def
\s+ # One whitespace or more
[^\(]+ # Anything but whitespace or an opening parenthesis (name of the function)
\s*\( # Any number of whitespace followed by an opening parenthesis
[^\)]* # Any number of character different of )
\)[\s\S]*:) # A closing parenthesis followed by any number of characters and whitespace (type annotation) and :
""", re.VERBOSE)
#hide
assert _re_typedispatch_func.search("@typedispatch\ndef func(a, b):").groups() == ('@typedispatch\ndef func(a, b):',)
assert (_re_typedispatch_func.search("@typedispatch\ndef func(a:str, b:bool)->int:").groups() ==
('@typedispatch\ndef func(a:str, b:bool)->int:',))
#export
_re_class_func_def = re.compile(r"""
# Catches any 0-indented function or class definition with its name in group 1
^ # Beginning of a line (since re.MULTILINE is passed)
(?:async\sdef|def|class) # Non-catching group for async def, def or class
\s+ # One whitespace or more
([^\(\s]+) # Catching group with any character except an opening parenthesis or a whitespace (name)
\s* # Any number of whitespace
(?:\(|:) # Non-catching group with either an opening parenthesis or a : (classes don't need ())
""", re.MULTILINE | re.VERBOSE)
#hide
test_eq(_re_class_func_def.search("class Class:").groups(), ('Class',))
test_eq(_re_class_func_def.search("def func(a, b):").groups(), ('func',))
test_eq(_re_class_func_def.search("def func(a:str, b:bool)->int:").groups(), ('func',))
test_eq(_re_class_func_def.search("async def func(a, b):").groups(), ('func',))
#export
_re_obj_def = re.compile(r"""
# Catches any 0-indented object definition (bla = thing) with its name in group 1
^ # Beginning of a line (since re.MULTILINE is passed)
([_a-zA-Z]+[a-zA-Z0-9_\.]*) # Catch a group which is a valid python variable name
\s* # Any number of whitespace
(?::\s*\S.*|)= # Non-catching group of either a colon followed by a type annotation, or nothing; followed by an =
""", re.MULTILINE | re.VERBOSE)
#hide
test_eq(_re_obj_def.search("a = 1").groups(), ('a',))
test_eq(_re_obj_def.search("a.b = 1").groups(), ('a.b',))
test_eq(_re_obj_def.search("_aA1=1").groups(), ('_aA1',))
test_eq(_re_obj_def.search("a : int =1").groups(), ('a',))
test_eq(_re_obj_def.search("a:f(':=')=1").groups(), ('a',))
assert _re_obj_def.search("@abc=2") is None
assert _re_obj_def.search("a a=2") is None
#export
def _not_private(n):
for t in n.split('.'):
if (t.startswith('_') and not t.startswith('__')) or t.startswith('@'): return False
return '\\' not in t and '^' not in t and '[' not in t and t != 'else'
def export_names(code, func_only=False):
"Find the names of the objects, functions or classes defined in `code` that are exported."
#Format monkey-patches with @patch
def _f(gps):
nm, cls, t = gps.groups()
if cls is not None: return f"def {cls}.{nm}():"
return '\n'.join([f"def {c}.{nm}():" for c in re.split(', *', t[1:-1])])
code = _re_typedispatch_func.sub('', code)
code = _re_patch_func.sub(_f, code)
names = _re_class_func_def.findall(code)
if not func_only: names += _re_obj_def.findall(code)
return [n for n in names if _not_private(n) and not iskeyword(n)]
```
This function only picks up the zero-indented objects on the left side of an `=`, plus functions and classes (we don't want class methods, for instance), and excludes private names (that begin with `_`) but not dunder names. It only returns function and class names (not objects) when `func_only=True`.
To work properly with the extra Python functionality fastai adds, this function ignores functions decorated with `@typedispatch` (since they are defined multiple times) and properly unwraps functions decorated with `@patch`.
```
test_eq(export_names("def my_func(x):\n pass\nclass MyClass():"), ["my_func", "MyClass"])
#hide
#Indented funcs are ignored (funcs inside a class)
test_eq(export_names(" def my_func(x):\n pass\nclass MyClass():"), ["MyClass"])
#Private funcs are ignored, dunder are not
test_eq(export_names("def _my_func():\n pass\nclass MyClass():"), ["MyClass"])
test_eq(export_names("__version__ = 1:\n pass\nclass MyClass():"), ["MyClass", "__version__"])
#trailing spaces
test_eq(export_names("def my_func ():\n pass\nclass MyClass():"), ["my_func", "MyClass"])
#class without parenthesis
test_eq(export_names("def my_func ():\n pass\nclass MyClass:"), ["my_func", "MyClass"])
#object and funcs
test_eq(export_names("def my_func ():\n pass\ndefault_bla=[]:"), ["my_func", "default_bla"])
test_eq(export_names("def my_func ():\n pass\ndefault_bla=[]:", func_only=True), ["my_func"])
#Private objects are ignored
test_eq(export_names("def my_func ():\n pass\n_default_bla = []:"), ["my_func"])
#Objects with dots are private if one part is private
test_eq(export_names("def my_func ():\n pass\ndefault.bla = []:"), ["my_func", "default.bla"])
test_eq(export_names("def my_func ():\n pass\ndefault._bla = []:"), ["my_func"])
#Monkey-patches with @patch are properly renamed
test_eq(export_names("@patch\ndef my_func(x:Class):\n pass"), ["Class.my_func"])
test_eq(export_names("@patch\ndef my_func(x:Class):\n pass", func_only=True), ["Class.my_func"])
test_eq(export_names("some code\n@patch\ndef my_func(x:Class, y):\n pass"), ["Class.my_func"])
test_eq(export_names("some code\n@patch\ndef my_func(x:(Class1,Class2), y):\n pass"), ["Class1.my_func", "Class2.my_func"])
#Check delegates
test_eq(export_names("@delegates(keep=True)\nclass someClass:\n pass"), ["someClass"])
#Typedispatch decorated functions shouldn't be added
test_eq(export_names("@patch\ndef my_func(x:Class):\n pass\n@typedispatch\ndef func(x: TensorImage): pass"), ["Class.my_func"])
#try, except and other keywords should not be picked up (these can look like object def with type annotation)
test_eq(export_names("try:\n a=1\nexcept:\n b=2"), [])
test_eq(export_names("try:\n this_might_work\nexcept:\n b=2"), [])
#export
_re_all_def = re.compile(r"""
# Catches a cell that defines _all_ = [...] and catches the contents in group 1
^_all_ # Beginning of line (since re.MULTILINE is passed)
\s*=\s* # Any number of whitespace, =, any number of whitespace
\[ # Opening [
([^\n\]]*) # Catching group with anything except a ] or newline
\] # Closing ]
""", re.MULTILINE | re.VERBOSE)
#Same with __all__
_re__all__def = re.compile(r'^__all__\s*=\s*\[([^\]]*)\]', re.MULTILINE)
#export
def extra_add(flags, code):
"Catch adds to `__all__` required by a cell with `_all_=`"
m = check_re({'source': code}, _re_all_def, False)
if m:
code = m.re.sub('#nbdev_' + 'comment \g<0>', code)
code = re.sub(r'([^\n]|^)\n*$', r'\1', code)
if not m: return [], code
def clean_quotes(s):
"Return `s` enclosed in single quotes, removing double quotes if needed"
if s.startswith("'") and s.endswith("'"): return s
if s.startswith('"') and s.endswith('"'): s = s[1:-1]
return f"'{s}'"
return [clean_quotes(s) for s in parse_line(m.group(1))], code
```
Sometimes objects are not picked up to be automatically added to the `__all__` of the module, so you will need to add them manually. To do so, create an exported cell with code like `_all_ = ["name", "name2"]`.
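A simplified sketch of how such a cell is picked up (the real parsing uses `_re_all_def` above; the names here are hypothetical):

```python
import re

# Hypothetical exported cell adding names manually to __all__
src = '_all_ = ["chunked", "retry"]'
# Match the bracketed list and split it into individual names
m = re.match(r'^_all_\s*=\s*\[([^\]\n]*)\]', src)
names = [s.strip().strip('"\'') for s in m.group(1).split(',')]
```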
```
#hide
for code, expected in [
['_all_ = ["func", "func1", "func2"]',
(["'func'", "'func1'", "'func2'"],'#nbdev_comment _all_ = ["func", "func1", "func2"]')],
['_all_=[func, func1, func2]',
(["'func'", "'func1'", "'func2'"],'#nbdev_comment _all_=[func, func1, func2]')],
["_all_ = ['func', 'func1', 'func2']",
(["'func'", "'func1'", "'func2'"],"#nbdev_comment _all_ = ['func', 'func1', 'func2']")],
['_all_ = ["func", "func1" , "func2"]',
(["'func'", "'func1'", "'func2'"],'#nbdev_comment _all_ = ["func", "func1" , "func2"]')],
["_all_ = ['func','func1', 'func2']\n",
(["'func'", "'func1'", "'func2'"],"#nbdev_comment _all_ = ['func','func1', 'func2']")],
["_all_ = ['func']\n_all_ = ['func1', 'func2']\n",
(["'func'"],"#nbdev_comment _all_ = ['func']\n#nbdev_comment _all_ = ['func1', 'func2']")],
['code\n\n_all_ = ["func", "func1", "func2"]',
(["'func'", "'func1'", "'func2'"],'code\n\n#nbdev_comment _all_ = ["func", "func1", "func2"]')],
['code\n\n_all_ = [func]\nmore code',
(["'func'"],'code\n\n#nbdev_comment _all_ = [func]\nmore code')]]:
test_eq(extra_add('', code), expected)
# line breaks within the list of names means _all_ is ignored
test_eq(extra_add('', "_all_ = ['func',\n'func1', 'func2']\n"), ([],"_all_ = ['func',\n'func1', 'func2']\n"))
#export
_re_from_future_import = re.compile(r"^from[ \t]+__future__[ \t]+import.*$", re.MULTILINE)
def _from_future_import(fname, flags, code, to_dict=None):
"Write `__future__` imports to `fname` and return `code` with `__future__` imports commented out"
from_future_imports = _re_from_future_import.findall(code)
if from_future_imports: code = _re_from_future_import.sub('#nbdev' + '_comment \g<0>', code)
else: from_future_imports = _re_from_future_import.findall(flags)
if not from_future_imports or to_dict is not None: return code
with open(fname, 'r', encoding='utf8') as f: text = f.read()
start = _re__all__def.search(text).start()
with open(fname, 'w', encoding='utf8') as f:
f.write('\n'.join([text[:start], *from_future_imports, '\n', text[start:]]))
return code
```
If you need a `from __future__ import` in your library, you can export your cell with special comments:
```python
#export
from __future__ import annotations
class ...
```
Notice that `#export` is after the `__future__` import. Because `__future__` imports must occur at the beginning of the file, nbdev allows `__future__` imports in the flags section of a cell.
```
#hide
txt = """
# AUTOHEADER ... File to edit: mod.ipynb (unless otherwise specified).
__all__ = [my_file, MyClas]
# Cell
def valid_code(): pass"""
expected_txt = """
# AUTOHEADER ... File to edit: mod.ipynb (unless otherwise specified).
from __future__ import annotations
from __future__ import generator_stop
__all__ = [my_file, MyClas]
# Cell
def valid_code(): pass"""
flags="# export"
code = """
# comment
from __future__ import annotations
valid_code = False # but _from_future_import will work anyway
from __future__ import generator_stop
from __future__ import not_zero_indented
valid_code = True
"""
expected_code = """
# comment
#nbdev_comment from __future__ import annotations
valid_code = False # but _from_future_import will work anyway
#nbdev_comment from __future__ import generator_stop
from __future__ import not_zero_indented
valid_code = True
"""
def _run_from_future_import_test():
fname = 'test_from_future_import.txt'
with open(fname, 'w', encoding='utf8') as f: f.write(txt)
actual_code=_from_future_import(fname, flags, code, {})
test_eq(expected_code, actual_code)
with open(fname, 'r', encoding='utf8') as f: test_eq(f.read(), txt)
actual_code=_from_future_import(fname, flags, code)
test_eq(expected_code, actual_code)
with open(fname, 'r', encoding='utf8') as f: test_eq(f.read(), expected_txt)
os.remove(fname)
_run_from_future_import_test()
flags="""from __future__ import annotations
from __future__ import generator_stop
#export"""
code = ""
expected_code = ""
fname = 'test_from_future_import.txt'
_run_from_future_import_test()
#export
def _add2all(fname, names, line_width=120):
if len(names) == 0: return
with open(fname, 'r', encoding='utf8') as f: text = f.read()
tw = TextWrapper(width=120, initial_indent='', subsequent_indent=' '*11, break_long_words=False)
re_all = _re__all__def.search(text)
start,end = re_all.start(),re_all.end()
text_all = tw.wrap(f"{text[start:end-1]}{'' if text[end-2]=='[' else ', '}{', '.join(names)}]")
with open(fname, 'w', encoding='utf8') as f: f.write(text[:start] + '\n'.join(text_all) + text[end:])
#hide
fname = 'test_add.txt'
with open(fname, 'w', encoding='utf8') as f: f.write("Bla\n__all__ = [my_file, MyClas]\nBli")
_add2all(fname, ['new_function'])
with open(fname, 'r', encoding='utf8') as f:
test_eq(f.read(), "Bla\n__all__ = [my_file, MyClas, new_function]\nBli")
_add2all(fname, [f'new_function{i}' for i in range(10)])
with open(fname, 'r', encoding='utf8') as f:
test_eq(f.read(), """Bla
__all__ = [my_file, MyClas, new_function, new_function0, new_function1, new_function2, new_function3, new_function4,
new_function5, new_function6, new_function7, new_function8, new_function9]
Bli""")
os.remove(fname)
#export
def relative_import(name, fname):
"Convert a module `name` to a name relative to `fname`"
mods = name.split('.')
splits = str(fname).split(os.path.sep)
if mods[0] not in splits: return name
i=len(splits)-1
while i>0 and splits[i] != mods[0]: i-=1
splits = splits[i:]
while len(mods)>0 and splits[0] == mods[0]: splits,mods = splits[1:],mods[1:]
return '.' * (len(splits)) + '.'.join(mods)
```
When we write
``` python
from lib_name.module.submodule import bla
```
in a notebook, it needs to be converted to something like
```
from .module.submodule import bla
```
or
```python
from .submodule import bla
```
depending on where we are. This function handles renaming those imports.
Note that imports of the form
```python
import lib_name.module
```
are left as-is, since the syntax `import module` does not work for relative imports.
```
test_eq(relative_import('nbdev.core', Path.cwd()/'nbdev'/'data.py'), '.core')
test_eq(relative_import('nbdev.core', Path('nbdev')/'vision'/'data.py'), '..core')
test_eq(relative_import('nbdev.vision.transform', Path('nbdev')/'vision'/'data.py'), '.transform')
test_eq(relative_import('nbdev.notebook.core', Path('nbdev')/'data'/'external.py'), '..notebook.core')
test_eq(relative_import('nbdev.vision', Path('nbdev')/'vision'/'learner.py'), '.')
#export
_re_import = ReLibName(r'^(\s*)from (LIB_NAME\.\S*) import (.*)$')
#export
def _deal_import(code_lines, fname):
def _replace(m):
sp,mod,obj = m.groups()
return f"{sp}from {relative_import(mod, fname)} import {obj}"
return [_re_import.re.sub(_replace,line) for line in code_lines]
#hide
lines = ["from nbdev.core import *",
"nothing to see",
" from nbdev.vision import bla1, bla2",
"from nbdev.vision import models",
"import nbdev.vision"]
test_eq(_deal_import(lines, Path.cwd()/'nbdev'/'data.py'), [
"from .core import *",
"nothing to see",
" from .vision import bla1, bla2",
"from .vision import models",
"import nbdev.vision"
])
```
## Create the library
### Saving an index
To be able to rebuild the correspondence between functions and the notebooks they are defined in, we need to store an index. It's done in the private module <code>_nbdev</code> inside your library, and the following functions are used to define it.
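As an assumed sketch of what the generated `_nbdev.py` looks like (the exact header and spacing come from `reset_nbdev_module` below; the index entries here are hypothetical):

```python
# AUTOGENERATED BY NBDEV! DO NOT EDIT! (sketch; entries are hypothetical)
__all__ = ["index", "modules", "custom_doc_links", "git_url"]

index = {"read_nb": "00_export.ipynb"}   # maps each exported name to its notebook
modules = ["export.py"]                  # modules generated by nbdev
doc_url = "https://nbdev.fast.ai/"
git_url = "https://github.com/fastai/nbdev/tree/master/"

def custom_doc_links(name): return None  # hook for custom documentation links
```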
```
#export
_re_index_custom = re.compile(r'def custom_doc_links\(name\):(.*)$', re.DOTALL)
#export
def reset_nbdev_module():
"Create a skeleton for <code>_nbdev</code>"
fname = Config().path("lib_path")/'_nbdev.py'
fname.parent.mkdir(parents=True, exist_ok=True)
sep = '\n'* (int(Config().get('cell_spacing', '1'))+1)
if fname.is_file():
with open(fname, 'r') as f: search = _re_index_custom.search(f.read())
else: search = None
prev_code = search.groups()[0] if search is not None else ' return None\n'
with open(fname, 'w') as f:
f.write(f"# AUTOGENERATED BY NBDEV! DO NOT EDIT!")
f.write('\n\n__all__ = ["index", "modules", "custom_doc_links", "git_url"]')
f.write('\n\nindex = {}')
f.write('\n\nmodules = []')
f.write(f'\n\ndoc_url = "{Config().doc_host}{Config().doc_baseurl}"')
f.write(f'\n\ngit_url = "{Config().git_url}"')
f.write(f'{sep}def custom_doc_links(name):{prev_code}')
#export
class _EmptyModule():
def __init__(self):
self.index,self.modules = {},[]
self.doc_url,self.git_url = f"{Config().doc_host}{Config().doc_baseurl}",Config().git_url
def custom_doc_links(self, name): return None
#export
def get_nbdev_module():
"Reads <code>_nbdev</code>"
try:
spec = importlib.util.spec_from_file_location(f"{Config().lib_name}._nbdev", Config().path("lib_path")/'_nbdev.py')
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
return mod
except: return _EmptyModule()
#export
_re_index_idx = re.compile(r'index\s*=\s*{[^}]*}')
_re_index_mod = re.compile(r'modules\s*=\s*\[[^\]]*\]')
#export
def save_nbdev_module(mod):
"Save `mod` inside <code>_nbdev</code>"
fname = Config().path("lib_path")/'_nbdev.py'
with open(fname, 'r') as f: code = f.read()
t = r',\n '.join([f'"{k}": "{v}"' for k,v in mod.index.items()])
code = _re_index_idx.sub("index = {"+ t +"}", code)
t = r',\n '.join(['"' + f.replace('\\','/') + '"' for f in mod.modules])
code = _re_index_mod.sub(f"modules = [{t}]", code)
with open(fname, 'w') as f: f.write(code)
#hide
ind,ind_bak = Config().path("lib_path")/'_nbdev.py',Config().path("lib_path")/'_nbdev.bak'
if ind.exists(): shutil.move(ind, ind_bak)
try:
reset_nbdev_module()
mod = get_nbdev_module()
test_eq(mod.index, {})
test_eq(mod.modules, [])
mod.index = {'foo':'bar'}
mod.modules.append('lala.bla')
save_nbdev_module(mod)
mod = get_nbdev_module()
test_eq(mod.index, {'foo':'bar'})
test_eq(mod.modules, ['lala.bla'])
finally:
if ind_bak.exists(): shutil.move(ind_bak, ind)
```
### Create the modules
```
#export
def split_flags_and_code(cell, return_type=list):
"Splits the `source` of a cell into 2 parts and returns (flags, code)"
code_lines = cell['source'].split('\n')
split_pos = 0 if code_lines[0].strip().startswith('#') else -1
for i, line in enumerate(code_lines):
if not line.startswith('#') and line.strip() and not _re_from_future_import.match(line): break
split_pos+=1
res = code_lines[:split_pos], code_lines[split_pos:]
if return_type is list: return res
return tuple('\n'.join(r) for r in res)
```
`return_type` tells us whether the returned tuple will contain `list`s of lines or `str`ings with line breaks.
We treat the first comment line as a flag:
<img alt="split_flags_and_code example" width="450" align="left" src="images/split_flags_and_code.png" />
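That rule can be sketched in plain Python. This is a simplified version assuming a single flag line (`split_first_flag` is a hypothetical helper; the real function also handles multiple flags and `from __future__` imports):

```python
def split_first_flag(code_lines):
    """Treat a leading comment line (e.g. '#export') as the flag;
    everything after it, including later comments, is code."""
    if code_lines and code_lines[0].strip().startswith('#'):
        return code_lines[:1], code_lines[1:]
    return [], code_lines

flags, code = split_first_flag(['#export',
                                '# TODO: write this function',
                                'def func(x): pass'])
print(flags)  # → ['#export']
print(code)   # → ['# TODO: write this function', 'def func(x): pass']
```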
```
def _test_split_flags_and_code(expected_flags, expected_code):
cell = nbformat.v4.new_code_cell('\n'.join(expected_flags + expected_code))
test_eq((expected_flags, expected_code), split_flags_and_code(cell))
expected=('\n'.join(expected_flags), '\n'.join(expected_code))
test_eq(expected, split_flags_and_code(cell, str))
_test_split_flags_and_code([
'#export'],
['# TODO: write this function',
'def func(x): pass'])
#export
def create_mod_file(fname, nb_path, bare=False):
"Create a module file for `fname`."
bare = str(Config().get('bare', bare)) == 'True'
fname.parent.mkdir(parents=True, exist_ok=True)
file_path = os.path.relpath(nb_path, Config().config_file.parent).replace('\\', '/')
with open(fname, 'w') as f:
if not bare: f.write(f"# AUTOGENERATED! DO NOT EDIT! File to edit: {file_path} (unless otherwise specified).")
f.write('\n\n__all__ = []')
```
A new module file is created each time a notebook has a cell marked with `#default_exp`. In your collection of notebooks, you should have only one notebook that creates a given module, since module files are re-created on each library build (to ensure the library stays clean). Note that any file you create manually will never be overwritten (unless it shares its name with a module defined in a `#default_exp` cell), so you are responsible for cleaning those up yourself.
`fname` is the notebook that contained the `#default_exp` cell.
```
#export
def create_mod_files(files, to_dict=False, bare=False):
"Create mod files for default exports found in `files`"
modules = []
for f in files:
fname = Path(f)
nb = read_nb(fname)
default = find_default_export(nb['cells'])
if default is not None:
default = os.path.sep.join(default.split('.'))
modules.append(default)
if not to_dict:
create_mod_file(Config().path("lib_path")/f'{default}.py', Config().path("nbs_path")/f'{fname}', bare=bare)
return modules
```
Create module files for all `#default_exp` flags found in `files` and return a list containing the names of the modules created.
Note: the number of modules returned will be less than the number of files passed in if some files do not contain a `#default_exp` cell.
By creating all module files before calling `_notebook2script`, the order of execution no longer matters - so you can export to a module whose notebook is converted "later".
You might still have problems when
- converting a subset of notebooks, or
- exporting to a module that does not have a `#default_exp` cell yet,
in which case `_notebook2script` will print warnings like:
```
Warning: Exporting to "core.py" but this module is not part of this build
```
If you see a warning like this:
- if the module file (e.g. "core.py") does not exist, you'll get a `FileNotFoundError`
- if the module file exists, the exported cell will be appended to it - even if that cell is already there
```
#export
def _notebook2script(fname, modules, silent=False, to_dict=None, bare=False):
"Finds cells starting with `#export` and puts them into a module created by `create_mod_files`"
bare = str(Config().get('bare', bare)) == 'True'
if os.environ.get('IN_TEST',0): return # don't export if running tests
sep = '\n'* (int(Config().get('cell_spacing', '1'))+1)
fname = Path(fname)
nb = read_nb(fname)
default = find_default_export(nb['cells'])
if default is not None:
default = os.path.sep.join(default.split('.'))
mod = get_nbdev_module()
exports = [is_export(c, default) for c in nb['cells']]
cells = [(i,c,e) for i,(c,e) in enumerate(zip(nb['cells'],exports)) if e is not None]
for i,c,(e,a) in cells:
if e not in modules: print(f'Warning: Exporting to "{e}.py" but this module is not part of this build')
fname_out = Config().path("lib_path")/f'{e}.py'
if bare: orig = "\n"
else: orig = (f'# {"" if a else "Internal "}C' if e==default else f'# Comes from {fname.name}, c') + 'ell\n'
flag_lines,code_lines = split_flags_and_code(c)
code_lines = _deal_import(code_lines, fname_out)
code = sep + orig + '\n'.join(code_lines)
names = export_names(code)
flags = '\n'.join(flag_lines)
extra,code = extra_add(flags, code)
code = _from_future_import(fname_out, flags, code, to_dict)
if a:
if to_dict is None: _add2all(fname_out, [f"'{f}'" for f in names if '.' not in f and len(f) > 0] + extra)
mod.index.update({f: fname.name for f in names})
code = re.sub(r' +$', '', code, flags=re.MULTILINE)
if code != sep + orig[:-1]:
if to_dict is not None: to_dict[e].append((i, fname, code))
else:
with open(fname_out, 'a', encoding='utf8') as f: f.write(code)
if f'{e}.py' not in mod.modules: mod.modules.append(f'{e}.py')
save_nbdev_module(mod)
if not silent: print(f"Converted {fname.name}.")
return to_dict
#hide
if not os.environ.get('IN_TEST',0):
modules = create_mod_files(glob.glob('00_export.ipynb'))
_notebook2script('00_export.ipynb', modules)
#hide
with open(Config().path("lib_path")/('export.py')) as f: l = f.readline()
test_eq(l, '# AUTOGENERATED! DO NOT EDIT! File to edit: nbs/00_export.ipynb (unless otherwise specified).\n')
#export
def add_init(path):
"Add `__init__.py` in all subdirs of `path` containing python files if it's not there already"
for p,d,f in os.walk(path):
for f_ in f:
if f_.endswith('.py'):
if not (Path(p)/'__init__.py').exists(): (Path(p)/'__init__.py').touch()
break
with tempfile.TemporaryDirectory() as d:
os.makedirs(Path(d)/'a', exist_ok=True)
(Path(d)/'a'/'f.py').touch()
os.makedirs(Path(d)/'a/b', exist_ok=True)
(Path(d)/'a'/'b'/'f.py').touch()
add_init(d)
assert not (Path(d)/'__init__.py').exists()
for e in [Path(d)/'a', Path(d)/'a/b']:
assert (e/'__init__.py').exists()
#export
_re_version = re.compile(r'^__version__\s*=.*$', re.MULTILINE)
#export
def update_version():
"Add or update `__version__` in the main `__init__.py` of the library"
fname = Config().path("lib_path")/'__init__.py'
if not fname.exists(): fname.touch()
version = f'__version__ = "{Config().version}"'
with open(fname, 'r') as f: code = f.read()
if _re_version.search(code) is None: code = version + "\n" + code
else: code = _re_version.sub(version, code)
with open(fname, 'w') as f: f.write(code)
#export
_re_baseurl = re.compile(r'^baseurl\s*:.*$', re.MULTILINE)
#export
def update_baseurl():
"Add or update `baseurl` in `_config.yml` for the docs"
fname = Config().path("doc_path")/'_config.yml'
if not fname.exists(): return
with open(fname, 'r') as f: code = f.read()
if _re_baseurl.search(code) is None: code = code + f"\nbaseurl: {Config().doc_baseurl}"
else: code = _re_baseurl.sub(f"baseurl: {Config().doc_baseurl}", code)
with open(fname, 'w') as f: f.write(code)
#export
def notebook2script(fname=None, silent=False, to_dict=False, bare=False):
"Convert notebooks matching `fname` to modules"
# initial checks
if os.environ.get('IN_TEST',0): return # don't export if running tests
if fname is None:
reset_nbdev_module()
update_version()
update_baseurl()
files = [f for f in Config().path("nbs_path").glob('*.ipynb') if not f.name.startswith('_')]
else: files = glob.glob(fname)
d = collections.defaultdict(list) if to_dict else None
modules = create_mod_files(files, to_dict, bare=bare)
for f in sorted(files): d = _notebook2script(f, modules, silent=silent, to_dict=d, bare=bare)
if to_dict: return d
else: add_init(Config().path("lib_path"))
```
Finds cells starting with `#export` and puts them into the appropriate module. If `fname` is not specified, this will convert all notebooks not beginning with an underscore in the `nbs_path` folder defined in `settings.ini`. Otherwise `fname` can be a single filename or a glob expression.
`silent` suppresses the printed messages, and `to_dict` is used internally to convert the library to a dictionary.
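The file-selection step can be sketched with the standard library, mirroring the code above (`select_notebooks` is a hypothetical helper, not part of nbdev):

```python
import glob
import pathlib
import tempfile

def select_notebooks(nbs_path, fname=None):
    """Mimic notebook2script's selection: all non-underscore notebooks
    in nbs_path when fname is None, else whatever the glob matches."""
    if fname is None:
        return sorted(f for f in pathlib.Path(nbs_path).glob('*.ipynb')
                      if not f.name.startswith('_'))
    return sorted(pathlib.Path(p) for p in glob.glob(fname))

with tempfile.TemporaryDirectory() as d:
    for name in ['00_export.ipynb', '01_sync.ipynb', '_template.ipynb']:
        (pathlib.Path(d) / name).touch()
    print([f.name for f in select_notebooks(d)])
    # → ['00_export.ipynb', '01_sync.ipynb']
```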
```
#export
class DocsTestClass:
"for tests only"
def test(): pass
#hide
#exporti
#for tests only
def update_lib_with_exporti_testfn(): pass
```
## Export -
```
#hide
notebook2script()
```
# base
```
import vectorbt as vbt
from vectorbt.base import column_grouper, array_wrapper, combine_fns, index_fns, indexing, reshape_fns
import numpy as np
import pandas as pd
from datetime import datetime
from numba import njit
import itertools
v1 = 0
a1 = np.array([1])
a2 = np.array([1, 2, 3])
a3 = np.array([[1, 2, 3]])
a4 = np.array([[1], [2], [3]])
a5 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
sr_none = pd.Series([1])
print(sr_none)
sr1 = pd.Series([1], index=pd.Index(['x1'], name='i1'), name='a1')
print(sr1)
sr2 = pd.Series([1, 2, 3], index=pd.Index(['x2', 'y2', 'z2'], name='i2'), name='a2')
print(sr2)
df_none = pd.DataFrame([[1]])
print(df_none)
df1 = pd.DataFrame(
[[1]],
index=pd.Index(['x3'], name='i3'),
columns=pd.Index(['a3'], name='c3'))
print(df1)
df2 = pd.DataFrame(
[[1], [2], [3]],
index=pd.Index(['x4', 'y4', 'z4'], name='i4'),
columns=pd.Index(['a4'], name='c4'))
print(df2)
df3 = pd.DataFrame(
[[1, 2, 3]],
index=pd.Index(['x5'], name='i5'),
columns=pd.Index(['a5', 'b5', 'c5'], name='c5'))
print(df3)
df4 = pd.DataFrame(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
index=pd.Index(['x6', 'y6', 'z6'], name='i6'),
columns=pd.Index(['a6', 'b6', 'c6'], name='c6'))
print(df4)
multi_i = pd.MultiIndex.from_arrays([['x7', 'y7', 'z7'], ['x8', 'y8', 'z8']], names=['i7', 'i8'])
multi_c = pd.MultiIndex.from_arrays([['a7', 'b7', 'c7'], ['a8', 'b8', 'c8']], names=['c7', 'c8'])
df5 = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=multi_i, columns=multi_c)
print(df5)
```
## column_grouper
```
some_columns = pd.MultiIndex.from_arrays([
[1, 1, 1, 1, 0, 0, 0, 0],
[3, 3, 2, 2, 1, 1, 0, 0],
[7, 6, 5, 4, 3, 2, 1, 0]
], names=['first', 'second', 'third'])
print(column_grouper.group_by_to_index(some_columns, group_by=0))
print(column_grouper.group_by_to_index(some_columns, group_by='first'))
print(column_grouper.group_by_to_index(some_columns, group_by=[0, 1]))
print(column_grouper.group_by_to_index(some_columns, group_by=['first', 'second']))
print(column_grouper.group_by_to_index(some_columns, group_by=np.array([3, 2, 1, 1, 1, 0, 0, 0])))
print(column_grouper.group_by_to_index(some_columns, group_by=pd.Index([3, 2, 1, 1, 1, 0, 0, 0], name='fourth')))
# group_arr always runs from 0 to n and preserves the order of appearance
print(column_grouper.get_groups_and_index(some_columns, 0))
print(column_grouper.get_groups_and_index(some_columns, [0, 1]))
print(column_grouper.get_groups_and_index(some_columns, np.array([3, 2, 1, 1, 1, 0, 0, 0])))
print(column_grouper.get_group_lens_nb(np.array([0, 0, 0, 0, 1, 1, 1, 1])))
print(column_grouper.get_group_lens_nb(np.array([0, 1])))
print(column_grouper.get_group_lens_nb(np.array([0, 0])))
print(column_grouper.get_group_lens_nb(np.array([0])))
print(column_grouper.get_group_lens_nb(np.array([])))
print(column_grouper.ColumnGrouper(sr2.to_frame().columns, group_by=np.array([0])).group_by)
print(column_grouper.ColumnGrouper(sr2.to_frame().columns, group_by=np.array([0])).get_groups_and_columns())
print(column_grouper.ColumnGrouper(sr2.to_frame().columns, group_by=np.array([0])).get_groups())
print(column_grouper.ColumnGrouper(sr2.to_frame().columns, group_by=np.array([0])).get_columns())
print(column_grouper.ColumnGrouper(sr2.to_frame().columns, group_by=np.array([0])).get_group_lens())
print(column_grouper.ColumnGrouper(sr2.to_frame().columns, group_by=np.array([0])).get_group_start_idxs())
print(column_grouper.ColumnGrouper(sr2.to_frame().columns, group_by=np.array([0])).get_group_end_idxs())
print(column_grouper.ColumnGrouper(df4.columns, group_by=np.array([0, 0, 1])).group_by)
print(column_grouper.ColumnGrouper(df4.columns, group_by=np.array([0, 0, 1])).get_groups_and_columns())
print(column_grouper.ColumnGrouper(df4.columns, group_by=np.array([0, 0, 1])).get_groups())
print(column_grouper.ColumnGrouper(df4.columns, group_by=np.array([0, 0, 1])).get_columns())
print(column_grouper.ColumnGrouper(df4.columns, group_by=np.array([0, 0, 1])).get_group_lens())
print(column_grouper.ColumnGrouper(df4.columns, group_by=np.array([0, 0, 1])).get_group_start_idxs())
print(column_grouper.ColumnGrouper(df4.columns, group_by=np.array([0, 0, 1])).get_group_end_idxs())
```
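The calls to `get_group_lens_nb` above assume contiguous group labels and return the length of each run. The same result can be sketched in plain Python with `itertools.groupby` (a sketch of the semantics, not the Numba implementation):

```python
from itertools import groupby

def group_lens(group_arr):
    """Length of each contiguous run of group labels,
    e.g. [0, 0, 0, 1, 1] -> [3, 2]."""
    return [sum(1 for _ in run) for _, run in groupby(group_arr)]

print(group_lens([0, 0, 0, 0, 1, 1, 1, 1]))  # → [4, 4]
print(group_lens([0]))                       # → [1]
print(group_lens([]))                        # → []
```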
## array_wrapper
```
sr2_wrapper = array_wrapper.ArrayWrapper.from_obj(sr2)
df4_wrapper = array_wrapper.ArrayWrapper.from_obj(df4)
sr2_wrapper_co = sr2_wrapper.copy(column_only_select=True)
df4_wrapper_co = df4_wrapper.copy(column_only_select=True)
sr2_grouped_wrapper = sr2_wrapper.copy(group_by=np.array([0]))
df4_grouped_wrapper = df4_wrapper.copy(group_by=np.array([0, 0, 1]))
sr2_grouped_wrapper_co = sr2_grouped_wrapper.copy(column_only_select=True)
df4_grouped_wrapper_co = df4_grouped_wrapper.copy(column_only_select=True)
# test indexing
print(sr2_wrapper._indexing_func_meta(lambda x: x.iloc[:2])[1:])
print(df4_wrapper._indexing_func_meta(lambda x: x.iloc[0, :2])[1:])
print(df4_wrapper._indexing_func_meta(lambda x: x.iloc[:2, 0])[1:])
print(df4_wrapper._indexing_func_meta(lambda x: x.iloc[:2, [0]])[1:])
print(df4_wrapper._indexing_func_meta(lambda x: x.iloc[:2, :2])[1:])
print(df4_wrapper_co._indexing_func_meta(lambda x: x.iloc[0])[1:])
print(df4_wrapper_co._indexing_func_meta(lambda x: x.iloc[[0]])[1:])
print(df4_wrapper_co._indexing_func_meta(lambda x: x.iloc[:2])[1:])
print(sr2_grouped_wrapper._indexing_func_meta(lambda x: x.iloc[:2])[1:])
print(df4_grouped_wrapper._indexing_func_meta(lambda x: x.iloc[:2, 0])[1:])
print(df4_grouped_wrapper._indexing_func_meta(lambda x: x.iloc[:2, 1])[1:])
print(df4_grouped_wrapper._indexing_func_meta(lambda x: x.iloc[:2, [1]])[1:])
print(df4_grouped_wrapper._indexing_func_meta(lambda x: x.iloc[:2, :2])[1:])
print(df4_grouped_wrapper_co._indexing_func_meta(lambda x: x.iloc[0])[1:])
print(df4_grouped_wrapper_co._indexing_func_meta(lambda x: x.iloc[1])[1:])
print(df4_grouped_wrapper_co._indexing_func_meta(lambda x: x.iloc[[1]])[1:])
print(df4_grouped_wrapper_co._indexing_func_meta(lambda x: x.iloc[:2])[1:])
print(sr2_wrapper.iloc[:2].index)
print(sr2_wrapper.iloc[:2].columns)
print(sr2_wrapper.iloc[:2].ndim)
print(df4_wrapper.iloc[0, :2].index)
print(df4_wrapper.iloc[0, :2].columns)
print(df4_wrapper.iloc[0, :2].ndim)
print(df4_wrapper.iloc[:2, 0].index)
print(df4_wrapper.iloc[:2, 0].columns)
print(df4_wrapper.iloc[:2, 0].ndim)
print(df4_wrapper.iloc[:2, [0]].index)
print(df4_wrapper.iloc[:2, [0]].columns)
print(df4_wrapper.iloc[:2, [0]].ndim)
print(df4_wrapper.iloc[:2, :2].index)
print(df4_wrapper.iloc[:2, :2].columns)
print(df4_wrapper.iloc[:2, :2].ndim)
print(df4_wrapper_co.iloc[0].index)
print(df4_wrapper_co.iloc[0].columns)
print(df4_wrapper_co.iloc[0].ndim)
print(df4_wrapper_co.iloc[[0]].index)
print(df4_wrapper_co.iloc[[0]].columns)
print(df4_wrapper_co.iloc[[0]].ndim)
print(df4_wrapper_co.iloc[:2].index)
print(df4_wrapper_co.iloc[:2].columns)
print(df4_wrapper_co.iloc[:2].ndim)
print(sr2_grouped_wrapper.iloc[:2].index)
print(sr2_grouped_wrapper.iloc[:2].columns)
print(sr2_grouped_wrapper.iloc[:2].ndim)
print(sr2_grouped_wrapper.iloc[:2].grouped_ndim)
print(sr2_grouped_wrapper.iloc[:2].grouper.group_by)
print(df4_grouped_wrapper.iloc[:2, 0].index)
print(df4_grouped_wrapper.iloc[:2, 0].columns)
print(df4_grouped_wrapper.iloc[:2, 0].ndim)
print(df4_grouped_wrapper.iloc[:2, 0].grouped_ndim)
print(df4_grouped_wrapper.iloc[:2, 0].grouper.group_by)
print(df4_grouped_wrapper.iloc[:2, 1].index)
print(df4_grouped_wrapper.iloc[:2, 1].columns)
print(df4_grouped_wrapper.iloc[:2, 1].ndim)
print(df4_grouped_wrapper.iloc[:2, 1].grouped_ndim)
print(df4_grouped_wrapper.iloc[:2, 1].grouper.group_by)
print(df4_grouped_wrapper.iloc[:2, [1]].index)
print(df4_grouped_wrapper.iloc[:2, [1]].columns)
print(df4_grouped_wrapper.iloc[:2, [1]].ndim)
print(df4_grouped_wrapper.iloc[:2, [1]].grouped_ndim)
print(df4_grouped_wrapper.iloc[:2, [1]].grouper.group_by)
print(df4_grouped_wrapper.iloc[:2, :2].index)
print(df4_grouped_wrapper.iloc[:2, :2].columns)
print(df4_grouped_wrapper.iloc[:2, :2].ndim)
print(df4_grouped_wrapper.iloc[:2, :2].grouped_ndim)
print(df4_grouped_wrapper.iloc[:2, :2].grouper.group_by)
print(df4_grouped_wrapper_co.iloc[0].index)
print(df4_grouped_wrapper_co.iloc[0].columns)
print(df4_grouped_wrapper_co.iloc[0].ndim)
print(df4_grouped_wrapper_co.iloc[0].grouped_ndim)
print(df4_grouped_wrapper_co.iloc[0].grouper.group_by)
print(df4_grouped_wrapper_co.iloc[1].index)
print(df4_grouped_wrapper_co.iloc[1].columns)
print(df4_grouped_wrapper_co.iloc[1].ndim)
print(df4_grouped_wrapper_co.iloc[1].grouped_ndim)
print(df4_grouped_wrapper_co.iloc[1].grouper.group_by)
print(df4_grouped_wrapper_co.iloc[[1]].index)
print(df4_grouped_wrapper_co.iloc[[1]].columns)
print(df4_grouped_wrapper_co.iloc[[1]].ndim)
print(df4_grouped_wrapper_co.iloc[[1]].grouped_ndim)
print(df4_grouped_wrapper_co.iloc[[1]].grouper.group_by)
print(df4_grouped_wrapper_co.iloc[:2].index)
print(df4_grouped_wrapper_co.iloc[:2].columns)
print(df4_grouped_wrapper_co.iloc[:2].ndim)
print(df4_grouped_wrapper_co.iloc[:2].grouped_ndim)
print(df4_grouped_wrapper_co.iloc[:2].grouper.group_by)
big_df = pd.DataFrame(np.empty((1000, 1000)))
big_df_wrapper = array_wrapper.ArrayWrapper.from_obj(big_df)
big_df_wrapper_co = big_df_wrapper.copy(column_only_select=True)
big_df_grouped_wrapper = df4_wrapper.copy(group_by=np.array([0, 0, 1]))
big_df_grouped_wrapper_co = big_df_grouped_wrapper.copy(column_only_select=True)
%timeit big_df_wrapper.iloc[:, 0]
%timeit big_df_wrapper.iloc[:, :]
%timeit big_df_wrapper_co.iloc[0]
%timeit big_df_wrapper_co.iloc[:]
%timeit big_df_grouped_wrapper.iloc[:, 0]
%timeit big_df_grouped_wrapper.iloc[:, :]
%timeit big_df_grouped_wrapper_co.iloc[0]
%timeit big_df_grouped_wrapper_co.iloc[:]
print(df4_grouped_wrapper_co.wrap(np.array([[1, 2], [3, 4], [5, 6]])))
print(df4_grouped_wrapper_co.wrap_reduced(np.array([1, 2])))
print(df4_grouped_wrapper_co.wrap(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), group_by=False))
print(df4_grouped_wrapper_co.wrap_reduced(np.array([1, 2, 3]), group_by=False))
print(df4_grouped_wrapper_co.iloc[0].wrap(np.array([1, 2, 3])))
print(df4_grouped_wrapper_co.iloc[0].wrap_reduced(np.array([1])))
print(df4_grouped_wrapper_co.iloc[0].wrap(np.array([[1, 2], [3, 4], [5, 6]]), group_by=False))
print(df4_grouped_wrapper_co.iloc[0].wrap_reduced(np.array([1, 2]), group_by=False))
print(df4_grouped_wrapper_co.iloc[[0]].wrap(np.array([1, 2, 3])))
print(df4_grouped_wrapper_co.iloc[[0]].wrap_reduced(np.array([1])))
print(df4_grouped_wrapper_co.iloc[[0]].wrap(np.array([[1, 2], [3, 4], [5, 6]]), group_by=False))
print(df4_grouped_wrapper_co.iloc[[0]].wrap_reduced(np.array([1, 2]), group_by=False))
print(df4_grouped_wrapper_co.iloc[1].wrap(np.array([1, 2, 3])))
print(df4_grouped_wrapper_co.iloc[1].wrap_reduced(np.array([1])))
print(df4_grouped_wrapper_co.iloc[1].wrap(np.array([1, 2, 3]), group_by=False))
print(df4_grouped_wrapper_co.iloc[1].wrap_reduced(np.array([1]), group_by=False))
print(df4_grouped_wrapper_co.iloc[[1]].wrap(np.array([1, 2, 3])))
print(df4_grouped_wrapper_co.iloc[[1]].wrap_reduced(np.array([1])))
print(df4_grouped_wrapper_co.iloc[[1]].wrap(np.array([1, 2, 3]), group_by=False))
print(df4_grouped_wrapper_co.iloc[[1]].wrap_reduced(np.array([1]), group_by=False))
```
## index_fns
```
i1 = index_fns.index_from_values([0.1, 0.2], name='a')
i2 = index_fns.index_from_values(np.tile(np.arange(1, 4)[:, None][:, None], (1, 3, 3)), name='b')
i3 = index_fns.index_from_values(np.random.uniform(size=(3, 3, 3)), name='c')
print(i1)
print(i2)
print(i3)
print(index_fns.repeat_index(i2, 3))
print(index_fns.repeat_index(multi_i, 3))
print(index_fns.tile_index(i2, 3))
print(index_fns.tile_index(multi_i, 3))
i23 = index_fns.stack_indexes(i2, i3)
i32 = index_fns.stack_indexes(i3, i2)
print(i23)
print(i32)
print(index_fns.stack_indexes(multi_i, multi_i, drop_duplicates=False))
print(index_fns.stack_indexes(multi_i, multi_i, drop_duplicates=True))
print(index_fns.stack_indexes([0, 1], ['a', 'b'], drop_redundant=False))
print(index_fns.stack_indexes([0, 1], ['a', 'b'], drop_redundant=True))
print(index_fns.stack_indexes(pd.Index([0, 1], name='test_name'), ['a', 'b'], drop_redundant=True))
print(index_fns.stack_indexes(['a', 'a'], ['a', 'b'], drop_redundant=True))
print(index_fns.stack_indexes(pd.Index(['a', 'a'], name='test_name'), ['a', 'b'], drop_redundant=True))
print(index_fns.combine_indexes(pd.Index([1]), pd.Index([2, 3]), drop_duplicates=False))
print(index_fns.combine_indexes(pd.Index([1]), pd.Index([2, 3]), drop_duplicates=True))
print(index_fns.combine_indexes(pd.Index([1, 2]), pd.Index([3]), drop_duplicates=False))
print(index_fns.combine_indexes(pd.Index([1, 2]), pd.Index([3]), drop_duplicates=True))
print(index_fns.combine_indexes(i1, i2))  # combine_indexes stacks the cartesian product of both indexes
print(index_fns.combine_indexes(i2, i3))
print(index_fns.combine_indexes(i23, i23))
print(index_fns.drop_levels(multi_i, 'i10'))
print(index_fns.drop_levels(multi_i, ['i7', 'i8']))
print(index_fns.rename_levels(pd.Int64Index([1, 2, 3], name='i'), {'i': 'f'}))
print(index_fns.rename_levels(multi_i, {'i7': 'f7', 'i8': 'f8'}))
print(index_fns.select_levels(multi_i, 'i7'))
print(index_fns.select_levels(multi_i, ['i7']))
print(index_fns.select_levels(multi_i, ['i7', 'i8']))
print(index_fns.drop_redundant_levels(pd.Index(['a', 'a']))) # ignores levels with single element
print(index_fns.drop_redundant_levels(pd.Index(['a', 'a'], name='hi')))
print(index_fns.drop_redundant_levels(pd.MultiIndex.from_arrays([['a', 'a'], ['b', 'b']], names=['hi', 'hi2'])))
print(index_fns.drop_redundant_levels(pd.MultiIndex.from_arrays([['a', 'b'], ['a', 'b']], names=['hi', 'hi2'])))
print(index_fns.drop_redundant_levels(pd.MultiIndex.from_arrays([[0, 1], ['a', 'b']], names=[None, 'hi2']))) # ignores 0-to-n
print(index_fns.drop_redundant_levels(pd.MultiIndex.from_arrays([[0, 2], ['a', 'b']], names=[None, 'hi2']))) # legit
print(index_fns.drop_redundant_levels(pd.MultiIndex.from_arrays([[0, 1], ['a', 'b']], names=['hi', 'hi2']))) # legit (w/ name)
print(index_fns.drop_duplicate_levels(pd.MultiIndex.from_arrays(
[[1, 2, 3], [1, 2, 3]], names=['a', 'a'])))
print(index_fns.drop_duplicate_levels(pd.MultiIndex.from_tuples(
[(0, 1, 2, 1), ('a', 'b', 'c', 'b')], names=['x', 'y', 'z', 'y']), keep='last'))
print(index_fns.drop_duplicate_levels(pd.MultiIndex.from_tuples(
[(0, 1, 2, 1), ('a', 'b', 'c', 'b')], names=['x', 'y', 'z', 'y']), keep='first'))
multi_c1 = pd.MultiIndex.from_arrays([['a8', 'b8']], names=['c8'])
multi_c2 = pd.MultiIndex.from_arrays([['a7', 'a7', 'c7', 'c7'], ['a8', 'b8', 'a8', 'b8']], names=['c7', 'c8'])
index_fns.align_index_to(multi_c1, multi_c2)
print(index_fns.pick_levels(multi_c, required_levels=[], optional_levels=[]))
print(index_fns.pick_levels(multi_c, required_levels=['c8'], optional_levels=[]))
print(index_fns.pick_levels(multi_c, required_levels=['c8'], optional_levels=[]))
print(index_fns.pick_levels(multi_c, required_levels=['c7', 'c8'], optional_levels=[]))
print(index_fns.pick_levels(multi_c, required_levels=['c8', None], optional_levels=[]))
print(index_fns.pick_levels(multi_c, required_levels=[None, None], optional_levels=[]))
print(index_fns.pick_levels(multi_c, required_levels=[None], optional_levels=['c8']))
print(index_fns.pick_levels(multi_c, required_levels=['c8'], optional_levels=[None]))
print(index_fns.pick_levels(multi_c, required_levels=[], optional_levels=['c7', 'c8']))
```
## reshape_fns
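`reshape_fns` builds on NumPy's broadcasting rules: shapes are aligned from the right and size-1 axes are stretched to match. A quick plain-NumPy refresher before the vectorbt variants below:

```python
import numpy as np

col = np.array([[1], [2], [3]])   # shape (3, 1)
row = np.array([[10, 20, 30]])    # shape (1, 3)

# Size-1 axes are stretched so both operands become (3, 3).
out = col + row
print(out.shape)  # → (3, 3)
print(out[0])     # → [11 21 31]

# np.broadcast_to materializes the broadcast view explicitly.
print(np.broadcast_to(col, (3, 3))[1])  # → [2 2 2]
```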
```
print(reshape_fns.soft_to_ndim(a2, 1))
print(reshape_fns.soft_to_ndim(sr2, 1))
print(reshape_fns.soft_to_ndim(df2, 1))
print(reshape_fns.soft_to_ndim(df4, 1)) # cannot -> do nothing
print(reshape_fns.soft_to_ndim(a2, 2))
print(reshape_fns.soft_to_ndim(sr2, 2))
print(reshape_fns.soft_to_ndim(df2, 2))
print(reshape_fns.to_1d(None))
print(reshape_fns.to_1d(v1))
print(reshape_fns.to_1d(a1))
print(reshape_fns.to_1d(a2))
print(reshape_fns.to_1d(sr1))
print(reshape_fns.to_1d(sr2))
print(reshape_fns.to_1d(df1))
print(reshape_fns.to_1d(df2))
print(reshape_fns.to_2d(None))
print(reshape_fns.to_2d(v1))
print(reshape_fns.to_2d(a1))
print(reshape_fns.to_2d(a2))
print(reshape_fns.to_2d(sr1))
print(reshape_fns.to_2d(sr2))
print(reshape_fns.to_2d(sr2, expand_axis=0))
print(reshape_fns.repeat(v1, 3, axis=0))
print(reshape_fns.repeat(a1, 3, axis=0))
print(reshape_fns.repeat(a2, 3, axis=0))
print(reshape_fns.repeat(a3, 3, axis=0))
print(reshape_fns.repeat(a4, 3, axis=0))
print(reshape_fns.repeat(a5, 3, axis=0))
print(reshape_fns.repeat(sr_none, 3, axis=0))
print(reshape_fns.repeat(sr1, 3, axis=0))
print(reshape_fns.repeat(sr2, 3, axis=0))
print(reshape_fns.repeat(df_none, 3, axis=0))
print(reshape_fns.repeat(df1, 3, axis=0))
print(reshape_fns.repeat(df2, 3, axis=0))
print(reshape_fns.repeat(df3, 3, axis=0))
print(reshape_fns.repeat(df4, 3, axis=0))
print(reshape_fns.repeat(v1, 3, axis=1))
print(reshape_fns.repeat(a1, 3, axis=1))
print(reshape_fns.repeat(a2, 3, axis=1))
print(reshape_fns.repeat(a3, 3, axis=1))
print(reshape_fns.repeat(a4, 3, axis=1))
print(reshape_fns.repeat(a5, 3, axis=1))
print(reshape_fns.repeat(sr_none, 3, axis=1))
print(reshape_fns.repeat(sr1, 3, axis=1))
print(reshape_fns.repeat(sr2, 3, axis=1))
print(reshape_fns.repeat(df_none, 3, axis=1))
print(reshape_fns.repeat(df1, 3, axis=1))
print(reshape_fns.repeat(df2, 3, axis=1))
print(reshape_fns.repeat(df3, 3, axis=1))
print(reshape_fns.repeat(df4, 3, axis=1))
print(reshape_fns.tile(v1, 3, axis=0))
print(reshape_fns.tile(a1, 3, axis=0))
print(reshape_fns.tile(a2, 3, axis=0))
print(reshape_fns.tile(a3, 3, axis=0))
print(reshape_fns.tile(a4, 3, axis=0))
print(reshape_fns.tile(a5, 3, axis=0))
print(reshape_fns.tile(sr_none, 3, axis=0))
print(reshape_fns.tile(sr1, 3, axis=0))
print(reshape_fns.tile(sr2, 3, axis=0))
print(reshape_fns.tile(df_none, 3, axis=0))
print(reshape_fns.tile(df1, 3, axis=0))
print(reshape_fns.tile(df2, 3, axis=0))
print(reshape_fns.tile(df3, 3, axis=0))
print(reshape_fns.tile(df4, 3, axis=0))
print(reshape_fns.tile(v1, 3, axis=1))
print(reshape_fns.tile(a1, 3, axis=1))
print(reshape_fns.tile(a2, 3, axis=1))
print(reshape_fns.tile(a3, 3, axis=1))
print(reshape_fns.tile(a4, 3, axis=1))
print(reshape_fns.tile(a5, 3, axis=1))
print(reshape_fns.tile(sr_none, 3, axis=1))
print(reshape_fns.tile(sr1, 3, axis=1))
print(reshape_fns.tile(sr2, 3, axis=1))
print(reshape_fns.tile(df_none, 3, axis=1))
print(reshape_fns.tile(df1, 3, axis=1))
print(reshape_fns.tile(df2, 3, axis=1))
print(reshape_fns.tile(df3, 3, axis=1))
print(reshape_fns.tile(df4, 3, axis=1))
# Change broadcasting rules globally
vbt.settings.broadcasting['index_from'] = 'stack' # default is 'strict'
vbt.settings.broadcasting['columns_from'] = 'stack'
print(vbt.settings.broadcasting)
# Broadcasting arrays
args = [
('v1', v1),
('a1', a1),
('a2', a2),
('a3', a3),
('a4', a4),
('a5', a5)
]
arg_combs = list(itertools.combinations_with_replacement(args, 2))
for (n1, arg1), (n2, arg2) in arg_combs:
print(arg1)
print(arg2)
print("================")
arg1, arg2 = reshape_fns.broadcast(arg1, arg2)
print(arg1)
print(arg2)
print()
# Broadcasting series
args = [
('sr_none', sr_none),
('sr1', sr1),
('sr2', sr2)
]
arg_combs = list(itertools.combinations_with_replacement(args, 2))
for (n1, arg1), (n2, arg2) in arg_combs:
print(n1 + '+' + n2)
print(arg1)
print(arg2)
print("================")
arg1, arg2 = reshape_fns.broadcast(arg1, arg2)
print(arg1)
print(arg2)
print()
# Broadcasting arrays and series
a_args = [
('v1', v1),
('a1', a1),
('a2', a2),
('a3', a3),
('a4', a4),
('a5', a5)
]
sr_args = [
('sr_none', sr_none),
('sr1', sr1),
('sr2', sr2)
]
arg_combs = list(itertools.product(a_args, sr_args))
for (n1, arg1), (n2, arg2) in arg_combs:
print(n1 + '+' + n2)
print(arg1)
print(arg2)
print("================")
arg1, arg2 = reshape_fns.broadcast(arg1, arg2)
print(arg1)
print(arg2)
print()
# Broadcasting dataframes
args = [
('df_none', df_none),
('df1', df1),
('df2', df2),
('df3', df3),
('df4', df4)
]
arg_combs = list(itertools.combinations_with_replacement(args, 2))
for (n1, arg1), (n2, arg2) in arg_combs:
print(n1 + '+' + n2)
print(arg1)
print(arg2)
print("================")
arg1, arg2 = reshape_fns.broadcast(arg1, arg2)
print(arg1)
print(arg2)
print()
# Broadcasting arrays and dataframes
a_args = [
('v1', v1),
('a1', a1),
('a2', a2),
('a3', a3),
('a4', a4),
('a5', a5)
]
sr_args = [
('df_none', df_none),
('df1', df1),
('df2', df2),
('df3', df3),
('df4', df4)
]
arg_combs = list(itertools.product(a_args, sr_args))
for (n1, arg1), (n2, arg2) in arg_combs:
print(n1 + '+' + n2)
print(arg1)
print(arg2)
print("================")
arg1, arg2 = reshape_fns.broadcast(arg1, arg2)
print(arg1)
print(arg2)
print()
# Broadcasting series and dataframes
a_args = [
('sr_none', sr_none),
('sr1', sr1),
('sr2', sr2)
]
sr_args = [
('df_none', df_none),
('df1', df1),
('df2', df2),
('df3', df3),
('df4', df4)
]
arg_combs = list(itertools.product(a_args, sr_args))
for (n1, arg1), (n2, arg2) in arg_combs:
print(n1 + '+' + n2)
print(arg1)
print(arg2)
print("================")
arg1, arg2 = reshape_fns.broadcast(arg1, arg2)
print(arg1)
print(arg2)
print()
[np.broadcast_to(x, (3, 3)) for x in (0, a1, a2, sr_none, sr1, sr2)]
# Broadcasting all at once
for i in reshape_fns.broadcast(
0, a1, a2, sr_none, sr1, sr2,
to_shape=(3, 3),
index_from='stack',
columns_from='stack'
):
print(i)
# Broadcasting all at once
for i in reshape_fns.broadcast(
v1, a1, a2, a3, a4, a5, sr_none, sr1, sr2, df_none, df1, df2, df3, df4,
index_from='stack',
columns_from='stack'
):
print(i)
for i in reshape_fns.broadcast(
v1, a1, a2, a3, a4, a5, sr_none, sr1, sr2, df_none, df1, df2, df3, df4,
index_from=None, # use as-is
columns_from=None
):
print(i)
for i in reshape_fns.broadcast(
v1, a1, a2, a3, a4, a5, sr_none, sr1, sr2, df_none, df1, df2, df3, df4,
index_from=-1, # take index from the last dataframe
columns_from=-1
):
print(i)
for i in reshape_fns.broadcast(
v1, a1, a2, a3, a4, a5, sr_none, sr1, sr2, df_none, df1, df2, df3, df4,
index_from=multi_i, # specify manually
columns_from=multi_c
):
print(i)
# Do not clean columns
vbt.settings.broadcasting['drop_duplicates'] = False
vbt.settings.broadcasting['drop_redundant'] = False
vbt.settings.broadcasting['ignore_sr_names'] = False
for i in reshape_fns.broadcast(
v1, a1, a2, a3, a4, a5, sr_none, sr1, sr2, df_none, df1, df2, df3, df4,
index_from='stack', # stack but do not clean
columns_from='stack'
):
print(i)
vbt.settings.broadcasting.reset()
big_a = np.empty((1000, 1000))
print(reshape_fns.broadcast(np.empty((1,)), big_a)[0].flags)
%timeit reshape_fns.broadcast(np.empty((1,)), big_a)
print(reshape_fns.broadcast(np.empty((1,)), big_a, require_kwargs={'requirements': 'W'})[0].flags)
%timeit reshape_fns.broadcast(np.empty((1,)), big_a, require_kwargs={'requirements': 'W'})
print(reshape_fns.broadcast(np.empty((1,)), big_a, require_kwargs={'requirements': 'C'})[0].flags)
%timeit reshape_fns.broadcast(np.empty((1,)), big_a, require_kwargs={'requirements': 'C'})
print(reshape_fns.broadcast(np.empty((1,)), big_a, require_kwargs={'requirements': 'F'})[0].flags)
%timeit reshape_fns.broadcast(np.empty((1,)), big_a, require_kwargs={'requirements': 'F'})
print(reshape_fns.broadcast(v1, df4, to_pd=False))
print(reshape_fns.broadcast(v1, df4, to_pd=True))
# One-side broadcasting, default behaviour is copying index/columns from the second argument
print(reshape_fns.broadcast_to(sr1, sr1))
print(reshape_fns.broadcast_to(sr1, sr2))
print(reshape_fns.broadcast_to(sr1, df1))
print(reshape_fns.broadcast_to(sr1, df2))
print(reshape_fns.broadcast_to(sr1, df3))
print(reshape_fns.broadcast_to(sr1, df4))
# Broadcasting first element to be an array out of the second argument
print(reshape_fns.broadcast_to_array_of(0.1, v1))
print(reshape_fns.broadcast_to_array_of([0.1], v1))
print(reshape_fns.broadcast_to_array_of([0.1, 0.2], v1))
print(reshape_fns.broadcast_to_array_of(0.1, sr2))
print(reshape_fns.broadcast_to_array_of([0.1], sr2))
print(reshape_fns.broadcast_to_array_of([0.1, 0.2], sr2))
print(reshape_fns.broadcast_to_array_of([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], sr2))
print(reshape_fns.broadcast_to_array_of(0.1, df2))
print(reshape_fns.broadcast_to_array_of([0.1], df2))
print(reshape_fns.broadcast_to_array_of([0.1, 0.2], df2))
print(reshape_fns.broadcast_to_array_of([[[0.1], [0.2], [0.3]], [[0.4], [0.5], [0.6]]], df2))
print(reshape_fns.broadcast_to_array_of(0.1, np.empty((2, 2, 2)))) # works even for ndim > 2
print(reshape_fns.broadcast_to_axis_of(10, np.empty((2,)), 0))
print(reshape_fns.broadcast_to_axis_of(10, np.empty((2,)), 1))
print(reshape_fns.broadcast_to_axis_of(10, np.empty((2, 3)), 0))
print(reshape_fns.broadcast_to_axis_of(10, np.empty((2, 3)), 1))
print(reshape_fns.broadcast_to_axis_of(10, np.empty((2, 3)), 2))
i = pd.MultiIndex.from_arrays([[1, 1, 2, 2], [3, 4, 3, 4], ['a', 'b', 'c', 'd']])
sr = pd.Series([1, 2, 3, 4], index=i)
print(reshape_fns.unstack_to_array(sr))
print(reshape_fns.make_symmetric(sr1))
print(reshape_fns.make_symmetric(sr2))
print(reshape_fns.make_symmetric(df1))
print(reshape_fns.make_symmetric(df2))
print(reshape_fns.make_symmetric(df3))
print(reshape_fns.make_symmetric(df4))
print(reshape_fns.make_symmetric(df5))
print(reshape_fns.make_symmetric(pd.Series([1, 2, 3], name='yo'), sort=False))
print(reshape_fns.unstack_to_df(df5.iloc[0]))
print(reshape_fns.unstack_to_df(sr, index_levels=0, column_levels=1))
print(reshape_fns.unstack_to_df(sr, index_levels=(0, 1), column_levels=2))
print(reshape_fns.unstack_to_df(sr, index_levels=0, column_levels=1, symmetric=True).columns)
```
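For reference, the label-alignment problem that `broadcast` solves can be seen with plain NumPy and pandas. A minimal sketch (assuming the usual `np`/`pd` aliases): NumPy broadcasts purely by shape, while pandas aligns on labels, producing NaNs for mismatched indexes.

```python
import numpy as np
import pandas as pd

# NumPy broadcasts purely by shape: (3,) against (3, 1) gives (3, 3).
a = np.array([1, 2, 3])
b = np.array([[10], [20], [30]])
print((a + b).shape)  # (3, 3)

# pandas aligns on labels instead, so mismatched indexes produce NaN.
s1 = pd.Series([1, 2, 3], index=['x', 'y', 'z'])
s2 = pd.Series([10, 20, 30], index=['y', 'z', 'w'])
print(s1 + s2)  # NaN at 'w' and 'x'
```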
## indexing
```
PandasIndexer = indexing.PandasIndexer
ParamIndexer = indexing.ParamIndexerFactory(['param1', 'param2', 'tuple'])
class H(PandasIndexer, ParamIndexer):
def __init__(self, a, param1_mapper, param2_mapper, tuple_mapper):
self.a = a
self._param1_mapper = param1_mapper
self._param2_mapper = param2_mapper
self._tuple_mapper = tuple_mapper
PandasIndexer.__init__(self, my_kw='PandasIndexer')
ParamIndexer.__init__(self, [param1_mapper, param2_mapper, tuple_mapper], my_kw='ParamIndexer')
def _indexing_func(self, pd_indexing_func, my_kw=None):
        # As soon as you call iloc etc., the indexing is applied to each DataFrame and mapper, and a new class instance is returned
print(my_kw)
param1_mapper = indexing.indexing_on_mapper(self._param1_mapper, self.a, pd_indexing_func)
param2_mapper = indexing.indexing_on_mapper(self._param2_mapper, self.a, pd_indexing_func)
tuple_mapper = indexing.indexing_on_mapper(self._tuple_mapper, self.a, pd_indexing_func)
return H(pd_indexing_func(self.a), param1_mapper, param2_mapper, tuple_mapper)
@classmethod
def run(cls, a, params1, params2, level_names=('p1', 'p2')):
a = reshape_fns.to_2d(a)
# Build column hierarchy
params1_idx = pd.Index(params1, name=level_names[0])
params2_idx = pd.Index(params2, name=level_names[1])
params_idx = index_fns.stack_indexes(params1_idx, params2_idx)
new_columns = index_fns.combine_indexes(params_idx, a.columns)
# Build mappers
param1_mapper = np.repeat(params1, len(a.columns))
param1_mapper = pd.Series(param1_mapper, index=new_columns, name=params1_idx.name)
param2_mapper = np.repeat(params2, len(a.columns))
param2_mapper = pd.Series(param2_mapper, index=new_columns, name=params2_idx.name)
tuple_mapper = list(zip(*list(map(lambda x: x.values, [param1_mapper, param2_mapper]))))
tuple_mapper = pd.Series(tuple_mapper, index=new_columns, name=(params1_idx.name, params2_idx.name))
        # Tile a to match the length of new_columns (one copy per parameter combination)
        a = array_wrapper.ArrayWrapper(a.index, new_columns, 2).wrap(reshape_fns.tile(a.values, len(params1), axis=1))
return cls(a, param1_mapper, param2_mapper, tuple_mapper)
# Simulate an indicator with two params
h = H.run(df4, [0.1, 0.1, 0.2, 0.2], [0.3, 0.4, 0.5, 0.6])
print(df4)
print(h.a)
print(h._param1_mapper)
print(h._param2_mapper)
print(h._tuple_mapper)
# Indexing operations are delegated to the underlying dataframes
print(h[(0.1, 0.3, 'a6')].a)
print(h.loc[:, (0.1, 0.3, 'a6'):(0.1, 0.3, 'c6')].a)
print(h.iloc[-2:, -2:].a)
print(h.xs((0.1, 0.3), level=('p1', 'p2'), axis=1).a.columns)
print(h.param1_loc[0.1].a.columns)
print(h.param1_loc[0.1:0.1].a)
print(h.param1_loc[[0.1, 0.1]].a)
print(h.param2_loc[0.3].a)
print(h.param2_loc[0.3:0.3].a)
print(h.param2_loc[[0.3, 0.3]].a.columns)
print(h.tuple_loc[(0.1, 0.3)].a)
print(h.tuple_loc[(0.1, 0.3):(0.1, 0.3)].a.columns)
print(h.tuple_loc[[(0.1, 0.3), (0.1, 0.3)]].a.columns)
```
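The `param1_loc`/`tuple_loc` selectors above are essentially label lookups on a parameterized column `MultiIndex`. Plain pandas offers a similar selection via `xs`; a small sketch with made-up parameter values and column names:

```python
import numpy as np
import pandas as pd

# A two-level column hierarchy: parameter value on top, asset column below.
cols = pd.MultiIndex.from_product([[0.1, 0.2], ['a', 'b']], names=['p1', 'asset'])
df = pd.DataFrame(np.arange(8).reshape(2, 4), columns=cols)

# Select every column produced with p1 == 0.1, dropping that level.
print(df.xs(0.1, level='p1', axis=1))
```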
## combine_fns
```
vbt.settings.broadcasting['index_from'] = 'stack'
vbt.settings.broadcasting['columns_from'] = 'stack'
print(combine_fns.apply_and_concat_one(3, lambda i, x, a: x + a[i], sr2.values, [10, 20, 30]))
print(combine_fns.apply_and_concat_one_nb(3, njit(lambda i, x, a: x + a[i]), sr2.values, (10, 20, 30)))
print(combine_fns.apply_and_concat_one(3, lambda i, x, a: x + a[i], df4.values, [10, 20, 30]))
print(combine_fns.apply_and_concat_one_nb(3, njit(lambda i, x, a: x + a[i]), df4.values, (10, 20, 30)))
print(combine_fns.apply_and_concat_multiple(3, lambda i, x, a: (x, x + a[i]), sr2.values, [10, 20, 30]))
print(combine_fns.apply_and_concat_multiple_nb(3, njit(lambda i, x, a: (x, x + a[i])), sr2.values, (10, 20, 30)))
print(combine_fns.apply_and_concat_multiple(3, lambda i, x, a: (x, x + a[i]), df4.values, [10, 20, 30]))
print(combine_fns.apply_and_concat_multiple_nb(3, njit(lambda i, x, a: (x, x + a[i])), df4.values, (10, 20, 30)))
print(combine_fns.combine_and_concat(sr2.values, (sr2.values*2, sr2.values*3), lambda x, y, a: x + y + a, 100))
print(combine_fns.combine_and_concat_nb(sr2.values, (sr2.values*2, sr2.values*3), njit(lambda x, y, a: x + y + a), 100))
print(combine_fns.combine_and_concat(df4.values, (df4.values*2, df4.values*3), lambda x, y, a: x + y + a, 100))
print(combine_fns.combine_and_concat_nb(df4.values, (df4.values*2, df4.values*3), njit(lambda x, y, a: x + y + a), 100))
print(combine_fns.combine_multiple((sr2.values, sr2.values*2, sr2.values*3), lambda x, y, a: x + y + a, 100))
print(combine_fns.combine_multiple_nb((sr2.values, sr2.values*2, sr2.values*3), njit(lambda x, y, a: x + y + a), 100))
print(combine_fns.combine_multiple((df4.values, df4.values*2, df4.values*3), lambda x, y, a: x + y + a, 100))
print(combine_fns.combine_multiple_nb((df4.values, df4.values*2, df4.values*3), njit(lambda x, y, a: x + y + a), 100))
```
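Conceptually, `apply_and_concat_one` calls a function once per parameter index and stacks the results as columns. A minimal pure-NumPy sketch of that pattern (the helper name `apply_and_concat` here is illustrative, not the library function):

```python
import numpy as np

def apply_and_concat(n, x, apply_func, *args):
    # Call apply_func(i, x, *args) for each i and stack results column-wise.
    return np.column_stack([apply_func(i, x, *args) for i in range(n)])

x = np.array([1.0, 2.0, 3.0])
out = apply_and_concat(3, x, lambda i, x, a: x + a[i], [10, 20, 30])
print(out)  # columns: x+10, x+20, x+30
```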
## accessors
```
print(pd.Series.vbt.empty(5, index=np.arange(10, 15), name='a', fill_value=5))
print(pd.DataFrame.vbt.empty((5, 3), index=np.arange(10, 15), columns=['a', 'b', 'c'], fill_value=5))
print(pd.Series.vbt.empty_like(sr2, fill_value=5))
print(pd.DataFrame.vbt.empty_like(df4, fill_value=5))
print(sr1.vbt.is_series())
print(sr1.vbt.is_frame())
print(df1.vbt.is_series())
print(df2.vbt.is_frame())
print(sr2.vbt.wrapper.index)
print(sr2.vbt.wrapper.columns)
print(df4.vbt.wrapper.index)
print(df4.vbt.wrapper.columns)
print(df1.vbt.apply_on_index(lambda idx: idx + '_yo', axis=0))
print(df1.vbt.apply_on_index(lambda idx: idx + '_yo', axis=1))
df1_copy = df1.copy()
df1_copy.vbt.apply_on_index(lambda idx: idx + '_yo', axis=0, inplace=True)
print(df1_copy)
df1_copy.vbt.apply_on_index(lambda idx: idx + '_yo', axis=1, inplace=True)
print(df1_copy)
print(sr2.vbt.to_1d_array())
print(sr2.vbt.to_2d_array())
# It will try to return pd.Series
print(sr2.vbt.wrapper.wrap(a2)) # returns sr
print(sr2.vbt.wrapper.wrap(df2.values)) # returns sr
print(sr2.vbt.wrapper.wrap(df2.values, index=df2.index, columns=df2.columns)) # returns sr
print(sr2.vbt.wrapper.wrap(df4.values, columns=df4.columns)) # returns df
print(sr2.vbt.wrapper.wrap(df4.values, index=df4.index, columns=df4.columns)) # returns df
# It will try to return pd.DataFrame
print(df2.vbt.wrapper.wrap(a2)) # returns df
print(df2.vbt.wrapper.wrap(sr2.values)) # returns df
print(df2.vbt.wrapper.wrap(df4.values, columns=df4.columns)) # returns df
print(df2.vbt.wrapper.wrap(df4.values, index=df4.index, columns=df4.columns)) # returns df
print(df4.vbt.tile(2, keys=['a', 'b']))
print(df4.vbt.repeat(2, keys=['a', 'b']))
df10 = pd.DataFrame([[1, 2], [4, 5], [7, 8]], columns=multi_c1)
df20 = pd.DataFrame([[1, 2, 3, 4], [4, 5, 6, 7], [7, 8, 9, 10]], columns=multi_c2)
print(df10)
print(df20)
print(df10.vbt.align_to(df20))
print(pd.DataFrame.vbt.broadcast(
sr2,
10
))
print(sr2.vbt.broadcast(
10
))
print(sr2.vbt.broadcast_to(
df2
))
print(sr2.vbt.make_symmetric())
print(df2.vbt.make_symmetric())
print(df3.vbt.make_symmetric())
print(df4.vbt.make_symmetric())
print(df5.iloc[:, 0].vbt.unstack_to_array())
print(df5.iloc[:, 0].vbt.unstack_to_df())
print(sr2.vbt.apply(apply_func=lambda x: x ** 2))
print(sr2.vbt.apply(apply_func=lambda x: x ** 2, to_2d=True))
print(df2.vbt.apply(apply_func=lambda x: x ** 2))
print(pd.DataFrame.vbt.concat(sr2, 10, df4, keys=['a', 'b', 'c']))
print(sr2.vbt.concat(10, df4, keys=['a', 'b', 'c']))
print(sr2.vbt.apply_and_concat(3, sr2.values, 10, apply_func=lambda i, x, y, c, d=1: x + y[i] + c + d, d=100))
print(sr2.vbt.apply_and_concat(3, sr2.values, 10, apply_func=njit(lambda i, x, y, c: x + y[i] + c + 100)))
print(sr2.vbt.apply_and_concat(3, df4.values, 10, apply_func=lambda i, x, y, c, d=1: x + y[:, i] + c + d, d=100))
print(sr2.vbt.apply_and_concat(3, df4.values, 10, apply_func=njit(lambda i, x, y, c: x + y[:, i] + c + 100)))
print(df4.vbt.apply_and_concat(3, df4.values, 10, apply_func=lambda i, x, y, c, d=1: x + y[:, i] + c + d, d=100))
print(df4.vbt.apply_and_concat(
3,
df4.values,
10,
apply_func=njit(lambda i, x, y, c: x + y[:, i] + c + 100),
keys=pd.Index(['a', 'b', 'c'], name='hello')))
print(sr2.vbt.combine_with(10., combine_func=lambda x, y: x + y))
print(sr2.vbt.combine_with(10, 100, d=1000, combine_func=lambda x, y, c, d=1: x + y + c + d)) # test args and kwargs
print(sr2.vbt.combine_with([10, 20, 30], combine_func=lambda x, y: x + y))
print(sr2.vbt.combine_with([[10, 20, 30]], combine_func=lambda x, y: x + y))
print(sr2.vbt.combine_with(sr1, combine_func=lambda x, y: x + y, broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with(sr2, combine_func=lambda x, y: x + y, broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with(df2, combine_func=lambda x, y: x + y, broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with(df3, combine_func=lambda x, y: x + y, broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with(df4, combine_func=lambda x, y: x + y, broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with(df5, combine_func=lambda x, y: x + y, broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with_multiple(
[10,
[10, 20, 30],
pd.Series([10, 20, 30])],
10, b=100,
combine_func=lambda x, y, a, b=1: x + y + a + b,
broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with_multiple(
[10,
[10, 20, 30],
[[10, 20, 30]],
pd.Series([10, 20, 30]),
df1,
df3],
10, b=100,
combine_func=lambda x, y, a, b=1: x + y + a + b,
broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with_multiple(
[10,
[10, 20, 30],
[[10, 20, 30]],
pd.Series([10, 20, 30]),
df1,
df3],
10,
combine_func=njit(lambda x, y, a, b=1: x + y + a + 100),
broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with_multiple(
[10,
[10, 20, 30],
[[10, 20, 30]],
pd.Series([10, 20, 30]),
df1,
df3],
10,
combine_func=njit(lambda x, y, a, b=1: x + y + a + 100),
broadcast_kwargs=dict(index_from='stack')))
# Test concat=True
print(sr2.vbt.combine_with_multiple(
[10,
[10, 20, 30],
pd.Series([10, 20, 30])],
10, b=100,
combine_func=lambda x, y, a, b=1: x + y + a + b,
concat=True,
broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with_multiple(
[10,
[10, 20, 30],
[[10, 20, 30]],
pd.Series([10, 20, 30]),
df1,
df3],
10, b=100,
combine_func=lambda x, y, a, b=1: x + y + a + b,
concat=True,
broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with_multiple(
[10,
[10, 20, 30],
[[10, 20, 30]],
pd.Series([10, 20, 30]),
df1,
df3],
10,
combine_func=njit(lambda x, y, a, b=1: x + y + a + 100),
concat=True,
broadcast_kwargs=dict(index_from='stack')))
print(sr2.vbt.combine_with_multiple(
[10,
[10, 20, 30],
[[10, 20, 30]],
pd.Series([10, 20, 30]),
df1,
df3],
10,
combine_func=njit(lambda x, y, a, b=1: x + y + a + 100),
concat=True,
keys=['a', 'b', 'c', 'd', 'e', 'f'],
broadcast_kwargs=dict(index_from='stack')))
# Use magic methods with .vbt to do operations with custom broadcasting
# Regular df3 + df4 will return nans
print(df3.vbt + df4.vbt)
```
<h4>The goal of this document is to introduce Python <b>dictionaries</b>.</h4>
<h1 class="alert alert-success">Dictionaries</h1>
<h2 class="alert alert-info">
Definition: dictionaries</h2>
<p class="alert-warning">A <span class="inline_code">dictionary</span> is an unordered (non-indexed) collection of objects (simple or compound) that relies on the associative <span class="inline_code">key-value</span> mechanism.</p>
A collection is therefore like a list, except that it is not ordered: it makes no sense to say that one element comes before another, since elements are retrieved by name (or rather, by key!). Note that since Python 3.7 dictionaries do preserve insertion order, but elements are still looked up by key, not by position. In practice:
<h2 class="alert alert-info">Using dictionaries</h2>
```
# Defining a dictionary
age_eleve = {'Pierre':17, 'Paul':15,'Jacques':16}
print(age_eleve)
```
<p><span class=inline_code>age_eleve</span> is a dictionary; it is delimited by curly braces {}.</p>
<p><span class=inline_code>Pierre</span> is a <span class=inline_code>key</span> (<code>key</code>) and 17 is the <span class=inline_code>value</span> (<code>value</code>) associated with this <span class=inline_code>key</span>.</p>
<h2 class="alert alert-info">
Accessing a value from a key</h2>
To find out Pierre's age, we write:
age_eleve["Pierre"]
```
age_eleve["Pierre"]
```
<p class="alert alert-danger"><u>Note</u>: if you forget the quotation marks around "Pierre", the interpreter assumes you are referring to a variable named Pierre, which it does not know. That is expected, since no such variable exists here.</p>
```
age_eleve[Pierre]
# Whereas a variable can be used like this:
nom="Pierre"
age_eleve[nom]
```
To check whether a key is present in a dictionary, use <code>in</code>:
```
print("Pierre" in age_eleve)
print("Steven" in age_eleve)
```
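Related to the <code>in</code> test: the <code>get()</code> method looks a key up and returns a default value instead of raising a <code>KeyError</code> when the key is missing:

```python
age_eleve = {'Pierre': 17, 'Paul': 15, 'Jacques': 16}
print(age_eleve.get('Pierre', 0))  # 17
print(age_eleve.get('Steven', 0))  # 0 -- the default, since 'Steven' is absent
```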
<h2 class="alert alert-info">
Iterating over a dictionary</h2>
To enumerate all the key/value pairs of a dictionary, we loop over the view returned by <code>items()</code>:
```
# age_eleve.items() has type dict_items: roughly a list of (key, value) tuples
age_eleve.items()
# Which gives, with a for loop over the two variables clef, valeur:
for clef,valeur in age_eleve.items():
    print(clef+"\t", valeur) # \t is a tab, so that the values line up
```
We could have listed only the keys:
```
# List of keys
for key in age_eleve.keys():
    print(key)
```
Or only the values:
```
# List of values
for value in age_eleve.values():
    print(value)
```
<p class="alert alert-danger"><u>Note</u>:<br/>
If you do not specify keys() or values(), iteration is over the keys by default:</p>
```
for element in age_eleve: # iterating without specifying anything: over the elements of the dictionary
    print(element)
print("--> We obtained an enumeration of the keys.")
```
<h2 class="alert alert-info">Application to the EXIF data of an image</h2>
To read the EXIF data of a photo, we can use the <code>exifread</code> module, which is installed from the command line with:<br/>
<code>$ pip3 install exifread</code><br/>
The EXIF data can then be retrieved as a dictionary, as in the following code:
```
import exifread
import os
print("The current directory is: ", os.getcwd(),'\n')
# Open an image for reading (in binary mode)
f = open("ma_photo.jpg", 'rb')
# Return the EXIF tags in a dictionary (variable tags)
tags = exifread.process_file(f)
print(" "*(40-len("TAG")),"TAG"," | ","VALUE")
print("-"*70)
for key, val in tags.items():
    if key!="JPEGThumbnail":
        compl=40-len(key)
        print(" "*compl,key," : ",val)
```
<p class="alert alert-warning"><u>Exercise 1:</u><br/>
1. Get a photo in compressed JPEG format and place it in the same directory as this document under the name "ma_photo.jpg".<br/>
2. Using the code above, find and display the capture date, the width and height, the exposure time, and the aperture (FNumber).<br/>
3. For how long was the shutter open during the shot?</p>
<h2 class="alert alert-info">
Key types</h2>
Keys are not necessarily of type <span class="inline_code">string</span>:
```
# Integers as keys
shifumi={1:"Pierre",2:"Feuille",3:"Ciseaux"}
from random import randint
# A random draw is thus mapped to rock ("Pierre"), paper ("Feuille") or scissors ("Ciseaux")!
shifumi[randint(1,3)]
```
Tuples can even be used for the key, for the value, or for both:
```
fiche_adherent={('Pierre',56):(15.4,'à jour de cotisation'),('Lucie',24):(15.1,'à jour de cotisation'),
('Pierre',22):(2.6,'à jour de cotisation'),('Lucie',3.2e6):('Non Classé', 'cotisation en retard')}
# Member's ranking (note that, except in ambiguous cases,
# parentheses are not needed to index with a tuple)
print("fiche_adherent['Lucie',24][0] returns: ",fiche_adherent['Lucie',24][0])
```
<p class="alert alert-warning"><u>Exercise 2:</u><br/>
In the cell below, write a program with a loop that produces the following output:

</p>
```
# Status of the annual membership fee:
print("\r") # New line
print("Membership file:")
```
<p class="alert alert-danger"><u>Note</u>.<br/>There is one notable restriction: __a list cannot be used as a key__.<br/>
An <a href="https://wiki.python.org/moin/DictionaryKeys">explanation</a> on the Python wiki: keys must be *hashable*, meaning their value can be used to compute an integer that does not change over time. This is what makes key-based access through a *hash table* efficient.</p>
```
dico={[1,2]:'a'}
```
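Since the hashability requirement rules lists out, the usual workaround is to convert the list to an immutable tuple, which is hashable and can therefore serve as a key:

```python
# A tuple with the same contents is hashable, so it works as a key.
dico = {tuple([1, 2]): 'a'}
print(dico[(1, 2)])  # 'a'
```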
<h2 class="alert alert-info">
Modifying a value</h2>
Dictionaries are mutable:
```
# If Lucie improves her ranking, we update the dictionary:
fiche_adherent[('Lucie',24)]=(15.0,'à jour de cotisation')
# Member's ranking
print(fiche_adherent[('Lucie',24)][0])
```
With a dictionary, adding a key/value pair is as simple as:
```
# To add the well to the shifumi game, we add a key 4 with the value "Puits":
shifumi[4]="Puits"
# The shifumi dictionary becomes:
print(shifumi)
```
Whereas the equivalent with a list raises an <span class="inline_code">index out of range</span> error:
```
# Create a list for our example:
liste_nature=['arbre','fruit','feuille']
print(liste_nature[1])
# We would like to add an element at the next index (3) of the list by writing:
liste_nature[3]='fleur'
# We should instead have written:
liste_nature.append('fleur')
liste_nature
```
<p class="alert alert-warning"><u>Exercise 3:</u><br/>
1. Take the EXIF data of the photo seen above and modify the Copyright tag by adding your name.<br/>
2. Display the data again and check that it has indeed been modified.</p>
<h2 class="alert alert-info">Dictionary comprehensions</h2>
Just like a list, a dictionary can be built by comprehension!
In the following example, we build the ASCII table of uppercase letters:
```
# chr(n) returns the ASCII character corresponding to an integer.
print("In the ASCII table, 65 corresponds to the character",chr(65))
# Build the dictionary mapping each uppercase letter to its integer code:
numero_table_ascii={chr(n): n for n in range(65, 65+26)}
print("ASCII table by letter:")
print(numero_table_ascii)
# Or the other way round with ord(), which returns the integer corresponding to an ASCII character:
print("The integer corresponding to the character a in the ASCII table is",ord('a'))
# Build the dictionary mapping each integer between 97 and 122 to the corresponding lowercase letter:
caractere_ascii={ord(i):i for i in "abcdefghijklmnopqrstuvwxyz"}
print("ASCII table by number:")
print(caractere_ascii)
# Note:
############
# To get the alphabet the clever way, use the string module:
import string
print("The alphabet is:", string.ascii_lowercase)
# Run help('string') for the other attributes of string
```
<p class="alert alert-warning"><u>Exercise 4:</u><br/>
Consider the following string (a quotation from A. Damasio, La Zone du Dehors):<br/><br/>
" >La caméra volante nous pistait depuis notre entrée
<br/>sur l'anneau périphérique. Dès la rampe, j'avais décon-<br/>necté le pilote et poussé le bobsleigh à deux cents -"<br/><br/>
Build a dictionary whose keys are the ASCII characters used in the quotation and whose values are the number of occurrences of each character (do not count the line breaks).
</p>
<u>Solution:</u>
```
# Place your solution here:
```
<p class="alert alert-warning"><u>Exercise 5: encrypting a message.</u><br/>
1. Create a dictionary in which each character is a key and its value is a different character.<br/>
2. Rewrite the string <span class='inline_code'>chaine</span> using this new alphabet.<br/>
3. Under what condition is the message easy to decipher?<br/>
4. Write a program that deciphers the message.</p>
```
# Place your solution here:
```
Some references:<br/>
Wikibook (based on Swinnen's book): https://fr.wikibooks.org/wiki/Programmation_Python/Dictionnaires<br/>
Swinnen, Apprendre à programmer avec Python 3: https://inforef.be/swi/download/apprendre_python3_5.pdf<br/>
How-to guide on python-django.dev: https://python-django.dev/page-apprendre-dictionnaire-python<br/>
On hash tables: https://fr.wikipedia.org/wiki/Table_de_hachage<br/>
More ways to create dictionaries: https://thispointer.com/python-6-different-ways-to-create-dictionaries/
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
        # Activation function is the sigmoid
        self.activation_function = lambda x: 1 / (1 + np.exp(-x))
    def train(self, features, targets):
        ''' Train the network on a batch of features and targets. '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            ### Forward pass ###
            # Signals into and out of the hidden layer
            hidden_inputs = np.dot(X, self.weights_input_to_hidden)
            hidden_outputs = self.activation_function(hidden_inputs)
            # The output unit is linear: no activation on the final layer
            final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
            final_outputs = final_inputs
            ### Backward pass ###
            error = y - final_outputs
            # f'(x) = 1 for a linear output unit
            output_error_term = error * 1
            # Hidden layer's contribution to the error
            hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
            hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
            # Weight step (input to hidden)
            delta_weights_i_h += hidden_error_term.T * X[:, None]
            # Weight step (hidden to output)
            delta_weights_h_o += output_error_term * hidden_outputs[:, None]
        # Apply one gradient descent step, averaged over the batch
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records
    def run(self, features):
        ''' Run a forward pass through the network with input features. '''
        # Hidden layer
        hidden_inputs = np.dot(features, self.weights_input_to_hidden)
        hidden_outputs = self.activation_function(hidden_inputs)
        # Output layer (linear output unit)
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
        final_outputs = final_inputs
        return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
print ('inputs.shape: ' + str(inputs.shape))
targets = np.array([[0.4]])
print ('targets.shape:' + str(targets.shape))
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
    # Record the training progress
    losses['train'].append(MSE(network.run(train_features).T, train_targets['cnt'].values))
    losses['validation'].append(MSE(network.run(val_features).T, val_targets['cnt'].values))
```
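The backward pass above uses the sigmoid derivative σ′(x) = σ(x)(1 − σ(x)); a quick finite-difference sanity check of that identity:

```python
import numpy as np

sigmoid = lambda x: 1 / (1 + np.exp(-x))

x = 0.7
analytic = sigmoid(x) * (1 - sigmoid(x))
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(analytic, numeric)  # the two agree to high precision
```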
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
```
## Behavioral Cloning
### Review Articles
- [An overview of gradient descent optimization algorithms](http://ruder.io/optimizing-gradient-descent/index.html#adam)
### Enrichment Readings
- [Review: SegNet (Semantic Segmentation)](https://towardsdatascience.com/review-segnet-semantic-segmentation-e66f2e30fb96)
- [Installing TensorFlow Object Detection API on Windows 10](https://medium.com/@marklabinski/installing-tensorflow-object-detection-api-on-windows-10-7a4eb83e1e7b)
- [Multi-Sensor Data Fusion (MSDF) for Driverless Cars, An Essential Primer](https://medium.com/@lance.eliot/multi-sensor-data-fusion-msdf-for-driverless-cars-an-essential-primer-a1948bb8b57c)
- [How to validate your deep learning model with the Diffgram SDK — Tutorial](https://medium.com/diffgram/how-to-validate-your-deep-learning-model-with-the-diffgram-sdk-tutorial-22234a9a35?_hsenc=p2ANqtz-_o0BTtZu_UHjEOD4taLJqxrDs0xDP_xl-Do12O-pIoMFjzmoS945j4gYYqt96YCTANNiUtfOuRCPnutqNDwwtgSCRMhQ&_hsmi=74444548)
- [How do I design a visual deep learning system in 2019?](https://medium.com/diffgram/how-do-i-design-a-visual-deep-learning-system-in-2019-8597aaa35d03?_hsenc=p2ANqtz-_o0BTtZu_UHjEOD4taLJqxrDs0xDP_xl-Do12O-pIoMFjzmoS945j4gYYqt96YCTANNiUtfOuRCPnutqNDwwtgSCRMhQ&_hsmi=74444548)
### Useful Tips
- [A detailed example of how to use data generators with Keras](https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly)
- [Writing Custom Keras Generators](https://towardsdatascience.com/writing-custom-keras-generators-fe815d992c5a)
### Image Database
- [A dataset of images containing...](https://www.kaggle.com/moltean/fruits/downloads/fruits.zip/57)
### General Tips
- It is not necessary to use the left and right images to derive a successful model. Recording recovery driving from the sides of the road is also effective.
**Center Driving**
So that the car drives down the center of the road, it's essential to capture center lane driving. Try driving around the track several times, staying as close to the middle of the track as possible, even when making turns.
In the real world, the car would need to stay in a lane rather than driving down the center. But for the purposes of this project, aim for center of the road driving.
**Strategies for Collecting Data**
Now that you have driven the simulator and know how to record data, it's time to think about collecting data that will ensure a successful model. There are a few general concepts to think about that we will later discuss in more detail:
- the car should stay in the center of the road as much as possible
- if the car veers off to the side, it should recover back to center
- driving counter-clockwise can help the model generalize
- flipping the images is a quick way to augment the data
- collecting data from the second track can also help generalize the model
- we want to avoid overfitting or underfitting when training the model
- knowing when to stop collecting more data
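The flipping trick listed above can be sketched with NumPy alone: mirror each camera frame left-right and negate the corresponding steering angle. This is an illustrative helper, not code from this project (`flip_augment` and the shapes below are assumptions):

```python
import numpy as np

def flip_augment(images, angles):
    """Double a batch by mirroring each frame left-right and negating steering."""
    flipped = images[:, :, ::-1, :]  # flip along the width axis of (N, H, W, C)
    angles = np.asarray(angles, dtype=np.float32)
    return np.concatenate([images, flipped]), np.concatenate([angles, -angles])

batch = np.random.rand(4, 160, 320, 3).astype(np.float32)
angles = np.array([0.1, -0.2, 0.0, 0.3], dtype=np.float32)
x_aug, y_aug = flip_augment(batch, angles)
print(x_aug.shape)  # (8, 160, 320, 3)
```

A mirrored frame with a negated angle is a physically valid new sample, which is why this is such a cheap way to double the data.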
```
# Load pickled data
import pickle
import pandas as pd
import cv2
import numpy as np
from sklearn import preprocessing
import os
from random import shuffle
import glob
from pathlib import Path
import tensorflow as tf
import matplotlib.pyplot as plt
import math
import matplotlib.image as mpimg
import csv
from keras.layers import Input, InputLayer, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, Lambda, Cropping2D
from keras.models import Sequential, Model
from keras.optimizers import SGD
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
import keras
from keras import backend as K
from keras.preprocessing import image
from keras.callbacks import EarlyStopping
from keras.models import load_model
from importlib import reload
import selfDrivingCarModules
reload(selfDrivingCarModules)
from selfDrivingCarModules import Sdc
import dataProcessingModules
reload(dataProcessingModules)
from dataProcessingModules import DataGenerator4Regression
data_path = "data/"
csv_file = data_path + "sim-00/driving_log.csv"
# csv_file = data_path + "driving_log-combined.csv"
x_partitions = {"train": None, "validation": None}
# y_partitions = {"train": None, "validation": None}
batch_size = 32
image_sizes = (160, 320)
params = {"dims": (*image_sizes, 3),
"batch_size": batch_size,
"n_channels": 1,
"augment_data": True,
"rescale_zero_mean": True,
"shuffle": True}
# x_partitions["train"], x_partitions["validation"], y_partitions["train"], y_partitions["validation"] = \
# Sdc.generate_partition_ids(data_path, csv_file, validation_split=0.2, limit=64, image_series_type=Sdc.__CENTER_IMAGES__)
x_partitions["train"], x_partitions["validation"], y_values = \
Sdc.generate_partition_ids(data_path, csv_file, validation_split=0.2, limit=0, image_series_type=Sdc.__ALL_IMAGES__,
correction_factor=0.1)
training_generator = DataGenerator4Regression(x_partitions["train"], y_values, **params)
validation_generator = DataGenerator4Regression(x_partitions["validation"], y_values, **params)
# testing data generators
x_data = training_generator[0][0]
y_data = training_generator[0][1]
test_index = 10
print("batch size={0:d} , number of batches={1:d}".format(batch_size, len(training_generator)))
# for augmented, they should be opposite values
print("sample training y_data: {0:0.3f}, {1:0.3f}".format(y_data[test_index], y_data[test_index + batch_size]))
y_data = validation_generator[0][1]
print("sample validation y_data: {0:0.3f}, {1:0.3f}".format(y_data[test_index], y_data[test_index + batch_size]))
# a check-point whether the values are re-scaled or not
print(np.min(x_data[test_index]), np.max(x_data[test_index]))
```
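The `rescale_zero_mean` option verified by the min/max check above presumably maps raw pixel values into a small zero-centered range. The exact transform inside `DataGenerator4Regression` is not shown here, but a common sketch is:

```python
import numpy as np

def rescale_zero_mean(img):
    # Map [0, 255] pixel values to the zero-centered range [-0.5, 0.5]
    return np.asarray(img, dtype=np.float32) / 255.0 - 0.5

x = rescale_zero_mean(np.array([0, 128, 255]))
print(x.min(), x.max())  # -0.5 0.5
```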
### Training New CNN Model from Scratch (No Transfer Learning)
```
model = Sdc.generate_model("cnn-01", image_sizes, rescale_input_zero_mean=False)
model.summary()
import pickle
model_name = "001-model-conv-6-fc-4-all-aug-crop"
model_filename = "saved-models/" + model_name + ".h5"
history_filename = "saved-models/" + model_name + ".p"
checkpoint_file = model_filename
checkpoint = ModelCheckpoint(filepath=checkpoint_file, monitor="val_loss", save_best_only=True)
stopper = EarlyStopping(monitor="val_loss", min_delta=1e-5, patience=5)
model.compile(loss="mse", optimizer="adam")
history = model.fit_generator(generator=training_generator, validation_data=validation_generator,
use_multiprocessing=True, workers=2, epochs=50, callbacks=[checkpoint, stopper])
model.save(model_filename)
with open(history_filename, "wb") as file_pi:
pickle.dump(history.history, file_pi)
# model = load_model(saved_model_filename, custom_objects={"tf": tf})
with open(history_filename, "rb") as file_pi:
history_data = pickle.load(file_pi)
# define SGD optimizer
# momentum = 0.5
# sgd = SGD(lr=0.01, momentum=momentum, decay=0.0, nesterov=False)
# compile the model
# model.compile(loss="categorical_crossentropy", optimizer=sgd, metrics=["accuracy"])
# history = model.fit(x_trains, y_trains, validation_split=0.2, epochs=5, shuffle=True, callbacks=None, verbose=1)
# model.fit(x_trains, y_trains, validation_split=0.2, shuffle=True, epochs=5)
```
One can fairly quickly utilize a pre-trained model with [Keras Applications](https://keras.io/applications/).
### Transfer Learning, Available Models
1. **AlexNet:** developed in 2012, AlexNet was the first breakthrough deep network. It is one of the best-understood architectures and a natural one to start with.
2. **VGGNet or VGG (using 224x224 images as input):** developed by the Visual Geometry Group at Oxford University in 2014, VGG has a simple and elegant architecture that makes it great for transfer learning. The architecture is just a long sequence of 3x3 convolutions, broken up by 2x2 pooling layers and followed by two or three fully connected layers at the end. This flexibility is one of VGG's biggest strengths. There are actually two versions, VGG16 and VGG19 (where the numbers denote the number of layers in each respective model), and both are supported by Keras.
```python
from keras.applications.vgg16 import VGG16
model = VGG16(weights='imagenet', include_top=False)
```
3. **GoogLeNet/Inception in Keras (using 299x299 images as input):** developed by Google in 2014, GoogLeNet performed even slightly better than VGG (6.7% vs. 7.3% error). GoogLeNet's great advantage is that it runs very fast. The Google team developed the Inception module, which trains well and can be deployed efficiently; it keeps the total number of parameters very small, which is why GoogLeNet runs almost as fast as AlexNet. GoogLeNet is therefore a great choice to investigate if we need to run our network in real time.
```python
from keras.applications.inception_v3 import InceptionV3
model = InceptionV3(weights='imagenet', include_top=False)
```
4. **ResNet (using 224x224 images as input):** developed by Microsoft in 2015, ResNet has a massive 152 layers (AlexNet has 8, VGG has 19, and GoogLeNet has 22). Its structure is otherwise similar to VGG's. ResNet achieves an error of only about 3% on ImageNet, which is actually better than typical human accuracy.
**Transfer Learning Tips:**
The right approach for transfer learning depends on the new data set. There are four main cases:
1. new data set is small, new data is similar to original training data
2. new data set is small, new data is different from original training data
3. new data set is large, new data is similar to original training data
4. new data set is large, new data is different from original training data
### Small new data set and similar to the original training data:
- slice off the end of the neural network
- add a new fully connected layer that matches the number of classes in the new data set
- randomize the weights of the new fully connected layer; freeze all the weights from the pre-trained network
- train the network to update the weights of the new fully connected layer
To avoid overfitting on the small data set, the weights of the original network will be held constant rather than re-training the weights.
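The steps above can be sketched as follows. This is a minimal illustration, not the model built later in this notebook; the 10-class head is an assumption, and `weights=None` here only avoids a download — in practice you would pass `weights='imagenet'`:

```python
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# Slice off the end of the pre-trained network (include_top=False).
base = InceptionV3(weights=None, include_top=False, input_shape=(139, 139, 3))

# Freeze all the weights from the pre-trained network.
for layer in base.layers:
    layer.trainable = False

# Add a new, randomly initialized fully connected layer matching the
# number of classes in the new data set (10 here, as an example).
x = GlobalAveragePooling2D()(base.output)
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=base.input, outputs=predictions)
```

Training this model then only updates the weights of the new fully connected head.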
## Transfer learning using provided data
### InceptionV3: Pre-trained with frozen weights
To start, we use an InceptionV3 model pre-trained on ImageNet, with all of its weights frozen. We will also drop the end layer and append new layers onto it.
### Adding new layers
Unlike the pure-training models above, which used Keras's `Sequential` model, here I use the [Model API](https://keras.io/models/model/) instead. This model style is useful if one wants to use more advanced concepts such as [skip layers](https://en.wikipedia.org/wiki/Residual_neural_network), which were used heavily in ResNet.
The camera image sizes that I am going to use are `160x320` (height, width), so I need to re-size the images up to the `input_size` specified earlier (`139x139`). To do so, I try both fixed and variable aspect ratios.
[GlobalAveragePooling2D, an alternative to overfitting](https://datascience.stackexchange.com/questions/28120/globalaveragepooling2d-in-inception-v3-example)
```
# Load our images first, and we'll check what we have
# from keras.applications.vgg16 import preprocess_input
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_v3 import preprocess_input
from keras.layers import Input, Lambda
import tensorflow as tf
from keras.layers import Dense, GlobalAveragePooling2D
from keras.callbacks import EarlyStopping
resized_input_shape = (139, 139)
freeze_flag = False # `True` to freeze layers, `False` for full training
weights_flag = "imagenet" # 'imagenet' or None
preprocess_flag = True # Should be true for ImageNet pre-trained typically
# Using smaller than the default 299x299x3 input for InceptionV3
# which will speed up training. Keras v2.0.9 supports down to 139x139x3
# input_size = 139
# Using Inception with ImageNet pre-trained weights
inception = InceptionV3(weights=weights_flag, include_top=False, input_shape=(*resized_input_shape, 3))
if (freeze_flag == True):
for layer in inception.layers:
layer.trainable = False
# inception.summary()
# Makes the input placeholder layer with image shape
input_ph = Input(shape=(*image_sizes, 3))
preprocessed_input = Cropping2D(cropping=((50,20), (0,0)), input_shape=(*image_sizes, 3))(input_ph)
preprocessed_input = Lambda(lambda image: tf.image.resize_images( \
image, (139, 139), method=tf.image.ResizeMethod.BILINEAR, preserve_aspect_ratio=False))(preprocessed_input)
# preprocessed_input = Lambda(lambda x: x / 255.0 - 0.5, input_shape=(*input_size, 3))(preprocessed_input)
inception_output = inception(preprocessed_input)
# layer_output = Flatten()(inception_output)
layer_output = GlobalAveragePooling2D()(inception_output)
layer_output = Dense(128, activation=None, name="fc1")(layer_output)
layer_output = Dropout(rate=0.20)(layer_output)
layer_output = Dense(64, activation=None, name="fc2")(layer_output)
layer_output = Dropout(rate=0.20)(layer_output)
layer_output = Dense(32, activation=None, name="fc3")(layer_output)
layer_output = Dropout(rate=0.20)(layer_output)
predictions = Dense(1, activation=None, name="fc4")(layer_output)
# layer_output = GlobalAveragePooling2D()(inception_output)
# layer_output = Dense(64, activation="relu")(layer_output)
# predictions = Dense(1, activation="relu")(layer_output)
# predictions = Dense(1, activation="softmax")(layer_output)
model = Model(inputs=input_ph, outputs=predictions, name="cnn-20")
model.compile(optimizer="Adam", loss="mse", metrics=["mse"])
model.summary()
model_name = "051-model-inception-fc-4-all-data-01"
model_filename = "saved-models/" + model_name + ".h5"
history_filename = "saved-models/" + model_name + ".p"
checkpoint_file = model_filename
checkpoint = ModelCheckpoint(filepath=checkpoint_file, monitor="val_loss", save_best_only=True)
stopper = EarlyStopping(monitor="val_loss", min_delta=1e-6, patience=5)
history = model.fit_generator(generator=training_generator, validation_data=validation_generator,
use_multiprocessing=True, workers=2, epochs=100, callbacks=[checkpoint])
model.save("saved-models/020-model-inception-fc-3-all-aug-crop.h5")
history.history["val_loss"]
history.history["loss"]
image_paths = glob.glob("images/*.jpg")  # `glob` was imported as a module above
i = 2 # Can change this to your desired image to test
img_path = image_paths[i]
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
# Set a couple flags for training - you can ignore these for now
freeze_flag = True # `True` to freeze layers, `False` for full training
weights_flag = "imagenet" # 'imagenet' or None
preprocess_flag = True # Should be true for ImageNet pre-trained typically
# Loads in InceptionV3
from keras.applications.inception_v3 import InceptionV3
# We can use smaller than the default 299x299x3 input for InceptionV3
# which will speed up training. Keras v2.0.9 supports down to 139x139x3
input_size = 139
# Using Inception with ImageNet pre-trained weights
inception = InceptionV3(weights=weights_flag, include_top=False,
input_shape=(input_size,input_size,3))
```
We'll use Inception V3 for this lab, although you can use the same techniques with any of the models in [Keras Applications](https://keras.io/applications/). Do note that certain models are only available in certain versions of Keras; this workspace uses Keras v2.0.9, for which you can see the available models [here](https://faroit.github.io/keras-docs/2.0.9/applications/).
In the above, we've set Inception to use an `input_shape` of 139x139x3 instead of the default 299x299x3. This will help us to speed up our training a bit later (and we'll actually be upsampling from smaller images, so we aren't losing data here). In order to do so, we also must set `include_top` to `False`, which means the final fully-connected layer with 1,000 nodes for each ImageNet class is dropped, as well as a Global Average Pooling layer.
### Pre-trained with frozen weights
To start, we'll see how an ImageNet pre-trained model with all weights frozen in the InceptionV3 model performs. We will also drop the end layer and append new layers onto it, although you could do this in different ways (not drop the end and add new layers, drop more layers than we will here, etc.).
You can freeze layers by setting `layer.trainable` to False for a given `layer`. Within a `model`, you can get the list of layers with `model.layers`.
```
if freeze_flag == True:
## TODO: Iterate through the layers of the Inception model
## loaded above and set all of them to have trainable = False
for layer in inception.layers:
layer.trainable = False
```
### Dropping layers
You can drop layers from a model with `model.layers.pop()`. Before you do this, you should check out what the actual layers of the model are with Keras's `.summary()` function.
```
## TODO: Use the model summary function to see all layers in the
## loaded Inception model
inception.summary()
```
In a normal Inception network, you would see from the model summary that the last two layers were a global average pooling layer, and a fully-connected "Dense" layer. However, since we set `include_top` to `False`, both of these get dropped. If you otherwise wanted to drop additional layers, you would use:
```
inception.layers.pop()
```
Note that `pop()` works from the end of the model backwards.
It's important to note two things here:
1. How many layers you drop is up to you, typically. We dropped the final two already by setting `include_top` to False in the original loading of the model, but you could instead just run `pop()` twice to achieve similar results. (*Note:* Keras requires us to set `include_top` to False in order to change the `input_shape`.) Additional layers could be dropped by additional calls to `pop()`.
2. If you make a mistake with `pop()`, you'll want to reload the model. If you use it multiple times, the model will continue to drop more and more layers, so you may need to check `model.summary()` again to check your work.
### Adding new layers
Now, you can start to add your own layers. While we've used Keras's `Sequential` model before for simplicity, we'll actually use the [Model API](https://keras.io/models/model/) this time. This functions a little differently, in that instead of using `model.add()`, you explicitly tell the model which previous layer to attach to the current layer. This is useful if you want to use more advanced concepts like [skip layers](https://en.wikipedia.org/wiki/Residual_neural_network), for instance (which were used heavily in ResNet).
For example, if you had a previous layer named `inp`:
```
x = Dropout(0.2)(inp)
```
is how you would attach a new dropout layer `x`, with its input coming from a layer with the variable name `inp`.
We are going to use the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), which consists of 60,000 32x32 images of 10 classes. We need to use Keras's `Input` function to do so, and then we want to re-size the images up to the `input_size` we specified earlier (139x139).
```
from keras.layers import Input, Lambda
import tensorflow as tf
# Makes the input placeholder layer 32x32x3 for CIFAR-10
cifar_input = Input(shape=(32,32,3))
# Re-sizes the input with Kera's Lambda layer & attach to cifar_input
resized_input = Lambda(lambda image: tf.image.resize_images(
image, (input_size, input_size)))(cifar_input)
# Feeds the re-sized input into Inception model
# You will need to update the model name if you changed it earlier!
inp = inception(resized_input)
# Imports fully-connected "Dense" layers & Global Average Pooling
from keras.layers import Dense, GlobalAveragePooling2D
## TODO: Setting `include_top` to False earlier also removed the
## GlobalAveragePooling2D layer, but we still want it.
## Add it here, and make sure to connect it to the end of Inception
x = GlobalAveragePooling2D()(inp)
## TODO: Create two new fully-connected layers using the Model API
## format discussed above. The first layer should use `out`
## as its input, along with ReLU activation. You can choose
## how many nodes it has, although 512 or less is a good idea.
## The second layer should take this first layer as input, and
## be named "predictions", with Softmax activation and
## 10 nodes, as we'll be using the CIFAR10 dataset.
x = Dense(512, activation = 'relu')(x)
predictions = Dense(10, activation = 'softmax')(x)
```
We're almost done with our new model! Now we just need to use the actual Model API to create the full model.
```
# Imports the Model API
from keras.models import Model
# Creates the model, assuming your final layer is named "predictions"
model = Model(inputs=cifar_input, outputs=predictions)
# Compile the model
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Check the summary of this new model to confirm the architecture
model.summary()
```
Great job creating a new model architecture from Inception! Notice how this method of adding layers before InceptionV3 and appending to the end of it made InceptionV3 condense down into one line in the summary; if you use the Inception model's normal input (which you could gather from `inception.layers.input`), it would instead show all the layers like before.
Most of the rest of the code in the notebook just goes toward loading our data, pre-processing it, and starting our training in Keras, although there's one other good point to make here - Keras callbacks.
### Keras Callbacks
Keras [callbacks](https://keras.io/callbacks/) allow you to gather and store additional information during training, such as the best model, or even stop training early if the validation accuracy has stopped improving. These methods can help to avoid overfitting, or avoid other issues.
There are two key callbacks to mention here, `ModelCheckpoint` and `EarlyStopping`. As the names suggest, model checkpointing saves the best model so far based on a given metric, while early stopping ends training before the specified number of epochs if the chosen metric has stopped improving for a given number of epochs.
To set these callbacks, you could do the following:
```
checkpoint = ModelCheckpoint(filepath=save_path, monitor='val_loss', save_best_only=True)
```
This would save a model to the specified `save_path` based on validation loss, keeping only the best model so far. If you set `save_best_only` to `False`, every single epoch will save another version of the model.
```
stopper = EarlyStopping(monitor='val_acc', min_delta=0.0003, patience=5)
```
This will monitor validation accuracy, and if it has not improved by more than 0.0003 over the previous best validation accuracy for 5 epochs, training will end early.
You still need to actually feed these callbacks into `fit()` when you train the model (along with all other relevant data to feed into `fit`):
```
model.fit(callbacks=[checkpoint, stopper])
```
## GPU time
The rest of the notebook will give you the code for training, so you can turn on the GPU at this point - but first, **make sure to save your jupyter notebook**. Once the GPU is turned on, it will load whatever your last notebook checkpoint is.
While we suggest reading through the code below to make sure you understand it, you can otherwise go ahead and select *Cell > Run All* (or *Kernel > Restart & Run All* if already using GPU) to run through all cells in the notebook.
```
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelBinarizer
from keras.datasets import cifar10
(X_train, y_train), (X_val, y_val) = cifar10.load_data()
# One-hot encode the labels
label_binarizer = LabelBinarizer()
y_one_hot_train = label_binarizer.fit_transform(y_train)
y_one_hot_val = label_binarizer.transform(y_val)  # reuse the fit from the training labels
# Shuffle the training & test data
X_train, y_one_hot_train = shuffle(X_train, y_one_hot_train)
X_val, y_one_hot_val = shuffle(X_val, y_one_hot_val)
# We are only going to use the first 10,000 images for speed reasons
# And only the first 2,000 images from the test set
X_train = X_train[:10000]
y_one_hot_train = y_one_hot_train[:10000]
X_val = X_val[:2000]
y_one_hot_val = y_one_hot_val[:2000]
```
You can check out Keras's [ImageDataGenerator documentation](https://faroit.github.io/keras-docs/2.0.9/preprocessing/image/) for more information on the below - you can also add additional image augmentation through this function, although we are skipping that step here so you can potentially explore it in the upcoming project.
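If you do want to experiment with augmentation, `ImageDataGenerator` accepts it directly alongside the preprocessing function. The settings below are purely illustrative, not values used in this notebook:

```python
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.inception_v3 import preprocess_input

# Illustrative augmentation settings -- tune for your own data.
aug_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,  # still apply ImageNet preprocessing
    rotation_range=10,        # random rotations up to 10 degrees
    width_shift_range=0.1,    # horizontal shifts up to 10% of the width
    height_shift_range=0.1,   # vertical shifts up to 10% of the height
    horizontal_flip=True)     # random left-right flips
```

You would then pass `aug_datagen.flow(...)` to `fit_generator` exactly as with the plain generator below.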
```
# Use a generator to pre-process our images for ImageNet
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.inception_v3 import preprocess_input
if preprocess_flag == True:
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
val_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
else:
datagen = ImageDataGenerator()
val_datagen = ImageDataGenerator()
# Train the model
batch_size = 32
epochs = 5
# Note: we aren't using callbacks here since we only are using 5 epochs to conserve GPU time
model.fit_generator(datagen.flow(X_train, y_one_hot_train, batch_size=batch_size),
steps_per_epoch=len(X_train)/batch_size, epochs=epochs, verbose=1,
validation_data=val_datagen.flow(X_val, y_one_hot_val, batch_size=batch_size),
validation_steps=len(X_val)/batch_size)
```
As you may have noticed, CIFAR-10 is a fairly tough dataset. However, given that we are only training on a small subset of the data, only training for five epochs, and not using any image augmentation, the results are still fairly impressive!
We achieved ~70% validation accuracy here, although your results may vary.
## [Optional] Test without frozen weights, or by training from scratch.
Since the majority of the model was frozen above, training speed is pretty quick. You may also want to check out the training speed, as well as final accuracy, if you don't freeze the weights. Note that this can be fairly slow, so we're marking this as optional in order to conserve GPU time.
If you do want to see the results from doing so, go back to the first code cell and set `freeze_flag` to `False`. If you want to completely train from scratch without ImageNet pre-trained weights, follow the previous step as well as setting `weights_flag` to `None`. Then, go to *Kernel > Restart & Run All*.
## Comparison
So that you don't use up your GPU time, we've tried out these results ourselves as well.
Training Mode | Val Acc @ 1 epoch | Val Acc @ 5 epoch | Time per epoch
---- | :----: | :----: | ----:
Frozen weights | 65.5% | 70.3% | 50 seconds
Unfrozen weights | 50.6% | 71.6% | 142 seconds
No pre-trained weights | 19.2% | 39.2% | 142 seconds
From the above, we can see that the pre-trained model with frozen weights actually began converging the fastest (already at 65.5% after 1 epoch), while the model re-training from the pre-trained weights slightly edged it out after 5 epochs.
However, this does not tell the whole story: the training accuracy was substantially higher, nearing 87% for the unfrozen-weights model, which actually began to overfit the data much more under this method. We would likely be able to counteract some of this by using data augmentation. On the flip side, the model using frozen weights could also have been improved by freezing only a portion of the weights; the later layers are likely more specific to ImageNet classes, as opposed to the simpler features extracted early in the network.
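Freezing only a portion of the weights is a one-loop change. The split point below is an illustrative guess, not a tuned value, and `weights=None` only avoids a download in this sketch (use `weights='imagenet'` in practice):

```python
from keras.applications.inception_v3 import InceptionV3

inception = InceptionV3(weights=None, include_top=False, input_shape=(139, 139, 3))

# Freeze the early, generic feature extractors; leave the later,
# more ImageNet-specific layers trainable for fine-tuning.
n_freeze = len(inception.layers) // 2  # illustrative split, not tuned
for layer in inception.layers[:n_freeze]:
    layer.trainable = False
```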
### The Power of Transfer Learning
Comparing the last line to the other two really shows the power of transfer learning. After five epochs, a model without ImageNet pre-training had only achieved 39.2% accuracy, compared to over 70% for the other two. As such, pre-training the network has saved substantial time, especially given the additional training time needed when the weights are not frozen.
There is also evidence found in various research that pre-training on ImageNet weights will result in a higher overall accuracy than completely training from scratch, even when using a substantially different dataset.
| github_jupyter |
# Example Seldon Core Deployments using Helm
<img src="images/deploy-graph.png" alt="predictor with canary" title="ml graph"/>
## Prerequisites
You will need
- [Git clone of Seldon Core](https://github.com/SeldonIO/seldon-core)
- A running Kubernetes cluster with kubectl authenticated
- [seldon-core Python package](https://pypi.org/project/seldon-core/) (`pip install "seldon-core>=0.2.6.1"`)
- [Helm client](https://helm.sh/)
### Creating a Kubernetes Cluster
Follow the [Kubernetes documentation to create a cluster](https://kubernetes.io/docs/setup/).
Once created ensure ```kubectl``` is authenticated against the running cluster.
## Setup
```
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
!kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
```
## Install Helm
```
!kubectl -n kube-system create sa tiller
!kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
!helm init --service-account tiller
!kubectl rollout status deploy/tiller-deploy -n kube-system
```
## Setup Istio
Ensure you have Istio installed; follow the [Istio docs](https://istio.io/docs) if not.
For this example we will create the default Istio gateway for Seldon, which needs to be called `seldon-gateway`. You can supply your own gateway by adding the annotation `seldon.io/istio-gateway` to your SeldonDeployment resources, with the name of your Istio gateway as its value.
Create a gateway for our istio-ingress
```
!kubectl create -f resources/seldon-gateway.yaml
```
Label our namespace so istio creates sidecars
```
!kubectl label namespace seldon istio-injection=enabled
```
If you are using Minikube for your Kubernetes cluster, you will need to run the following as root in a separate terminal:
```
minikube tunnel
```
This will allow a LoadBalancer to be simulated on your local machine.
```
INGRESS_HOST=!kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
INGRESS_PORT=!kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'
ISTIO_GATEWAY=INGRESS_HOST[0]+":"+INGRESS_PORT[0]
ISTIO_GATEWAY
```
## Start seldon-core
```
!helm install ../helm-charts/seldon-core-operator --name seldon-core --set istio.enabled=true --set usageMetrics.enabled=true --namespace seldon-system
!kubectl rollout status deploy/seldon-controller-manager -n seldon-system
```
## Serve Single Model
```
!helm install ../helm-charts/seldon-single-model --name mymodel
!helm template ../helm-charts/seldon-single-model | pygmentize -l json
!kubectl rollout status deploy/mymodel-mymodel-7cd068f
```
### Get predictions
```
from seldon_core.seldon_client import SeldonClient
sc = SeldonClient(deployment_name="mymodel",namespace="seldon",gateway_endpoint=ISTIO_GATEWAY)
```
#### REST Request
```
r = sc.predict(gateway="istio",transport="rest")
print(r)
```
#### gRPC Request
```
r = sc.predict(gateway="istio",transport="grpc")
print(r)
!helm delete mymodel --purge
```
## Serve AB Test
```
!helm install ../helm-charts/seldon-abtest --name myabtest
!helm template ../helm-charts/seldon-abtest | pygmentize -l json
!kubectl rollout status deploy/myabtest-abtest-41de5b8
!kubectl rollout status deploy/myabtest-abtest-df66c5c
```
### Get predictions
```
from seldon_core.seldon_client import SeldonClient
sc = SeldonClient(deployment_name="myabtest",namespace="seldon",gateway_endpoint=ISTIO_GATEWAY)
```
#### REST Request
```
r = sc.predict(gateway="istio",transport="rest")
print(r)
```
#### gRPC Request
```
r = sc.predict(gateway="istio",transport="grpc")
print(r)
!helm delete myabtest --purge
```
## Serve Multi-Armed Bandit
```
!helm install ../helm-charts/seldon-mab --name mymab
!helm template ../helm-charts/seldon-mab | pygmentize -l json
!kubectl rollout status deploy/mymab-abtest-41de5b8
!kubectl rollout status deploy/mymab-abtest-b8038b2
!kubectl rollout status deploy/mymab-abtest-df66c5c
```
### Get predictions
```
from seldon_core.seldon_client import SeldonClient
sc = SeldonClient(deployment_name="mymab",namespace="seldon",gateway_endpoint=ISTIO_GATEWAY)
```
#### REST Request
```
r = sc.predict(gateway="istio",transport="rest")
print(r)
```
#### gRPC Request
```
r = sc.predict(gateway="istio",transport="grpc")
print(r)
!helm delete mymab --purge
```
## Serve with Shadow
#### We'll use a pre-packaged model server, but the 'shadow' flag can be set on any predictor.
```
!pygmentize ./resources/istio_shadow.yaml
!kubectl apply -f ./resources/istio_shadow.yaml
!kubectl rollout status deploy/iris-default-54fcd84
from seldon_core.seldon_client import SeldonClient
sc = SeldonClient(deployment_name="sklearn",namespace="seldon",gateway_endpoint=ISTIO_GATEWAY)
r = sc.predict(gateway="istio",transport="rest",shape=(1,4))
print(r)
```
#### Traffic goes to both the default predictor and the shadow. If desired, this can be verified in the Istio dashboards in the same way as in the Istio canary example. When shadowing, only the responses from the default predictor are returned to the caller.
```
!kubectl delete -f ./resources/istio_shadow.yaml
```
| github_jupyter |
```
from PIL import Image
import numpy as np
```
First, download the MNIST data.
```
import os
import urllib
from urllib.request import urlretrieve
dataset = 'mnist.pkl.gz'
def reporthook(a,b,c):
print("\rdownloading: %5.1f%%"%(a*b*100.0/c), end="")
if not os.path.isfile(dataset):
origin = "https://github.com/mnielsen/neural-networks-and-deep-learning/raw/master/data/mnist.pkl.gz"
print('Downloading data from %s' % origin)
urlretrieve(origin, dataset, reporthook=reporthook)
import gzip
import pickle
with gzip.open(dataset, 'rb') as f:
train_set, validation_set, test_set = pickle.load(f, encoding='latin1')
# set up the training and test data
train_X, train_y = train_set
test_X, test_y = test_set
# reshape into our format (add a trailing axis)
train_X = train_X[..., None]
test_X = test_X[..., None]
# there are 10 classes; the input is 784-dimensional
print(train_X.shape)
np.unique(train_y)
from IPython.display import display
def showX(X):
int_X = (X*255).clip(0,255).astype('uint8')
# N*784 -> N*28*28 -> 28*N*28 -> 28 * 28N
int_X_reshape = int_X.reshape(-1,28,28).swapaxes(0,1).reshape(28,-1)
display(Image.fromarray(int_X_reshape))
# training data: the first 20 samples of X
print(train_y[:20])
showX(train_X[:20])
# reference example: softmax regression
W = np.random.normal(size=(10, 784))
b = np.random.normal(size=(10, 1))
n_data = train_X.shape[0]
# record the loss
loss_history = []
accuracy_history = []
for epoch in range(5000):
idx = np.random.choice(n_data, 300, replace=False)
X = train_X[idx]
y = train_y[idx]
one_y = np.eye(10)[y][..., None]
d = np.exp(W @ X + b)
q = d/d.sum(axis=(1,2), keepdims=True)
loss = -np.log(q[range(len(y)), y]).mean()
loss_history.append(loss)
accuracy = (q.argmax(axis=1).ravel() == y).mean()
accuracy_history.append(accuracy)
if epoch%100 == 0:
print(epoch, accuracy, loss)
grad_b_all = q - one_y
grad_b = grad_b_all.mean(axis=0)
grad_W_all = grad_b_all @ X.swapaxes(1,2)
grad_W = grad_W_all.mean(axis=0)
W -= grad_W
b -= grad_b
# accuracy on the test data
((W @ test_X + b).argmax(axis=1).ravel() == test_y).mean()
%matplotlib inline
import matplotlib.pyplot as plt
# plot the accuracy
plt.plot(accuracy_history);
# plot the loss
plt.plot(loss_history);
def softmax(x):
    t = np.exp(x - x.max(axis=(-2,-1), keepdims=True))  # subtract the max for numerical stability
    return t/t.sum(axis=(-2,-1), keepdims=True)
def relu(x):
return np.maximum(x, 0)
def sigmoid(x):
return 1/(1+np.exp(-x))
# derivatives
def Drelu(x):
return (x>0).astype('float32')
def Dsigmoid(x):
q = sigmoid(x)
return q * (1-q)
    # or equivalently:
    #return np.exp(-x)/(1+np.exp(-x))**2
# reference example: feedforward network
from time import time
accuracy_history = []
γ = 0.02
A = np.random.normal(size=(50,784))
b = np.random.normal(size=(50,1))
C = np.random.normal(size=(10,50))
d = np.random.normal(size=(10,1))
t0 = time()
for epochs in range(20):
idx = np.random.choice(n_data, n_data, replace=False)
for i in idx:
x = train_X[i]
y = train_y[i]
U_ = A@x+b
U = relu(U_)
q = softmax(C@U+d)
L = - np.log(q[y])[0]
p = np.eye(10)[y][:, None]
grad_d = q - p
grad_C = grad_d @ U.T
grad_b = (C.T @ grad_d ) * Drelu(U_)
grad_A = grad_b @ x.T
A -= γ * grad_A
b -= γ * grad_b
C -= γ * grad_C
d -= γ * grad_d
score = ((C@relu(A@test_X+b)+d).argmax(axis=1).ravel()==test_y).mean()
print(epochs, score, "%.1f"%(time()-t0), L)
print(time()-t0)
```
| github_jupyter |
```
!git clone https://github.com/MinkaiXu/CGCF-ConfGen
!pip install kora -q
import kora.install.rdkit
!pip install torch==1.6.0
!pip install torchvision==0.7.0
!pip install torchdiffeq==0.0.1
!pip install tqdm networkx scipy scikit-learn h5py tensorboard
!pip install --no-index torch-scatter -f https://pytorch-geometric.com/whl/torch-1.6.0+cu101.html
!pip install --no-index torch-sparse -f https://pytorch-geometric.com/whl/torch-1.6.0+cu101.html
!pip install --no-index torch-cluster -f https://pytorch-geometric.com/whl/torch-1.6.0+cu101.html
!pip install --no-index torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.6.0+cu101.html
!pip install torch-geometric
from google.colab import drive
drive.mount('/content/drive')
cd CGCF-ConfGen/
#@title import packages
import os
import argparse
import torch
import torch.utils.tensorboard
from torch_geometric.loader import DataLoader  # moved here from torch_geometric.data in newer releases
from tqdm.auto import tqdm
from models.edgecnf import *
from models.cnf_edge import NONLINEARITIES, LAYERS, SOLVERS
from utils.dataset import *
from utils.transforms import *
from utils.misc import *
#@title Arguments
parser = argparse.ArgumentParser()
# BEGIN
# Model arguments
parser.add_argument('--activation', type=str, default='softplus')
parser.add_argument('--hidden_dim', type=int, default=64)
parser.add_argument("--num_blocks", type=int, default=1,
help='Number of stacked CNFs.')
parser.add_argument("--layer_type", type=str, default="concatsquash", choices=LAYERS)
parser.add_argument('--time_length', type=float, default=0.5)
parser.add_argument('--train_T', type=eval, default=True, choices=[True, False])
parser.add_argument('--use_adjoint', type=eval, default=True, choices=[True, False])
parser.add_argument('--solver', type=str, default='dopri5', choices=SOLVERS)
parser.add_argument('--atol', type=float, default=1e-5)
parser.add_argument('--rtol', type=float, default=1e-5)
parser.add_argument('--batch_norm', type=eval, default=True, choices=[True, False])
parser.add_argument('--sync_bn', type=eval, default=False, choices=[True, False])
parser.add_argument('--bn_lag', type=float, default=0)
parser.add_argument('--spectral_norm', type=eval, default=True, choices=[True, False])
parser.add_argument('--train_noise_std', type=float, default=0.1)
# Datasets and loaders
parser.add_argument('--aux_edge_order', type=int, default=3)
parser.add_argument('--train_dataset', type=str, default='./data/qm9/train.pkl')
parser.add_argument('--val_dataset', type=str, default='./data/qm9/val.pkl')
parser.add_argument('--train_batch_size', type=int, default=128)
parser.add_argument('--val_batch_size', type=int, default=256)
parser.add_argument('--num_workers', type=int, default=8)
parser.add_argument('--max_val_batch', type=int, default=5)
# Optimizer and scheduler
parser.add_argument('--lr', type=float, default=1e-3)
parser.add_argument('--weight_decay', type=float, default=0)
parser.add_argument('--sched_factor', type=float, default=0.5)
parser.add_argument('--sched_patience', type=int, default=3,
help='Patience steps = sched_patience * val_freq')
parser.add_argument('--sched_min_lr', type=float, default=1e-5)
parser.add_argument('--beta1', type=float, default=0.95)
parser.add_argument('--beta2', type=float, default=0.999)
# Training
parser.add_argument('--seed', type=int, default=2020)
parser.add_argument('--logging', type=eval, default=True, choices=[True, False])
parser.add_argument('--device', type=str, default='cuda')
parser.add_argument('--max_iters', type=int, default=50*1000,
help='Max iterations for MLE pre-training of CNF')
parser.add_argument('--val_freq', type=int, default=300)
parser.add_argument('--inspect_freq', type=int, default=50)
parser.add_argument('--tag', type=str, default='')
parser.add_argument('--resume', type=str, default=None)
parser.add_argument('--log_root', type=str, default='./logs')
# END
args = parser.parse_args(['--train_dataset','/content/drive/MyDrive/test_QM9.pkl','--val_dataset','/content/drive/MyDrive/val_QM9.pkl'])
seed_all(args.seed)
tf = get_standard_transforms(order=args.aux_edge_order)
train_dset = MoleculeDataset(args.train_dataset, transform=tf)
train_iterator = get_data_iterator(DataLoader(train_dset, batch_size=args.train_batch_size, shuffle=True, drop_last=True))
args.device
model = EdgeCNF(args).to(args.device)
batch = next(train_iterator).to(args.device)
import torch
from models.common import *
from models.cnf_edge import CNF, ODEfunc, ODEgnn, MovingBatchNorm1d, SequentialFlow, count_nfe, add_spectral_norm, spectral_norm_power_iteration
from models.distgeom import *
!python train.py --train_dataset /content/drive/MyDrive/train_QM9.pkl --val_dataset /content/drive/MyDrive/val_QM9.pkl
args.train_dataset
torch.__version__
```
| github_jupyter |
```
import pandas as pd
import scipy.sparse as sparse
from code.preprocessing import Dataset
from core.database.db import DB
from code.metrics import fuzzy, precision
from implicit.als import AlternatingLeastSquares
db = DB(db='recsys')
from code.preprocessing import filter_old_cards, filter_rare_cards, filter_rare_goods, filter_old_goods, filter_by_quantile
%load_ext autoreload
%autoreload 2
```
### Preprocessing the training set
```
train = pd.read_sql('select * from db.train', con = db.engine)
print('Shape: %s' % train.shape[0])
train = filter_rare_goods(train, rarity_num=5)
print('Shape without rare goods: %s' % train.shape[0])
train = filter_rare_cards(train, rarity_num=5)
print('Shape without rare cards: %s' % train.shape[0])
train = filter_old_cards(train, month_threshold=1)
print('Shape without old cards: %s' % train.shape[0])
train = filter_old_goods(train, month_threshold=1)
print('Shape without old goods: %s' % train.shape[0])
train = filter_by_quantile(train, plu_count_quantiles=(0.5, 0.99), cards_count_quantiles=(0.4, 0.99))
print('Shape without low and high quantiles: %s' % train.shape[0])
ds = Dataset(train)
matrix = ds.make_matrix()
matrix = ds.transform(matrix, method='clip', clip_upper_value=1000)
matrix = ds.transform(matrix, method='log')
matrix = ds.apply_weights(matrix, weight='bm25')
```
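The `filter_*` helpers above live in the project's `code.preprocessing` module, which is not shown here. As a rough illustration, a count-based filter such as `filter_rare_goods` might look like the following sketch (the actual implementation may differ):

```python
import pandas as pd

def filter_rare_goods(df, rarity_num=5, col='plu_id'):
    """Drop rows whose item (`col`) appears fewer than `rarity_num` times.

    Hypothetical sketch of the project's preprocessing helper;
    the column name and semantics are assumptions based on its usage above.
    """
    counts = df[col].value_counts()
    keep = counts[counts >= rarity_num].index
    return df[df[col].isin(keep)]
```

`filter_rare_cards` would be the same idea applied to the `crd_no` column.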
## Preparing and cleaning the test set
```
products = pd.read_sql('select * from db.products', con = db.engine)
test = pd.read_sql('select * from db.test', con = db.engine)
val = pd.read_sql('select * from db.val', con = db.engine)
test.columns = [x.lower() for x in test.columns]
products.columns = [x.lower() for x in products.columns]
val.columns = [x.lower() for x in val.columns]
crd_no_unique_train = matrix.index.unique()
plu_id_unique_train = matrix.columns.unique()
test = test[test['crd_no'].isin(crd_no_unique_train)]
test = test[test['plu_id'].isin(plu_id_unique_train)]
val = val[val['crd_no'].isin(crd_no_unique_train)]
val = val[val['plu_id'].isin(plu_id_unique_train)]
plu_category_dict = products.set_index('plu_id').to_dict()['level_2_name']
val_facts_dict = dict(val[['crd_no', 'plu_id']].groupby('crd_no').apply(lambda x: x['plu_id'].unique().tolist()))
test_facts_dict = dict(test[['crd_no', 'plu_id']].groupby('crd_no').apply(lambda x: x['plu_id'].unique().tolist()))
plu_price = pd.read_sql('select * from db.plu_price', con=db.engine)
plu_price['mean_price'] = plu_price['mean_price'].astype('float16')
plu_price = dict(plu_price[['plu_id', 'mean_price']].values.tolist())
```
### Building the model
```
model = AlternatingLeastSquares(factors=50, regularization=0.0001,
iterations=20, num_threads=16,
calculate_training_loss=True)
model.fit(sparse.csr_matrix(matrix).T.tocsr(), show_progress=True)
```
### Checking the metrics
```
%%time
fz = fuzzy(matrix, model, val_facts_dict, plu_category_dict, weight_by_price=False)
prc = precision(matrix, model, val_facts_dict, weight_by_price=False)
fz_w = fuzzy(matrix, model, val_facts_dict, plu_category_dict, plu_price=plu_price)
prc_w = precision(matrix, model, val_facts_dict, plu_price=plu_price)
print('Fuzzy: %s' % fz)
print('Fuzzy Weighted: %s' % fz_w)
print('Precision: %s' % prc)
print('Precision Weighted: %s' % prc_w)
```
| github_jupyter |
# Assumptions of Linear Regression
Previously, we learned to apply linear regression to a given dataset. It is important to note that linear regression makes some assumptions about the data it is applied to, and its performance can suffer if they are violated. These assumptions are:
1. There should be a linear relationship between the dependent and the independent features.
2. There should be no auto-correlation, i.e. the error terms should not be correlated with one another.
3. The variance of the error terms should be constant.
4. There should be no multi-collinearity, i.e. no two independent features should be highly correlated.
5. The errors should be normally distributed.
Let's check these assumptions on the model we trained in the previous activity.
## Loading the previous model
```
#importing libraries
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
#load the data
#separating dependent and independent features
#splitting data into training and test sets
#instantiate a model
#fit the model to training data
```
<details>
<summary>Solution</summary>
<p>
```python
data = pd.read_csv('../../data/data_cleaned.csv')
X = data.drop('price', axis= 1)
y = data.price
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.2, random_state= 42, shuffle= True)
model = LinearRegression()
model.fit(X_train, y_train)
```
</p>
</details>
Now that we have the model, let's calculate the residuals first.
## Calculate residuals
```
#create a dataframe to store residuals
result = pd.DataFrame({'Actual': y_test, 'Predicted': model.predict(X_test)})
result.reset_index(drop= True, inplace= True) #reset indexes
```
* Make a new column **residuals** by subtracting the *Predicted* values from the *Actual* values
* Display the top 5 rows of the **result** dataframe.
```
#Make a new column residuals
#display top 5 rows of the result dataframe.
```
<details>
<summary>Solution</summary>
<p>
```python
result['residuals'] = result.Actual - result.Predicted
result.head()
```
</p>
</details>
## Check the variance and correlation of error terms
```
import matplotlib.pyplot as plt #importing libraries for plotting graphs
#plotting the residuals
plt.scatter(range(len(y_test)), result.residuals)
plt.show()
```
We can clearly see that, apart from 3-4 points, the spread of the error terms is constant, so we can conclude that the variance is constant.
Also, there is no specific pattern in the error terms; they are randomly distributed, so there is no correlation among them.
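A visual check is good, but autocorrelation can also be quantified with the Durbin-Watson statistic. A small self-contained sketch (the helper name is ours; `statsmodels` also ships an implementation):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic for autocorrelation in residuals:
    values near 2 suggest no autocorrelation; values toward 0 or 4
    indicate positive or negative autocorrelation, respectively."""
    residuals = np.asarray(residuals, dtype=float)
    diff = np.diff(residuals)
    return np.sum(diff ** 2) / np.sum(residuals ** 2)
```

For the model above: `durbin_watson(result.residuals)`.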
## Check Distribution of Residuals
* Draw a histogram of the residuals from the **result** dataframe using 300 bins.
```
#draw a histogram
```
<details>
<summary>Solution</summary>
<p>
```python
plt.hist(result.residuals, bins= 300)
plt.show()
```
</p>
</details>
From the above graph we can conclude that the error terms are normally distributed. The unusually high peak in the curve is caused by the outliers that were pointed out in the first activity. To confirm the distribution, we can also draw a QQ plot.
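A QQ plot compares the ordered residuals against theoretical normal quantiles; points falling on a straight line indicate normality. A minimal sketch using `scipy.stats.probplot` (the `qq_plot` helper name is ours, not part of the notebook):

```python
from scipy import stats

def qq_plot(residuals, ax=None):
    """Probability (QQ) plot of residuals against a normal distribution.
    Returns the correlation r of the fit line; r near 1 suggests normality."""
    (osm, osr), (slope, intercept, r) = stats.probplot(residuals, dist='norm', plot=ax)
    if ax is not None:
        ax.set_title('QQ plot of residuals (r = %.3f)' % r)
    return r
```

For the model above: `qq_plot(result.residuals, ax=plt.gca())` followed by `plt.show()`.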
## Check for Multi-Collinearity
To check for multi-collinearity, we can compute the **Variance Inflation Factor** (VIF) of every column. If a feature's VIF is above 5, we can conclude that it is correlated with other features.
```
# Importing the variance_inflation_factor function from statsmodels
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant
# Calculating VIF for every column (only valid for non-categorical features)
VIF = pd.Series([variance_inflation_factor(data.values, i) for i in range(data.shape[1])], index =data.columns)
VIF
```
There are 4 features with a VIF greater than 5. We can remove the 2 features with the highest values; after that, none of the remaining features will be highly correlated, and the multi-collinearity is removed.
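Rather than removing both features at once, it is often safer to drop one feature at a time and recompute the VIFs, since removing one correlated feature can lower the VIF of the others. A sketch of that loop using a plain numpy implementation of VIF (the helper names are ours):

```python
import numpy as np
import pandas as pd

def vif_series(df):
    """VIF for each column: 1 / (1 - R^2) when regressing that column
    on the remaining columns (with an intercept)."""
    X = df.values.astype(float)
    out = {}
    for i, col in enumerate(df.columns):
        y = X[:, i]
        others = np.delete(X, i, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
        out[col] = np.inf if r2 >= 1 else 1.0 / (1.0 - r2)
    return pd.Series(out)

def drop_high_vif(df, threshold=5.0):
    """Iteratively drop the feature with the highest VIF above threshold."""
    df = df.copy()
    while df.shape[1] > 1:
        vif = vif_series(df)
        if vif.max() <= threshold:
            break
        df = df.drop(columns=[vif.idxmax()])
    return df
```

For the data above: `drop_high_vif(data.drop('price', axis=1))`.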
## Conclusion
This is how we can check the assumptions of linear regression and verify that they hold.
## Additional Resources
1. VIF : https://www.analyticsvidhya.com/blog/2020/03/what-is-multicollinearity/
2. Assumptions in detail: https://www.analyticsvidhya.com/blog/2016/07/deeper-regression-analysis-assumptions-plots-solutions/
| github_jupyter |
# Making Faces Using an Autoencoder
Autoencoders learn to compress data into a smaller representation and then reconstruct the data from it. When a network encodes data this way, it is essentially distilling the data down to the features it finds most useful. This notebook will train an autoencoder on faces, then use PCA to create new encoded data that is distributed similarly enough to the training encodings to generate artificial faces based on the features the neural network found important.
```
!pip3 install face_recognition
import face_recognition
import os
import sys
import random
import warnings
from pylab import imshow, show, get_cmap
import numpy as np
import pandas as pd
from numpy import random
import matplotlib.pyplot as plt
from tqdm import tqdm
from itertools import chain
import skimage
from PIL import Image
from skimage.io import imread, imshow, imread_collection, concatenate_images
from skimage.transform import resize
from skimage.util import crop, pad
from skimage.morphology import label
from keras.models import Model, load_model,Sequential
from keras.layers import Input, Dense, UpSampling2D, Flatten, Reshape
from keras.layers.core import Dropout, Lambda
from keras.layers.convolutional import Conv2D, Conv2DTranspose
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras.optimizers import Adam
from keras import backend as K
import tensorflow as tf
IMG_WIDTH = 128
IMG_HEIGHT = 128
IMG_CHANNELS = 3
INPUT_SHAPE=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS)
D_INPUT_SHAPE=[192]
TRAIN_PATH = '../input/lagdataset_200/LAGdataset_200/'
warnings.filterwarnings('ignore', category=UserWarning, module='skimage')
seed = 42
random.seed(seed)
np.random.seed(seed)
```
# Read in the Faces
For preprocessing, the face_recognition package is used to find the bounding box around the face in each image and cut away the surrounding area. Since the photos are taken in different settings and with radically different hairstyles, limiting the area to just the face makes things a little easier for our model and focuses it on the most important features.
```
def FaceCrop(image):
face_locations = face_recognition.face_locations(image)
top, right, bottom, left = face_locations[0]
image = image[top:bottom,left:right]
return image
%%time
train_ids = next(os.walk(TRAIN_PATH))[2]
X_train = np.zeros((len(train_ids), IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), dtype=np.uint8)
final_train_ids = []
missing_count = 0
print('Getting train images ... ')
sys.stdout.flush()
for n, id_ in tqdm(enumerate(train_ids), total=len(train_ids)):
path = TRAIN_PATH + id_+''
try:
img = imread(path)
img = FaceCrop(img)
img = resize(img, (IMG_HEIGHT, IMG_WIDTH), mode='constant', preserve_range=True)
X_train[n-missing_count] = img
final_train_ids.append(id_)
except:
missing_count += 1
print("Total missing: "+ str(missing_count))
X_train = X_train[0:X_train.shape[0]-missing_count]
for n in range(0,5):
imshow(X_train[n])
plt.show()
```
# Add Noise
It is usually a good idea to add some noise to the training images when making an autoencoder.
```
X_train = X_train.astype('float32') / 255.
X_train_noisy = X_train + 0.1 * np.random.normal(size=X_train.shape)
X_train_noisy = np.clip(X_train_noisy, 0., 1.)
```
# Create the Models
We will create three models: the encoder, the decoder, and the autoencoder, which combines the two. Make sure to keep the layer names consistent with the autoencoder's, as we will load the weights `by_name` after training it.
```
def Encoder():
inp = Input(shape=INPUT_SHAPE)
x = Conv2D(128, (4, 4), activation='elu', padding='same',name='encode1')(inp)
x = Conv2D(64, (3, 3), activation='elu', padding='same',name='encode2')(x)
x = Conv2D(32, (2, 2), activation='elu', padding='same',name='encode3')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (4, 4), activation='elu', padding='same',name='encode4')(x)
x = Conv2D(32, (3, 3), activation='elu', padding='same',name='encode5')(x)
x = Conv2D(16, (2, 2), activation='elu', padding='same',name='encode6')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (4, 4), activation='elu', padding='same',name='encode7')(x)
x = Conv2D(32, (3, 3), activation='elu', padding='same',name='encode8')(x)
x = Conv2D(16, (2, 2), activation='elu', padding='same',name='encode9')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (4, 4), activation='elu', padding='same',name='encode10')(x)
x = Conv2D(32, (3, 3), activation='elu', padding='same',name='encode11')(x)
x = Conv2D(16, (2, 2), activation='elu', padding='same',name='encode12')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (4, 4), activation='elu', padding='same',name='encode13')(x)
x = Conv2D(16, (3, 3), activation='elu', padding='same',name='encode14')(x)
x = Conv2D(16, (2, 2), activation='elu', padding='same',name='encode15')(x)
x = Conv2D(3, (3, 3), activation='elu', padding='same',name='encode16')(x)
x = Flatten()(x)
x = Dense(256, activation='elu',name='encode17')(x)
encoded = Dense(D_INPUT_SHAPE[0], activation='sigmoid',name='encode18')(x)
return Model(inp, encoded)
encoder = Encoder()
encoder.summary()
def Decoder():
inp = Input(shape=D_INPUT_SHAPE, name='decoder')
x = Dense(D_INPUT_SHAPE[0], activation='elu', name='decode1')(inp)
x = Dense(192, activation='elu', name='decode2')(x)
x = Reshape((8, 8, 3))(x)
x = Conv2D(32, (2, 2), activation='elu', padding='same', name='decode3')(x)
x = Conv2D(64, (3, 3), activation='elu', padding='same', name='decode4')(x)
x = Conv2DTranspose(64, (3, 3), strides=(2, 2), activation='elu', padding='same', name='decodetrans1')(x)
x = Conv2D(32, (2, 2), activation='elu', padding='same', name='decode5')(x)
x = Conv2D(64, (3, 3), activation='elu', padding='same', name='decode6')(x)
x = Conv2DTranspose(64, (3, 3), strides=(2, 2), activation='elu', padding='same', name='decodetrans2')(x)
x = Conv2D(32, (2, 2), activation='elu', padding='same', name='decode7')(x)
x = Conv2D(64, (3, 3), activation='elu', padding='same', name='decode8')(x)
x = Conv2DTranspose(64, (3, 3), strides=(2, 2), activation='elu', padding='same', name='decodetrans3')(x)
x = Conv2D(32, (3, 3), activation='elu', padding='same', name='decode9')(x)
x = Conv2D(64, (4, 4), activation='elu', padding='same', name='decode10')(x)
x = Conv2DTranspose(128, (3, 3), strides=(2, 2), activation='elu', padding='same', name='decodetrans4')(x)
x = Conv2D(64, (4, 4), activation='elu', padding='same', name='decode11')(x)
x = Conv2D(64, (3, 3), activation='elu', padding='same', name='decode12')(x)
x = Conv2D(32, (2, 2), activation='elu', padding='same', name='decode13')(x)
x = Conv2D(16, (1, 1), activation='elu', padding='same', name='decode14')(x)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same', name='decode15')(x)
return Model(inp, decoded)
decoder = Decoder()
decoder.summary()
def Autoencoder():
inp = Input(shape=INPUT_SHAPE)
x = Conv2D(128, (4, 4), activation='elu', padding='same',name='encode1')(inp)
x = Conv2D(64, (3, 3), activation='elu', padding='same',name='encode2')(x)
x = Conv2D(32, (2, 2), activation='elu', padding='same',name='encode3')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (4, 4), activation='elu', padding='same',name='encode4')(x)
x = Conv2D(32, (3, 3), activation='elu', padding='same',name='encode5')(x)
x = Conv2D(16, (2, 2), activation='elu', padding='same',name='encode6')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (4, 4), activation='elu', padding='same',name='encode7')(x)
x = Conv2D(32, (3, 3), activation='elu', padding='same',name='encode8')(x)
x = Conv2D(16, (2, 2), activation='elu', padding='same',name='encode9')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (4, 4), activation='elu', padding='same',name='encode10')(x)
x = Conv2D(32, (3, 3), activation='elu', padding='same',name='encode11')(x)
x = Conv2D(16, (2, 2), activation='elu', padding='same',name='encode12')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (4, 4), activation='elu', padding='same',name='encode13')(x)
x = Conv2D(16, (3, 3), activation='elu', padding='same',name='encode14')(x)
x = Conv2D(16, (2, 2), activation='elu', padding='same',name='encode15')(x)
x = Conv2D(3, (3, 3), activation='elu', padding='same',name='encode16')(x)
x = Flatten()(x)
x = Dense(256, activation='elu',name='encode17')(x)
encoded = Dense(D_INPUT_SHAPE[0], activation='sigmoid',name='encode18')(x)
x = Dense(D_INPUT_SHAPE[0], activation='elu', name='decode1')(encoded)
x = Dense(192, activation='elu', name='decode2')(x)
x = Reshape((8, 8, 3))(x)
x = Conv2D(32, (2, 2), activation='elu', padding='same', name='decode3')(x)
x = Conv2D(64, (3, 3), activation='elu', padding='same', name='decode4')(x)
x = Conv2DTranspose(64, (3, 3), strides=(2, 2), activation='elu', padding='same', name='decodetrans1')(x)
x = Conv2D(32, (2, 2), activation='elu', padding='same', name='decode5')(x)
x = Conv2D(64, (3, 3), activation='elu', padding='same', name='decode6')(x)
x = Conv2DTranspose(64, (3, 3), strides=(2, 2), activation='elu', padding='same', name='decodetrans2')(x)
x = Conv2D(32, (2, 2), activation='elu', padding='same', name='decode7')(x)
x = Conv2D(64, (3, 3), activation='elu', padding='same', name='decode8')(x)
x = Conv2DTranspose(64, (3, 3), strides=(2, 2), activation='elu', padding='same', name='decodetrans3')(x)
x = Conv2D(32, (3, 3), activation='elu', padding='same', name='decode9')(x)
x = Conv2D(64, (4, 4), activation='elu', padding='same', name='decode10')(x)
x = Conv2DTranspose(128, (3, 3), strides=(2, 2), activation='elu', padding='same', name='decodetrans4')(x)
x = Conv2D(64, (4, 4), activation='elu', padding='same', name='decode11')(x)
x = Conv2D(64, (3, 3), activation='elu', padding='same', name='decode12')(x)
x = Conv2D(32, (2, 2), activation='elu', padding='same', name='decode13')(x)
x = Conv2D(16, (1, 1), activation='elu', padding='same', name='decode14')(x)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same', name='decode15')(x)
return Model(inp, decoded)
model = Autoencoder()
model.compile(optimizer=Adam(lr=0.001), loss='mean_squared_error')
model.summary()
```
# Checkpoints
It is good to have some checkpoints for the models. The autoencoder really only benefits from `ReduceLROnPlateau`; the other callbacks are standard.
```
learning_rate_reduction = ReduceLROnPlateau(monitor='loss',
patience=2,
verbose=1,
factor=0.5,
min_lr=0.00001)
filepath = "Face_Auto_Model.h5"
checkpoint = ModelCheckpoint(filepath,
save_best_only=True,
monitor='loss',
mode='min')
early_stopping = EarlyStopping(monitor='loss',
patience=3,
verbose=1,
mode='min',
restore_best_weights=True)
```
# Train a Decoder on Random Data
First, just for fun, let's quickly see what happens when we train just the decoder on random noise. Training on random inputs forces the model to make average predictions for everything, so we can see the most common features throughout the dataset.
```
D_train_noise = random.random((X_train.shape[0], D_INPUT_SHAPE[0]))
random_decoder = Decoder()
random_decoder.compile(optimizer='adam', loss='mean_squared_error')
%%time
random_decoder.fit(D_train_noise, X_train,
epochs=5,
batch_size=32,
callbacks=[learning_rate_reduction, checkpoint, early_stopping])
```
# Sample the Random Decoder
```
D_test_noise = random.random((100, D_INPUT_SHAPE[0]))
Test_imgs = random_decoder.predict(D_test_noise)
plt.figure(figsize=(20, 4))
for i in range(5):
plt.subplot(2, 10, i + 1)
plt.imshow(Test_imgs[i].reshape(INPUT_SHAPE))
plt.axis('off')
plt.tight_layout()
plt.show()
```
The result is the most average image the model could make. In a fairly uniform dataset like this one, we get a pretty clear face as a result with all the important features.
# Train the Autoencoder
Now to train the autoencoder proper. This is a standard autoencoder training procedure, except that we will not use any validation split. The loss will trigger `ReduceLROnPlateau` a few times before training is over; it takes around 1 hour.
```
%%time
model.fit(X_train_noisy, X_train,
epochs=70,
batch_size=50,
callbacks=[learning_rate_reduction, checkpoint, early_stopping])
```
# Sample the Autoencoder Model
```
decoded_imgs = model.predict(X_train_noisy)
plt.figure(figsize=(20, 4))
for i in range(5):
# original
plt.subplot(2, 10, i + 1)
plt.imshow(X_train[i].reshape(INPUT_SHAPE))
plt.axis('off')
# reconstruction
plt.subplot(2, 10, i + 1 + 10)
plt.imshow(decoded_imgs[i].reshape(INPUT_SHAPE))
plt.axis('off')
plt.tight_layout()
plt.show()
```
# Generate New Autoencoded Faces
To generate new faces, we will use PCA on the encoded results to create new "random" vectors that are distributed similarly to the encodings of the actual faces. I used some code from this repository to get this part working correctly: https://github.com/HackerPoet/FaceEditor
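The sampling step can be sketched independently of Keras: draw standard-normal vectors and map them through the eigenbasis of the encoding covariance. A self-contained numpy sketch of the idea (note that scaling by `sqrt(e)` reproduces the training covariance exactly, whereas the cell below scales by `e` itself, which exaggerates the dominant components):

```python
import numpy as np

def sample_like(encodings, n_samples, rng=None):
    """Draw random vectors with the same mean and covariance as `encodings`
    by sampling in the eigenbasis of the encoding covariance."""
    rng = rng or np.random.default_rng(0)
    mean = encodings.mean(axis=0)
    cov = np.cov((encodings - mean).T)
    e, v = np.linalg.eigh(cov)      # eigh: the covariance matrix is symmetric
    e = np.clip(e, 0.0, None)       # guard against tiny negative eigenvalues
    z = rng.standard_normal((n_samples, e.shape[0]))
    return mean + (z * np.sqrt(e)) @ v.T
```

Here `sample_like(Encoder_predicts, 50)` would give 50 new encoded vectors to feed the decoder.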
```
model.save('Face_Auto_Model.hdf5')
model.save_weights("Face_Auto_Weights.hdf5")
encoder = Encoder()
decoder = Decoder()
encoder.load_weights("Face_Auto_Weights.hdf5", by_name=True)
decoder.load_weights("Face_Auto_Weights.hdf5", by_name=True)
Encoder_predicts = encoder.predict(X_train)
func = K.function([decoder.input, K.learning_phase()],
[decoder.output])
rand_vecs = np.random.normal(0.0, 1.0, (50, D_INPUT_SHAPE[0]))
x_mean = np.mean(Encoder_predicts, axis=0)
x_stds = np.std(Encoder_predicts, axis=0)
x_cov = np.cov((Encoder_predicts - x_mean).T)
e, v = np.linalg.eig(x_cov)
print(x_mean)
print(x_stds)
print(x_cov)
e_list = e.tolist()
e_list.sort(reverse=True)
plt.clf()
plt.bar(np.arange(e.shape[0]), e_list, align='center')
plt.draw()
x_vecs = x_mean + np.dot(v, (rand_vecs * e).T).T
y_faces = func([x_vecs, 0])[0]
```
# Sample New Faces
Here is a selection of the new random faces.
```
plt.figure(figsize=(50, 20))
for i in range(50):
plt.subplot(5, 10, i + 1)
plt.imshow(y_faces[i])
plt.axis('off')
```
# Results
The results are pretty good: fairly clear faces with a lot of variety between them. We can automatically generate more, or manually adjust values in the encoded array to get a feel for the key features that the neural network found most important.
If you enjoyed this notebook, please like, comment, and check out some of my other notebooks on Kaggle:
Making AI Dance Videos: https://www.kaggle.com/valkling/how-to-teach-an-ai-to-dance
Image Colorization: https://www.kaggle.com/valkling/image-colorization-using-autoencoders-and-resnet/notebook
Star Wars Steganography: https://www.kaggle.com/valkling/steganography-hiding-star-wars-scripts-in-images
| github_jupyter |
# Compute the iron sediment flux forcing (`fesedflux`) supplied to the model
This notebook implements an approach to computing `fesedflux` originally in an IDL routine by J. K. Moore.
`fesedflux` includes two components:
- `fesedflux_oxic`: a constant low background value; increased in regions of high bottom horizontal current speed (sediment resuspension) by up to a factor of 100.
- `fesedflux_reduce`: source everywhere linearly related to the sinking POC flux by `coef_fesedflux_POC_flux`, except:
- source is zero below `POC_flux_gCm2yr_min` (3 gC m$^{-2}$ yr$^{-1}$ in CESM2), and
- constant above `POC_flux_gCm2yr_max`.
- This puts a source on the shelf, and along productive slope/margins, but has little source in the deep ocean, where almost all the remineralization is oxic right on the sediment surface.
`fesedflux` is computed on subgrid-scale bathymetry, using the fraction of each cell that is ocean bottom at each depth: `fesedfrac`. `fesedfrac` is [computed from ETOPO1 bathymetry](sedfrac_compute.ipynb) and modified as follows:
- a minimum percentage of each grid cell that is sediments (`land_adj_sedfrac_min`) is applied to all land-adjacent grid cells.
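The land-adjacent adjustment can be sketched as a floor applied to ocean cells that touch land. A hypothetical numpy version (the 0.01 default is a placeholder, not the value used in the actual forcing; `np.roll` treats the grid as periodic):

```python
import numpy as np

def apply_land_adj_min(sedfrac, ocean_mask, land_adj_sedfrac_min=0.01):
    """Raise sedfrac to a minimum value in ocean cells with a lateral land neighbor."""
    land = ~ocean_mask
    land_adjacent = ocean_mask & (
        np.roll(land, 1, 0) | np.roll(land, -1, 0) |
        np.roll(land, 1, 1) | np.roll(land, -1, 1))
    out = sedfrac.copy()
    out[land_adjacent] = np.maximum(out[land_adjacent], land_adj_sedfrac_min)
    return out
```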
**Arbitrary modification to this objective scheme:**
- `fesedflux_reduce` is multiplied by 10 in the western equatorial Pacific (135-200E, 15S-15N, above 504 m).
## Procedure
1. Prepare `fesedfrac`:
- Read pre-computed [`fesedfrac`](sedfrac_compute.ipynb);
- Determine land-adjacent points;
- Create `sedfrac_mod` by applying `land_adj_sedfrac_min`.
2. Compute `fesedflux_reduce`:
- Read `POC_flux` and convert units;
- Where `POC_flux < POC_flux_gCm2yr_min, POC_flux = 0.`;
- Where `POC_flux > POC_flux_gCm2yr_max, POC_flux = POC_flux_gCm2yr_max`
- `fesedflux_reduce = POC_flux * coef_fesedflux_POC_flux * sedfrac_mod`
- Apply ad hoc scaling in select regions.
3. Compute `fesedflux_oxic`:
- Read `UVEL` and `VVEL` and compute `current_speed`
- Where `current_speed < 1.0: current_speed = 1.0`
- Where `current_speed > 10.0: current_speed = 10.0`
- `fesedflux_oxic = coef_fesedflux_current_speed * sedfrac_mod * current_speed**2`
4. Output `fesedflux_oxic` and `fesedflux_reduce` in model units: µmol/m^2/d
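The clipping and scaling in steps 2 and 3 can be sketched with NumPy on toy arrays (a minimal sketch: the array values are made up, and the coefficients are the `POP_gx1v7` values set later in this notebook):

```python
import numpy as np

# illustrative parameter values (the POP_gx1v7 values used later in this notebook)
POC_flux_gCm2yr_min = 3.0
POC_flux_gCm2yr_max = 20.0
coef_fesedflux_POC_flux = 0.01614        # (mmol Fe/m^2/yr)/(gC/m^2/yr)
coef_fesedflux_current_speed2 = 0.0006568

# toy inputs: POC flux [gC/m^2/yr], bottom current speed [cm/s], sedfrac [fraction]
POC_flux = np.array([1.0, 5.0, 50.0])
current_speed = np.array([0.2, 4.0, 25.0])
sedfrac = np.array([0.5, 0.5, 0.5])

# step 2: zero below the minimum, cap at the maximum, then scale by the POC coefficient
POC = np.where(POC_flux <= POC_flux_gCm2yr_min, 0.0, POC_flux)
POC = np.minimum(POC, POC_flux_gCm2yr_max)
fesedflux_reduce = coef_fesedflux_POC_flux * POC * sedfrac  # mmol Fe/m^2/yr

# step 3: clip current speed to [1, 10] cm/s and scale by speed squared
speed = np.clip(current_speed, 1.0, 10.0)
fesedflux_oxic = coef_fesedflux_current_speed2 * sedfrac * speed**2  # mmol Fe/m^2/yr
```

The full notebook does the same operations on xarray DataArrays with `xr.where`, weighted by the subgrid-scale `sedfrac_mod`.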
## Preliminary setup
```
%matplotlib inline
import os
import tqdm
import yaml
from itertools import product
from datetime import date, datetime, timezone
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import esmlab
import pop_tools
import util
id_string = 'Fe_sediment_flux_forcing.ipynb from github.com/marbl-ecosys/forcing-Fe-sedflux'
mol_per_nmol = 1e-9
mol_per_µmol = 1e-6
mol_per_mmol = 1e-3
mol_per_Gmol = 1e9
gC_per_mol = 12.011
cm2_per_m2 = 1e4
d_per_yr = 365.0
s_per_yr = d_per_yr * 86400.0
nmolCcm2s_to_gCm2yr = mol_per_nmol * gC_per_mol * cm2_per_m2 * s_per_yr
mmolCm2d_to_gCm2yr = mol_per_mmol * gC_per_mol * d_per_yr
mmolm2yr_to_µmolm2d = 1. / d_per_yr / mol_per_µmol * mol_per_mmol
```
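As a quick sanity check (not part of the original routine), the composite conversion factors above can be re-derived from first principles:

```python
# re-derive the composite unit-conversion factors defined in the setup cell
gC_per_mol = 12.011
s_per_yr = 365.0 * 86400.0

# nmol C/cm^2/s -> gC/m^2/yr: 1e-9 mol/nmol * 12.011 g/mol * 1e4 cm^2/m^2 * s/yr
nmolCcm2s_to_gCm2yr = 1e-9 * gC_per_mol * 1e4 * s_per_yr

# mmol/m^2/yr -> umol/m^2/d: 1e3 umol/mmol divided by 365 d/yr
mmolm2yr_to_umolm2d = 1e3 / 365.0

print(nmolCcm2s_to_gCm2yr)   # ~3.79e3
print(mmolm2yr_to_umolm2d)   # ~2.74
```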
### Specify model grid and associated parameters
```
dst_grid = 'POP_tx0.1v3'  # default; overridden by the injected parameters below
# Parameters
dst_grid = "POP_gx1v7"
POC_flux_gCm2yr_max = 20.
POC_flux_gCm2yr_min = 3.
current_speed_min = 1. # cm/s
current_speed_max = 10. # cm/s
western_pacific_factor = 10. # scale Fe flux in W. Pacific
data_in_src_grid = dict(poc_flux='POP_gx1v7', velocity='POP_gx1v7')
i_pacific = util.nlon_pacific_xsection[dst_grid]
j_acc = util.nlat_acc_xsection[dst_grid]
if dst_grid == 'POP_gx3v7':
    land_adj_sedfrac_min = 0.015
    coef_fesedflux_POC_flux = 0.01584 # (mmolFe/m^2/yr)/(gC/m^2/yr)
    coef_fesedflux_current_speed2 = 0.0008405
    ltripole = False
elif dst_grid == 'POP_gx1v7':
    land_adj_sedfrac_min = 0.03
    coef_fesedflux_POC_flux = 0.01614 # (mmolFe/m^2/yr)/(gC/m^2/yr) # * 1.6
    coef_fesedflux_current_speed2 = 0.0006568 # * 1.2
    ltripole = False
elif dst_grid == 'POP_tx0.1v3':
    land_adj_sedfrac_min = 0.2
    coef_fesedflux_POC_flux = 0.018 # (mmolFe/m^2/yr)/(gC/m^2/yr)
    coef_fesedflux_current_speed2 = 0.0005
    ltripole = True
    data_in_src_grid['velocity'] = 'POP_tx0.1v3'
data_in_src_grid
```
### Get model data `POC_FLUX_IN`, `UVEL`, `VVEL`
```
ds = pop_tools.get_grid(dst_grid)
for var_group, src_grid in data_in_src_grid.items():
    zarr_in = f'{util.dirwork}/{var_group}.{src_grid}_to_{dst_grid}.zarr'
    print(f'reading {zarr_in}')
    with xr.open_zarr(zarr_in) as dsv:
        dsv = dsv.drop([v for v in dsv.variables if v not in ['POC_FLUX_IN', 'UVEL', 'VVEL']]).load()
    ds = xr.merge((ds, dsv))
ds
```
### Read `sedfrac` and apply `land_adj_sedfrac_min`
```
with xr.open_dataset(util.sedfrac_file(dst_grid)) as dstmp:
    sedfrac = dstmp.sedfrac.reset_index('z_t', drop=True).compute()
land_adjacent = util.compute_topo_adjacent_points(dst_grid)
sedfrac_mod = util.apply_topo_adj_sedfrac_min(
sedfrac,
land_adjacent,
land_adj_sedfrac_min,
)
plt.figure()
sedfrac_mod.sum('z_t').where(ds.KMT > 0).plot(vmin=0, vmax=3.)
h = plt.title('Sum of sedfrac_mod in column')
plt.figure()
sedfrac_mod.sum('z_t').where(ds.KMT > 0).plot(vmin=0, vmax=1.)
h = plt.title('Sum of sedfrac in column, colormap scaled')
```
## Compute `fesedflux_reduce`
### Prepare `POC_flux`:
- convert units
- Where `POC_flux < POC_flux_gCm2yr_min, POC_flux = 0.`;
- Where `POC_flux > POC_flux_gCm2yr_max, POC_flux = POC_flux_gCm2yr_max`
```
POC_flux = ds.POC_FLUX_IN * nmolCcm2s_to_gCm2yr
POC_flux = xr.where(POC_flux <= POC_flux_gCm2yr_min, 0., POC_flux)
POC_flux = xr.where(POC_flux > POC_flux_gCm2yr_max, POC_flux_gCm2yr_max, POC_flux)
POC_flux = POC_flux.reset_index('z_t', drop=True)
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1, 2, 1)
h = POC_flux.isel(z_t=3).plot()
ax = fig.add_subplot(1, 2, 2)
h = POC_flux.isel(z_t=20).plot()
```
### Compute `fesedflux_reduce`
Compute and apply ad hoc scaling in select regions
```
# initial computation
fesedflux_reduce = coef_fesedflux_POC_flux * POC_flux * sedfrac_mod
fesedflux_reduce = fesedflux_reduce.fillna(0.)
fesedflux_reduce = fesedflux_reduce * mmolm2yr_to_µmolm2d
# plot initial values
plt.figure()
fesedflux_reduce.sum('z_t').plot(norm=colors.LogNorm(vmin=1e-2, vmax=20.))
h = plt.title('fesedflux_reduce: column sum (before ad hoc adjustment)')
# apply ad hoc adjustments
region_def = ((134. <= ds.TLONG) & (ds.TLONG <= 200)
& (np.fabs(ds.TLAT) <= 15) & (ds.z_t <= 450e2))
region_def = region_def.where(ds.KMT > 0).reset_index('z_t', drop=True)
region_def = region_def.transpose('z_t', 'nlat', 'nlon')
fesedflux_reduce = xr.where(region_def==1, fesedflux_reduce * western_pacific_factor, fesedflux_reduce)
fesedflux_reduce.name = 'Fe sediment flux (reducing)'
fesedflux_reduce.attrs['units'] = 'micromol m$^{-2}$ d$^{-1}$'
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1, 2, 1)
region_def.isel(z_t=0).plot()
h = plt.title('ad hoc adjustment region')
ax = fig.add_subplot(1, 2, 2)
fesedflux_reduce.sum('z_t').plot(norm=colors.LogNorm(vmin=1e-2, vmax=20.))
h = plt.title('fesedflux_reduce: column sum (after ad hoc adjustment)')
```
### Report global integral
```
fesedflux_reduce_global = esmlab.statistics.weighted_sum(fesedflux_reduce, weights=ds.TAREA/cm2_per_m2,
dim=('nlat', 'nlon')).sum('z_t')
fesedflux_reduce_global = fesedflux_reduce_global * mol_per_µmol / mol_per_Gmol * d_per_yr
print(f'Global integral of `fesedflux_reduce_global` = {fesedflux_reduce_global.values:0.4f} Gmol Fe/yr')
```
## Compute `fesedflux_oxic`
- Read `UVEL` and `VVEL` and compute `current_speed`
- Where `current_speed < current_speed_min: current_speed = current_speed_min`
- Where `current_speed > current_speed_max: current_speed = current_speed_max`
- `fesedflux_oxic = coef_fesedflux_current_speed2 * sedfrac * current_speed**2`
```
current_speed = np.sqrt(ds.UVEL**2 + ds.VVEL**2)
current_speed = xr.where(current_speed < current_speed_min, current_speed_min, current_speed)
current_speed = xr.where(current_speed > current_speed_max, current_speed_max, current_speed)
h = current_speed.isel(z_t=30).plot()
current_speed = current_speed.reset_index('z_t', drop=True)
current_speed
fesedflux_oxic = coef_fesedflux_current_speed2 * sedfrac_mod * current_speed**2
fesedflux_oxic = fesedflux_oxic * mmolm2yr_to_µmolm2d
fesedflux_oxic.name = 'Fe sediment flux (oxic)'
fesedflux_oxic.attrs['units'] = 'micromol m$^{-2}$ d$^{-1}$'
fesedflux_oxic.attrs['long_name'] = 'Fe sediment flux (oxic)'
plt.figure()
fesedflux_oxic.sum('z_t').plot(norm=colors.LogNorm(vmin=1e-3, vmax=fesedflux_oxic.max()))
h = plt.title('fesedflux_oxic')
fesedflux_oxic_global = esmlab.statistics.weighted_sum(fesedflux_oxic, weights=ds.TAREA/cm2_per_m2,
dim=('nlat', 'nlon')).sum('z_t')
fesedflux_oxic_global = fesedflux_oxic_global * mol_per_µmol / mol_per_Gmol * d_per_yr
print(f'Global integral of `fesedflux_oxic_global` = {fesedflux_oxic_global.values:0.4f} Gmol Fe/yr')
```
## Compute total
```
fesedflux_total = (fesedflux_oxic + fesedflux_reduce)
fesedflux_total.attrs['units'] = 'micromol/m^2/d'
fesedflux_total.attrs['long_name'] = 'Fe sediment flux (total)'
fesedflux_total_global = esmlab.statistics.weighted_sum(fesedflux_total, weights=ds.TAREA/cm2_per_m2,
dim=('nlat', 'nlon')).sum('z_t')
fesedflux_total_global = fesedflux_total_global * mol_per_µmol / mol_per_Gmol * d_per_yr
print(f'Global integral of `fesedflux_total_global` = {fesedflux_total_global.values:0.4f} Gmol Fe/yr')
plt.figure()
fesedflux_total.sum('z_t').plot(norm=colors.LogNorm(vmin=1e-2, vmax=20.))
plt.figure()
fesedflux_total.isel(nlon=i_pacific).plot(yincrease=False, norm=colors.LogNorm(vmin=1e-2, vmax=20.))
```
## Construct output file
The model uses a scale factor when reading in the `fesedflux`:
`scale_factor = 1.1574e-6`; this converts from µmol/m^2/d to nmol/cm^2/s.
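The quoted scale factor can be verified directly (a quick check, not part of the original notebook):

```python
# umol/m^2/d -> nmol/cm^2/s:
#   1e3 nmol/umol * 1e-4 m^2/cm^2 / 86400 s/d
scale_factor = 1e3 * 1e-4 / 86400.0
print(scale_factor)  # ~1.1574e-06
```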
```
dso = xr.Dataset()
dso['FESEDFLUXIN'] = fesedflux_total
dso.FESEDFLUXIN.encoding = {'_FillValue': None, 'dtype': np.single}
dso['FESEDFLUXIN_reduce'] = fesedflux_reduce
dso.FESEDFLUXIN_reduce.encoding = {'_FillValue': None, 'dtype': np.single}
dso['FESEDFLUXIN_oxic'] = fesedflux_oxic
dso.FESEDFLUXIN_oxic.encoding = {'_FillValue': None, 'dtype': np.single}
for v in ['TAREA', 'TLONG', 'TLAT', 'KMT', 'z_t']:
dso[v] = ds[v]
dso.encoding['_FillValue'] = None
datestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
dso.attrs['history'] = f'created by {id_string} on {datestamp}'
datestamp = date.today().strftime("%y%m%d")
file_out = f'{util.dirout}/fesedflux_total_reduce_oxic_{dst_grid}.c{datestamp}.nc'
dso.to_netcdf(file_out)
util.ncks_fl_fmt64bit(file_out)
print(f'wrote: {file_out}')
dso.info()
```
## Compare with CESM2 forcing dataset
```
cesm2_file = f'{util.inputdata}/ocn/pop/gx1v6/forcing/fesedfluxTot_gx1v6_cesm2_2018_c180618.nc'
cesm2 = xr.open_dataset(cesm2_file).rename({'z': 'z_t', 'y': 'nlat', 'x': 'nlon'})
cesm2['z_t'] = pop_tools.get_grid('POP_gx1v7').z_t
cesm2['AREA_m2'] = pop_tools.get_grid('POP_gx1v7').TAREA * 1e-4
cesm2.FESEDFLUXIN.attrs['units'] = 'µmol/m^2/d'
cesm2
fesedflux_total_cesm2 = esmlab.statistics.weighted_sum(cesm2.FESEDFLUXIN, weights=cesm2.AREA_m2,
dim=('nlat', 'nlon')).sum('z_t')
fesedflux_total_cesm2 = fesedflux_total_cesm2 * mol_per_µmol / mol_per_Gmol * d_per_yr
print(f'Global integral of `fesedflux_total_cesm2` = {fesedflux_total_cesm2.values:0.4f} Gmol Fe/yr')
fig = plt.figure(figsize=(16, 4))
ax = fig.add_subplot(1, 2, 1)
cesm2.FESEDFLUXIN.sum('z_t').plot(norm=colors.LogNorm(vmin=1e-2, vmax=20.))
plt.title('CESM2 Fe sedflux (vertical integral)')
ax = fig.add_subplot(1, 2, 2)
dso.FESEDFLUXIN.sum('z_t').plot(norm=colors.LogNorm(vmin=1e-2, vmax=20.))
plt.title('This dataset Fe sedflux (vertical integral)')
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1, 2, 1)
cesm2.FESEDFLUXIN.isel(nlon=200).plot(yincrease=False, norm=colors.LogNorm(vmin=1e-2, vmax=1.))
plt.title('CESM2 Fe sedflux (Pacific transect)')
ax = fig.add_subplot(1, 2, 2)
dso.FESEDFLUXIN.isel(nlon=i_pacific).plot(yincrease=False, norm=colors.LogNorm(vmin=1e-2, vmax=1.))
plt.title('This dataset Fe sedflux (Pacific transect)')
plt.figure()
esmlab.statistics.weighted_sum(cesm2.FESEDFLUXIN, weights=cesm2.AREA_m2, dim=('nlat', 'nlon')).plot(label='CESM2')
esmlab.statistics.weighted_sum(dso.FESEDFLUXIN, weights=ds.TAREA/cm2_per_m2, dim=('nlat', 'nlon')).plot(label='this dataset')
plt.ylabel('Fe flux [µmol/d]')
plt.legend();
%load_ext watermark
%watermark --iversion -g -m -v -u -d
```
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
```
##########################################################################################################################################################################################################################################
## *Functions* for retrieving SNPs between each pair of *longitudinal* isolates (SNPs with $\geq 70$% $\Delta$AF)
##########################################################################################################################################################################################################################################
```
import vcf
%matplotlib inline
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.ticker as ticker
from pylab import plot, show, savefig, xlim, figure, hold, ylim, legend, boxplot, setp, axes
from itertools import compress
from pylab import MaxNLocator
import seaborn as sns; sns.set()
from matplotlib.colors import LogNorm
from matplotlib import gridspec
from matplotlib.gridspec import GridSpec
import ast
import itertools
import seaborn as sns
from sklearn.preprocessing import StandardScaler
import fastcluster
from sklearn import cluster, datasets
import scipy.cluster.hierarchy as hier
from sklearn.cluster import KMeans
import time
import sys
import Bio
from Bio.Alphabet import IUPAC
from Bio.Blast.Applications import NcbiblastnCommandline
from Bio.Blast import NCBIXML
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.SeqFeature import SeqFeature, FeatureLocation
from Bio import pairwise2
from Bio import SeqIO
from Bio.Graphics import GenomeDiagram
from Bio.SeqUtils import GC
from Bio.Align.Applications import MuscleCommandline
from StringIO import StringIO
from Bio import AlignIO
from Bio.Align import AlignInfo
from Bio.Seq import MutableSeq
import itertools
import networkx as nx
import scipy
import pickle
#for exporting to Adobe Illustrator
mpl.rcParams['pdf.fonttype'] = 42
mpl.rcParams['ps.fonttype'] = 42
```
### Decide on a threshold for difference in Alternate Allele Frequencies to call SNPs between two isolates
```
alt_AF_diff_threshold = 0.70 #x%
```
### Load regions to exclude from analysis per EBR score across H37Rv (dropping sites with EBR score < 0.8)
```
with open('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/pickled_files/variant_calling/H37Rv_sites_to_drop.pkl', 'rb') as f:
    H37Rv_positions_to_drop = pickle.load(f)
#convert to a set (faster to query)
H37Rv_positions_to_drop = set(H37Rv_positions_to_drop)
```
### *Cell* to annotate SNPs
```
# Important Packages
################################################################################################################################################################################################
import os
import pandas as pd
import numpy as np
import sys
import pickle
import Bio
from Bio.Alphabet import IUPAC
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.SeqFeature import SeqFeature, FeatureLocation
from Bio import SeqIO
from StringIO import StringIO
from Bio import AlignIO
from Bio.Align import AlignInfo
from Bio.Seq import MutableSeq
################################################################################################################################################################################################
# Relevant Information for H37Rv sequence SNP functional annotation
################################################################################################################################################################################################
####### Collect all DNA and Amino Acid sequences corresponding to genes on H37Rv #######
#load reference genome and reference annotation
reference_genome = '/n/data1/hms/dbmi/farhat/bin/work-horse/bin/h37rv.fasta'
for reference_genome in SeqIO.parse(reference_genome, "fasta"):
    reference_genome.seq.alphabet = IUPAC.unambiguous_dna
reference_genome_annotation = pd.read_csv('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/H37Rv/h37rv_genome_summary.txt', '\t').set_index('name')
####### Function to translate coding DNA sequences #######
def translate(gene_id, sequence):
    #find which strand the gene is located on and translate
    strand = reference_genome_annotation.loc[gene_id, 'strand']
    if strand == '+':
        protein_sequence = sequence.translate(table="Bacterial", cds=False)
    elif strand == '-':
        protein_sequence = sequence.reverse_complement().translate(table="Bacterial", cds=False)
    return protein_sequence
####### Load in dictionaries for SNP annotation #######
with open('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/pickled_files/dicts_for_SNP_annotation/H37Rv_gene_seq_records.pickle', 'rb') as handle:
    ref_gene_sequences_records = pickle.load(handle)
with open('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/pickled_files/dicts_for_SNP_annotation/H37Rv_protein_seq_records.pickle', 'rb') as handle:
    ref_protein_sequences_records = pickle.load(handle)
with open('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/pickled_files/dicts_for_SNP_annotation/H37Rv_coord_gene_mapping.pickle', 'rb') as handle:
    ReferencePosition_Gene_mapping = pickle.load(handle)
####### get Gene Categories #######
gene_categories = pd.read_csv('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/CSV_files/gene_categories/gene_categories.csv').set_index('name')
gene_categories_dict = dict([gene_id , gene_category] for gene_id, gene_category in zip(list(gene_categories.gene_id) , list(gene_categories.Gene_Category)))
####### get Gene Symbols #######
gene_symbol_dict = dict([gene_id , gene_symbol] for gene_id, gene_symbol in zip(list(reference_genome_annotation.symbol.index) , list( reference_genome_annotation.symbol )))
################################################################################################################################################################################################
# Function to annotate Intergenic SNPs
################################################################################################################################################################################################
def find_flanking_genes_for_intergenic_region(intergenic_ref_pos):
    #this function finds the genes flanking an intergenic region given a reference position
    #find gene immediately in the 5' direction
    for i in range(0 , 100000):
        #move toward 5' direction
        if ReferencePosition_Gene_mapping[intergenic_ref_pos - i] != []:
            gene_to_left = ReferencePosition_Gene_mapping[intergenic_ref_pos - i][0]
            break
    #find gene immediately in the 3' direction
    for i in range(0 , 100000):
        #move toward 3' direction
        try:
            if ReferencePosition_Gene_mapping[intergenic_ref_pos + i] != []:
                gene_to_right = ReferencePosition_Gene_mapping[intergenic_ref_pos + i][0]
                break
        #KeyError means we have hit the 'end' of the chromosome, the intergenic region at the end of H37Rv in 5' > 3' orientation
        #since the TB chromosome is circular, the gene to the 'right' is Rv0001
        except KeyError:
            gene_to_right = 'Rv0001'
            break
    return gene_to_left + '_' + gene_to_right
################################################################################################################################################################################################
# Function to determine whether SNPs are Synonymous or Non-Synonymous; Returns gene coordinate, codon position, AA changes, Gene Category & Symbol
################################################################################################################################################################################################
def SNP_annotate(ref_seq_position , alt_allele_i):
'''
This function takes as input a reference position on H37Rv located within a
gene and an alternate allele and returns whether the base change
would correspond to a different Amino Acid sequence that results
from translating the DNA sequence into an AA sequence.
'''
gene_intergenic_id_list = []
genomic_coord_list = []
gene_category_list = []
gene_symbol_list = []
Syn_NSyn_list = []
AA_change_list = []
#get the Reference Allele from the complete H37Rv reference genome, indexing starts from 0
ref_allele_i = reference_genome.seq[int(ref_seq_position) - 1]
#find the gene that SNP occurs on; check list corresponding to H37Rv coordinate to see if there are any genes associated with RefPosition
if len(ReferencePosition_Gene_mapping[ref_seq_position]) > 0:
#iterate through all genes that ReferencePosition is mapped to (i.e. SNP might correspond to 2 genes)
for gene_intergenic_id in ReferencePosition_Gene_mapping[ref_seq_position]:
#find genomic coordinate of SNP relative to gene (subtract 1 since reference seq starts counting at 1)
gene_relative_coord = (ref_seq_position - 1) - min( reference_genome_annotation.loc[gene_intergenic_id , 'chromStart'] , reference_genome_annotation.loc[gene_intergenic_id , 'chromEnd'] )
#find the genomic coordinate (relative to the gene, in the 5' to 3' direction)
strand = reference_genome_annotation.loc[gene_intergenic_id, 'strand']
if strand == '+':
genomic_5_to_3_coord = (ref_seq_position) - reference_genome_annotation.loc[gene_intergenic_id , 'chromStart']
elif strand == '-':
genomic_5_to_3_coord = (reference_genome_annotation.loc[gene_intergenic_id , 'chromEnd']) - (ref_seq_position-1)
#find gene category (if one exists)
try:
gene_category_i = gene_categories_dict[gene_intergenic_id]
except KeyError:
gene_category_i = 'None'
#find gene symbol (if one exists)
try:
gene_symbol_i = gene_symbol_dict[gene_intergenic_id]
except KeyError:
gene_symbol_i = 'None'
#alternate allele is an actual base
if alt_allele_i in ['A','C','G','T']:
#translate into protein sequence with the SNP in place if not InDel or intergenic region
SNP_change = alt_allele_i
#ALTERNATE allele (is it Syn or NSyn?)
#get sequence from dictionary of sequences (and convert to mutable object)
test_gene_sequence = ref_gene_sequences_records[gene_intergenic_id].seq.tomutable()
#change reference gene sequence by the SNP in the query sequence
test_gene_sequence[int(gene_relative_coord)] = SNP_change
#convert back immutable object
test_gene_sequence = test_gene_sequence.toseq()
#translate sequence into amino acid seq
test_protein_sequence = translate(gene_intergenic_id , test_gene_sequence)
#store the H37Rv AA seq to compare against
H37Rv_AA_sequence = ref_protein_sequences_records[gene_intergenic_id].seq
#get the codon number where the SNP occurs within
## take the genomic coordinate (relative to the gene, in the 5' to 3' direction), divide by 3, then take the ceiling of this number (will be fraction if SNP occurs in 1st or 2nd position on codon)
strand = reference_genome_annotation.loc[gene_intergenic_id, 'strand']
if strand == '+':
genomic_5_to_3_coord = (ref_seq_position) - reference_genome_annotation.loc[gene_intergenic_id , 'chromStart']
elif strand == '-':
genomic_5_to_3_coord = (reference_genome_annotation.loc[gene_intergenic_id , 'chromEnd']) - (ref_seq_position-1)
codon_coord = int(np.ceil( float( genomic_5_to_3_coord) / 3.0 ))
#compare to AA seq of original gene
if test_protein_sequence == H37Rv_AA_sequence:
SNP_type = 'S'
#get the AA before & after
AA_change = H37Rv_AA_sequence[codon_coord-1] + str(codon_coord) + test_protein_sequence[codon_coord-1]
else:
SNP_type = 'N'
#get the AA before & after
AA_change = H37Rv_AA_sequence[codon_coord-1] + str(codon_coord) + test_protein_sequence[codon_coord-1]
#alternate allele is a dummy (Base Call completely supports the Reference Allele)
else:
SNP_type = 'None'
AA_change = 'None'
#store relevant info in lists
gene_intergenic_id_list.append(gene_intergenic_id)
genomic_coord_list.append(genomic_5_to_3_coord)
gene_category_list.append(gene_category_i)
gene_symbol_list.append(gene_symbol_i)
Syn_NSyn_list.append(SNP_type)
AA_change_list.append(AA_change)
#if no gene in H37Rv corresponds to the Reference Position for SNP, then SNP must be intergenic
else:
gene_intergenic_id = find_flanking_genes_for_intergenic_region(ref_seq_position)
genomic_5_to_3_coord = 'None'
gene_category_i = 'None'
gene_symbol_i = 'None'
SNP_type = 'I'
AA_change = 'None'
#store relevant info in lists
gene_intergenic_id_list.append(gene_intergenic_id)
genomic_coord_list.append(genomic_5_to_3_coord)
gene_category_list.append(gene_category_i)
gene_symbol_list.append(gene_symbol_i)
Syn_NSyn_list.append(SNP_type)
AA_change_list.append(AA_change)
#if there is only a single gene associated with this SNP, just return the individual elements
if len(gene_intergenic_id_list) == 1:
return [ref_allele_i , gene_intergenic_id , genomic_5_to_3_coord , gene_category_i , gene_symbol_i , SNP_type , AA_change]
#else if there are two genes associated with this SNP, return elements for each SNP annotation in a list
elif len(gene_intergenic_id_list) > 1:
return [ref_allele_i , gene_intergenic_id_list , genomic_coord_list , gene_category_list , gene_symbol_list , Syn_NSyn_list , AA_change_list]
################################################################################################################################################################################################
```
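The codon arithmetic and the synonymous/non-synonymous test inside `SNP_annotate` can be illustrated with a toy coding sequence (a minimal sketch using a hand-written codon table; the gene, position, and allele are made up):

```python
import math

# tiny codon table covering only the codons used in this example
codon_table = {'ATG': 'M', 'GCT': 'A', 'GCC': 'A', 'ACT': 'T', 'TAA': '*'}

def toy_translate(seq):
    return ''.join(codon_table[seq[i:i + 3]] for i in range(0, len(seq), 3))

gene = 'ATGGCTACTTAA'          # translates to M A T *
snp_pos_5to3 = 6               # 1-based position within the gene (5'->3')
alt_allele = 'C'

# codon number: ceil(position / 3), as in SNP_annotate
codon_coord = int(math.ceil(snp_pos_5to3 / 3.0))   # position 6 -> codon 2

# substitute the alternate allele and compare translations
mutated = gene[:snp_pos_5to3 - 1] + alt_allele + gene[snp_pos_5to3:]
ref_aa = toy_translate(gene)[codon_coord - 1]
alt_aa = toy_translate(mutated)[codon_coord - 1]
snp_type = 'S' if toy_translate(mutated) == toy_translate(gene) else 'N'
aa_change = ref_aa + str(codon_coord) + alt_aa
print(snp_type, aa_change)  # GCT -> GCC is still alanine: S A2A
```

The real function does the same thing with Biopython `Seq` objects, the bacterial translation table, and strand-aware coordinates.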
### *Function* to get SNPs between paired isolates (filtered for $\Delta AF$, MGE and low EBR score regions)
```
def get_filtered_SNPs_between_isolates(isolate_pair_ID , alt_AF_diff_threshold):
'''
This function retrieves only the fixed SNP variants that occur between a given isolate pair
by loading in the pickled DataFrame for the isolate pair and comparing the alternate allele frequencies called in each isolate
(differing Base Calls whose Alternate Allele Frequencies differ by >= x%).
This function also drops SNPs in regions flagged as Mobile Genetic Elements and in regions of poor Illumina mapping / variant calling
per Empirical Base Recall (EBR) scores across H37Rv.
'''
################################################################################
### get SNPs between pair of isolates
################################################################################
population = sample_annotation.loc[isolate_pair_ID , 'population'][0]
#load in the differing Base Calls for the isolate pair from pickle
different_base_calls_between_isolates = pd.read_pickle(SNP_variant_dir + population + '_' + isolate_pair_ID + '/base_calls_different_between_isolates.pkl')
################################################################################
### Drop SNPs with change in AF < x%
################################################################################
#FILTER out paired Base Calls that have a difference in Alternate Allele Frequency of less than x%
alt_AF_isolate_A = different_base_calls_between_isolates.loc[range(0 , np.shape(different_base_calls_between_isolates)[0] , 2) , 'alt_AF']
alt_AF_isolate_B = different_base_calls_between_isolates.loc[range(1 , np.shape(different_base_calls_between_isolates)[0] , 2) , 'alt_AF']
alt_AF_diff_btwn_paired_isolates = abs(alt_AF_isolate_A.values - alt_AF_isolate_B.values)
isolate_A_Base_Call_indices_small_change_alt_AF = list(alt_AF_isolate_A[alt_AF_diff_btwn_paired_isolates < alt_AF_diff_threshold].index)
isolate_B_Base_Call_indices_small_change_alt_AF = list(alt_AF_isolate_B[alt_AF_diff_btwn_paired_isolates < alt_AF_diff_threshold].index)
Base_Call_Indices_SMALL_Alt_AF_Diff = isolate_A_Base_Call_indices_small_change_alt_AF + isolate_B_Base_Call_indices_small_change_alt_AF
#drop paired Base Calls w/ corresponding change in Alternate Allele Frequency < x%
different_base_calls_between_isolates.drop(Base_Call_Indices_SMALL_Alt_AF_Diff , axis = 0 , inplace = True)
#reset index of filtered SNP DataFrame
different_base_calls_between_isolates.reset_index(inplace = True, drop = True)
################################################################################
### Drop SNPs with change in regions with low EBR scores
################################################################################
#Drop Base Calls in H37Rv sites with low EBR score (make sure there is at least 1 SNP)
if np.shape(different_base_calls_between_isolates)[0] > 0:
#create a boolean filter for SNPs to keep
SNPs_to_keep_filter = [SNP_i_ref_pos not in H37Rv_positions_to_drop for SNP_i_ref_pos in different_base_calls_between_isolates.ref_position]
#filter out SNPs in H37Rv sites with low EBR scores and reset index
different_base_calls_between_isolates = different_base_calls_between_isolates[SNPs_to_keep_filter]
different_base_calls_between_isolates.reset_index(inplace = True, drop = True)
################################################################################
### Annotate SNPs & Drop SNPs in MGE regions
################################################################################
gene_id_list = []
gene_coord_list = []
gene_category_list = []
gene_symbol_list = []
SNP_ftype_list = []
AA_change_list = []
#Annotate Filtered Base Calls (make sure there is at least 1 SNP)
if np.shape(different_base_calls_between_isolates)[0] > 0:
for ref_position_i , alt_base_i in zip(list(different_base_calls_between_isolates.ref_position) , list(different_base_calls_between_isolates.alt_base)):
#annotate SNP
gene_id_i , gene_coord_i , gene_category_i , gene_symbol_i , SNP_ftype_i , AA_change_i = SNP_annotate(ref_position_i , alt_base_i)[1:]
gene_id_list.append(gene_id_i)
gene_coord_list.append(gene_coord_i)
gene_category_list.append(gene_category_i)
gene_symbol_list.append(gene_symbol_i)
SNP_ftype_list.append(SNP_ftype_i)
AA_change_list.append(AA_change_i)
#create columns to store SNP annotation info
different_base_calls_between_isolates['gene_id'] = gene_id_list
different_base_calls_between_isolates['gene_coord'] = gene_coord_list
different_base_calls_between_isolates['gene_category'] = gene_category_list
different_base_calls_between_isolates['gene_symbol'] = gene_symbol_list
different_base_calls_between_isolates['SNP_ftype'] = SNP_ftype_list
different_base_calls_between_isolates['AA_change'] = AA_change_list
#FILTER out Base Calls in MGE regions (Mobile Genetic Elements)
SNPs_to_drop_filter = [] #True if SNP is located within an MGE region
for gene_id_i in list(different_base_calls_between_isolates.gene_category):
#only 1 or 0 genes associated with this SNP
if (type(gene_id_i) == str) and (gene_id_i == 'Mobile Genetic Element'):
SNPs_to_drop_filter.append(True)
#two genes associated with this SNP
elif (type(gene_id_i) == list) and ('Mobile Genetic Element' in gene_id_i):
SNPs_to_drop_filter.append(True)
#SNP not in an MGE region so don't drop
else:
SNPs_to_drop_filter.append(False)
#create a boolean filter for SNPs to keep
SNPs_to_keep_filter = [not MGE_SNP for MGE_SNP in SNPs_to_drop_filter]
#filter out SNPs in MGE regions and reset index
different_base_calls_between_isolates = different_base_calls_between_isolates[SNPs_to_keep_filter]
different_base_calls_between_isolates.reset_index(inplace = True, drop = True)
#No SNPs detected between this pair of isolates (empty DataFrame)
else:
different_base_calls_between_isolates['gene_id'] = ""
different_base_calls_between_isolates['gene_coord'] = ""
different_base_calls_between_isolates['gene_category'] = ""
different_base_calls_between_isolates['gene_symbol'] = ""
different_base_calls_between_isolates['SNP_ftype'] = ""
different_base_calls_between_isolates['AA_change'] = ""
return different_base_calls_between_isolates
```
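The paired-row ΔAF filter at the heart of the function above can be demonstrated on a toy DataFrame (a minimal sketch; even-indexed rows are isolate A and odd-indexed rows isolate B, mirroring the layout of the pickled DataFrames):

```python
import numpy as np
import pandas as pd

alt_AF_diff_threshold = 0.70

# two candidate SNP sites, each stored as a pair of rows (isolate A, then isolate B)
df = pd.DataFrame({
    'ref_position': [1000, 1000, 2000, 2000],
    'alt_AF':       [0.05, 0.95, 0.40, 0.60],   # site 1000: dAF=0.90; site 2000: dAF=0.20
})

alt_AF_A = df.loc[range(0, len(df), 2), 'alt_AF']
alt_AF_B = df.loc[range(1, len(df), 2), 'alt_AF']
delta_AF = np.abs(alt_AF_A.values - alt_AF_B.values)

# drop BOTH rows of any pair whose change in alternate allele frequency is below threshold
drop_idx = (list(alt_AF_A[delta_AF < alt_AF_diff_threshold].index)
            + list(alt_AF_B[delta_AF < alt_AF_diff_threshold].index))
df = df.drop(drop_idx).reset_index(drop=True)
print(df.ref_position.unique())  # only site 1000 survives the filter
```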
##########################################################################################################################################################################################################################################
## Longitudinal Sample pairs
##########################################################################################################################################################################################################################################
#### Import Sample Annotation file
```
sample_annotation = pd.read_csv('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/CSV_files/sample_annotation_files/Longitudinal_fastq_path_names_and_JankyPipe_tags_filtered_final.csv' , sep = ',').set_index('patient_id')
SNP_variant_dir = '/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/pickled_files/variant_calling/longitudinal_SNPs/all_SNPs_between_longitudinal_pairs/'
sample_annotation.head()
num_isolate_pair_IDs = np.shape(sample_annotation)[0] / 2
print(num_isolate_pair_IDs)
isolate_pair_ID_list = list(set(sample_annotation.index))
```
### Call function to collect SNPs passing Difference in Alternate Allele Frequency Threshold
```
Base_Call_variants_btwn_isolates_big_change_in_alt_AF = []
isolate_pair_index = 0
#iterate through isolate pairs, collect all SNP variants arising between each pair of isolates
for isolate_pair_ID in isolate_pair_ID_list:
#retrieve filtered paired Base Calls with a change in Alternate Allele Frequency > threshold
Base_Call_variants_btwn_isolates_big_change_in_alt_AF_pair_i = get_filtered_SNPs_between_isolates(isolate_pair_ID , alt_AF_diff_threshold)
#store relevant Base Call info in list of DataFrames (1 for each isolate pair)
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.append(Base_Call_variants_btwn_isolates_big_change_in_alt_AF_pair_i)
isolate_pair_index += 1
if isolate_pair_index % 5 == 0:
print(isolate_pair_index)
#concatenate DataFrames for each subject into 1 DataFrame
Base_Call_variants_btwn_isolates_big_change_in_alt_AF = pd.concat(Base_Call_variants_btwn_isolates_big_change_in_alt_AF , axis = 0)
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.reset_index(inplace = True , drop = True)
```
### *Filter*: Drop paired Base Calls if both Base Calls in a pair support *different* Alternate Alleles
```
#list that stores the indices of paired Base Calls with DIFFERENT Alternate Alleles
Base_Calls_to_Drop = []
#iterate through each PAIR of corresponding Base Calls from paired isolates
for isolate_A_Base_Call_i , isolate_B_Base_Call_i in zip(range(0 , np.shape(Base_Call_variants_btwn_isolates_big_change_in_alt_AF)[0] , 2) , range(1 , np.shape(Base_Call_variants_btwn_isolates_big_change_in_alt_AF)[0] , 2) ):
#pull info that both Base Calls should have in COMMON
isolate_A_Base_Call_info = list( Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[isolate_A_Base_Call_i , ['ref_base','ref_position','gene_id','genomic_coord','population','patient_id']] )
isolate_B_Base_Call_info = list( Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[isolate_B_Base_Call_i , ['ref_base','ref_position','gene_id','genomic_coord','population','patient_id']] )
#make sure Base Calls Match with respect to Reference Base, Reference Position, gene ID, Genomic Coordinate, Gene Category, Symbol, Population & Patient ID
if isolate_A_Base_Call_info == isolate_B_Base_Call_info:
#pull alternate Allele for each of the paired isolates
isolate_A_Alt_Base = Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[isolate_A_Base_Call_i , 'alt_base']
isolate_B_Alt_Base = Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[isolate_B_Base_Call_i , 'alt_base']
#Check to see if there is a 'Z' in the pair of Alternate Alleles, if so one of the Base Calls supported the Reference Base (so there was no Alternate Allele)
if (isolate_A_Alt_Base == 'Z') or (isolate_B_Alt_Base == 'Z'):
pass
#if neither Alternate Allele is a 'Z', then check to see that the Alternate Allele Bases Match
elif isolate_A_Alt_Base == isolate_B_Alt_Base:
pass
#if the Alternate Alleles DON'T match and both Base Calls supported Alternate Alleles (not the Reference), then we can't compare the Allele Frequencies of these Alternate Alleles (since they're different)
else:
Base_Calls_to_Drop = Base_Calls_to_Drop + [isolate_A_Base_Call_i , isolate_B_Base_Call_i]
#if the paired Base Calls disagree on information they should share (Ref Position, Ref Base, Gene ID, Patient ID, etc.), print their indices so we can investigate what went wrong
else:
print (isolate_A_Base_Call_i , isolate_B_Base_Call_i)
#Drop Paired Base Calls that supported different Alternate Alleles
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.drop(Base_Calls_to_Drop , axis = 0 , inplace = True)
#reset index
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.reset_index(inplace = True, drop = True)
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.head(n = 10)
np.shape(Base_Call_variants_btwn_isolates_big_change_in_alt_AF)
```
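As an aside, the pair-by-pair loop above can usually be expressed without explicit Python iteration by slicing the even and odd rows. A minimal sketch on a toy frame (the column names mirror the real ones, but the values are made up):

```python
import pandas as pd

# Toy stand-in for the paired Base Call DataFrame: rows 2k and 2k+1 are a pair.
# 'Z' marks a Base Call that supported the Reference Base (no Alternate Allele).
df = pd.DataFrame({
    'ref_position': [10, 10, 55, 55, 99, 99],
    'alt_base':     ['A', 'Z', 'C', 'C', 'G', 'T'],  # last pair disagrees
})

a = df.iloc[0::2].reset_index(drop=True)  # isolate A calls
b = df.iloc[1::2].reset_index(drop=True)  # isolate B calls

# a pair is comparable if either call is 'Z', or both Alternate Alleles match
comparable = (a.alt_base == 'Z') | (b.alt_base == 'Z') | (a.alt_base == b.alt_base)

# expand the per-pair mask back to per-row, and keep only comparable pairs
keep_rows = comparable.repeat(2).reset_index(drop=True)
filtered = df[keep_rows.values].reset_index(drop=True)
```

After `reset_index`, pandas aligns the two slices positionally, so each comparison becomes one vectorized operation over all pairs at once.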
### Drop any glpK mutants
```
Base_Call_variants_btwn_isolates_big_change_in_alt_AF[Base_Call_variants_btwn_isolates_big_change_in_alt_AF.gene_symbol == 'glpK']
#Drop glpK mutants
Base_Call_variants_btwn_isolates_big_change_in_alt_AF = Base_Call_variants_btwn_isolates_big_change_in_alt_AF[Base_Call_variants_btwn_isolates_big_change_in_alt_AF.gene_symbol != 'glpK']
#reset index
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.reset_index(inplace = True, drop = True)
np.shape(Base_Call_variants_btwn_isolates_big_change_in_alt_AF)
```
### Check for SNPs that occurred in regions where two CDS overlap
SNPs in these regions will be treated as two SNPs for downstream analysis; we manually adjust the DataFrame of Base Calls accordingly
```
SNP_overlapping_CDS_region_filter = [type(gene_id)==list for gene_id in Base_Call_variants_btwn_isolates_big_change_in_alt_AF.gene_id]
Base_Call_variants_btwn_isolates_big_change_in_alt_AF[SNP_overlapping_CDS_region_filter]
Base_Call_226_INFO = Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[226 , 'INFO']
Base_Call_227_INFO = Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[227 , 'INFO']
#drop each row and replace with two additional rows, one for each CDS region
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.drop([226,227] , axis = 0 , inplace = True)
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[500 , :] = ['G' , 'A' , 223690 , 2045 , 'Alt_PASS' , [] , Base_Call_226_INFO , 0.99 , 59 , 'ERR181875' , 'GUERRA' , 'KPS_68' , 'Rv0192' , 127 , 'Non-Essential' , 'nan' , 'N' , 'G43R']
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[501 , :] = ['G' , 'Z' , 223690 , 1395 , 'Ref_PASS' , [] , Base_Call_227_INFO , 0.00 , 38 , 'ERR176472' , 'GUERRA' , 'KPS_68' , 'Rv0192' , 127 , 'Non-Essential' , 'nan' , 'None' , 'None']
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[502 , :] = ['G' , 'A' , 223690 , 2045 , 'Alt_PASS' , [] , Base_Call_226_INFO , 0.99 , 59 , 'ERR181875' , 'GUERRA' , 'KPS_68' , 'Rv0192A' , 84 , 'Antigen' , 'nan' , 'S' , 'L28L']
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[503 , :] = ['G' , 'Z' , 223690 , 1395 , 'Ref_PASS' , [] , Base_Call_227_INFO , 0.00 , 38 , 'ERR176472' , 'GUERRA' , 'KPS_68' , 'Rv0192A' , 84 , 'Antigen' , 'nan' , 'None' , 'None']
#reset index
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.reset_index(inplace = True , drop = True)
np.shape(Base_Call_variants_btwn_isolates_big_change_in_alt_AF)
```
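The hand-rolled row duplication above is specific to these two Base Calls. In pandas ≥ 0.25, `DataFrame.explode` offers a more general pattern for list-valued `gene_id` cells, emitting one row per CDS. A hedged sketch with toy values (per-CDS fields such as `gene_coord`, `SNP_ftype` and `AA_change` would still need to be recomputed for each new row):

```python
import pandas as pd

# toy frame: the first SNP overlaps two CDS regions, so gene_id holds a list
df = pd.DataFrame({
    'ref_position': [223690, 500000],
    'gene_id': [['Rv0192', 'Rv0192A'], 'Rv1234'],
})

# explode() emits one row per list element, repeating all other columns
expanded = df.explode('gene_id').reset_index(drop=True)
```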
### Pickle DataFrame for Downstream analyses (Alternate Allele Frequency 1 vs. Alternate Allele Frequency 2)
```
Base_Call_variants_btwn_isolates_big_change_in_alt_AF.to_pickle('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/pickled_files/variant_calling/longitudinal_SNPs/longitudinal_SNP_variants_70_delta_in_alt_AF.pkl')
```
### Re-Shape Filtered DataFrame (Paired Base Calls across all isolate pairs) to store one entry per SNP
```
SNP_variants_between_paired_isolates = pd.DataFrame()
#information common to both Base Calls (we can just look at isolate A)
population_dict = {}
patient_id_dict = {}
ref_position_dict = {}
ref_allele_dict = {}
gene_id_dict = {}
genomic_coord_dict = {}
gene_category_dict = {}
gene_symbol_dict = {}
#look at info for both Base Calls
alt_allele_dict = {}
alt_AF_diff_dict = {}
SNP_type_dict = {}
AA_change_dict = {}
SNP_index = 0
#iterate through indices for isolate A (store the information common to the isolate A & B Base Calls), calculate the difference in Alternate Allele Frequencies, and store Syn/NSyn info
for even_index in range(0 , np.shape(Base_Call_variants_btwn_isolates_big_change_in_alt_AF)[0] , 2):
#Base Call info for isolate A
Base_Call_info_isolate_A = Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[even_index , :]
#Base Call info for isolate B
Base_Call_info_isolate_B = Base_Call_variants_btwn_isolates_big_change_in_alt_AF.loc[even_index+1 , :]
population_dict[SNP_index] = Base_Call_info_isolate_A.population
patient_id_dict[SNP_index] = Base_Call_info_isolate_A.patient_id
ref_position_dict[SNP_index] = Base_Call_info_isolate_A.ref_position
ref_allele_dict[SNP_index] = Base_Call_info_isolate_A.ref_base
gene_id_dict[SNP_index] = Base_Call_info_isolate_A.gene_id
genomic_coord_dict[SNP_index] = Base_Call_info_isolate_A.gene_coord
gene_category_dict[SNP_index] = Base_Call_info_isolate_A.gene_category
gene_symbol_dict[SNP_index] = Base_Call_info_isolate_A.gene_symbol
#get alternate allele
alt_allele_calls = [Base_Call_info_isolate_A.alt_base , Base_Call_info_isolate_B.alt_base]
try:
alt_allele_calls.remove('Z')
except ValueError:
pass
alt_allele_dict[SNP_index] = alt_allele_calls[0]
#get difference in Alternate Allele Frequencies
alt_AF_diff_dict[SNP_index] = abs(Base_Call_info_isolate_A.alt_AF - Base_Call_info_isolate_B.alt_AF)
#get type of SNP
if 'S' in [Base_Call_info_isolate_A.SNP_ftype , Base_Call_info_isolate_B.SNP_ftype]:
SNP_type_dict[SNP_index] = 'S'
elif 'N' in [Base_Call_info_isolate_A.SNP_ftype , Base_Call_info_isolate_B.SNP_ftype]:
SNP_type_dict[SNP_index] = 'N'
elif 'I' in [Base_Call_info_isolate_A.SNP_ftype , Base_Call_info_isolate_B.SNP_ftype]:
SNP_type_dict[SNP_index] = 'I'
#get AA change
AA_change_calls = [Base_Call_info_isolate_A.AA_change , Base_Call_info_isolate_B.AA_change]
try:
AA_change_calls.remove('None')
except ValueError:
pass
AA_change_dict[SNP_index] = AA_change_calls[0]
SNP_index += 1
#convert dictionaries into series
population = pd.Series(population_dict)
patient_id = pd.Series(patient_id_dict)
ref_position = pd.Series(ref_position_dict)
ref_allele = pd.Series(ref_allele_dict)
alt_allele = pd.Series(alt_allele_dict)
gene_id = pd.Series(gene_id_dict)
genomic_coord = pd.Series(genomic_coord_dict)
gene_category = pd.Series(gene_category_dict)
gene_symbol = pd.Series(gene_symbol_dict)
alt_AF_diff = pd.Series(alt_AF_diff_dict)
SNP_type = pd.Series(SNP_type_dict)
AA_change = pd.Series(AA_change_dict)
#create DataFrame
SNP_variants_between_paired_isolates['population'] = population
SNP_variants_between_paired_isolates['patient_id'] = patient_id
SNP_variants_between_paired_isolates['ref_position'] = ref_position
SNP_variants_between_paired_isolates['ref_position'] = SNP_variants_between_paired_isolates.ref_position.astype(int) #ensure ref position is a column of integers
SNP_variants_between_paired_isolates['ref_allele'] = ref_allele
SNP_variants_between_paired_isolates['alt_allele'] = alt_allele
SNP_variants_between_paired_isolates['gene_id'] = gene_id
SNP_variants_between_paired_isolates['genomic_coord'] = genomic_coord
SNP_variants_between_paired_isolates['gene_category'] = gene_category
SNP_variants_between_paired_isolates['gene_symbol'] = gene_symbol
SNP_variants_between_paired_isolates['alt_AF_diff'] = alt_AF_diff
SNP_variants_between_paired_isolates['SNP_type'] = SNP_type
SNP_variants_between_paired_isolates['AA_change'] = AA_change
SNP_variants_between_paired_isolates.head()
np.shape(SNP_variants_between_paired_isolates)
```
#### Save DataFrame as CSV
```
SNP_variants_between_paired_isolates.to_csv('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/CSV_files/variant_calling/longitudinal_SNPs/SNPs_between_isolates_delta_70.csv' , sep = ',')
```
#### Pickle DataFrame for Downstream analyses
```
SNP_variants_between_paired_isolates.to_pickle('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/pickled_files/variant_calling/longitudinal_SNPs/SNPs_between_isolates_delta_70.pkl')
```
### Store non-redundant Filtered SNPs for use in SIMULATIONS (only simulate SNPs occurring within genes)
```
SNP_variants_between_paired_isolates.head()
#Drop SNPs occurring in intergenic regions
non_redundant_SNP_variants_between_paired_isolates = SNP_variants_between_paired_isolates[SNP_variants_between_paired_isolates.SNP_type != 'I']
#reset index after dropping rows
non_redundant_SNP_variants_between_paired_isolates.reset_index(inplace = True , drop = True)
#Find any Duplicate Base Calls ( Reference Base , Alternate Base , Reference Position & Gene ID identify a unique SNP )
duplicated_Base_Calls_filter = list( non_redundant_SNP_variants_between_paired_isolates.duplicated(subset = ['ref_allele' , 'alt_allele' , 'ref_position' , 'gene_id'] , keep = 'first') )
non_duplicate_Base_Calls_filter = [not duplicated for duplicated in duplicated_Base_Calls_filter]
#Drop any duplicate Base Calls
non_redundant_SNP_variants_between_paired_isolates = non_redundant_SNP_variants_between_paired_isolates[non_duplicate_Base_Calls_filter]
#reset index after dropping rows
non_redundant_SNP_variants_between_paired_isolates.reset_index(inplace = True , drop = True)
#ensure ref position is a column of integers
non_redundant_SNP_variants_between_paired_isolates['ref_position'] = non_redundant_SNP_variants_between_paired_isolates.ref_position.astype(int)
#Drop unnecessary columns
non_redundant_SNP_variants_between_paired_isolates.drop(['population' , 'patient_id' , 'alt_AF_diff'] , axis = 1 , inplace = True)
np.shape(non_redundant_SNP_variants_between_paired_isolates) #all SNPs are 'non-redundant'
non_redundant_SNP_variants_between_paired_isolates.head(n=3)
```
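A note on the `duplicated(..., keep='first')` call used above: it marks every repeat *after* the first occurrence as `True`, so its boolean complement keeps exactly one representative per unique key. A small sketch with made-up values:

```python
import pandas as pd

df = pd.DataFrame({
    'ref_position': [100, 100, 250],
    'alt_allele':   ['A', 'A', 'T'],
})

# True for every row whose key columns repeat an earlier row
dup_mask = df.duplicated(subset=['ref_position', 'alt_allele'], keep='first')

# negating the mask keeps the first occurrence of each unique SNP
unique_snps = df[~dup_mask].reset_index(drop=True)
```

The same result could be obtained in one step with `df.drop_duplicates(subset=..., keep='first')`.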
#### Store as a Pickled DataFrame
```
non_redundant_SNP_variants_between_paired_isolates.to_pickle('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/pickled_files/variant_calling/longitudinal_SNPs/longitudinal_SNPs_to_simulate.pkl')
```
#### Store as a CSV file
```
non_redundant_SNP_variants_between_paired_isolates.to_csv('/n/data1/hms/dbmi/farhat/Roger/inhost_TB_dynamics_project/CSV_files/variant_calling/longitudinal_SNPs/longitudinal_SNPs_to_simulate.csv')
```
# Median Combination Pedagogy
### Week of 3.29.2018
```
# python 2/3 compatibility
from __future__ import print_function
# numerical python
import numpy as np
# file management tools
import glob
import os
# good module for timing tests
import time
# plotting stuff
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib as mpl
cmap = mpl.cm.viridis
%matplotlib inline
# ability to read/write fits files
from astropy.io import fits
# fancy image combination technique
from astropy.stats import sigma_clipping
# median absolute deviation: for photometry
#from astropy.stats import mad_std
# photometric utilities
#from photutils import DAOStarFinder,aperture_photometry, CircularAperture, CircularAnnulus
# import a library for generating pseudo random numbers
# [there is great computer science theory on 'random' numbers, check it out!]
from numpy import random
# make the definitions for reading in images accessible in this notebook.
from HDI_io import *
# load up shifting methods
from shift_methods import *
```
### Part 1: Distribution Basics
Topics:
1. Median selection
2. Small number statistics
We'll start by imagining these in one dimension.
We'll draw 10,000 samples from a normal distribution (an approximation of the background noise) and visualize them.
```
# set the parameters for the normal distribution
mean = 0.5
sigma = 0.1
# we'll use random.normal(mean, sigma, [size]), which draws samples
# from a Gaussian with the given mean and standard deviation
nsamples = 10000
# draw the values and 'bin' to 0.01 increments
randvals = np.round(random.normal(mean,sigma,nsamples),2)
# here's a cute method to make a pseudo histogram because of rounding
rand_y_vals = np.zeros_like(randvals)
for x in range(1,len(randvals)):
rand_y_vals[x] = len(np.where(randvals[0:x] == randvals[x])[0])
# visualize
plt.figure(figsize=(5,3))
# render the points, normalizing to 1
plt.scatter(randvals,rand_y_vals/nsamples,color='black',s=1.)
# draw the median line and the theoretical median
plt.plot([np.median(randvals),np.median(randvals)],[0.,1.2*np.max(rand_y_vals/nsamples)],color='gray',lw=1.)
plt.plot([mean,mean],[0.,1.2*np.max(rand_y_vals/nsamples)],color='red',lw=1.)
plt.xlim(mean-5*sigma,mean+5*sigma)
plt.title('Nsamples={}'.format(nsamples),size=24)
plt.xlabel('Pixel Value',size=24)
plt.ylabel('# of samples',size=24)
```
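As a side note, the counting loop used to build `rand_y_vals` approximates the histogram one sample at a time; `np.unique` with `return_counts=True` produces the exact counts in a single call. A small sketch (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility
vals = np.round(rng.normal(0.5, 0.1, 10000), 2)

# one count per distinct rounded value: the histogram the loop approximates
uniq, counts = np.unique(vals, return_counts=True)

# normalized frequencies, ready for plt.scatter(uniq, freqs)
freqs = counts / vals.size
```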
Obviously, this is not totally smooth, even at this sample number. What happens if we decrease the number of samples?
```
# identical to above, but with fewer samples
nsamples = 20
randvals = np.round(random.normal(mean,sigma,nsamples),2)
# here's a cute method to make a pseudo histogram
rand_y_vals = np.zeros_like(randvals)
for x in range(1,len(randvals)):
rand_y_vals[x] = len(np.where(randvals[0:x] == randvals[x])[0])
plt.figure(figsize=(5,3))
plt.scatter(randvals,rand_y_vals/nsamples,color='black',s=1.)
# calculated median
plt.plot([np.median(randvals),np.median(randvals)],[0.,1.2*np.max(rand_y_vals/nsamples)],color='gray',lw=1.)
# theoretical median
plt.plot([mean,mean],[0.,1.2*np.max(rand_y_vals/nsamples)],color='red',lw=1.)
plt.xlim(mean-5*sigma,mean+5*sigma)
plt.title('Nsamples={}'.format(nsamples),size=24)
plt.xlabel('Pixel Value',size=24)
plt.ylabel('# of samples',size=24)
```
We can empirically check the convergence of the samples to the true value by drawing samples repeatedly and computing the median.
```
samplenums = np.arange(1,10000,1)
randvals_median = np.zeros(len(samplenums))
for indx,val in enumerate(samplenums):
randvals_median[indx] = np.median(random.normal(mean,sigma,val))
fig = plt.figure(figsize=(5,3))
ax1 = fig.add_axes([0.2,0.6,0.6,0.35])
ax2 = fig.add_axes([0.2,0.2,0.6,0.35])
ax1.plot(np.log10(samplenums),randvals_median,color='black')
ax1.set_ylabel('Median',size=18)
ax1.set_xticklabels(())
ax2.plot(np.log10(samplenums),np.abs(randvals_median-0.5),color='red')
ax2.plot(np.log10(samplenums),0.1/(np.sqrt(samplenums)),color='gray')
ax2.set_xlabel('log # samples',size=18)
ax2.set_ylabel('Median\nError',size=18)
```
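One refinement worth knowing: the gray curve above is $\sigma/\sqrt{N}$, the standard error of the *mean*. For a normal distribution the sample median converges slightly more slowly, with asymptotic standard error $\sigma\sqrt{\pi/2}/\sqrt{N} \approx 1.25\,\sigma/\sqrt{N}$. A quick numerical check (seed and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, n, trials = 0.1, 100, 20000

# empirical spread of the sample median over many repeated draws of size n
medians = np.median(rng.normal(0.5, sigma, size=(trials, n)), axis=1)
empirical_se = medians.std()

# asymptotic prediction for the median of a normal sample
predicted_se = sigma * np.sqrt(np.pi / 2) / np.sqrt(n)
```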
Unfortunately, we are almost always in the very low-sample regime (so we still have plenty of noise). But now we can start to see why deep images, built from many exposures, minimize background noise!
Therefore, we want to do something like this at every point on the sky: take many observations and calculate the median. We do this at each pixel point in some fiducial image, that we have matched many other images to spatially.
But, we can do a little better. Enter sigma clipping.
### Part 2: Sigma Clipping
Let's grab a well-sampled histogram of values, as above:
```
nsamples = 10000
randvals = np.round(random.normal(0.5,0.1,nsamples),2)
# here's a cute method to make a pseudo histogram
rand_y_vals = np.zeros_like(randvals)
for x in range(1,len(randvals)):
rand_y_vals[x] = len(np.where(randvals[0:x] == randvals[x])[0])
plt.figure(figsize=(5,3))
plt.scatter(randvals,rand_y_vals/nsamples,color='black',s=1.)
median_value = np.median(randvals)
standard_deviation_value = np.std(randvals)
plt.plot([median_value,median_value],[0.,1.2*np.max(rand_y_vals/nsamples)],color='red',lw=1.)
plt.plot([median_value-3.*standard_deviation_value,median_value-3.*standard_deviation_value],\
[0.,0.6*np.max(rand_y_vals/nsamples)],color='blue',lw=1.)
plt.plot([median_value+3.*standard_deviation_value,median_value+3.*standard_deviation_value],\
[0.,0.6*np.max(rand_y_vals/nsamples)],color='blue',lw=1.)
# calculated median
plt.plot([median_value,median_value],[0.,1.2*np.max(rand_y_vals/nsamples)],color='gray',lw=1.)
# theoretical median
plt.plot([mean,mean],[0.,1.2*np.max(rand_y_vals/nsamples)],color='red',lw=1.)
plt.xlim(mean-5*sigma,mean+5*sigma)
plt.title('Pixel Distribution\nwith 3$\sigma$ marked',size=24)
plt.xlabel('Pixel Value',size=24)
plt.ylabel('# of samples',size=24)
```
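In its simplest form, sigma clipping is just a mask: flag any sample more than $k\sigma$ from the center and keep the rest. A one-pass numpy sketch (astropy's `sigma_clipping` module, imported at the top, does this iteratively with more options):

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed
vals = rng.normal(0.5, 0.1, 10000)

center, spread = np.median(vals), np.std(vals)

# one clipping pass: keep values within 3 sigma of the median
keep = np.abs(vals - center) < 3.0 * spread
clipped = vals[keep]

# for a clean Gaussian, only ~0.3% of samples should be rejected
fraction_rejected = 1.0 - clipped.size / vals.size
```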
Now we'd eliminate anything to the left and right of the 3$\sigma$ values as outliers.
This doesn't seem very important (and indeed, if we decrease nsamples, we often won't even draw values that far out). However, occasionally a pixel value is drawn from a totally different distribution; that is, another distribution may intrude with some frequency. Let's model it!
```
# draw the distribution as per usual
nsamples = 1000
randvals = np.round(random.normal(0.5,0.1,nsamples),2)
# now, some fraction of the time, a large flux is added
# this is a cosmic ray, with a different
cosmic_ray_mean = 0.9
cosmic_ray_sigma = 0.2
probability_of_cosmic_ray = 0.02
for x in range(0,nsamples):
if random.random() < probability_of_cosmic_ray:
randvals[x] += np.round(random.normal(cosmic_ray_mean,cosmic_ray_sigma),2)
# plot as normal
rand_y_vals = np.zeros_like(randvals)
for x in range(1,len(randvals)):
rand_y_vals[x] = len(np.where(randvals[0:x] == randvals[x])[0])
plt.figure(figsize=(5,3))
plt.scatter(randvals,rand_y_vals/nsamples,color='black',s=1.)
median_value = np.median(randvals)
standard_deviation_value = np.std(randvals)
plt.plot([median_value,median_value],[0.,1.2*np.max(rand_y_vals/nsamples)],color='red',lw=2.)
plt.plot([median_value-3.*standard_deviation_value,median_value-3.*standard_deviation_value],\
[0.,0.6*np.max(rand_y_vals/nsamples)],color='blue',lw=2.)
plt.plot([median_value+3.*standard_deviation_value,median_value+3.*standard_deviation_value],\
[0.,0.6*np.max(rand_y_vals/nsamples)],color='blue',lw=2.)
plt.xlim(0.,2.)
plt.title('Pixel Distribution\nwith cosmic rays',size=24)
plt.xlabel('Pixel Value',size=24)
plt.ylabel('# of samples',size=24)
```
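Iterating that clip until nothing more is rejected is essentially what `astropy.stats.sigma_clip` does. A hedged pure-numpy version, applied to a contaminated draw like the one above (the seed and contamination count are illustrative), shows the standard deviation snapping back toward the true 0.1:

```python
import numpy as np

rng = np.random.default_rng(2)
vals = rng.normal(0.5, 0.1, 1000)
vals[:20] += rng.normal(0.9, 0.2, 20)  # contaminate 2% with "cosmic rays"

def sigma_clip(x, nsigma=3.0, maxiters=5):
    """Iteratively discard points more than nsigma deviations from the median."""
    for _ in range(maxiters):
        center, spread = np.median(x), np.std(x)
        keep = np.abs(x - center) < nsigma * spread
        if keep.all():
            break
        x = x[keep]
    return x

raw_std = np.std(vals)                  # inflated by the contaminating population
clipped_std = np.std(sigma_clip(vals))  # should land back near the true 0.1
```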
Even 2% can be annoying, and we'd want to eliminate them. Note that they also throw the standard deviation off! We'll exploit this principle in a minute to find cosmic rays.
Luckily, it is more obvious if the added flux is large compared to the signal (as is the case for cosmic rays).
### Part 3: Real Data
Let's grab some real data and investigate how this works in practice.
```
data_dir = 'sample_data/'
obj_files = glob.glob(data_dir+'*o00*')
print('Using files:',obj_files)
phdr = fits.getheader(obj_files[0],0)
data = np.zeros([int(phdr['IMNAXIS2']),int(phdr['IMNAXIS1']),len(obj_files)])
for indx,infile in enumerate(obj_files):
data[:,:,indx] = fits.getdata(infile)[0:phdr['IMNAXIS2'],0:phdr['IMNAXIS1']] - \
np.nanmedian(fits.getdata(infile)[0:4150,phdr['IMNAXIS1']:4150])
#
# do the cross correlations and shift the arrays
#
shifted_data = np.zeros([int(phdr['IMNAXIS2']),int(phdr['IMNAXIS1']),len(obj_files)])
# first image is the fiducial to match to
shifted_data[:,:,0] = data[:,:,0]
for indx in range(1,len(obj_files)):
xshift,yshift = cross_image(data[:,:,0],data[:,:,indx],boxsize=3000)
print('Xshift={}, Yshift={}'.format(np.round(xshift,2),np.round(yshift,2)))
# note that these are reversed owing to how HDI stores axes
shifted_data[:,:,indx] = shift_image(data[:,:,indx],xshift,yshift)
# supersede old data:
data = shifted_data
```
Let's take a look at a small 5x5 sample of the images to try and understand what the median is actually doing.
```
fig = plt.figure(figsize=(16,12))
ax0 = fig.add_axes([0.2,0.2,0.25,0.25])
ax1 = fig.add_axes([0.45,0.2,0.25,0.25])
ax2 = fig.add_axes([0.7,0.2,0.25,0.25])
xmin = 400  # lower-left corner of the cutout
xmax = 5    # cutout size: a 5x5 pixel patch (despite the name, this is a width)
# set color minimum, maximum
vmin=np.min(data[xmin:xmin+xmax,xmin:xmin+xmax,:])
vmax=np.max(data[xmin:xmin+xmax,xmin:xmin+xmax,:])
ax0.imshow(data[xmin:xmin+xmax,xmin:xmin+xmax,0],origin='lower',vmin=vmin,vmax=vmax,cmap=cmap,interpolation='none')
ax1.imshow(data[xmin:xmin+xmax,xmin:xmin+xmax,1],origin='lower',vmin=vmin,vmax=vmax,cmap=cmap,interpolation='none')
ax2.imshow(data[xmin:xmin+xmax,xmin:xmin+xmax,2],origin='lower',vmin=vmin,vmax=vmax,cmap=cmap,interpolation='none')
ax0.set_xticks([0,2,4])
ax1.set_xticks([0,2,4])
ax2.set_xticks([0,2,4])
ax0.set_title('Image 0',size=24)
ax1.set_title('Image 1',size=24)
ax2.set_title('Image 2',size=24)
# this loop ordering is convoluted, but must be like this to plot the numbers correctly!
for x in range(xmax-1,-1,-1):
for y in range(0,xmax):
ax0.text(y,x,int(data[xmin+x,xmin+y,0]),size=16,color='white',ha='center')
ax1.text(y,x,int(data[xmin+x,xmin+y,1]),size=16,color='white',ha='center')
ax2.text(y,x,int(data[xmin+x,xmin+y,2]),size=16,color='white',ha='center')
fig = plt.figure(figsize=(5,5))
ax0 = fig.add_axes([0.2,0.2,0.6,0.6])
median_data = np.median(data[xmin:xmin+xmax,xmin:xmin+xmax,:],axis=2)
ax0.imshow(median_data,origin='lower',vmin=vmin,vmax=vmax,cmap=cmap,interpolation='none')
ax0.set_xticks([0,2,4])
ax0.set_yticks([0,2,4])
ax0.set_title('Median Image',size=24)
# this loop ordering is convoluted, but must be like this to plot the numbers correctly!
for x in range(xmax-1,-1,-1):
for y in range(0,xmax):
ax0.text(y,x,int(median_data[x,y]),size=16,color='white',ha='center')
```
This is much closer to a uniform field!
We can go one step further and make a two-dimensional sigma map for each image.
```
# make a two-dimensional median and standard deviation map
median_field = np.median(data,axis=2)
std_field = np.std(data,axis=2)
# make a map of the sigma values for each pixel
data_sigma = np.zeros_like(data)
for indx in range(0,len(obj_files)):
data_sigma[:,:,indx] = (data[:,:,indx] - median_field)/std_field
# blank out bad values
data_sigma[:,:,indx][~np.isfinite(data_sigma[:,:,indx])] = 0.
fig = plt.figure(figsize=(16,12))
ax0 = fig.add_axes([0.2,0.2,0.25,0.25])
ax1 = fig.add_axes([0.45,0.2,0.25,0.25])
ax2 = fig.add_axes([0.7,0.2,0.25,0.25])
vmin=np.min(data_sigma[xmin:xmin+xmax,xmin:xmin+xmax,:])
vmax=np.max(data_sigma[xmin:xmin+xmax,xmin:xmin+xmax,:])
ax0.imshow(data_sigma[xmin:xmin+xmax,xmin:xmin+xmax,0],vmin=vmin,vmax=vmax,cmap=cmap,interpolation='none')
ax1.imshow(data_sigma[xmin:xmin+xmax,xmin:xmin+xmax,1],vmin=vmin,vmax=vmax,cmap=cmap,interpolation='none')
ax2.imshow(data_sigma[xmin:xmin+xmax,xmin:xmin+xmax,2],vmin=vmin,vmax=vmax,cmap=cmap,interpolation='none')
ax0.set_xticks([0,2,4])
ax1.set_xticks([0,2,4])
ax2.set_xticks([0,2,4])
ax0.set_title('$\sigma$ Map 0',size=24)
ax1.set_title('$\sigma$ Map 1',size=24)
ax2.set_title('$\sigma$ Map 2',size=24)
for x in range(0,xmax):
for y in range(0,xmax):
ax0.text(y,x,np.round(data_sigma[xmin+x,xmin+y,0],1),size=16,color='white',ha='center')
ax1.text(y,x,np.round(data_sigma[xmin+x,xmin+y,1],1),size=16,color='white',ha='center')
ax2.text(y,x,np.round(data_sigma[xmin+x,xmin+y,2],1),size=16,color='white',ha='center')
```
The pixels with '0.0' will be the accepted pixels. Why?
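One way to see why: for an odd number of exposures the median pixel *is* one of the samples, so its deviation from `median_field` is exactly zero. The sigma map can then drive a clipped combine, masking deviant samples before averaging. A small synthetic sketch using numpy masked arrays (the 1.5$\sigma$ threshold and planted cosmic ray are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# tiny synthetic stack: 4x4 pixels by 5 exposures of flat sky at ~100 counts
stack = rng.normal(100.0, 5.0, size=(4, 4, 5))
stack[1, 2, 3] += 500.0  # plant one cosmic-ray hit in exposure 3

median_field = np.median(stack, axis=2, keepdims=True)
std_field = np.std(stack, axis=2, keepdims=True)
sigma_map = (stack - median_field) / std_field

# mask deviant samples, then mean-combine the survivors at each pixel
clipped = np.ma.masked_where(np.abs(sigma_map) > 1.5, stack)
combined = clipped.mean(axis=2)
```

At the contaminated pixel, the plain mean would sit far above the sky level, while the clipped combine stays near 100 counts.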
Hey, did you know that you can plot in 3d?
```
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(16,4))
xmin = 400
# let's zoom out a little
xmax = 50
median_data = np.median(data[xmin:xmin+xmax,xmin:xmin+xmax,:],axis=2)
xvals = np.arange(0,xmax,1)
X,Y = np.meshgrid(xvals,xvals)
vmin=np.min(data[xmin:xmin+xmax,xmin:xmin+xmax,:])
vmax=np.max(data[xmin:xmin+xmax,xmin:xmin+xmax,:])
ax0 = fig.add_subplot(1,4,1)
ax0.imshow(data[xmin:xmin+xmax,xmin:xmin+xmax,0],vmin=vmin,vmax=vmax,origin='lower',cmap=cmap,interpolation='none')
ax0.set_title('Noisy Image',size=24)
ax0 = fig.add_subplot(1,4,2, projection='3d')
ax0.scatter(X,Y,data[xmin:xmin+xmax,xmin:xmin+xmax,0],c=data[xmin:xmin+xmax,xmin:xmin+xmax,1],cmap=cmap,edgecolor='none',vmin=vmin,vmax=vmax)
ax0.scatter(X,Y,data[xmin:xmin+xmax,xmin:xmin+xmax,1],c=data[xmin:xmin+xmax,xmin:xmin+xmax,1], cmap=cmap,edgecolor='none',vmin=vmin,vmax=vmax)
ax0.scatter(X,Y,data[xmin:xmin+xmax,xmin:xmin+xmax,2],c=data[xmin:xmin+xmax,xmin:xmin+xmax,2], cmap=cmap,edgecolor='none',vmin=vmin,vmax=vmax)
ax0.set_title('Data Scatter',size=24)
# set the viewing angle
ax0.view_init(22, -135)
ax1 = fig.add_subplot(1,4,3, projection='3d')
ax1.scatter(X,Y,median_data,c=median_data,cmap=cmap,edgecolor='none',vmin=vmin,vmax=vmax)
ax1.view_init(22, -135)
ax1.set_title('Median Scatter',size=24)
ax2 = fig.add_subplot(1,4,4)
ax2.imshow(median_data,vmin=vmin,vmax=vmax,origin='lower',cmap=cmap,interpolation='none')
_ = ax2.set_title('Medianed Image',size=24)
```
Does sigma clipping actually work?
Let's go back to the standard deviation map and spot check some of the large values to see what exactly we are getting rid of by taking the median:
```
w = np.where(std_field > 10000.)
# spot check 10
for indx in range(100,110):
pix_vals = np.round(data[w[0][indx],w[1][indx]],0)
print(pix_vals[pix_vals.argsort()])
# spot check 10: convert to sigmas
for indx in range(100,110):
pix_vals = data[w[0][indx],w[1][indx]]
sigma_pix = np.std(pix_vals)
median_pix = np.median(pix_vals)
sorted_pix = pix_vals[pix_vals.argsort()]
print(np.round(((sorted_pix-median_pix)/sigma_pix),2))
```
<a href="https://colab.research.google.com/github/ybex/CustomModules/blob/master/amld2022_forecasting_meta_learning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# AMLD 2022 Forecasting & Meta Learning Workshop
In this notebook, we will get our hands dirty with Darts. We will do the following things:
* **Part 1:** Forecasting passenger counts series for 300 airlines (`air` dataset). We will train one model per series.
* **Part 2:** Using "global" models - i.e., models trained on all 300 series simultaneously. Here we split every time series into a training part and a test part.
* **Part 3:** We will try some *meta learning*, and see what happens if we train global models on one (big) dataset (the `m4` dataset) and use them on another. Compared to Part 2, `m4` serves as the training set here.
* **Part 4:** We will reuse our pre-trained model(s) of Part 3 on another new dataset (`m3` dataset) and see how it compares to models specifically trained on this dataset.
## Part 0: Setup (No code to write - execute only)
First, we need to install the right libraries and make the right imports. For the deep learning models, it will help to use a GPU runtime. To get a GPU instance, click on the "RAM/Disk" info bars on the upper right, select "Change runtime type" and choose a GPU as hardware accelerator. The following command will show you the GPU available (if any). If there's no GPU available, you can still go ahead and work on CPU.
```
# You can run this command to see if there's a GPU:
!nvidia-smi
!pip install darts &> /dev/null
!pip install pyyaml==5.4.1 &> /dev/null
!pip install xlrd==2.0.1 &> /dev/null
!pip install matplotlib==3.1.3 &> /dev/null
```
Don't be afraid, we will uncover what these imports mean through the workshop :)
```
# filter out unnecessary warnings during import
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import os
import pickle
import random
import time
from datetime import datetime
from typing import Dict, List, Tuple
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import seaborn as sns
import torch
import tqdm.notebook as tq
from pytorch_lightning.callbacks import Callback, EarlyStopping
from sklearn.linear_model import Ridge
from sklearn.preprocessing import MaxAbsScaler, MinMaxScaler
from torch import nn
from darts import TimeSeries
from darts.dataprocessing.transformers import Scaler
from darts.metrics import mape, mase, smape
from darts.models import *
from darts.utils.data import HorizonBasedDataset
from darts.utils.losses import SmapeLoss
from darts.utils.utils import ModelMode, SeasonalityMode, TrendMode
```
We define the forecast horizon here - for all of the (monthly) time series used in this notebook, we'll be interested in forecasting 18 months in advance. We pick 18 months as this is what is used in the M3/M4 competitions for monthly series.
```
HORIZON = 18
```
### Datasets loading methods
Here, we define some helper methods to load the three datasets we'll be playing with: `air`, `m3` and `m4`.
First, we download the datasets (Note: we processed some of the datasets as pickle files for simplicity and speed):
```
# Execute this cell once to download all three datasets
!curl -L https://github.com/unit8co/amld2022-forecasting-and-metalearning/blob/main/data/m3_dataset.xls\?raw\=true -o m3_dataset.xls
!curl -L https://github.com/unit8co/amld2022-forecasting-and-metalearning/blob/main/data/passengers.pkl\?raw\=true -o passengers.pkl
!curl -L https://github.com/unit8co/amld2022-forecasting-and-metalearning/blob/main/data/m4_monthly_scaled.pkl\?raw\=true -o m4_monthly_scaled.pkl
```
All the methods below return two lists of `TimeSeries`: one list of training series and one list of "test" series (of length `HORIZON`).
For convenience, all the series are already scaled here, by multiplying each of them by a constant so that the largest value is 1. Such scaling is necessary for many models to work correctly (esp. deep learning models). It does not affect the sMAPE values, so we can evaluate the accuracy of our algorithms on the scaled series. In a real application, we would have to keep the Darts `Scaler` objects somewhere in order to inverse-scale the forecasts.
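Max-abs scaling itself is a one-liner: divide each series by its own largest absolute value, and keep that factor around to invert forecasts later (this is what Darts' `Scaler` wraps). A hedged numpy sketch with made-up passenger counts:

```python
import numpy as np

series = np.array([120.0, 150.0, 300.0, 240.0])  # toy passenger counts

scale = np.abs(series).max()  # per-series scaling factor
scaled = series / scale       # largest value becomes exactly 1.0

# a model's output in scaled units maps back via the same factor
forecast_scaled = np.array([0.8, 0.9])
forecast = forecast_scaled * scale  # -> [240., 270.]
```

Because sMAPE is a ratio of errors to magnitudes, multiplying both forecast and actuals by the same positive constant leaves it unchanged, which is why we can evaluate directly on the scaled series.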
```
def load_m3() -> Tuple[List[TimeSeries], List[TimeSeries]]:
    print('building M3 TimeSeries...')

    # Read DataFrame
    df_m3 = pd.read_excel('m3_dataset.xls', 'M3Month')

    # Build TimeSeries
    m3_series = []
    for row in tq.tqdm(df_m3.iterrows(), position=0, leave=True):
        s = row[1]
        start_year = int(s['Starting Year'])
        start_month = int(s['Starting Month'])
        values_series = s[6:].dropna()
        if start_month == 0:
            continue
        start_date = datetime(year=start_year, month=start_month, day=1)
        time_axis = pd.date_range(start_date, periods=len(values_series), freq='M')
        series = TimeSeries.from_times_and_values(time_axis, values_series.values).astype(np.float32)
        m3_series.append(series)
    print('\nThere are {} monthly series in the M3 dataset'.format(len(m3_series)))

    # Split train/test
    print('splitting train/test...')
    m3_train = [s[:-HORIZON] for s in m3_series]
    m3_test = [s[-HORIZON:] for s in m3_series]

    # Scale so that the largest value is 1
    print('scaling...')
    scaler_m3 = Scaler(scaler=MaxAbsScaler())
    m3_train_scaled: List[TimeSeries] = scaler_m3.fit_transform(m3_train)
    m3_test_scaled: List[TimeSeries] = scaler_m3.transform(m3_test)
    print('done. There are {} series, with average training length {}'.format(
        len(m3_train_scaled), np.mean([len(s) for s in m3_train_scaled])
    ))
    return m3_train_scaled, m3_test_scaled


def load_air() -> Tuple[List[TimeSeries], List[TimeSeries]]:
    # Load TimeSeries
    print('loading air TimeSeries...')
    with open('passengers.pkl', 'rb') as f:
        all_air_series = pickle.load(f)

    # Split train/test
    print('splitting train/test...')
    air_train = [s[:-HORIZON] for s in all_air_series]
    air_test = [s[-HORIZON:] for s in all_air_series]

    # Scale so that the largest value is 1
    print('scaling series...')
    scaler_air = Scaler(scaler=MaxAbsScaler())
    air_train_scaled: List[TimeSeries] = scaler_air.fit_transform(air_train)
    air_test_scaled: List[TimeSeries] = scaler_air.transform(air_test)
    print('done. There are {} series, with average training length {}'.format(
        len(air_train_scaled), np.mean([len(s) for s in air_train_scaled])
    ))
    return air_train_scaled, air_test_scaled


def load_m4() -> Tuple[List[TimeSeries], List[TimeSeries]]:
    # Load TimeSeries - the splitting and scaling have already been done
    print('loading M4 TimeSeries...')
    with open('m4_monthly_scaled.pkl', 'rb') as f:
        m4_series = pickle.load(f)
    m4_train_scaled, m4_test_scaled = zip(*m4_series)
    print('done. There are {} series, with average training length {}'.format(
        len(m4_train_scaled), np.mean([len(s) for s in m4_train_scaled])
    ))
    return m4_train_scaled, m4_test_scaled
```
Finally, we define a handy function to tell us how good a bunch of forecasted series are:
```
def eval_forecasts(pred_series: List[TimeSeries],
                   test_series: List[TimeSeries]) -> List[float]:
    print('computing sMAPEs...')
    smapes = smape(test_series, pred_series)
    mean, std = np.mean(smapes), np.std(smapes)
    print('Avg sMAPE: %.3f +- %.3f' % (mean, std))
    plt.figure(figsize=(4, 4), dpi=144)
    plt.hist(smapes, bins=50)
    plt.ylabel('Count')
    plt.xlabel('sMAPE')
    plt.show()
    plt.close()
    return smapes
```
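For reference, the sMAPE computed by `darts.metrics.smape` follows the symmetric-MAPE convention (percentages between 0 and 200); a hand-rolled numpy sketch of the same formula:

```
import numpy as np

def smape_manual(actual, forecast):
    # Symmetric MAPE: 200 * mean(|y - yhat| / (|y| + |yhat|))
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 200.0 * np.mean(np.abs(actual - forecast) / (np.abs(actual) + np.abs(forecast)))

print(smape_manual([1, 2, 3], [1, 2, 3]))  # 0.0 for a perfect forecast
print(smape_manual([1], [3]))              # 100.0
```

Note that the metric is scale-invariant: rescaling both series by the same constant leaves it unchanged, which is why we can evaluate on the scaled series.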
## Part 1: Local models on the `air` dataset
### Preparing Data
The `air` dataset shows the number of air passengers that flew in or out of the USA per carrier (or airline company) from the year 2000 until 2019.
**Your turn:** First, load the train and test series by calling the `load_air()` function that we defined above.
```
air_train, air_test = ...
```
It's a good idea to start by visualising a few of the series to get a sense of what they look like. We can plot a series by calling `series.plot()`.
```
for i in [1, 20, 50, 100, 250]:  # Feel free to plot a few other series
    plt.figure(figsize=(4, 4), dpi=144)
    air_train[i].plot()
    plt.ylabel('Passengers')
    plt.xlabel('Time')
    plt.show()
    plt.close()
```
We can see that most series look quite different, and they even have different time axes.
Question: What is the shortest series available?
```
```
### A useful function to evaluate models
Below, we write a small function that will make our life easier when quickly trying and comparing different local models. We loop through each series, fit a model, and then evaluate it on our test dataset.
> ⚠️ Please note `tq.tqdm` is optional and is only there to help display the training progress (as you will see it can take some time when training 300+ time series)
```
def eval_local_model(train_series: List[TimeSeries],
                     test_series: List[TimeSeries],
                     model_cls,
                     **kwargs) -> Tuple[List[float], float]:
    preds = []
    start_time = time.time()
    for series in tq.tqdm(train_series):
        model = model_cls(**kwargs)
        model.fit(series)
        pred = model.predict(n=HORIZON)
        preds.append(pred)
    elapsed_time = time.time() - start_time
    smapes = eval_forecasts(preds, test_series)
    return smapes, elapsed_time
```
### Building and evaluating models
We can now try a first forecasting model on this dataset. As a first step, it is usually good practice to see how a (very) naive model, blindly repeating the last value of the training series, performs. This can be done in Darts using a [NaiveSeasonal](https://unit8co.github.io/darts/generated_api/darts.models.forecasting.baselines.html#darts.models.forecasting.baselines.NaiveSeasonal) model:
```
naive_seasonal_last_smapes, naive_seasonal_last_elapsed_time = eval_local_model(air_train, air_test, NaiveSeasonal, K=1)
```
So the most naive model gives us a sMAPE of about 39.38.
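Under the hood, `NaiveSeasonal` with `K=1` repeats the last observed value, and with a larger `K` it repeats the last seasonal cycle; a rough numpy sketch of the idea:

```
import numpy as np

def naive_seasonal_forecast(series, horizon, K=1):
    # Repeat the last K observed values cyclically for `horizon` steps.
    last_cycle = np.asarray(series)[-K:]
    return np.array([last_cycle[i % K] for i in range(horizon)])

print(naive_seasonal_forecast([1, 2, 3], horizon=2, K=1))    # [3 3]
print(naive_seasonal_forecast([1, 2, 3, 4], horizon=3, K=2)) # [3 4 3]
```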
**Your turn:** Can we do better with a "less naive" model exploiting the fact that most monthly series have a seasonality of 12?
```
```
All of the Darts forecasting models can be trained and used in the same way!
So we invite you to go over the [list of models in the API documentation](https://unit8co.github.io/darts/generated_api/darts.models.forecasting.html) and try a few more models. Here are some suggestions:
* `ExponentialSmoothing`
* `Theta`
* `ARIMA` - but the default parameters probably won't do very well. Using `p=12`, `d=1`, `q=0` might be a good start.
* ... Your ideas here!
We recommend that you keep track of the sMAPEs and elapsed times for each model; later we will use these values to quickly compare models. Some models will take longer than others to run. Don't hesitate to interrupt the execution or run only on a subset of series.
**Your turn:** Try to get the lowest possible errors with some other models of your choice.
```
# model_X_smapes, model_X_elapsed_time = eval_local_model(air_train, air_test, ModelX, **hyper_params_for_model_X)
```
### Comparing models
Below, we define a couple of functions that we will use to obtain an overview of the sMAPEs and the time required to obtain the predictions.
```
def smapes_boxplot(method_to_smapes: Dict[str, List[float]], title: str):
    method_names = []
    smapes = []
    for curr_method_name, curr_smapes in method_to_smapes.items():
        method_names += [curr_method_name] * len(curr_smapes)
        smapes += curr_smapes
    smapes_df = pd.DataFrame({'Method': method_names, 'sMAPE': smapes})
    plt.figure(figsize=(7, 4), dpi=144)
    ax = sns.boxplot(x="Method", y="sMAPE", data=smapes_df)
    ax.grid(False)

    # Display median score on each box
    medians = smapes_df.groupby(['Method'])['sMAPE'].median().round(decimals=2)
    vertical_offset = smapes_df['sMAPE'].median() * 0.1
    for xtick, name in enumerate(method_to_smapes.keys()):
        ax.text(xtick, medians[name] + vertical_offset, medians[name],
                horizontalalignment='center', size='x-small', color='w', weight='semibold')
    plt.xticks(rotation=90)
    plt.title(title)
    plt.show()
    plt.close()


def elapsed_time_barplot(method_to_elapsed_times: Dict[str, float], title: str):
    elapsed_times_df = pd.DataFrame({'Method': method_to_elapsed_times.keys(),
                                     'Elapsed time [s]': method_to_elapsed_times.values()})
    plt.figure(figsize=(7, 4), dpi=144)
    sns.barplot(x="Method", y="Elapsed time [s]", data=elapsed_times_df)
    plt.xticks(rotation=90)
    plt.title(title)
    plt.show()
    plt.close()
```
**Your turn:** We are now ready to visualise our results. Fill in the cells below to call `smapes_boxplot()` and `elapsed_time_barplot()` with the right arguments.
```
smapes = {
'naive-last': naive_seasonal_last_smapes,
# 'model_X': model_X_smapes,
# ...
}
smapes_boxplot(smapes, title='sMAPEs on air passengers dataset')
elapsed_times = {
'naive-last': naive_seasonal_last_elapsed_time,
# 'model_X': model_X_elapsed_time,
# ...
}
elapsed_time_barplot(elapsed_times, title='Predict durations on air passengers dataset')
```
You can also try computing some of the forecasts directly in order to visualise them.
What are your conclusions so far?
What are your best forecasts? Let us know!
## Part 2: Global models on the `air` dataset
In this section we will use "global models" - that is, models that fit on multiple series at once. Darts has essentially two kinds of global models:
* `RegressionModels` which are wrappers around sklearn-like regression models (Part 2.1).
* PyTorch-based models, which offer various deep learning models (Part 2.2).
Both kinds of models can be trained on multiple series by "tabularizing" the data - i.e., taking many (input, output) sub-slices from all the training series, and training a machine learning model in a supervised fashion to predict the output based on the input.
**Your turn:** We will start by defining a function `eval_global_model()` which works similarly to `eval_local_model()`, but on global models. You can complete it below (hint: you will not need the for-loop that was present in `eval_local_model()`).
```
def eval_global_model(train_series: List[TimeSeries],
                      test_series: List[TimeSeries],
                      model_cls,
                      **kwargs) -> Tuple[List[float], float]:
    start_time = time.time()
    model = ...  # build your model here
    ...          # fit your model here
    preds = ...  # get some predictions here
    elapsed_time = time.time() - start_time
    smapes = eval_forecasts(preds, test_series)
    return smapes, elapsed_time
```
### Part 2.1: Using Darts `RegressionModel`s.
`RegressionModel`s in Darts are forecasting models that can wrap any "scikit-learn compatible" regression model to obtain forecasts. Compared to deep learning models, they represent good "go-to" global models because they typically don't have many hyper-parameters and can be faster to train. In addition, Darts also offers some "pre-packaged" regression models such as `LinearRegressionModel` and `LightGBMModel`.
We'll now use our function `eval_global_model()`. In the following cells, you can try using some regression models, for example:
* `LinearRegressionModel`
* `LightGBMModel`
* `RegressionModel`(your_sklearn_model)
You can refer to [the API doc](https://unit8co.github.io/darts/generated_api/darts.models.forecasting.regression_model.html) for how to use them.
Important parameters are `lags` and `output_chunk_length`. They determine respectively the length of the lookback and "lookforward" windows used by the model, and they correspond to the lengths of the input/output subslices used for training. For instance `lags=24` and `output_chunk_length=12` mean that the model will consume the past 24 lags in order to predict the next 12. In our case, because the shortest training series has length 36, we must have `lags + output_chunk_length <= 36`. (Note that `lags` can also be a list of integers representing the individual lags to be consumed by the model instead of the window length).
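The sub-slicing ("tabularization") described above can be sketched as follows - with `lags=24` and `output_chunk_length=12`, a series of length 36 yields exactly one (input, output) training sample:

```
import numpy as np

def tabularize(series, lags, output_chunk_length):
    # Slide a window of size lags + output_chunk_length over the series,
    # splitting each window into an input part and an output part.
    X, y = [], []
    total = lags + output_chunk_length
    for start in range(len(series) - total + 1):
        window = series[start:start + total]
        X.append(window[:lags])
        y.append(window[lags:])
    return np.array(X), np.array(y)

series = np.arange(36)
X, y = tabularize(series, lags=24, output_chunk_length=12)
print(X.shape, y.shape)  # (1, 24) (1, 12)
```

This is only an illustration of the slicing principle; Darts builds these samples internally (and across all training series at once) when you call `fit()`.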
```
# model_X_smapes, model_X_elapsed_time = eval_global_model(air_train, air_test, GlobalModelX, **hyper_params_for_model_X)
```
### Part 2.2: Using deep learning
Below, we will train an N-BEATS model on our `air` dataset. Again, you can refer to [the API doc](https://unit8co.github.io/darts/generated_api/darts.models.forecasting.nbeats.html) for documentation on the hyper-parameters.
The following hyper-parameters should be a good starting point, and training should take in the order of a minute or two.
During training, you can have a look at the [N-BEATS paper](https://arxiv.org/abs/1905.10437).
```
### Possible N-BEATS hyper-parameters
# Slicing hyper-params:
IN_LEN = 24
OUT_LEN = 12
# Architecture hyper-params:
NUM_STACKS = 18
NUM_BLOCKS = 3
NUM_LAYERS = 3
LAYER_WIDTH = 180
COEFFS_DIM = 6
LOSS_FN = SmapeLoss()
# Training settings:
LR = 5e-4
BATCH_SIZE = 1024
NUM_EPOCHS = 4
```
You can now complete the skeleton below to build, train and predict using an N-BEATS model:
```
# reproducibility
np.random.seed(42)
torch.manual_seed(42)
## Use this to specify "optimizer_kwargs" parameter of the N-BEATS model:
optimizer_kwargs={'lr': LR},
## In addition, when using a GPU, you should specify this for
## the "pl_trainer_kwargs" parameter of the N-BEATS model:
pl_trainer_kwargs={"enable_progress_bar": True,
"accelerator": "gpu",
"gpus": -1,
"auto_select_gpus": True}
start_time = time.time()
nbeats_model_air = ... # Build the N-BEATS model here
nbeats_model_air.fit(..., # fill in series to train on
...) # fill in number of epochs
# get predictions
nb_preds = ...
nbeats_smapes = eval_forecasts(nb_preds, air_test)
nbeats_elapsed_time = time.time() - start_time
smapes_2 = {**smapes,
**{
# ... Fill in here sMAPEs values of any global model you tried
'NBeats': nbeats_smapes,
}
}
smapes_boxplot(smapes_2, title='sMAPEs on air')
elapsed_time_2 = {**elapsed_times,
**{
# ... Fill in here duration values of any global model you tried
'NBeats': nbeats_elapsed_time,
}
}
elapsed_time_barplot(elapsed_time_2, title='Durations on air')
```
What are your conclusions so far, and which results did you manage to get? Let us know!
## Part 3: Training an N-BEATS model on the `m4` dataset and using it to forecast the `air` dataset
Deep learning models often do better when trained on *large* datasets. Let's try to load all 48,000 monthly time series in the M4 dataset and train our model once more on this larger dataset.
```
m4_train, m4_test = load_m4()
# filter to keep only those that are long enough
m4_train = [s for s in m4_train if len(s) >= 48]
m4_test = [s for s in m4_test if len(s) >= 48]
print('There are {} series of length >= 48.'.format(len(m4_train)))
```
We can start from the same hyper-parameters as before.
With 48,000 M4 training series being on average ~200 time steps long, we would end up with ~10M training samples. With such a number of training samples, each epoch would take too long. So here, we'll limit the number of training samples used per series. This is done when calling `fit()` with the parameter `max_samples_per_ts`. We add a new hyper-parameter `MAX_SAMPLES_PER_TS` to capture this.
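As a rough sanity check on these numbers (a sketch only - 200 is just the average length, so the true totals differ per series):

```
# Each series of length L yields at most L - (IN_LEN + OUT_LEN) + 1 samples.
n_series, avg_len, in_len, out_len = 48_000, 200, 36, 12

samples_per_series = avg_len - (in_len + out_len) + 1
print(samples_per_series)                     # 153 samples for a 200-step series
print(n_series * samples_per_series)          # ~7.3M samples without any cap
print(n_series * min(samples_per_series, 8))  # 384000 with max_samples_per_ts=8
```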
Since the M4 training series are all >= 48 time steps long, we can also use a longer `input_chunk_length` of 36.
```
# Slicing hyper-params:
IN_LEN = 36
OUT_LEN = 12
# Architecture hyper-params:
NUM_STACKS = 18
NUM_BLOCKS = 3
NUM_LAYERS = 3
LAYER_WIDTH = 180
COEFFS_DIM = 6
# Training settings:
LR = 5e-4
BATCH_SIZE = 1024
MAX_SAMPLES_PER_TS = 8 # <-- new param, limiting nr of training samples per epoch
NUM_EPOCHS = 4
```
You can now build and train the model, as before.
Running this cell with the proposed hyper-parameters should take ~10 minutes on a Colab GPU.
*If this is taking too long, you can also simply run the next cell, which will download and load the same N-BEATS pre-trained on M4 with these hyper-parameters.*
```
# reproducibility
np.random.seed(42)
torch.manual_seed(42)
nbeats_model_m4 = NBEATSModel(..., # fill in hyper-params
# learning rate goes here
optimizer_kwargs={'lr': LR},
# remove this one if your notebook does not have a GPU:
pl_trainer_kwargs={"enable_progress_bar": True,
"accelerator": "gpu",
"gpus": -1,
"auto_select_gpus": True},
)
# Train
nbeats_model_m4.fit(..., # fill in series to train on
..., # fill in number of epochs
...) # fill in max number of samples per time series
```
The cell below will download a pre-trained version of this N-BEATS model - you can run this if training takes too long in your case:
```
## /!\ RUNNING THIS CELL WILL DOWNLOAD AND OVERWRITE THE MODEL nbeats_model_m4
# Load already trained model
!curl -L https://github.com/unit8co/amld2022-forecasting-and-metalearning/blob/main/data/nbeats_model_m4.pth.tar\?raw\=true -o nbeats_model_m4.pth.tar
nbeats_model_m4 = NBEATSModel.load_model('nbeats_model_m4.pth.tar')
```
We can now use our M4-trained model to get forecasts for the air passengers series. As we use the model in a "meta learning" (or transfer learning) way here, we will be timing only the inference part.
```
start_time = time.time()
preds = ... # get forecasts
nbeats_m4_smapes = eval_forecasts(preds, air_test)
nbeats_m4_elapsed_time = time.time() - start_time
```
What are your conclusions?
### Try training other global models on `m4` and applying on airline passengers
You can now try to train other global models on the M4 dataset in order to see if we can get similar results. If that's taking too long, it might be a good idea to take only e.g., 5000 or 10000 time series. You can do this easily by training on, say, `random.choices(m4_train, k=5000)` instead of `m4_train`. You will again need to specify some small enough value for `max_samples_per_ts` in order to limit the number of training samples.
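Note that `random.choices` samples with replacement, so the 5000 picks may contain duplicate series; `random.sample` is the without-replacement alternative. A quick sketch on a toy list:

```
import random

random.seed(0)
pool = list(range(100))

with_repl = random.choices(pool, k=50)    # may repeat elements
without_repl = random.sample(pool, k=50)  # all elements distinct

print(len(with_repl), len(without_repl))  # 50 50
print(len(set(without_repl)))             # 50 (guaranteed distinct)
```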
```
# model_X = GlobalModelX(...)
# model_X.fit(...,
# max_samples_per_ts=...)
start_time = time.time()
# preds = ... # Get predictions
# model_X_smapes = eval_forecasts(preds, air_test) # compute errors
# model_X_elapsed_time = time.time() - start_time # store timing
```
Let's now compare how our different models are doing:
```
smapes_3 = {**smapes_2,
**{
'N-BEATS M4': nbeats_m4_smapes,
# 'model_X': model_X_smapes
}
}
smapes_boxplot(smapes_3, title='sMAPEs on air')
elapsed_time_3 = {**elapsed_time_2,
**{
'NBeats M4': nbeats_m4_elapsed_time,
# 'model_X': model_X_elapsed_time
}
}
elapsed_time_barplot(elapsed_time_3, title='Durations on air')
```
## Part 4: Forecasting the `m3` dataset
Until now, we have seen that we can use some M4-trained models to predict another dataset, namely the `air` dataset.
But can we try to convince ourselves a bit more, and try the same approach on a third dataset?
In this part of the notebook, we propose to consolidate all our learnings so far using the `m3` dataset:
* Try fitting local models directly on `m3`
* Try fitting global ML models directly on `m3` --> how far can you push it?
* Try applying our previous M4-trained model on `m3` --> what are your conclusions?
Hint: The Theta model was one of the best-performing models in the M3 competition.
```
# First, load the actual dataset
m3_train, m3_test = load_m3()
# Then try your models :)
```
# How to search the IOOS CSW catalog with Python tools
This notebook demonstrates how to query a [Catalog Service for the Web (CSW)](https://en.wikipedia.org/wiki/Catalog_Service_for_the_Web), like the IOOS Catalog, and how to parse its results into endpoints that can be used to access the data.
```
import os
import sys
ioos_tools = os.path.join(os.path.pardir)
sys.path.append(ioos_tools)
```
Let's start by creating the search filters.
The filter used here constrains the search to a certain geographical region (bounding box), a time span, and some keywords - in this case, ocean model names.
```
from datetime import datetime, timedelta
import dateutil.parser
service_type = 'WMS'
min_lon, min_lat = -90.0, 30.0
max_lon, max_lat = -80.0, 40.0
bbox = [min_lon, min_lat, max_lon, max_lat]
crs = 'urn:ogc:def:crs:OGC:1.3:CRS84'
# Temporal range: Last week.
now = datetime.utcnow()
start, stop = now - timedelta(days=(7)), now
start = dateutil.parser.parse('2017-03-01T00:00:00Z')
stop = dateutil.parser.parse('2017-04-01T00:00:00Z')
# Ocean Model Names
model_names = ['NAM', 'GFS']
```
With these 3 elements it is possible to assemble a [OGC Filter Encoding (FE)](http://www.opengeospatial.org/standards/filter) using the `owslib.fes`\* module.
\* OWSLib is a Python package for client programming with Open Geospatial Consortium (OGC) web service (hence OWS) interface standards, and their related content models.
```
from owslib import fes
from ioos_tools.ioos import fes_date_filter
kw = dict(wildCard='*', escapeChar='\\',
singleChar='?', propertyname='apiso:AnyText')
or_filt = fes.Or([fes.PropertyIsLike(literal=('*%s*' % val), **kw)
for val in model_names])
kw = dict(wildCard='*', escapeChar='\\',
singleChar='?', propertyname='apiso:ServiceType')
serviceType = fes.PropertyIsLike(literal=('*%s*' % service_type), **kw)
begin, end = fes_date_filter(start, stop)
bbox_crs = fes.BBox(bbox, crs=crs)
filter_list = [
fes.And(
[
bbox_crs, # bounding box
begin, end, # start and end date
or_filt, # or conditions (CF variable names)
serviceType # search only for datasets that have WMS services
]
)
]
from owslib.csw import CatalogueServiceWeb
endpoint = 'https://data.ioos.us/csw'
csw = CatalogueServiceWeb(endpoint, timeout=60)
```
The `csw` object created from `CatalogueServiceWeb` has not fetched anything yet.
It is the `getrecords2` method that performs the search using the filter. However, even though there is a `maxrecords` option, the number of returned records is always capped on the server side, so we need to iterate over multiple calls of `getrecords2` to actually retrieve all records.
The `get_csw_records` function below does exactly that.
```
def get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000):
    """Iterate `maxrecords`/`pagesize` times until the requested value in
    `maxrecords` is reached.
    """
    from owslib.fes import SortBy, SortProperty

    # Iterate over sorted results.
    sortby = SortBy([SortProperty('dc:title', 'ASC')])
    csw_records = {}
    startposition = 0
    nextrecord = getattr(csw, 'results', 1)
    while nextrecord != 0:
        csw.getrecords2(constraints=filter_list, startposition=startposition,
                        maxrecords=pagesize, sortby=sortby)
        csw_records.update(csw.records)
        if csw.results['nextrecord'] == 0:
            break
        startposition += pagesize + 1  # Last one is included.
        if startposition >= maxrecords:
            break
    csw.records.update(csw_records)


get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000)
records = '\n'.join(csw.records.keys())
print('Found {} records.\n'.format(len(csw.records.keys())))
for key, value in list(csw.records.items()):
    print('[{}]\n{}\n'.format(value.title, key))
csw.request
# Write to JSON for use in TerriaJS
csw_request = '"{}": {}"'.format('getRecordsTemplate', str(csw.request, 'utf-8'))

import io
import json

with io.open('query.json', 'a', encoding='utf-8') as f:
    f.write(json.dumps(csw_request, ensure_ascii=False))
    f.write('\n')
```
# Creating an agent
This notebook will go through how to create a new agent within the tomsup framework. In this tutorial we will be making a reversed win-stay, lose-switch agent, i.e. a win-switch, lose-stay agent.
This guide assumes a basic understanding of classes in Python; if you need a recap, we suggest examining this [chapter](http://hplgit.github.io/primer.html/doc/pub/class/._class-readable002.html) in the free ebook A Byte of Python.
Let us first import the package:
```
# Assuming you are in the GitHub repository folder, change the working directory
# - not relevant if tomsup is installed via pip
import os
os.chdir("..") # go back one folder
import tomsup as ts
```
Now let's first take a look at the current win-stay, lose-switch (WSLS) agent:
```
sigmund = ts.WSLS()  # create agent

# inspect sigmund
print(f"sigmund is a class of type: {type(sigmund)}")  # f is for an f-string (formatted string)
if isinstance(sigmund, ts.Agent):
    print("but sigmund also has the parent class ts.Agent")
```
As we can see, sigmund is a WSLS agent with the parent class `ts.Agent`. This gives us some benefits, as WSLS inherits attributes of the parent class, such as the ability to save play history and to reset the agent. To see more of the inherited methods, see `help(ts.WSLS)`.
## Creating a new class
Now let's try to create our own agent one bit at a time (if you are comfortable with classes, simply jump to 'The final reversed WSLS'):
```
import numpy as np
class ReversedWSLS(ts.Agent):  # make sure that the parent class is ts.Agent
    """
    ReversedWSLS: Win-switch, lose-stay.

    This agent is a reversed win-stay, lose-switch agent, which ...
    """
    # add a docstring which explains the agent

    pass  # we will later replace this pass with something else
freud = ReversedWSLS()
print(f"is freud an Agent? {isinstance(freud, ts.Agent)}")
```
### Add initialization
Let's add an initialization to the agent. These are the things which should be set up before the agent starts competing.
```
class ReversedWSLS(ts.Agent):
    """
    ReversedWSLS: Win-switch, lose-stay.

    This agent is a reversed win-stay, lose-switch agent, which ...
    """
    def __init__(self, first_move, **kwargs):  # initialize the agent
        self.strategy = "ReversedWSLS"  # set the strategy name

        # set internal parameters
        self.first_move = first_move

        super().__init__(**kwargs)  # pass additional arguments to the ts.Agent class
                                    # (could e.g. include 'save_history=True')
        self._start_params = {'first_move': first_move, **kwargs}  # save starting parameters used when the agent is reset
freud = ReversedWSLS(first_move = 1)
print(f"what is freud's first move? {freud.first_move}")
print(f"what is freud's an starting parameters? {freud.get_start_params()}")
print(f"what is freud's strategy? {freud.get_strategy()}")
```
In the above, you successfully created freud as an agent whose first move is 1. We also see that functions such as ```get_start_params()``` are inherited from the ts.Agent class by the new agent.
**Note** that we have set ```**kwargs```; this simply means that the function accepts additional named arguments, e.g. ```save_history = True```.
These arguments are passed on to ```super().__init__()```, which initializes the parent class (in this case the ts.Agent class), and to ```_start_params```, the starting parameters. The starting parameters are used when resetting the agent, which is relevant e.g. when setting up a tournament.
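This `**kwargs` + `_start_params` pattern can be illustrated without tomsup; a minimal standalone sketch of the same reset mechanics (the class names here are illustrative, not the tomsup API):

```
class Base:
    def __init__(self, save_history=False):
        self.save_history = save_history

class MyAgent(Base):
    def __init__(self, first_move, **kwargs):
        self.first_move = first_move
        super().__init__(**kwargs)  # forward any extra arguments to the parent
        self._start_params = {'first_move': first_move, **kwargs}

    def reset(self):
        # Re-initialize the agent from the saved starting parameters.
        self.__init__(**self._start_params)

agent = MyAgent(first_move=1, save_history=True)
agent.first_move = 0                         # mutate some state...
agent.reset()                                # ...and restore it
print(agent.first_move, agent.save_history)  # 1 True
```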
#### Add a compete function
All agents naturally need a compete function, so let us add one to the agent:
```
class ReversedWSLS(ts.Agent):
    """
    ReversedWSLS: Win-switch, lose-stay.

    This agent is a reversed win-stay, lose-switch agent, which ...
    """
    def __init__(self, first_move, **kwargs):  # initialize the agent
        self.strategy = "ReversedWSLS"  # set the strategy name

        # set internal parameters
        self.first_move = first_move

        super().__init__(**kwargs)  # pass additional arguments to the ts.Agent class (could e.g. include 'save_history=True')
        self._start_params = {'first_move': first_move, **kwargs}  # save starting parameters used when the agent is reset

    def compete(self, p_matrix, op_choice=None, agent=0):
        """
        Win-switch, lose-stay strategy, with the first move being set when the class is initialized (__init__()).

        p_matrix is a PayoffMatrix
        op_choice is either 1 or 0
        agent is either 0 or 1 and indicates the perspective of the agent in the game (whether it is player 1 or 2)
        """
        if self.choice is None:  # if a choice hasn't been made yet: choose the predefined first move
            self.choice = self.first_move  # fetch from self
        else:  # if a choice has been made:
            payoff = p_matrix.payoff(self.choice, op_choice, agent)  # calculate payoff of last round
            if payoff == 1:  # if the agent won, then switch
                self.choice = 1 - self.choice  # save the choice in self (for next round)
                # also save any other internal states which you might
                # want the agent to keep for next round in self
        self._add_to_history(choice=self.choice)  # save action and (if any) internal states in history
        # note that _add_to_history() is not intended for later use within the agent
        return self.choice  # return choice, which is either 1 or 0
freud = ReversedWSLS(first_move = 1) #create the agent
# fetch payoff matrix for the pennygame
penny = ts.PayoffMatrix(name = "penny_competitive")
print("This is the payoffmatrix for the game (seen from freud's perspective):", penny()[0,:,:], sep = "\n")
# have freud compete
choice = freud.compete(penny)
print(f"what is freud's choice the first round? {choice}")
choice = freud.compete(penny, op_choice = 1)
print(f"what is freud's choice the second round if his opponent chose 1? {choice}")
```
In the above script we added freud's compete function: for the first round it chooses the predefined first move, and for subsequent rounds it uses the win-switch, lose-stay strategy. It then returns either 0 or 1, depending on whether it chooses e.g. the right or left hand in the penny game. It is important that the compete function returns only 0 or 1; otherwise the agent will not function in the context of the package.
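Stripped of the tomsup machinery, the decision rule itself is tiny; a standalone sketch:

```
def reversed_wsls_step(prev_choice, won):
    # Win-switch, lose-stay: flip the choice after a win, keep it after a loss.
    return 1 - prev_choice if won else prev_choice

print(reversed_wsls_step(1, won=True))   # 0: won, so switch
print(reversed_wsls_step(1, won=False))  # 1: lost, so stay
```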
**Note** the ```self._add_to_history(choice = self.choice)```, which indicates which variables we would like to add to the agent's history (assuming ```save_history``` is set to ```True```) - in this case, just the choice.
Finally, when you have the ```__init__()``` and ```compete()``` functions working, you can add any additional functions you might want your agent to have. For example, you will see that we have added ```get_first_move()```, a helper function to extract the first move of the agent.
## The final reversed WSLS
The following is the finalized version of the win-switch, lose-stay agent.
```
import numpy as np

class ReversedWSLS(ts.Agent):
    """
    ReversedWSLS: Win-switch, lose-stay.

    This agent is a reversed win-stay, lose-switch agent, which ...

    Examples:
    >>> waade = ReversedWSLS(first_move = 1)
    >>> waade.compete(op_choice = None, p_matrix = penny)
    1
    """
    def __init__(self, first_move, **kwargs):
        self.strategy = "ReversedWSLS"

        # set internal parameters
        self.first_move = first_move

        super().__init__(**kwargs)  # pass additional arguments to the ts.Agent class (could e.g. include 'save_history=True')
        self._start_params = {'first_move': first_move, **kwargs}  # save starting parameters used when the agent is reset

    def compete(self, p_matrix, op_choice=None):
        if self.choice is None:  # if a choice hasn't been made yet: choose the predefined first move
            self.choice = self.first_move  # fetch from self
        else:  # if a choice has been made:
            payoff = p_matrix.payoff(self.choice, op_choice, 0)  # calculate payoff of last round
            if payoff == 1:  # if the agent won, then switch
                self.choice = 1 - self.choice  # save the choice in self (for next round)
                # also save any other internal states which you might
                # want the agent to keep for next round in self
        self._add_to_history(choice=self.choice)  # save action and (if any) internal states in history
        # note that _add_to_history() is not intended for later use within the agent
        return self.choice  # return choice

    # define any additional function you wish the class should have
    def get_first_move(self):
        return self.first_move
```
## Test your knowledge
1) Create an agent called Random, which simply chooses randomly
2) Check that it is an agent and that the compete function work
3) Have the agent compete against another agent within the package using ```ts.compete()```. Which one wins?
# FAQ
- I have developed an agent which I would like to include in your package
Sounds lovely, we would love to include the agent. Feel free to make a pull request on Github or contact us at kennethcenevoldsen@gmail.com.
<a href="https://colab.research.google.com/github/Muzzamal-Hameed/Deep-Learning-Models/blob/main/tennis_ball_detection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/gdrive')
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
! kaggle datasets download -d domhenjes/ballsemtpytt
! unzip ballsemtpytt.zip
%matplotlib inline
import pandas as pd
import os,shutil,math,scipy,cv2
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from keras.utils.np_utils import to_categorical
from sklearn.metrics import confusion_matrix,roc_curve,auc
from PIL import Image
from PIL import Image as pil_image
from time import time
from PIL import ImageDraw
from glob import glob
from tqdm import tqdm
from skimage.io import imread
from IPython.display import SVG
from scipy import misc,ndimage
from scipy.ndimage.interpolation import zoom
from keras import backend as K
from keras import layers
from keras.preprocessing.image import save_img
from keras.utils.vis_utils import model_to_dot
from keras.applications.vgg16 import VGG16,preprocess_input
from keras.applications.xception import Xception
from keras.applications.nasnet import NASNetMobile
from keras.models import Sequential,Input,Model
from keras.layers import Dense,Flatten,Dropout,Concatenate,GlobalAveragePooling2D,Lambda,ZeroPadding2D
from keras.layers import SeparableConv2D,BatchNormalization,MaxPooling2D,Conv2D
from keras.preprocessing.image import ImageDataGenerator
from keras.utils.vis_utils import plot_model
from keras.callbacks import ModelCheckpoint,EarlyStopping,TensorBoard,CSVLogger,ReduceLROnPlateau,LearningRateScheduler
def show_final_history(history):
fig, ax = plt.subplots(1, 2, figsize=(15,5))
ax[0].set_title('loss')
ax[0].plot(history.epoch, history.history["loss"], label="Train loss")
ax[0].plot(history.epoch, history.history["val_loss"], label="Validation loss")
ax[1].set_title('acc')
ax[1].plot(history.epoch, history.history["acc"], label="Train acc")
ax[1].plot(history.epoch, history.history["val_acc"], label="Validation acc")
ax[0].legend()
ax[1].legend()
augs = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
validation_split=0.3)
train_gen = augs.flow_from_directory(
'/content/emptyballs/train/',
target_size = (150,150),
batch_size=8,
class_mode = 'binary')
test_gen = augs.flow_from_directory(
'/content/emptyballs/test/',
target_size=(150,150),
batch_size=8,
class_mode='binary',
subset='validation')
base_model = VGG16(include_top=False,
input_shape = (150,150,3),
weights = 'imagenet')
for layer in base_model.layers[:-12]:
layer.trainable = False
for layer in base_model.layers:
print(layer,layer.trainable)
model = Sequential()
model.add(base_model)
model.add(GlobalAveragePooling2D())
model.add(Dense(1024,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1,activation='sigmoid'))
model.summary()
SVG(model_to_dot(model).create(prog='dot', format='svg'))
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True, expand_nested=True)
checkpoint = ModelCheckpoint(
'./base.model',
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min',
save_weights_only=False,
period=1
)
earlystop = EarlyStopping(
monitor='val_loss',
min_delta=0.01,
patience=30,
verbose=1,
mode='auto'
)
tensorboard = TensorBoard(
log_dir = './logs',
histogram_freq=0,
batch_size=16,
write_graph=True,
write_grads=True,
write_images=False,
)
csvlogger = CSVLogger(
filename= "training_csv.log",
separator = ",",
append = False
)
reduce = ReduceLROnPlateau(
monitor='val_loss',
factor=0.1,
patience=1,
verbose=1,
mode='auto'
)
callbacks = [checkpoint,tensorboard,csvlogger,reduce]
import tensorflow as tf
opt = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.95)
opt1 = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(
loss='binary_crossentropy',
optimizer=opt1,
metrics=['accuracy']
)
history = model.fit_generator(
train_gen,
steps_per_epoch = 150,
validation_data = test_gen,
validation_steps = 150,
epochs = 30,
verbose = 1,
callbacks=callbacks)
# `x_test`/`y_test` were never defined in this notebook; evaluate on the
# test generator instead (create the generator with shuffle=False so that
# `test_gen.classes` lines up with the predictions)
predictions = np.rint(model.predict_generator(test_gen))
print( classification_report(test_gen.classes, predictions) )
# show_final_history(history)
model.load_weights('./base.model')
model_score = model.evaluate_generator(test_gen)
print("Model Test Loss:",model_score[0])
print("Model Test Accuracy:",model_score[1])
model_json = model.to_json()
with open("model.json","w") as json_file:
json_file.write(model_json)
model.save("model_tennis.h5")
print("Weights Saved")
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
LOG_DIR = './logs' # Here you have to put your log directory
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 8080 &'
.format(LOG_DIR)
)
get_ipython().system_raw('./ngrok http 8080 &')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
```
| github_jupyter |
```
%matplotlib inline
import warnings
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import gridspec
warnings.filterwarnings('ignore')
```
# Introduction to Pulsar Timing
[](http://mybinder.org/repo/matteobachetti/timing-lectures)
<img src="0737.png" alt="0737" style="height: 300px;margin: auto;"/>
(These slides are obtained from the IPython notebook that can be found [here](https://github.com/matteobachetti/timing-lectures))
## Contents
* Finding pulsars: the buried clock
* Frequency analysis: the Fourier Transform and the Power Density Spectrum
* Refine the search: Folding search (+ $Z^2_n$, $H$-test, ...)
* Getting pulse arrival times
# Finding pulsars: the buried clock
<img src="buriedclock.jpg" alt="Buried Clock" style="height: 300px;margin: auto;"/>
# Finding pulsars: the buried clock
* Pulsars are stable rotators: very predictable "clocks"
<img src="smallmodpulsar.gif" alt="Pulsar" style="height: 300px;margin: auto;"/>
# Finding pulsars: the buried clock
* Pulsars are stable rotators: very predictable "clocks"
* Often the signal is buried in noise (below: a sinusoidal pulse with a 0.853-s period buried in noise ~30 times stronger)
```
def pulsar(time, period=1):
return (1 + np.sin(2 * np.pi / period * time)) / 2.
time_bin = 0.009
# --- Parameters ----
period = 0.8532
pulsar_amp = 0.5
pulsar_stdev = 0.05
noise_amp = 15
noise_stdev = 1.5
# --------------------
# refer to the center of the time bin
time = np.arange(0, 100, time_bin) + time_bin / 2
signal = np.random.normal(pulsar_amp * pulsar(time, period), pulsar_stdev)
noise = np.random.normal(noise_amp, noise_stdev, len(time))
total = signal + noise
# PLOTTING -------------------------
plt.plot(time, signal, 'r-', label='signal')
plt.plot(time, noise, 'b-', label='noise')
plt.plot(time, total, 'k-', label='total')
plt.xlim(0, 30)
plt.xlabel('Time')
plt.ylabel('Flux')
a = plt.legend()
# -----------------------------------
```
# Frequency analysis: the Fourier Transform
Through the Fourier transform, we can decompose a function of time into its frequency components:
\begin{equation}
\mathcal{F}(\omega) = \int^{\infty}_{-\infty} e^{-i\omega t} f(t)\, dt
\end{equation}
or, more appropriately for our case, in discrete form, we can decompose a time series into a frequency series:
\begin{equation}
F_k = \sum^{N-1}_{n=0} x_n\, e^{-2\pi i k n/N}
\end{equation}
where $x_n$ are the samples of the time series. The result is, in general, a **complex** function.
The Fourier transform of a sinusoid will give a large (in absolute value) $F_k$ at the frequency of the sinusoid. Other periodic functions will produce power at the fundamental frequency plus one or more integer multiples of it, called *harmonics*.
## Our example
Let's take the Fourier transform of the signal we simulated above (only taking *positive* frequencies)
```
ft = np.fft.fft(total)
freqs = np.fft.fftfreq(len(total), time[1] - time[0])
good = freqs >0
freqs = freqs[good]
ft = ft[good]
# PLOTTING ---------------------------
plt.plot(freqs, ft.real, 'r-', label='real')
plt.plot(freqs, ft.imag, 'b-', label='imag')
plt.xlim([-0.1, 10])
a = plt.legend()
_ = plt.xlabel('Frequency (Hz)')
_ = plt.ylabel('FT')
# -------------------------------------
```
Note that the imaginary part and real part of the Fourier transform have different contributions at the pulsar frequency (1/0.85 s ~ 1.2 Hz). This is because they depend strongly on the phase of the signal [Exercise: **why?**].
## Our example - 2
If we applied a shift of 240 ms (just any value) to the signal:
```
shift = 0.240
signal_shift = np.roll(total, int(shift / time_bin))
ft_shift = np.fft.fft(signal_shift)
freqs_shift = np.fft.fftfreq(len(total), time[1] - time[0])
good = freqs_shift >0
freqs_shift = freqs_shift[good]
ft_shift = ft_shift[good]
# PLOTTING -------------------------------------
plt.plot(freqs_shift, ft_shift.real, 'r-', label='real')
plt.plot(freqs_shift, ft_shift.imag, 'b-', label='imag')
plt.xlim([-0.1, 10])
a = plt.legend()
_ = plt.xlabel('Frequency (Hz)')
_ = plt.ylabel('FT')
# ----------------------------------------------
```
we would clearly have non-zero values at ~0.85 Hz both in the real and the imaginary parts.
# The Power Density Spectrum
To solve these issues with the real and imaginary parts, we can instead take the *squared modulus* of the Fourier transform. This is called the **periodogram**, although most people use the term **power density spectrum** (strictly speaking, a periodogram is a single realization of the underlying PDS).
\begin{equation}
\mathcal{P}(\omega) = \mathcal{F}(\omega) \cdot \mathcal{F}^*(\omega)
\end{equation}
This function is non-negative and, in our case, shows a clear peak at the pulse frequency, *consistent* between the original and shifted signals:
```
pds = np.abs(ft*ft.conj())
pds_shift = np.abs(ft_shift*ft_shift.conj())
fmax = freqs[np.argmax(pds)]
pmax = 1 / fmax
# PLOTTING ---------------------------------
plt.plot(freqs, pds, 'r-', label='PDS of signal')
plt.plot(freqs_shift, pds_shift, 'b-', label='PDS of shifted signal')
a = plt.legend()
a = plt.xlabel('Frequency (Hz)')
a = plt.ylabel('PDS')
plt.xlim([-0.1, 3.5])
_ = plt.gca().annotate('max = {:.2f} s ({:.2f} Hz)'.format(pmax, fmax), xy=(2., max(pds) / 2))
# -------------------------------------------
```
## The Power Density Spectrum - 2
The PDS of a generic non-sinusoidal pulse profile will, in general, contain more than one harmonic, with the fundamental not always predominant.
```
def gaussian_periodic(x, x0, amp, width):
'''Approximates a Gaussian periodic function by summing the contributions in the phase
range 0--1 with those in the phase range -1--0 and 1--2'''
phase = x - np.floor(x)
lc = np.zeros_like(x)
for shift in [-1, 0, 1]:
lc += amp * np.exp(-(phase + shift - x0)**2 / width ** 2)
return lc
def generate_profile(time, period):
'''Simulate a given profile with 1-3 Gaussian components'''
total_phase = time / period
ngauss = np.random.randint(1, 3)
lc = np.zeros_like(total_phase)
for i in range(ngauss):
ph0 = np.random.uniform(0, 1)
amp = np.random.uniform(0.1, 1)
width = np.random.uniform(0.01, 0.2)
lc += gaussian_periodic(total_phase, ph0, amp, width)
return lc
# PLOTTING -------------------------
ncols = 2
nrows = 3
fig = plt.figure(figsize=(12, 8))
fig.suptitle('Profiles and their PDSs')
gs = gridspec.GridSpec(nrows, ncols)
for c in range(ncols):
for r in range(nrows):
# ----------------------------------
noise = np.random.normal(noise_amp, noise_stdev, len(time))
lc = generate_profile(time, period)
lc_noisy = np.random.normal(2 * lc, 0.2) + noise
lcft = np.fft.fft(lc_noisy)
lcfreq = np.fft.fftfreq(len(lc_noisy), time[1] - time[0])
lcpds = np.absolute(lcft) ** 2
# PLOTTING -------------------------
gs_2 = gridspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[r, c])
ax = plt.subplot(gs_2[0])
good = time < period * 3
ax.plot(time[good] / period, lc[good])
ax.set_xlim([0,3])
ax.set_xlabel('Phase')
ax = plt.subplot(gs_2[1])
ax.plot(lcfreq[lcfreq > 0], lcpds[lcfreq > 0] / max(lcpds[lcfreq > 0]))
ax.set_xlabel('Frequency')
ax.set_xlim([0, 10])
# ----------------------------------
```
## Pulsation?
Here are some examples of power density spectra. In some cases, it might look like a pulsation is present in the data. How do we assess this?
```
# PLOTTING -------------------------
ncols = 2
nrows = 3
fig = plt.figure(figsize=(12, 8))
fig.suptitle('Profiles and their PDSs')
gs = gridspec.GridSpec(nrows, ncols)
for c in range(ncols):
for r in range(nrows):
# ----------------------------------
noise = np.random.normal(noise_amp, noise_stdev, len(time))
lc = np.zeros_like(time)
lc_noisy = np.random.normal(2 * lc, 0.2) + noise
lcft = np.fft.fft(lc_noisy)
lcfreq = np.fft.fftfreq(len(lc_noisy), time[1] - time[0])
lcpds = np.absolute(lcft) ** 2
# PLOTTING -------------------------
gs_2 = gridspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[r, c])
ax = plt.subplot(gs_2[0])
good = time < period * 3
ax.plot(time[good] / period, lc[good])
ax.set_xlim([0,3])
ax.set_xlabel('Phase')
ax = plt.subplot(gs_2[1])
ax.plot(lcfreq[lcfreq > 0], lcpds[lcfreq > 0] / max(lcpds[lcfreq > 0]))
ax.set_xlabel('Frequency')
ax.set_xlim([0, 10])
# ----------------------------------
```
# Epoch folding
Epoch folding consists of summing equal chunks of data, each exactly one pulse period long. If the period is just right, the crests sum up in phase, gaining signal over noise [Exercise: **how much do we gain** by summing $N$ chunks of data in phase at the right period?].
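The exercise can be checked numerically before implementing the folding: averaging $N$ in-phase chunks leaves the pulse untouched while shrinking the noise by $\sqrt{N}$. A quick simulation (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(42)
nbin = 64                                            # phase bins per period
pulse = np.sin(2 * np.pi * np.arange(nbin) / nbin)   # noiseless pulse profile
noise_sigma = 5.0

def profile_noise(n_chunks):
    """Residual noise in the folded profile after averaging n_chunks periods."""
    data = np.tile(pulse, n_chunks) + rng.normal(0, noise_sigma, nbin * n_chunks)
    prof = data.reshape(n_chunks, nbin).mean(axis=0)
    return (prof - pulse).std()

ratio = profile_noise(1) / profile_noise(100)
print(ratio)  # close to sqrt(100) = 10
```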
```
def epoch_folding(time, signal, period, nperiods=3, nbin=16):
# The phase of the pulse is always between 0 and 1.
phase = time / period
phase -= phase.astype(int)
# First histogram: divide phase range in nbin bins, and count how many signal bins
# fall in each histogram bin. The sum is weighted by the value of the signal at
# each phase
prof_raw, bins = np.histogram(
phase, bins=np.linspace(0, 1, nbin + 1),
weights=signal)
# "Exposure": how many signal bins have been summed in each histogram bin
expo, bins = np.histogram(phase, bins=np.linspace(0, 1, nbin + 1))
# ---- Evaluate errors -------
prof_sq, bins = np.histogram(
phase, bins=np.linspace(0, 1, nbin + 1),
weights=signal ** 2)
# Variance of histogram bin: "Mean of squares minus square of mean" X N
hist_var = (prof_sq / expo - (prof_raw / expo) ** 2) * expo
# Then, take square root -> Stdev, then normalize / N.
prof_err = np.sqrt(hist_var)
#-----------------------------
# Normalize by exposure
prof = prof_raw / expo
prof_err = prof_err / expo
# histogram returns all bin edges, including last one. Here we take the
# center of each bin.
phase_bins = (bins[1:] + bins[:-1]) / 2
# ---- Return the same pattern 'nperiods' times, for visual purposes -----
final_prof = np.array([])
final_phase = np.array([])
final_prof_err = np.array([])
for n in range(nperiods):
final_prof = np.append(final_prof, prof)
final_phase = np.append(final_phase, phase_bins + n)
final_prof_err = np.append(final_prof_err, prof_err)
# ---------------------------
return final_phase, final_prof, final_prof_err
phase, profile, profile_err = epoch_folding(time, total, period)
phase_shift, profile_shift, profile_shift_err = epoch_folding(time, signal_shift, period)
# PLOTTING -------------------------------------------------------------
plt.errorbar(
phase, profile, yerr=profile_err, drawstyle='steps-mid',
label='Signal')
plt.errorbar(
phase_shift, profile_shift, yerr=profile_shift_err,
drawstyle='steps-mid', label='Shifted signal')
_ = plt.legend()
_ = plt.xlabel('Phase')
_ = plt.ylabel('Counts/bin')
# -------------------------------------------------------------------
```
# Epoch folding search
Now, let's run epoch folding at a number of trial periods around the pulse period. To evaluate how much a given profile "looks pulsar-y", we can use the $\chi^2$ statistic, as follows:
\begin{equation}
\mathcal{S} = \sum_{i=0}^N \frac{(p_i - \bar{p})^2}{\sigma_p^2}
\end{equation}
for each profile obtained at each trial value of the pulse period, and look for peaks$^1$. [Exercise: do you know what statistic this is, and why it works in our case? Exercise 2: Note the very large number of trials. Can we optimize the search so that we use fewer trials without losing sensitivity?]
$^1$ Leahy, D. A. et al. On searches for pulsed emission with application to four globular cluster X-ray sources - NGC 1851, 6441, 6624, and 6712. _ApJ_ **266**, 160 (1983).
```
def pulse_profile_stat(profile, profile_err):
return np.sum(
(profile - np.mean(profile)) ** 2 / profile_err ** 2)
trial_periods = np.arange(0.7, 1.0, 0.0002)
stats = np.zeros_like(trial_periods)
for i, p in enumerate(trial_periods):
phase, profile, profile_err = epoch_folding(time, total, p)
stats[i] = pulse_profile_stat(profile, profile_err)
bestp = trial_periods[np.argmax(stats)]
phase_search, profile_search, profile_search_err = \
epoch_folding(time, total, bestp)
phase, profile, profile_err = epoch_folding(time, total, period)
# PLOTTING -------------------------------
fig = plt.figure(figsize=(10, 3))
gs = gridspec.GridSpec(1, 2)
ax = plt.subplot(gs[0])
ax.plot(trial_periods, stats)
ax.set_xlim([0.7, 1])
ax.set_xlabel('Period (s)')
ax.set_ylabel('$\chi^2$')
ax.axvline(period, color='r', label="True value")
_ = ax.legend()
ax.annotate('max = {:.5f} s'.format(bestp), xy=(.9, max(stats) / 2))
ax2 = plt.subplot(gs[1])
ax2.errorbar(phase_search, profile_search, yerr=profile_search_err,
drawstyle='steps-mid', label='Search')
ax2.errorbar(phase, profile, yerr=profile_err, drawstyle='steps-mid',
label='True period')
ax2.set_xlabel('Phase')
ax2.set_ylabel('Counts/bin')
_ = ax2.legend()
# ------------------------------------------
```
# Times of arrival (TOA)
To calculate the time of arrival of the pulses, we need to:
* Choose what **part of the pulse** is the reference (e.g., the maximum). Once we know that, if $\phi_{max}$ is the phase of the maximum of the pulse, $t_{start}$ the time at the start of the folded light curve, and $p$ is the folding period,
$TOA = t_{start} + \phi_{max} \cdot p$
* Choose a **method** to calculate the TOA:
+ The maximum bin?
+ The phase of a sinusoidal fit?
+ The phase of a more complicated fit?
Hereafter, we are going to use the maximum of the pulse as a reference, and we will calculate the TOA with the three methods above.
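As a toy numerical example of the TOA formula above (both $t_{start}$ and the folded profile are made up):

```python
import numpy as np

nbin = 16
phase_bins = (np.arange(nbin) + 0.5) / nbin            # bin centers, as in epoch folding
profile = np.exp(-(phase_bins - 0.28125) ** 2 / 0.01)  # toy pulse peaking at a bin center

t_start = 1000.0   # start time of the folded light curve (s), made up
p = 0.8532         # folding period (s)

phi_max = phase_bins[np.argmax(profile)]               # phase of the maximum
toa = t_start + phi_max * p
print(phi_max, toa)  # 0.28125, ~1000.24 s
```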
## TOA from the maximum bin
**Advantage**
* Very fast and easy to implement
**Disadvantages**
* Very rough (precision limited to the width of a phase bin)
* Very uncertain (if counts are low and/or the pulse is broad, several nearby bins can randomly be the maximum)
```
phase_bin = 1 / 32.
ph = np.arange(phase_bin / 2, phase_bin / 2 + 1, phase_bin)
shape = np.sin(2 * np.pi * ph) + 2
pr_1 = np.random.poisson(shape * 10000) / 10000
pr_2 = np.random.poisson(shape * 10) / 10
# PLOTTING -----------------------------
plt.plot(ph, shape, label='Theoretical shape', color='k')
plt.plot(
ph, pr_1, drawstyle='steps-mid', color='r',
label='Shape - good stat')
plt.plot(
ph, pr_2, drawstyle='steps-mid', color='b',
label='Shape - bad stat')
plt.axvline(0.25, ls=':', color='k', lw=2, label='Real maximum')
plt.axvline(
ph[np.argmax(pr_1)], ls='--', color='r', lw=2,
label='Maximum - good stat')
plt.axvline(
ph[np.argmax(pr_2)], ls='--', color='b', lw=2,
label='Maximum - bad stat')
_ = plt.legend()
# --------------------------------------
```
## TOA from single sinusoidal fit
**Advantage**
* Relatively easy task (fitting with a sinusoid)
* Errors are well determined provided that the pulse is broad
**Disadvantages**
* If profile is not sinusoidal, might not be well determined
Below, the phase of the pulse is always 0.25
```
def sinusoid(phase, phase0, amplitude, offset):
return offset + amplitude * np.cos(2 * np.pi * (phase - phase0))
from scipy.optimize import curve_fit
# PLOTTING ------------------
fig = plt.figure(figsize=(12, 3))
gs = gridspec.GridSpec(1, 4)
ax1 = plt.subplot(gs[0])
ax1.set_title('Theoretical')
ax2 = plt.subplot(gs[1])
ax2.set_title('Sinusoidal, good stat')
ax3 = plt.subplot(gs[2])
ax3.set_title('Sinusoidal, noisy')
ax4 = plt.subplot(gs[3])
ax4.set_title('Complicated profile')
# ---------------------------
# Fit sinusoid to theoretical shape
par, pcov = curve_fit(sinusoid, ph, shape)
# PLOTTING -----------------------------------------------
ax1.plot(ph, sinusoid(ph, *par))
ax1.plot(ph, shape)
par[0] -= np.floor(par[0])
ax1.annotate('phase = {:.2f}'.format(par[0]), xy=(.5, .3))
ax1.set_ylim([0, 4])
# Fit to good-stat line
# ---------------------------------------------------------
par, pcov = curve_fit(sinusoid, ph, pr_1)
# PLOTTING -----------------------------------------------
ax2.plot(ph, sinusoid(ph, *par))
ax2.plot(ph, pr_1)
par[0] -= np.floor(par[0])
ax2.annotate('phase = {:.2f}'.format(par[0]), xy=(.5, .3))
ax2.set_ylim([0, 4])
# Fit to bad-stat line
# ---------------------------------------------------------
par, pcov = curve_fit(sinusoid, ph, pr_2)
# PLOTTING -----------------------------------------------
ax3.plot(ph, sinusoid(ph, *par))
ax3.plot(ph, pr_2, drawstyle='steps-mid')
par[0] -= np.floor(par[0])
ax3.annotate('phase = {:.2f}'.format(par[0]), xy=(.5, .3))
ax3.set_ylim([0, 4])
# Now try with a complicated profile (a double Gaussian)
# ---------------------------------------------------------
pr_3_clean = 0.3 + np.exp(- (ph - 0.25) ** 2 / 0.001) + 0.5 * np.exp(- (ph - 0.75) ** 2 / 0.01)
pr_3 = np.random.poisson(pr_3_clean * 100) / 50
# Let us normalize the template with the same factor (100 / 50) of the randomized one. It will be helpful later
pr_3_clean *= 2
par, pcov = curve_fit(sinusoid, ph, pr_3, maxfev=10000)
# PLOTTING -----------------------------------------------
ax4.plot(ph, sinusoid(ph, *par), label='Fit')
ax4.plot(ph, pr_3, drawstyle='steps-mid', label='Noisy profile')
ax4.plot(ph, pr_3_clean, label='Real profile')
par[0] -= np.floor(par[0])
ax4.annotate('phase = {:.2f}'.format(par[0]), xy=(.5, .3))
ax4.set_ylim([0, 4])
_ = ax4.legend()
# ---------------------------------------------------------
```
## TOA from non-sinusoidal fit: multiple harmonic fitting
**Multiple harmonic fitting**$^1$ (the profile is described by a sum of sinusoids) extends the single-harmonic fit by adding sinusoidal components at integer multiples of the fundamental frequency.
**Advantages**
* Still conceptually easy, but more robust and reliable
**Disadvantages**
* The phase is not determined directly by the fit (in general, it isn't the phase of any single sinusoid [Exercise: why?]) and needs to be determined from the maximum of the profile. Error estimation is not straightforward.
$^1$e.g. Riggio, A. et al. Timing of the accreting millisecond pulsar IGR J17511-3057. _A&A_ **526**, 95 (2011).
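A sketch of the idea (not the implementation from the cited paper): fit a sum of two harmonics with `scipy.optimize.curve_fit`, then read the pulse phase off the maximum of the fitted model rather than off any single sinusoid's phase. All profile parameters below are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_harmonics(phase, offset, a1, ph1, a2, ph2):
    # fundamental plus first overtone
    return (offset
            + a1 * np.cos(2 * np.pi * (phase - ph1))
            + a2 * np.cos(4 * np.pi * (phase - ph2)))

rng = np.random.default_rng(0)
ph = (np.arange(32) + 0.5) / 32
true_pars = (2.0, 1.0, 0.3, 0.4, 0.1)                   # made-up profile parameters
prof = rng.normal(two_harmonics(ph, *true_pars), 0.05)  # noisy observed profile

par, _ = curve_fit(two_harmonics, ph, prof, p0=[2, 1, 0.25, 0.3, 0.0])

# The pulse phase is NOT ph1 or ph2: locate the maximum of the fitted model
fine = np.linspace(0, 1, 2000)
phi_max = fine[np.argmax(two_harmonics(fine, *par))]
print(phi_max)
```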
## TOA from non-sinusoidal fit: Template pulse shapes
* **Cross-correlation** of template pulse shape
* **Fourier-domain fitting (FFTFIT)**$^1$ -> the usual choice. Consists of taking the Fourier transform of the profile $\mathcal{P}$ and of the template $\mathcal{T}$ and minimizing the following objective function (similar to $\chi^2$):
\begin{equation}
F = \sum_k \frac{|\mathcal{P}_k - a\mathcal{T}_k e^{-2\pi i k\phi}|^2}{\sigma^2}
\end{equation}
**Advantages**
* Much more robust and reliable
* Errors well determined whatever the pulse shape
**Disadvantages**
* Relatively trickier to implement
* Needs good template pulse profile
$^1$Taylor, J. H. Pulsar Timing and Relativistic Gravity. _Philosophical Transactions: Physical Sciences and Engineering_ **341**, 117–134 (1992).
```
def fftfit_fun(profile, template, amplitude, phase):
'''Objective function to be minimized - \'a la Taylor (1992).'''
prof_ft = np.fft.fft(profile)
temp_ft = np.fft.fft(template)
freq = np.fft.fftfreq(len(profile))
good = freq > 0
idx = np.arange(0, prof_ft.size, dtype=int)
sigma = np.std(prof_ft[good])
return np.sum(np.absolute(prof_ft - temp_ft*amplitude*np.exp(-2*np.pi*1.0j*idx*phase))**2 / sigma)
def obj_fun(pars, data):
'''Wrap parameters and input data up in order to be used with minimization
algorithms.'''
amplitude, phase = pars
profile, template = data
return fftfit_fun(profile, template, amplitude, phase)
# Produce 16 realizations of pr_3, at different amplitudes and phases, and reconstruct the phase
from scipy.optimize import fmin, basinhopping
# PLOTTING --------------------------
fig = plt.figure(figsize=(10, 10))
fig.suptitle('FFTfit results')
gs = gridspec.GridSpec(4, 4)
# -----------------------------------
amp0 = 1
phase0 = 0
p0 = [amp0, phase0]
for i in range(16):
# PLOTTING --------------------------
col = i % 4
row = i // 4
# -----------------------------------
factor = 10 ** np.random.uniform(1, 3)
pr_orig = np.random.poisson(pr_3_clean * factor)
roll_len = np.random.randint(0, len(pr_orig) - 1)
pr = np.roll(pr_orig, roll_len)
# # Using generic minimization algorithms is faster, but local minima can be a problem
# res = fmin(obj_fun, p0, args=([pr, pr_3_clean],), disp=False, full_output=True)
# amplitude_res, phase_res = res[0]
# The basinhopping algorithm is very slow but very effective in finding
# the global minimum of functions with local minima.
res = basinhopping(obj_fun, p0, minimizer_kwargs={'args':([pr, pr_3_clean],)})
amplitude_res, phase_res = res.x
phase_res -= np.floor(phase_res)
newphase = ph + phase_res
newphase -= np.floor(newphase)
# Sort arguments of phase so that they are ordered in plot
# (avoids ugly lines traversing the plot)
order = np.argsort(newphase)
# PLOTTING --------------------------
ax = plt.subplot(gs[row, col])
ax.plot(ph, pr, 'k-')
ax.plot(newphase[order], amplitude_res * pr_3_clean[order], 'r-')
# -------------------------------------
```
## The Z_n search
$Z_n^2$ is another widely used statistic for high-energy pulsar searches.
It measures how well the distribution of photon phases is described by a combination of $n$ sinusoidal harmonics.
The definition of this statistic is (Buccheri+1983):
$$
Z^2_n = \dfrac{2}{N} \sum_{k=1}^n \left[{\left(\sum_{j=1}^N \cos k \phi_j\right)}^2 + {\left(\sum_{j=1}^N \sin k \phi_j\right)}^2\right] \; ,
$$
The formula can be slightly modified for binned data, by introducing a `weight` quantity giving the number of photons (or another measure of flux) in a given bin (Huppenkothen+2019):
$$
Z^2_n \approx \dfrac{2}{\sum_j{w_j}} \sum_{k=1}^n \left[{\left(\sum_{j=1}^m w_j \cos k \phi_j\right)}^2 + {\left(\sum_{j=1}^m w_j \sin k \phi_j\right)}^2\right]
$$
```
def z_n(time, p, n=2, weight=1):
'''Z^2_n statistic, a` la Buccheri+83, A&A, 128, 245, eq. 2.
Parameters
----------
time : array of floats
The times of the events
p : float
The trial folding period
n : int, default 2
Number of harmonics, including the fundamental
Other Parameters
----------------
weight : float or array of floats
A weight (e.g. the flux in each bin) applied to each event or bin.
Returns
-------
z2_n : float
The Z^2_n statistic of the events.
'''
phase = time / p
nbin = len(phase)
if nbin == 0:
return 0
weight = np.asarray(weight)
if weight.size == 1:
total_weight = nbin * weight
else:
total_weight = np.sum(weight)
phase = phase * 2 * np.pi
return 2 / total_weight * \
np.sum([np.sum(np.cos(k * phase) * weight) ** 2 +
np.sum(np.sin(k * phase) * weight) ** 2
for k in range(1, n + 1)])
trial_periods = np.arange(0.7, 1.0, 0.0002)
stats = np.zeros_like(trial_periods)
for i, p in enumerate(trial_periods):
stats[i] = z_n(time, p, weight=total)
bestp = trial_periods[np.argmax(stats)]
phase_search, profile_search, profile_search_err = \
epoch_folding(time, total, bestp)
phase, profile, profile_err = epoch_folding(time, total, period)
# PLOTTING -------------------------------
fig = plt.figure(figsize=(10, 3))
gs = gridspec.GridSpec(1, 2)
ax = plt.subplot(gs[0])
ax.plot(trial_periods, stats)
ax.set_xlim([0.7, 1])
ax.set_xlabel('Period (s)')
ax.set_ylabel('$Z^2_n$')
ax.axvline(period, color='r', label="True value")
_ = ax.legend()
ax.annotate('max = {:.5f} s'.format(bestp), xy=(.9, max(stats) / 2))
ax2 = plt.subplot(gs[1])
ax2.errorbar(phase_search, profile_search, yerr=profile_search_err,
drawstyle='steps-mid', label='Search')
ax2.errorbar(phase, profile, yerr=profile_err, drawstyle='steps-mid',
label='True period')
ax2.set_xlabel('Phase')
ax2.set_ylabel('Counts/bin')
_ = ax2.legend()
# ------------------------------------------
```
## Pulsation searches with HENDRICS
1. To read a fits file into an event list file:
```
$ HENreadevents file.evt.gz
```
a file called something like `file_mission_instr_ev.nc` appears
2. To calculate the light curve (binning the events) with a sample time of 1 s:
```
$ HENlcurve file_mission_instr_ev.nc -b 1
```
3. To calculate the averaged power density spectrum cutting the data by chunks of 128 s:
```
$ HENfspec file_mission_instr_lc.nc -f 128
```
4. To watch the power density spectrum:
```
$ HENplot file_mission_instr_pds.nc
```
5. To run a $Z^2_4$ search, e.g. between frequencies 0.5 and 0.6:
```
$ HENzsearch file_mission_instr_ev.nc -f 0.5 -F 0.6 -N 4
```
6. To run a $Z^2_2$ search in the frequency--fdot space:
```
$ HENzsearch file_mission_instr_ev.nc -f 0.5 -F 0.6 -N 2 --fast
$ HENplot file_mission_instr_Z2n.nc
```
7. Then... follow the instructions...
### BONUS
8. Calculate the TOAs and create a parameter and timing file (can you find how?)
9. Use `pintk` (from `github.com/nanograv/PINT`) to fit the pulse solution
```
$ pintk parfile.par timfile.tim
```
NB: due to a bug in PINT (under investigation), you might need to add the line
```
TZRMJD 55555
```
Replace 55555 with the value of PEPOCH from the parameter file.
| github_jupyter |
<a href="https://colab.research.google.com/github/jeongukjae/distilkobert-sentence-encoder/blob/main/klue_roberta_kornli_simcse_tpu.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Train supervised SimCSE on KorNLI dataset using KLUE-RoBERTa large
```
BATCH_SIZE = 128
LEARNING_RATE = 2e-5
EPOCHS = 3
WARMUP_RATE = 0.1
TEMPERATURE = 0.05
```
## Prepare environments
```
import os
os.environ["TFHUB_MODEL_LOAD_FORMAT"] = "UNCOMPRESSED"
print(os.environ['COLAB_TPU_ADDR'])
!pip install -U -q tensorflow-text tensorflow-datasets tfds-korean
import tensorflow as tf
import tensorflow_text as text
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tfds_korean import korsts, kornli, klue_sts
from tqdm import tqdm
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.TPUStrategy(cluster_resolver)
```
## Prepare models
```
def create_preprocessing_model():
preprocessor_layer = hub.KerasLayer("https://tfhub.dev/jeongukjae/klue_roberta_cased_preprocess/1")
sentences = tf.keras.layers.Input(shape=(), dtype=tf.string, name="sentences")
encoder_inputs = preprocessor_layer(sentences)
preprocessor_model = tf.keras.Model(sentences, encoder_inputs)
return preprocessor_model
def create_model():
encoder = hub.KerasLayer("https://tfhub.dev/jeongukjae/klue_roberta_cased_L-24_H-1024_A-16/1", trainable=True)
inputs = {
"input_word_ids": tf.keras.Input([None], dtype=tf.int32, name="input_word_ids"),
"input_type_ids": tf.keras.Input([None], dtype=tf.int32, name="input_type_ids"),
"input_mask": tf.keras.Input([None], dtype=tf.int32, name="input_mask"),
}
logit = encoder(inputs)['pooled_output']
model = tf.keras.Model(inputs, logit)
model.summary()
return model
class BertScheduler(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, rate, warmup_ratio, total_steps, name=None):
super().__init__()
self.rate = rate
self.warmup_ratio = warmup_ratio
self.total_steps = float(total_steps)
self.warmup_steps = warmup_ratio * total_steps
self.name = name
def __call__(self, step):
with tf.name_scope("BertScheduler"):
total_steps = tf.convert_to_tensor(self.total_steps, name="total_steps")
warmup_steps = tf.convert_to_tensor(self.warmup_steps, name="warmup_steps")
current_step = step + 1.0
return self.rate * tf.cond(
current_step < warmup_steps,
lambda: self.warmup(current_step, warmup_steps),
lambda: self.decay(current_step, total_steps, warmup_steps),
)
@tf.function
def warmup(self, step, warmup_steps):
return step / tf.math.maximum(tf.constant(1.0), warmup_steps)
@tf.function
def decay(self, step, total_steps, warmup_steps):
return tf.math.maximum(
tf.constant(0.0), (total_steps - step) / tf.math.maximum(tf.constant(1.0), total_steps - warmup_steps)
)
def get_config(self):
return {
"warmup_ratio": self.warmup_ratio,
"total_steps": self.total_steps,
"warmup_steps": self.warmup_steps,
"name": self.name,
}
class ModelForSimCSE(tf.keras.Model):
def __init__(self, model, temperature, **kwargs):
super().__init__(**kwargs)
self.model = model
self.temperature = temperature
def call(self, data, training=None):
anchors, positives, negatives = data
batch_size = tf.shape(anchors['input_word_ids'])[0]
sentences = {key: tf.concat([anchors[key], positives[key], negatives[key]], axis=0) for key in positives.keys()}
sentences_embedding = tf.nn.l2_normalize(self.model(sentences, training=training), axis=-1)
anchor_embeddings = sentences_embedding[:batch_size]
positive_embeddings = sentences_embedding[batch_size:-batch_size]
negative_embeddings = sentences_embedding[-batch_size:]
ctx = tf.distribute.get_replica_context()
positive_embeddings, negative_embeddings = ctx.all_gather([positive_embeddings, negative_embeddings], axis=0)
candidate_embeddings = tf.concat([positive_embeddings, negative_embeddings], axis=0)
scores = tf.tensordot(anchor_embeddings, candidate_embeddings, axes=[[1], [1]])
scores /= self.temperature
local_batch_size = tf.shape(scores)[0]
label = tf.range(local_batch_size) + (local_batch_size * ctx.replica_id_in_sync_group)
return scores, label
def train_step(self, data):
with tf.GradientTape() as tape:
logits, label = self(data, training=True)
loss = self.compiled_loss(label, logits, regularization_losses=self.losses)
self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
self.compiled_metrics.update_state(label, logits)
return {m.name: m.result() for m in self.metrics}
def test_step(self, data):
        logits, label = self(data, training=False)
loss = self.compiled_loss(label, logits, regularization_losses=self.losses)
self.compiled_metrics.update_state(label, logits)
return {m.name: m.result() for m in self.metrics}
```
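The `BertScheduler` above ramps the learning rate linearly during warmup and then decays it linearly to zero. As a sanity check, the same schedule can be sketched in plain Python (the rate, ratio, and step count below are illustrative, not the notebook's hyperparameters):

```python
def bert_schedule(step, rate, warmup_ratio, total_steps):
    """Linear warmup to `rate`, then linear decay to zero (plain-Python sketch)."""
    warmup_steps = warmup_ratio * total_steps
    current = step + 1.0
    if current < warmup_steps:
        # Warmup phase: fraction of the way through warmup.
        return rate * current / max(1.0, warmup_steps)
    # Decay phase: remaining fraction of the post-warmup steps.
    return rate * max(0.0, (total_steps - current) / max(1.0, total_steps - warmup_steps))

# The learning rate peaks at the base rate right at the end of warmup.
lrs = [bert_schedule(s, 1e-4, 0.1, 1000) for s in range(1000)]
print(max(lrs))  # peaks at the base rate (1e-4)
```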
## Prepare datasets
```
def get_kornli_dataset(preprocessor, batch_size):
def _get_ds_from_split(split) -> tf.data.Dataset:
with tf.device('/job:localhost'):
# batch_size=-1 is a way to load the dataset into memory
in_memory_ds = tfds.load("kornli", split=split, batch_size=-1, shuffle_files=True)
sentence1_list = in_memory_ds["sentence1"].numpy()
sentence2_list = in_memory_ds["sentence2"].numpy()
gold_label_list = in_memory_ds["gold_label"].numpy()
# label: [entailment, neutral, contradiction] => [0, 1, 2]
sentences = {}
for index in tqdm(range(len(sentence1_list)), desc=f"reading {split}"):
sentence1 = sentence1_list[index]
sentence2 = sentence2_list[index]
gold_label = gold_label_list[index]
if sentence1 not in sentences:
sentences[sentence1] = {}
if gold_label != 1: # not neutral
sentences[sentence1][gold_label] = sentence2
dataset_input = [(key, val[0], val[2]) for key, val in sentences.items() if 0 in val and 2 in val]
print(f"dataset length of split {split}: {len(dataset_input)}")
return tf.data.Dataset.from_tensor_slices(dataset_input), len(dataset_input)
mnli_train, num_mnli_train = _get_ds_from_split("mnli_train")
snli_train, num_snli_train = _get_ds_from_split("snli_train")
xnli_dev, num_xnli_dev = _get_ds_from_split("xnli_dev")
num_examples = num_mnli_train + num_snli_train
train_ds = (
mnli_train.concatenate(snli_train)
.shuffle(num_mnli_train + num_snli_train, reshuffle_each_iteration=True)
.batch(batch_size, drop_remainder=True)
.map(lambda x: (preprocessor(x[:, 0]), preprocessor(x[:, 1]), preprocessor(x[:, 2])), num_parallel_calls=tf.data.AUTOTUNE)
)
dev_ds = (
xnli_dev
.shuffle(num_xnli_dev)
.batch(batch_size, drop_remainder=True)
.map(lambda x: (preprocessor(x[:, 0]), preprocessor(x[:, 1]), preprocessor(x[:, 2])), num_parallel_calls=tf.data.AUTOTUNE)
)
return (train_ds, dev_ds), num_examples
```
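`get_kornli_dataset` above groups each premise with its entailment and contradiction hypotheses and keeps only premises that have both, yielding (anchor, positive, negative) triplets. A minimal sketch of that grouping logic on made-up rows:

```python
# Group (premise, hypothesis, label) rows into (anchor, positive, negative) triplets.
# Labels follow the notebook's convention: 0 = entailment, 1 = neutral, 2 = contradiction.
rows = [
    ("a man is outside", "a man is outdoors", 0),
    ("a man is outside", "a man is inside", 2),
    ("a man is outside", "a man is tall", 1),  # neutral rows are skipped
    ("a dog runs", "an animal moves", 0),      # no contradiction -> premise dropped
]

sentences = {}
for premise, hypothesis, label in rows:
    sentences.setdefault(premise, {})
    if label != 1:  # skip neutral
        sentences[premise][label] = hypothesis

# Keep only premises that have both an entailment (0) and a contradiction (2).
triplets = [(p, v[0], v[2]) for p, v in sentences.items() if 0 in v and 2 in v]
print(triplets)  # [('a man is outside', 'a man is outdoors', 'a man is inside')]
```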
## Run train
```
preprocessor = create_preprocessing_model()
preprocessor.summary()
import math
with strategy.scope():
(train_ds, dev_ds), num_examples = get_kornli_dataset(preprocessor, BATCH_SIZE)
print("Element spec:", train_ds.element_spec)
print("Num examples:", num_examples)
steps_per_epoch = math.ceil(num_examples / BATCH_SIZE)
print("steps per epoch:", steps_per_epoch)
num_train_steps = steps_per_epoch * EPOCHS
print("total num steps:", num_train_steps)
encoder = create_model()
model = ModelForSimCSE(encoder, TEMPERATURE)
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'],
optimizer=tf.keras.optimizers.Adam(learning_rate=BertScheduler(LEARNING_RATE, WARMUP_RATE, num_train_steps)),
)
model.fit(
train_ds,
epochs=EPOCHS,
validation_data=dev_ds,
)
```
## Run evaluation on KorSTS and KLUE STS
```
from scipy import stats
from tqdm import tqdm
@tf.function
def calculate_similarity(sentence1, sentence2):
representation1 = tf.nn.l2_normalize(encoder(preprocessor(sentence1)), axis=-1)
representation2 = tf.nn.l2_normalize(encoder(preprocessor(sentence2)), axis=-1)
return tf.reduce_sum(representation1 * representation2, axis=-1)
def eval_korsts(ds):
label_score = []
pred_score = []
for item in tqdm(ds):
label_score.append(item["score"].numpy())
pred_score.append(calculate_similarity(item["sentence1"], item["sentence2"]).numpy())
label_score = tf.concat(label_score, axis=0)
pred_score = tf.concat(pred_score, axis=0)
print(stats.spearmanr(label_score, pred_score))
with tf.device('/job:localhost'):
korsts_ds = tfds.load("korsts", split=["dev", "test"], batch_size=32)
korsts_ds = {split: [item for item in korsts_ds[index]] for index, split in ((0, "dev"), (1, "test"))}
print("Evaluate dev")
eval_korsts(korsts_ds['dev'])
print("Evaluate test")
eval_korsts(korsts_ds['test'])
def eval_klue_sts(ds):
label_score = []
pred_score = []
for item in tqdm(ds):
label_score.append(item["label"].numpy())
pred_score.append(calculate_similarity(item["sentence1"], item["sentence2"]).numpy())
label_score = tf.concat(label_score, axis=0)
pred_score = tf.concat(pred_score, axis=0)
print(stats.pearsonr(label_score, pred_score))
with tf.device('/job:localhost'):
klue_sts = tfds.load("klue_sts", split="dev", batch_size=32)
klue_sts = [item for item in klue_sts]
print("Evaluate dev")
eval_klue_sts(klue_sts)
```
## Save model
```
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
encoder.save("klue-roberta-large-simcse", include_optimizer=False, options=save_options)
!zip -r klue-roberta-large-simcse.zip klue-roberta-large-simcse
!ls -alh
```
# Assignment 1: Bottom Duo Tiers
## Load libraries and data
```
import requests
import json
import pandas as pd
import numpy as np
from pandas.io.json import json_normalize
import warnings
warnings.filterwarnings(action='ignore')
from sklearn.preprocessing import StandardScaler,MinMaxScaler
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import math
url='*****************************************************'
data = requests.get(url)
lol_data=data.text
lol_data=lol_data.replace('\n', ',\n')
lol_data='['+lol_data+']'
lol_data=lol_data.replace(']},\n]',']}\n]')
f = open("data.txt", 'w')
f.write(lol_data)
f.close()
lol_data=json.loads(lol_data)
output_df=json_normalize(lol_data)
sample=output_df
sample.reset_index(inplace=True)
del sample['index']
del sample['Unnamed: 0']
sample
```
## Data preprocessing
### teams
#### Brief information on bans and objectives
```
def array_on_duplicate_keys(ordered_pairs):
d = {}
for k, v in ordered_pairs:
if k in d:
if type(d[k]) is list:
d[k].append(v)
else:
d[k] = [d[k],v]
else:
d[k] = v
return d
teams_output = pd.DataFrame(columns = ['firstdragon', 'firstinhibitor', 'pickturn', 'championid', 'baronkills',
'firstriftherald', 'firstbaron', 'riftheraldkills', 'firstblood',
'teamid', 'firsttower', 'vilemawkills', 'inhibitorkills', 'towerkills',
'dominionvictoryscore', 'win', 'dragonkills'])
def split_list(a_list):
half = len(a_list)//2
return a_list[:][:half], a_list[:][half:]
for i in range(len(sample)):
test=sample['teams'][i]
test=test.replace("'", "\"").replace('[{','').replace('}]','').replace('}, {', ', ').replace(' "bans":','').replace('False','\"False\"').replace('True','\"True\"')
test='[{' + test+ '}]'
test=json.loads(test, object_pairs_hook=array_on_duplicate_keys)
test=json_normalize(test)
teams_output=pd.concat([teams_output,test])
teams_output.reset_index(inplace=True)
del teams_output['index']
teams_output.head()
a=[]
b=[]
teams_output_blue=pd.DataFrame()
teams_output_red=pd.DataFrame()
for i in range(teams_output.shape[0]):
for j in range(teams_output.shape[1]):
A,B=split_list(teams_output.iloc[i][j])
a.append(A)
b.append(B)
teams_output_blue=pd.concat([teams_output_blue,pd.DataFrame(pd.Series(a)).transpose()])
teams_output_red=pd.concat([teams_output_red,pd.DataFrame(pd.Series(b)).transpose()])
a=[]
b=[]
teams_output_blue.columns=teams_output.columns
teams_output_red.columns=teams_output.columns
teams_output_blue.reset_index(inplace=True)
teams_output_red.reset_index(inplace=True)
teams_output_blue=teams_output_blue.rename({'championid':'championid_ban'},axis='columns')
teams_output_red=teams_output_red.rename({'championid':'championid_ban'},axis='columns')
del teams_output_blue['index']
del teams_output_red['index']
teams_output_blue.head()
```
### participants
#### Detailed information on team champions, objectives, and kills
```
participants_output=pd.DataFrame()
for i in range(len(sample)):
test=sample['participants'][i]
test=test.replace("'", "\"").replace('[{','').replace('}]','').replace('}, {', ', ').replace(' "bans":','').replace('False','\"False\"').replace('True','\"True\"')
test='[{' + test+ '}]'
test=json.loads(test, object_pairs_hook=array_on_duplicate_keys)
test=json_normalize(test)
participants_output=pd.concat([participants_output,test])
participants_output.reset_index(inplace=True)
del participants_output['index']
participants_output.head()
participants_output_if=pd.DataFrame(columns=['championid', 'kills', 'deaths', 'assists'])
for i in range(len(participants_output)):
participants_output_if = participants_output_if.append(pd.DataFrame([[participants_output['championid'][i],
list(json_normalize(participants_output['stats'][i])['kills']),
list(json_normalize(participants_output['stats'][i])['deaths']),
list(json_normalize(participants_output['stats'][i])['assists'])]], columns=['championid', 'kills', 'deaths', 'assists']), ignore_index=True)
a=[]
b=[]
participants_output_if_blue=pd.DataFrame()
participants_output_if_red=pd.DataFrame()
for i in range(participants_output_if.shape[0]):
for j in range(participants_output_if.shape[1]):
A,B=split_list(participants_output_if.iloc[i][j])
a.append(A)
b.append(B)
participants_output_if_blue=pd.concat([participants_output_if_blue,pd.DataFrame(pd.Series(a)).transpose()])
participants_output_if_red=pd.concat([participants_output_if_red,pd.DataFrame(pd.Series(b)).transpose()])
a=[]
b=[]
participants_output_if_blue.columns=participants_output_if.columns
participants_output_if_red.columns=participants_output_if.columns
participants_output_if_blue.reset_index(inplace=True)
participants_output_if_red.reset_index(inplace=True)
del participants_output_if_blue['index']
del participants_output_if_red['index']
participants_output_if_blue.head()
```
### gameduration
#### Game duration
```
sample['gameduration'].head()
```
### participantextendedstats
#### Tiers of the players in each game
```
participantextendedstats_output=pd.DataFrame()
for i in range(len(sample)):
test=sample['participantextendedstats'][i]
test=test.replace("'", "\"").replace('[{','').replace('}]','').replace('}, {', ', ').replace(' "bans":','').replace('False','\"False\"').replace('True','\"True\"')
test='[{' + test+ '}]'
test=json.loads(test, object_pairs_hook=array_on_duplicate_keys)
test=json_normalize(test)
participantextendedstats_output=pd.concat([participantextendedstats_output,test])
a=[]
b=[]
participantextendedstats_output_blue=pd.DataFrame()
participantextendedstats_output_red=pd.DataFrame()
for i in range(participantextendedstats_output.shape[0]):
for j in range(participantextendedstats_output.shape[1]):
A,B=split_list(participantextendedstats_output.iloc[i][j])
a.append(A)
b.append(B)
participantextendedstats_output_blue=pd.concat([participantextendedstats_output_blue,pd.DataFrame(pd.Series(a)).transpose()])
participantextendedstats_output_red=pd.concat([participantextendedstats_output_red,pd.DataFrame(pd.Series(b)).transpose()])
a=[]
b=[]
participantextendedstats_output_blue.columns=participantextendedstats_output.columns
participantextendedstats_output_red.columns=participantextendedstats_output.columns
participantextendedstats_output_blue.reset_index(inplace=True)
participantextendedstats_output_red.reset_index(inplace=True)
del participantextendedstats_output_blue['index']
del participantextendedstats_output_red['index']
participantextendedstats_output_blue.head()
```
### champion info
#### Champion codes, English names, and Korean names
```
api_key = '**************************************************************'
r = requests.get('https://ddragon.leagueoflegends.com/api/versions.json') # check available versions
current_version = r.json()[0] # take the most recent version
current_version
r = requests.get('http://ddragon.leagueoflegends.com/cdn/{}/data/ko_KR/champion.json'.format(current_version))
parsed_data = r.json()
ch_if = pd.DataFrame(parsed_data)
ch_if_df=pd.DataFrame(columns=['key','name','id'])
for i in range(len(ch_if)):
temp_df=ch_if['data'][i]
temp_df=json_normalize(temp_df)[['key','name','id']]
ch_if_df=pd.concat([ch_if_df,temp_df])
ch_if_df.reset_index(inplace=True)
del ch_if_df['index']
ch_if_df
```
### Calculating pick rates
#### BLUE TEAM
```
adc_sup_pick_blue=pd.DataFrame(participants_output_if_blue['championid'])
adc_sup_pick_blue['adc_champ']=""
adc_sup_pick_blue['adc_champ_name']=""
adc_sup_pick_blue['adc_champ_kill']=''
adc_sup_pick_blue['adc_champ_deaths']=''
adc_sup_pick_blue['adc_champ_assists']=''
adc_sup_pick_blue['sup_champ']=""
adc_sup_pick_blue['sup_champ_name']=""
adc_sup_pick_blue['sup_champ_kill']=''
adc_sup_pick_blue['sup_champ_deaths']=''
adc_sup_pick_blue['sup_champ_assists']=''
adc_sup_pick_blue['win']=teams_output_blue['win']
def champ_name_korean(x):
if x!=-1: return ch_if_df.loc[ch_if_df['key'] == str(x), 'name'].values[0]
return
for i in range(len(adc_sup_pick_blue)):
adc_sup_pick_blue['adc_champ'][i]=adc_sup_pick_blue['championid'][i][participantextendedstats_output_blue['position'][i].index('ADC')]
adc_sup_pick_blue['adc_champ_name'][i]=champ_name_korean(adc_sup_pick_blue['adc_champ'][i])
adc_sup_pick_blue['adc_champ_kill'][i]=participants_output_if_blue['kills'][i][participantextendedstats_output_blue['position'][i].index('ADC')]
adc_sup_pick_blue['adc_champ_deaths'][i]=participants_output_if_blue['deaths'][i][participantextendedstats_output_blue['position'][i].index('ADC')]
adc_sup_pick_blue['adc_champ_assists'][i]=participants_output_if_blue['assists'][i][participantextendedstats_output_blue['position'][i].index('ADC')]
adc_sup_pick_blue['sup_champ'][i]=adc_sup_pick_blue['championid'][i][participantextendedstats_output_blue['position'][i].index('SUPPORT')]
adc_sup_pick_blue['sup_champ_name'][i]=champ_name_korean(adc_sup_pick_blue['sup_champ'][i])
adc_sup_pick_blue['sup_champ_kill'][i]=participants_output_if_blue['kills'][i][participantextendedstats_output_blue['position'][i].index('SUPPORT')]
adc_sup_pick_blue['sup_champ_deaths'][i]=participants_output_if_blue['deaths'][i][participantextendedstats_output_blue['position'][i].index('SUPPORT')]
adc_sup_pick_blue['sup_champ_assists'][i]=participants_output_if_blue['assists'][i][participantextendedstats_output_blue['position'][i].index('SUPPORT')]
del adc_sup_pick_blue['championid']
```
#### RED TEAM
```
adc_sup_pick_red=pd.DataFrame(participants_output_if_red['championid'])
adc_sup_pick_red['adc_champ']=""
adc_sup_pick_red['adc_champ_name']=""
adc_sup_pick_red['adc_champ_kill']=''
adc_sup_pick_red['adc_champ_deaths']=''
adc_sup_pick_red['adc_champ_assists']=''
adc_sup_pick_red['sup_champ']=""
adc_sup_pick_red['sup_champ_name']=""
adc_sup_pick_red['sup_champ_kill']=''
adc_sup_pick_red['sup_champ_deaths']=''
adc_sup_pick_red['sup_champ_assists']=''
adc_sup_pick_red['win']=teams_output_red['win']
for i in range(len(adc_sup_pick_red)):
adc_sup_pick_red['adc_champ'][i]=adc_sup_pick_red['championid'][i][participantextendedstats_output_red['position'][i].index('ADC')]
adc_sup_pick_red['adc_champ_name'][i]=champ_name_korean(adc_sup_pick_red['adc_champ'][i])
adc_sup_pick_red['adc_champ_kill'][i]=participants_output_if_red['kills'][i][participantextendedstats_output_red['position'][i].index('ADC')]
adc_sup_pick_red['adc_champ_deaths'][i]=participants_output_if_red['deaths'][i][participantextendedstats_output_red['position'][i].index('ADC')]
adc_sup_pick_red['adc_champ_assists'][i]=participants_output_if_red['assists'][i][participantextendedstats_output_red['position'][i].index('ADC')]
adc_sup_pick_red['sup_champ'][i]=adc_sup_pick_red['championid'][i][participantextendedstats_output_red['position'][i].index('SUPPORT')]
adc_sup_pick_red['sup_champ_name'][i]=champ_name_korean(adc_sup_pick_red['sup_champ'][i])
adc_sup_pick_red['sup_champ_kill'][i]=participants_output_if_red['kills'][i][participantextendedstats_output_red['position'][i].index('SUPPORT')]
adc_sup_pick_red['sup_champ_deaths'][i]=participants_output_if_red['deaths'][i][participantextendedstats_output_red['position'][i].index('SUPPORT')]
adc_sup_pick_red['sup_champ_assists'][i]=participants_output_if_red['assists'][i][participantextendedstats_output_red['position'][i].index('SUPPORT')]
del adc_sup_pick_red['championid']
adsup_pick=pd.concat([adc_sup_pick_blue,adc_sup_pick_red])
adsup_pick.reset_index(inplace=True)
del adsup_pick['index']
adsup_pick.head()
adsup_pickrate=pd.DataFrame(adsup_pick.groupby(['adc_champ_name','sup_champ_name']).count()['adc_champ']).sort_values(by='adc_champ',axis=0,ascending=False)
adsup_pickrate['duo_pickrate']=''
adsup_pickrate['duo_pickrate']=adsup_pickrate['adc_champ']/len(sample)
adsup_pickrate=adsup_pickrate.rename({'adc_champ':'duo_pickcount'},axis='columns')
adsup_pickrate.reset_index(inplace=True)
adsup_pickrate
```
### Calculating win rates
```
adsup_winrate=adsup_pick
adsup_winrate = adsup_winrate.astype({'win': 'str'})
adsup_winrate["win"] = adsup_winrate["win"].apply(lambda x: 1 if x=="['Win']" else 0)
adsup_winrate=adsup_winrate.groupby(['adc_champ_name','sup_champ_name']).mean().sort_values(by='win',axis=0,ascending=False)
adsup_winrate
adsup_winrate.reset_index(inplace=True)
adsup_winrate=adsup_winrate.rename({'win':'duo_winrate'},axis='columns')
adsup_winrate
```
### Calculating mean KDA
```
adsup_kdamean=adsup_pick
adsup_kdamean['adc_kda']=''
adsup_kdamean['sup_kda']=''
for i in range(len(adsup_kdamean)):
if adsup_kdamean['adc_champ_deaths'][i]!=0:
adsup_kdamean['adc_kda'][i]=(adsup_kdamean['adc_champ_kill'][i]+adsup_kdamean['adc_champ_assists'][i])/adsup_kdamean['adc_champ_deaths'][i]
else:
adsup_kdamean['adc_kda'][i]=(adsup_kdamean['adc_champ_kill'][i]+adsup_kdamean['adc_champ_assists'][i])*1.2
if adsup_kdamean['sup_champ_deaths'][i]!=0:
adsup_kdamean['sup_kda'][i]=(adsup_kdamean['sup_champ_kill'][i]+adsup_kdamean['sup_champ_assists'][i])/adsup_kdamean['sup_champ_deaths'][i]
else:
adsup_kdamean['sup_kda'][i]=(adsup_kdamean['sup_champ_kill'][i]+adsup_kdamean['sup_champ_assists'][i])*1.2
adsup_kdamean['duo_kda']=(adsup_kdamean['adc_kda']+adsup_kdamean['sup_kda'])/2
adsup_kdamean.head()
adsup_kdamean = adsup_kdamean.astype({'duo_kda': 'int'})
adsup_kdamean=adsup_kdamean.groupby(['adc_champ_name','sup_champ_name']).mean()
adsup_kdamean.reset_index(inplace=True)
adsup_kdamean
```
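The KDA convention used above — (kills + assists) / deaths, with a ×1.2 bonus on (kills + assists) for deathless games — can be isolated as a small helper. This is only a sketch of the notebook's formula, not part of the original pipeline:

```python
def kda(kills, deaths, assists):
    """(kills + assists) / deaths; a deathless game instead gets (kills + assists) * 1.2."""
    if deaths != 0:
        return (kills + assists) / deaths
    return (kills + assists) * 1.2

print(kda(5, 2, 9))  # 7.0
print(kda(3, 0, 4))  # perfect-game bonus: (3 + 4) * 1.2
```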
## Merging the data
```
adsup_stat=pd.merge(adsup_winrate,adsup_pickrate,on=['adc_champ_name','sup_champ_name'])
adsup_stat=pd.merge(adsup_stat,adsup_kdamean,on=['adc_champ_name','sup_champ_name'])
adsup_stat=adsup_stat.rename({'win':'duo_winrate'},axis='columns')
```
## Final data
```
adsup_stat
```
# Data analysis
## Topic: bottom duo tiers based on pick rate, win rate, and KDA
1. Which duo is picked the most?
2. Which duo has the highest win rate?
3. Which duo has the highest tier (based on pick rate, win rate, and KDA)?
While playing LoL you can unexpectedly be assigned the support or ADC position. A bottom duo has to act as if the two players were one body in the bottom lane in order to secure CS and kill enemy champions, which in turn affects the outcome of the game.
This analysis assigns tiers based on the bottom-duo combinations of high-rank users, with the goal of helping players who don't main bottom lane, or lower-rank players, put together better duo compositions from the results.
Data used
- teams_output_blue , red : team-level information (bans, objectives, etc.)
- participants_output_if_blue , red : information on each player's champion
- gameduration : game length
- participantextendedstats_output_blue , red : player positions and tiers
- ch_if_df : champion information
## 1. Which duo is picked the most?
```
adsup_stat['duo_pickcount'].mean()
plt.hist(adsup_stat['duo_pickcount'])
```
- LoL has a huge number of ADC and support champions, including off-meta ADCs and supports who drifted down from other lanes, so it is impossible to consider every combination.
- Combinations with too little data are dropped.
- The average duo appears 66 times, but the data is extremely concentrated on the left, so we arbitrarily keep only duos picked more than 1,000 times.
```
adsup_stat_cut=adsup_stat[adsup_stat['duo_pickcount']>1000]
plt.hist(adsup_stat_cut['duo_pickcount']) # a fairly even distribution
len(adsup_stat_cut) # 95 duos across roughly 170k games
adsup_stat_cut.sort_values(by='duo_pickcount',axis=0,ascending=False)[0:10]
```
- The Ezreal–Yuumi duo appears the most. Yuumi attaches herself to the ADC, so Ezreal, with his superior escape tools, shows up frequently alongside her.
- Next come Caitlyn–Lux and Caitlyn–Morgana. Caitlyn needs to use her traps to land as many headshots as possible and play the lane aggressively, so Lux and Morgana, who both have long-range binds, are preferred.
- After that comes Ezreal–Karma, another hard-poke composition.
- All of these are combinations you see regularly in solo queue, and most of them also posted win rates above 50%.
## 2. Which duo has the highest win rate?
```
adsup_stat_cut.sort_values(by='duo_winrate',axis=0,ascending=False)[0:10]
```
- With Ashe–Nautilus, getting caught by CC very often means dying on the spot. Both champions carry plenty of crowd control, and the duo plays a strong lane through Ashe's long range and Nautilus's hook.
- Jhin–Senna supports the rest of the team's carries with long-range damage and CC. They are likewise a well-matched pair.
- Lucian–Pyke has a short range, but once Pyke lands his CC the duo can kill both opponents.
- The rest are champions known to pair well in solo queue.
## 3. Which duo has the highest tier (based on pick rate, win rate, and KDA)?
- We split the duos into five tiers based on duo pick rate, duo win rate, and duo KDA.
- K-means clustering was run on these three variables, and tiers were assigned according to the resulting clusters.
```
adsup_stat_clustering=adsup_stat_cut[['adc_champ_name','sup_champ_name','duo_winrate','duo_pickrate','duo_kda']]
adsup_stat_clustering.reset_index(inplace=True)
del adsup_stat_clustering['index']
adsup_stat_clustering
X=np.array(adsup_stat_clustering.iloc[:,2:5])
X[0:5]
scaler=StandardScaler()
X_train_scale=scaler.fit_transform(X)
adsup_stat_clustering
X=pd.DataFrame(X_train_scale,columns=adsup_stat_clustering.columns[2:5])
X.head()
model = KMeans(n_clusters=5,algorithm='auto')
feature = X[['duo_winrate','duo_pickrate','duo_kda']]
model.fit(feature)
predict = pd.DataFrame(model.predict(feature))
predict.columns=['predict']
adsup_stat_clustering_output=pd.concat([adsup_stat_clustering,predict], axis=1)
adsup_stat_clustering_output.head()
count=adsup_stat_clustering_output.groupby('predict').count()['adc_champ_name']
count
X=pd.concat([X,predict],axis=1)
X.groupby('predict').mean().mean(axis=1)
# cluster 2 -> tier 1, 4 -> tier 2, 0 -> tier 3, 3 -> tier 4, 1 -> tier 5
adsup_stat_clustering_output['tier']=''
for i in range(len(adsup_stat_clustering_output)):
if adsup_stat_clustering_output['predict'][i]==2:
adsup_stat_clustering_output['tier'][i]='1티어'
elif adsup_stat_clustering_output['predict'][i]==4:
adsup_stat_clustering_output['tier'][i]='2티어'
elif adsup_stat_clustering_output['predict'][i]==0:
adsup_stat_clustering_output['tier'][i]='3티어'
elif adsup_stat_clustering_output['predict'][i]==3:
adsup_stat_clustering_output['tier'][i]='4티어'
else:
adsup_stat_clustering_output['tier'][i]='5티어'
del adsup_stat_clustering_output['predict']
adsup_stat_clustering_output.head()
adsup_stat_clustering_output[adsup_stat_clustering_output['tier']=='1티어'] # tier-1 bottom duos
```
- The tier-1 bottom duos are shown above.
- They appeared in a large share of games and show both high win rates and high pick rates.
- Most are combinations everyone has seen at least once in solo queue, and some occasionally show up in professional play as well.
```
adsup_stat_clustering_output[adsup_stat_clustering_output['tier']=='2티어'] # tier-2 bottom duos
adsup_stat_clustering_output[adsup_stat_clustering_output['tier']=='3티어'] # tier-3 bottom duos
adsup_stat_clustering_output[adsup_stat_clustering_output['tier']=='4티어'] # tier-4 bottom duos
adsup_stat_clustering_output[adsup_stat_clustering_output['tier']=='5티어'] # tier-5 bottom duos
```
- These are the tier-5 bottom duos. Keep in mind that the analysis only covers duos with more than 1,000 recorded games, so these combinations are not necessarily bad.
- Their metrics are simply somewhat lower than those of the higher-tier compositions.
# Limitations
- The analysis only considers duo win rate, duo pick rate, and KDA, and ignores many other variables such as damage dealt or gold income. KDA in particular is a metric with many blind spots, so it may need correction or replacement with other data.
- Pick rate seems to have carried too much weight in the clustering. A universally popular pick like Caitlyn or Ezreal is not necessarily top tier.
- More sophisticated analyses certainly exist.
# Things to improve
- I need to become more comfortable working with JSON. Loading the JSON files took an enormous amount of time, and in the end I resorted to arbitrarily rewriting their contents just to load them. Since this was a results-oriented analysis, the code will definitely need optimization later.
- I also need a deeper understanding of League of Legends. There are surely better insights to be extracted from the detailed game data.
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
```
# Exploratory Climate Analysis
```
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
# Perform a query to retrieve the data and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
# Use Pandas Plotting with Matplotlib to plot the data
# Use Pandas to calculate the summary statistics for the precipitation data
# Design a query to show how many stations are available in this dataset
# What are the most active stations (i.e. which stations have the most rows)?
# List the stations and the counts in descending order.
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station.
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
```
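The commented steps above follow a "find the latest date, go back one year, query the window" pattern. Here is a minimal sketch of that pattern against a small in-memory SQLite table (the table contents are made up; the real notebook queries `hawaii.sqlite` through SQLAlchemy):

```python
import datetime as dt
import sqlite3

# Build a tiny stand-in measurement table in memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurement (date TEXT, prcp REAL)")
rows = [("2016-08-23", 0.0), ("2017-01-15", 0.3), ("2017-08-23", 1.2), ("2015-06-01", 0.5)]
conn.executemany("INSERT INTO measurement VALUES (?, ?)", rows)

# Latest date in the table, then the date one year earlier.
latest = conn.execute("SELECT MAX(date) FROM measurement").fetchone()[0]
year_ago = dt.date.fromisoformat(latest) - dt.timedelta(days=365)

# Retrieve the last 12 months of precipitation, sorted by date.
recent = conn.execute(
    "SELECT date, prcp FROM measurement WHERE date >= ? ORDER BY date",
    (year_ago.isoformat(),),
).fetchall()
print(recent)  # three rows, from 2016-08-23 through 2017-08-23
```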
## Bonus Challenge Assignment
```
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
    TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
```
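The commented steps above (build a date range, strip the year into `%m-%d` strings, loop the normals) can be sketched with the standard library alone; the trip dates and the stubbed `daily_normals` values below are made up for illustration:

```python
import datetime as dt

start, end = dt.date(2017, 8, 1), dt.date(2017, 8, 4)

# Build the date range, then strip the year into %m-%d strings.
trip_dates = [start + dt.timedelta(days=i) for i in range((end - start).days + 1)]
month_days = [d.strftime("%m-%d") for d in trip_dates]

def daily_normals_stub(month_day):
    """Stand-in for the real query: returns [(tmin, tavg, tmax)] for a %m-%d string."""
    return [(65.0, 72.5, 80.0)]

# Loop over the %m-%d strings and push each tuple into `normals`.
normals = [daily_normals_stub(md)[0] for md in month_days]
print(month_days)  # ['08-01', '08-02', '08-03', '08-04']
```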
```
try:
import tinygp
except ImportError:
%pip install -q tinygp
try:
import george
except ImportError:
%pip install -q george
from jax.config import config
config.update("jax_enable_x64", True)
```
(george)=
# Comparison With george
One of the `tinygp` design decisions was to provide a high-level API similar to the one provided by the [george](https://george.readthedocs.io) GP library.
This was partly because I (as the lead developer of `george`) wanted to ease users' transitions away from `george` to something more modern (like `tinygp`).
I also quite like the `george` API and don't think that many similar tools exist.
The defining feature is that `tinygp` does not include built-in implementations of inference algorithms.
Instead, it provides an expressive model-building interface that makes it easy to experiment with different kernels while still integrating with your favorite inference engine.
In this document, we compare the interfaces and computational performance of `george` and `tinygp` for constructing kernel models and evaluating the GP marginalized likelihood.
Since `tinygp` supports GPU-acceleration, we have executed this notebook on a machine with the following GPU:
```
!nvidia-smi --query-gpu=gpu_name --format=csv
```
By default, the CPU versions of both `george` and `tinygp` will also use parallelized linear algebra libraries to take advantage of multiple CPU threads; however, to make the benchmarks more replicable, we'll disable this parallelization for the remainder of this notebook:
```
import os
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["XLA_FLAGS"] = (
os.environ.get("XLA_FLAGS", "")
+ " --xla_cpu_multi_thread_eigen=false intra_op_parallelism_threads=1"
)
```
Then we generate some simulated data and define functions for computing the GP log likelihood using `george` and `tinygp` (with separate CPU and GPU version).
As mentioned above, the syntax of these functions is quite similar, but there are a few differences.
Most notably, the units of the "metric" or "length scale" parameter in the kernel is different (length-squared in `george` and not squared in `tinygp`).
Also, the `gp.compute` method no longer exists in `tinygp` since this would be less compatible with `jax`'s preference for pure functional programming.
```
from functools import partial
import numpy as np
import matplotlib.pyplot as plt
import jax
import jax.numpy as jnp
import george
import tinygp
sigma = 1.5
rho = 2.5
jitter = 0.1
random = np.random.default_rng(49382)
x = np.sort(random.uniform(0, 10, 20_000))
y = np.sin(x) + jitter * random.normal(0, 1, len(x))
def george_loglike(x, y, **kwargs):
kernel = sigma**2 * george.kernels.Matern32Kernel(rho**2)
gp = george.GP(kernel, **kwargs)
gp.compute(x, jitter)
return gp.log_likelihood(y)
def tinygp_loglike(x, y):
kernel = sigma**2 * tinygp.kernels.Matern32(rho)
gp = tinygp.GaussianProcess(kernel, x, diag=jitter**2)
return gp.condition(y)
hodlr_loglike = partial(
george_loglike, solver=george.solvers.HODLRSolver, tol=0.5
)
tinygp_loglike_cpu = jax.jit(tinygp_loglike, backend="cpu")
tinygp_loglike_gpu = jax.jit(tinygp_loglike, backend="gpu")
```
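As a quick check of the length-scale convention mentioned above: with `r = |x - x'|`, both `george.kernels.Matern32Kernel(rho**2)` and `tinygp.kernels.Matern32(rho)` evaluate the same Matérn-3/2 form, sketched here in plain Python (a sketch of the closed-form kernel, not either library's implementation):

```python
import math

def matern32(r, rho):
    """Matern-3/2 correlation with length scale rho (the non-squared convention)."""
    arg = math.sqrt(3.0) * r / rho
    return (1.0 + arg) * math.exp(-arg)

# george takes the *squared* length scale, so george's metric=rho**2
# corresponds to the same rho here; at r = 0 the correlation is 1 by construction.
print(matern32(0.0, 2.5))  # 1.0
```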
Now we benchmark the computational cost of computing the log likelihood using each of these methods:
```
ns = [10, 20, 100, 200, 1_000, 2_000, 10_000, len(x)]
george_time = []
hodlr_time = []
cpu_time = []
gpu_time = []
for n in ns:
print(f"\nN = {n}:")
args = x[:n], y[:n]
gpu_args = jax.device_put(x[:n]), jax.device_put(y[:n])
if n < 10_000:
results = %timeit -o george_loglike(*args)
george_time.append(results.average)
tinygp_loglike_cpu(*args).block_until_ready()
results = %timeit -o tinygp_loglike_cpu(*args).block_until_ready()
cpu_time.append(results.average)
results = %timeit -o hodlr_loglike(*args)
hodlr_time.append(results.average)
tinygp_loglike_gpu(*gpu_args).block_until_ready()
results = %timeit -o tinygp_loglike_gpu(*gpu_args).block_until_ready()
gpu_time.append(results.average)
```
In the plot of this benchmark, you'll notice several features:
1. For very small datasets, the `tinygp` CPU implementation is significantly faster than any of the other implementations. This is because `jax.jit` removes a lot of the Python overhead that is encountered when chaining `numpy` functions.
2. For medium to large datasets, `tinygp` is generally faster than `george`, with the GPU version seeing a significant advantage.
3. The CPU implementations approach the expected asymptotic complexity of $\mathcal{O}(N^3)$ only for the largest values of $N$. This is probably caused by memory allocation overhead or other operations with better scaling than the Cholesky factorization.
4. The approximate "HODLR" solver from `george` outperforms the GPU-enabled exact solver in `tinygp`, but only for very large datasets, and it's important to note that the HODLR method does not scale well to higher input dimensions. Any existing or future approximate solver like this that is implemented in `jax` could easily be used in conjunction with `tinygp`, but none has been implemented yet.
```
plt.loglog(ns[: len(george_time)], george_time, "o-", label="george (basic)")
plt.loglog(ns, hodlr_time, "o-", label="george (HODLR)")
plt.loglog(ns[: len(cpu_time)], cpu_time, "o-", label="tinygp (CPU)")
plt.loglog(ns, gpu_time, "o-", label="tinygp (GPU)")
ylim = plt.ylim()
plt.loglog(
    ns,
    0.5 * np.array(ns) ** 3 / ns[len(cpu_time) - 1] ** 3 * cpu_time[-1],
    ":k",
    label="O($N^3$)",
)
plt.ylim(ylim)
plt.legend()
plt.xlabel("number of data points")
plt.ylabel("runtime [s]");
```
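One way to check observation 3 numerically is to estimate the empirical scaling exponent from the measured runtimes: on log-log axes, the slope of runtime versus $N$ approximates the exponent $p$ in $\mathcal{O}(N^p)$. A minimal sketch of that fit; the `times` values here are hypothetical (a cubic term plus constant overhead), not measured:

```python
import numpy as np

def scaling_exponent(ns, times):
    """Least-squares slope of log(time) vs log(n): ~p when time scales as n**p."""
    slope, _ = np.polyfit(np.log(ns), np.log(times), 1)
    return slope

# Hypothetical runtimes: a cubic term plus a constant overhead term
ns = np.array([100, 1_000, 10_000])
times = 1e-4 + 1e-12 * ns.astype(float) ** 3
print(scaling_exponent(ns, times))  # noticeably below 3 because overhead dominates at small n
```

Applied to the real `cpu_time` and `gpu_time` arrays above, the fitted slope should drift toward 3 only on the largest-$N$ portion of the curve.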
# Make realistic spectra
Make our generated data look more like real data
```
import os
import sys

module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
    sys.path.append(module_path)

from src.spectra.gen_spectra import gen_spectrum
from random import randint
from collections import namedtuple
import numpy as np

RealisticSpectrum = namedtuple('RealisticSpectrum', ['spectrum', 'abundance', 'precursor_mass'])

def gen_realistic_spectra(sequences: list) -> list:
    '''Generate a spectrum for each sequence, then perturb it to look like real data.'''
    realistic_spectra = []
    for seq in sequences:
        spec_props = gen_spectrum(seq)
        spec = spec_props['spectrum']
        precursor = spec_props['precursor_mass']

        # Mess with it
        # 1. Drop out peaks at a random rate between 60% and 85%
        dropout_rate = randint(60, 85)
        dropouts = [randint(0, 100) < dropout_rate for _ in range(len(spec))]
        leftover_peaks = [spec[i] for i in range(len(spec)) if not dropouts[i]]

        # 2. Introduce mass errors drawn from a Pareto distribution
        for i in range(len(leftover_peaks)):
            factor = 1 if randint(0, 10) < 5 else -1
            leftover_peaks[i] += factor * np.random.pareto(600)  # shape found from experiments

        # 3. Pick the abundances
        abundances = np.random.pareto(1, len(leftover_peaks)) * 2000

        realistic_spectra.append(RealisticSpectrum(leftover_peaks, abundances, precursor))
    return realistic_spectra
```
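The three perturbation steps can be exercised on their own, without `gen_spectrum`. This sketch applies them to a hypothetical list of peak m/z values; the peak list is illustrative, while the Pareto shapes mirror the function above:

```python
import numpy as np

rng = np.random.default_rng(0)
peaks = np.linspace(100.0, 1500.0, 50)  # hypothetical peak m/z values

# 1. Drop out peaks with a random per-spectrum rate between 60% and 85%
dropout_rate = rng.uniform(0.60, 0.85)
kept = peaks[rng.random(len(peaks)) >= dropout_rate]

# 2. Perturb each surviving peak by a small Pareto-distributed mass error
signs = rng.choice([-1.0, 1.0], size=len(kept))
kept = kept + signs * rng.pareto(600, size=len(kept))

# 3. Assign heavy-tailed abundances
abundances = rng.pareto(1, len(kept)) * 2000

print(len(kept))
```

Because both the dropout and the abundances are heavy-tailed or random per spectrum, repeated calls give spectra of different sizes, which is what makes the output look like real acquisitions.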
## Make sequences
```
from src.file_io import fasta
fasta_file = '../../testing framework/data/databases/100prots.fasta'
database = fasta.read(fasta_file, True)
database = {x['name']: x for x in database}
from modules.sequence_generation import proteins, peptides
test_directory = '../data/testing_output/'
num_hybs = 5
min_length = 5
max_length = 20
num_peptides = 100
min_cont = 3  # minimum contribution from each side of a hybrid
# make hybrid proteins
hyb_prots = proteins.generate_hybrids([x for _, x in database.items()], num_hybs, min_contribution=max_length)
# create peptides
non_hybrid_peps = peptides.gen_peptides([x for _, x in database.items()], num_peptides, min_length=min_length, max_length=max_length, digest='random', dist='beta')
# create hybrid peptides
hyb_peps = peptides.gen_peptides(hyb_prots, num_hybs, min_length=min_length, max_length=max_length, digest='random', min_contribution=min_cont, hybrid_list=True)
all_proteins_raw = [x for _,x in database.items()] + hyb_prots
all_peptides_raw = non_hybrid_peps + hyb_peps
peptides = {}
for i, pep in enumerate(all_peptides_raw):
    peptides[i] = pep
    peptides[i]['scan_no'] = i

import json

experiment_info_file_name = 'experiment_info.json'
exp = {'database': fasta_file, 'peptides': peptides}
with open(test_directory + experiment_info_file_name, 'w') as o:
    json.dump(exp, o)
from modules.sequence_generation import write_spectra
spectra = gen_realistic_spectra([p['sequence'] for p in all_peptides_raw])
write_spectra.write_mzml('realisticSpectra', [x._asdict() for x in spectra], output_dir=test_directory)
```
<a href="https://colab.research.google.com/github/lordfiftyfive/Special-Projects/blob/master/JANUSPHASETHREEPOINTTWO.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#!pip install --upgrade tf-nightly tfp-nightly
#!pip uninstall tensorflow
!pip install tensorflow==2.0.0-beta1
#!pip install tensorflow==2.0.0rc0
#!pip install tensorflow-probability
#!pip install gaussian_processes
!pip install quandl
#!pip install rpy2
!pip install bootstrapped
!pip install hurst
!pip install stldecompose
!pip install GPyOpt
!pip install parameter-sherpa
#!pip install tensorflow-probability==0.8.0
#!pip install gpflow==1.5.1
#!pip install gpflow==2.0.0rc1
#!git clone git://github.com/statsmodels/statsmodels.git
#!pip install git+https://github.com/statsmodels/statsmodels
#install-gpflow-20-beta-version.
```
Possible additional data:
https://usafacts.org/metrics/33937?regions=US&year=2016
https://usafacts.org/metrics/55121?regions=US&year=2018
```
import tensorflow as tf
import tensorflow_probability as tfp
import quandl
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
import rpy2
#from rpy2.robjects.packages import importr
from sklearn.decomposition import PCA, KernelPCA
#from keras.layers import Input, Dense, Activation, Flatten, BatchNormalization
#from tensorflow_probability import sts
import pandas as pd
import bootstrapped.bootstrap as bs
import bootstrapped.stats_functions as bs_stats
from scipy.stats import ttest_ind, ttest_ind_from_stats,chisquare, mstats
import seaborn as sns
import statsmodels.api
#from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.seasonal import seasonal_decompose
from hurst import compute_Hc, random_walk
import scipy.integrate as integrate
from stldecompose import decompose
import sherpa
#import gpflow
```
SEE THIS LINK FOR JANUS PHASE THREE
https://github.com/GPflow/GPflow/issues/1007
```
# -*- coding: utf-8 -*-
"""
Spyder Editor
This is a temporary script file.
"""
"""
Importing libraries
"""
#import os
#os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
"""
import keras
from keras.layers import Input, Dense, Activation, Flatten, BatchNormalization, RNN, LSTM
import numpy as np
#from numpy import array
from keras.models import load_model, Model, model_from_json, Sequential
from keras.callbacks import EarlyStopping
import pandas as pd
#import datetime
#import requests
#import sys
#from time import time, sleep
#import time as systime
import quandl
#import math
#import collections, itertools
import tensorflow as tf
#from sklearn import preprocessing
#from sklearn.preprocessing import Imputer, StandardScaler, Normalizer
from sklearn.preprocessing import MinMaxScaler
from scipy.stats import mstats
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind, ttest_ind_from_stats,chisquare
from sklearn.decomposition import PCA
"""
"""
@misc{,
title={EconomicGrowthPredictor},
author={Subarno},
year={2018},
}
"""
# in supervised clustering samples are in rows and features are in columns
"""
unsupervised learning can be used to find hidden patterns data
"""
"""
Getting data
"""
quandl.ApiConfig.api_key = 'DNMZo2iRzVENxpxqHBKF'
#te.getCalendarData(country=['united states', 'china'], category=['imports','exports'],
#initDate='2017-06-07', endDate='2017-12-31',
#output_type='df')
datafour = quandl.get("YALE/SPCOMP", authtoken="DNMZo2iRzVENxpxqHBKF", transform="rdiff", collapse="quarterly", start_date="1959-09-30", end_date="2018-12-31")# earliest date is 1960-06-30 #quandl.get("YALE/CPIQ", authtoken="DNMZo2iRzVENxpxqHBKF", collapse="quarterly", start_date="1970-12-31", end_date="2016-03-31")#quandl.get("UNAE/GDPCD_USA", authtoken="DNMZo2iRzVENxpxqHBKF", end_date="2016-12-31")# quandl.get("UNAE/GDPCD_USA", authtoken="DNMZo2iRzVENxpxqHBKF", end_date="2016-12-31")
Data_to_predict = quandl.get("FRBP/GDPPLUS_042619", authtoken="DNMZo2iRzVENxpxqHBKF", collapse="quarterly")#quandl.get("FRBP/GDPPLUS_042619", authtoken="DNMZo2iRzVENxpxqHBKF", transform="rdiff")#quandl.get("FRBP/GDPPLUS", authtoken="DNMZo2iRzVENxpxqHBKF", collapse="quarterly", start_date="1960-06-30")
datafive = quandl.get("FRED/PCETRIM1M158SFRBDAL", authtoken="DNMZo2iRzVENxpxqHBKF", collapse="quarterly", start_date="1977-02-01",end_date="2016-03-31")#quandl.get("FRED/VALEXPUSM052N", authtoken="DNMZo2iRzVENxpxqHBKF", transform="rdiff", collapse="quarterly", start_date="1960-09-30")#quandl.get("WWDI/USA_NE_GDI_TOTL_CD", authtoken="DNMZo2iRzVENxpxqHBKF", start_date="1970-12-31")
Data_To_predict = Data_to_predict.values
DESPAIR = quandl.get("USMISERY/INDEX", authtoken="DNMZo2iRzVENxpxqHBKF", transform="rdiff", collapse="quarterly", start_date="1959-09-30", end_date="2018-12-31")
Debt_data_change_of_change = quandl.get("FRED/NCBDSLQ027S", authtoken="DNMZo2iRzVENxpxqHBKF", transform="rdiff",collapse="quarterly",start_date="1959-06-30",end_date="2018-12-31")
DataSix = quandl.get("FRED/ROWFDIQ027S", authtoken="DNMZo2iRzVENxpxqHBKF", transform="rdiff", collapse="quarterly", start_date="1959-06-30",end_date="2018-12-31")
print(DataSix)
#Data_To_Predict = np.matrix(Data_To_predict)
print("datasix")
#print(Data_To_Predict)
#print("input")
#print(Datafour)
#early_stop = EarlyStopping(monitor='loss',patience=5, verbose=1)
data_to_predict = Data_To_predict#[-47::]#np.reshape(Data_TO_predict, (9,6)
"""
#Ask whether my model is making good predictions or whether the predictions are due to the loss of information from the normalization procedure of the outputs
#create a plot of the data before this
#split_date= pd.Timestamp('01-01-2011')
#train = df.loc(:split_date,[''])
#test = df.loc(split_date:, [''])
"""
#data preprocessing
trainingtarget = data_to_predict#normalizer.transform(data_to_predict)
#mstats.winsorize()
Scalar = MinMaxScaler(feature_range=(0,1))
#trainingtarget = Scalar.fit_transform(trainingtarget)
#datafour = preprocessing.scale(datafour)
#trainingtarget = preprocessing.scale(trainingtarget)
datasix = pd.DataFrame(DataSix)
datafour = pd.DataFrame(datafour)
datasix = datasix.drop(datasix.index[0])
print("data six")
print(datasix)
#
#class statsmodels.tsa.seasonal.STL(endog, period=None, seasonal=7, trend=None, low_pass=None, seasonal_deg=0, trend_deg=0, low_pass_deg=0, robust=False, seasonal_jump=1, trend_jump=1, low_pass_jump=1)¶
datafour = pd.concat([datafour, Debt_data_change_of_change,DESPAIR], axis=1)
#datafour = datafour.fillna(0)
data_to_predict = pd.DataFrame(data_to_predict)
#z = pd.concat([datafour,data_to_predict])
#print(z.ndim)
#z = Scalar.fit(z)
#z = np.array(z)
#z = np.vsplit(z,7)
#datafour = z[0]
#trainingtarget = z[1]
#z = pd.concat(datafour,trainingtarget)
#z = Scalar.fit_transform(z)
#print(z.shape())
#print(z)
print(datafour)
print(trainingtarget)
pca = PCA()
a = decompose(trainingtarget)#seasonal_decompose(trainingtarget,model='additive',freq=1)
a.plot()
print(trainingtarget)
plt.plot(trainingtarget)
#trainingtarget = a.trend
datafour = Scalar.fit_transform(datafour)
datafour = pca.fit_transform(datafour)
#datafour = normalizer.fit(datafour)
trainingtarget = Scalar.fit_transform(trainingtarget)
y = trainingtarget
X = datafour
#trainingtarget = pd.DataFrame(trainingtarget)
#trainingtarget = pd.DataFrame(trainingtarget)
#time_index = pd.date_range(start=1959.6, end=2018.8, periods=237)#np.linspace(1959.6,2018.8,237)#np.linspace(0, 10, N, endpoint=True)
#trainingtarget = trainingtarget.set_index(time_index)
#trainingtarget = statsmodels.tsa.seasonal.STL(trainingtarget,period=None)
#Reminder: look into possibility that the DATES column is being included into datafour
#inputOne = len(dataThree)
#print(inputOne)
#print(dataThree)
#x_train = Scalar.fit_transform(datafour[:220:])
#y_train = Scalar.fit_transform(trainingtarget[:220:])
#x_train = pca.fit(x_train)
#x_test = Scalar.transform(datafour[220::])
print("shape of x")
print(X.shape)
y = trainingtarget#[:, None]
#Xshape = np.arange(dataThree).reshape(86, 1)
#xshape.flat(86)
#ConsolidatedInput = pd.merge(dataThree,dataTwo,dataOne)
#b = dataThree.flatten('K')
#xShape = X[:,None]#datafour.shape[1]
#print(xShape)
#X=np.reshape(X.shape[1],X.shape[0])
#datafour = tf.cast(datafour, tf.float32)
#trainingtarget = tf.cast(trainingtarget, tf.float32)
#Deep learning algorithm
"""
inputs = tf.keras.Input(shape=(1,13))
first = tf.keras.layers.LSTM(10,activation='relu',kernel_initializer='ones', use_bias=False)(inputs)
u = tf.keras.layers.Dense(250,activation='relu',kernel_initializer='ones',use_bias=False)(first)
b = tf.keras.layers.BatchNormalization()(u)
u = tf.keras.layers.Dense(500, activation='relu',kernel_initializer='ones',use_bias=False)(b)
u = tf.keras.layers.Dense(10, activation='relu')(u)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(u)
Model = tf.keras.Model(inputs=inputs, outputs=outputs)
#Adadelta
Model.compile(optimizer='Adagrad',
loss='MSLE',
metrics=['accuracy'])
Model.fit(X,Y,epochs=8, verbose=1,validation_split=0.2)#, callbacks=[early_stop])
Prediction = Model.predict(X[189::])
print(Prediction)
print(Prediction.shape)
u = 219
Model.summary()
"""
#a = model.evaluate(trainingtarget[180::], Prediction, verbose=0)
#.save_weights('weights-init.h5')
#trainingtarget = trainingtarget#[:-15:]
#trainingtarget = normalizer.transform(trainingtarget)
#blue is prediction orange is actual
#Note to self: check to see that it is actually comparing predictions to TEST DATA
#Graphing
```
Hypotheses underlying this algorithm:
1. The data exhibits long-term dependence. This is in part because we believe components of the S&P 500 input variable exhibit long-term dependence. Note: this hypothesis has been confirmed.
2. If hypothesis 1 is true, the final posterior distribution does NOT follow a standard normal distribution.
3. The input variables DO NOT follow independent standard normal distributions but are correlated with one another.

What we need is a kernel that takes into account the fact that the data is a time series (in other words, an autocovariance function), and that autocovariance function must have long-term dependence built in.
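A fractional Brownian motion covariance is the standard example of such a long-term-dependent autocovariance: $k(t, s) = \tfrac{1}{2}\left(|t|^{2H} + |s|^{2H} - |t - s|^{2H}\right)$, which exhibits long-term dependence for $1/2 < H < 1$. A minimal NumPy sketch (the time grid and the $H$ value are illustrative):

```python
import numpy as np

def fbm_cov(t, s, H):
    """Fractional Brownian motion covariance: long-term dependent for 1/2 < H < 1."""
    return 0.5 * (np.abs(t) ** (2 * H) + np.abs(s) ** (2 * H) - np.abs(t - s) ** (2 * H))

# Illustrative grid of time points; H > 1/2 encodes long-term dependence
t = np.linspace(0.0, 5.0, 6)
K = fbm_cov(t[:, None], t[None, :], H=0.7)

print(K.shape)              # (6, 6)
print(np.allclose(K, K.T))  # covariance matrices are symmetric
```

Checking that the eigenvalues of `K` are non-negative (up to rounding) confirms it is a valid covariance matrix, which is the requirement for using it as a GP kernel.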
```
# Hurst exponent
H, c, data = compute_Hc(y, kind='change', simplified=True)
print(a.trend)
print(np.abs(H))
#labels = quandl.get("FRBP/GDPPLUS_042619", authtoken="DNMZo2iRzVENxpxqHBKF", collapse="quarterly")#quandl.get("FRBP/GDPPLUS_042619", authtoken="DNMZo2iRzVENxpxqHBKF")
#labels=Scalar.fit_transform(labels)
#b = decompose(labels)
X = datafour[:,None]
x = X
#y = b
#Y = a.trend
#y = Y
num_inducing_points = 45
#custom time series kernel: autocovariance gamma_f(h) = integral over [-pi, pi] of e^(i*h*lambda) * f(lambda) d(lambda); cf. scipy's integrate.quad for the numerical integral
#default RBF kernel k(x, y) = amplitude**2 * exp(-||x - y||**2 / (2 * length_scale**2))
"""
def kernel(self):
    return 1/2 * (np.absolute(t)**(2*H) + np.absolute(s)**(2*H) - np.absolute(t - s)**(2*H))
t = x
s = y
Calculating the average value, Xm, of the X1..Xn series
Calculating the standard series deviation, S
Normalization of the series by deducting the average value, Zr (where r=1..n), from each value
Creating a cumulative time series Y1=Z1+Zr, where r=2..n
Calculating the magnitude of the cumulative time series R=max(Y1..Yn)-min(Y1..Yn)
Dividing the magnitude of the cumulative time series by the standard deviation (S).
"""
#Time series Gaussian kernel with long-term dependence: E[B_H(t) B_H(s)] = 1/2(|t|^2H + |s|^2H - |t-s|^2H) where H is the Hurst exponent. If we assume there is long-term dependence, that means 1 > H > 1/2
x = x.astype(np.float32)#tf.dtypes.cast(x, tf.int32) #
#x = tf.cast(x, tf.float32)
#x = tensor_util.convert_nonref_to_tensor(x, dtype=x.dtype)
class RBFKernelFn(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(RBFKernelFn, self).__init__(**kwargs)
        dtype = kwargs.get('dtype', None)
        self._amplitude = self.add_variable(
            initializer=tf.constant_initializer(0),
            dtype=dtype,
            name='amplitude')
        self._length_scale = self.add_variable(
            initializer=tf.constant_initializer(0),
            dtype=dtype,
            name='length_scale')

    def call(self, x):
        # Never called -- this is just a layer so it can hold variables
        # in a way Keras understands.
        return x

    @property
    def kernel(self):
        return tfp.positive_semidefinite_kernels.ExponentiatedQuadratic(
            amplitude=tf.nn.softplus(0.1 * self._amplitude),
            length_scale=tf.nn.softplus(5. * self._length_scale)
        )
#x1 = x[0]
#x2 = x[1]
#return tf.convert_to_tensor(1/2*(np.absolute(x1))**(2*H)+(np.absolute(x2))**(2*H) - (np.absolute(x1)-np.absolute(x2))**2*H)#tf.as_dtype(1/2*(np.absolute(x))**(2*H)+(np.absolute(y))**(2*H) - (np.absolute(x)-np.absolute(y))**2*H)
#this is in reality a fractional Brownian motion kernel
"""
class Brownian(gpflow.kernels.Kernel):
    def __init__(self):
        super().__init__(active_dims=[0])
        #self.variance = gpflow.Param(1.0, transform=gpflow.transforms.positive)

    #@gpflow.params_as_tensors
    def K(self, X, X2=None):
        if X2 is None:
            X2 = X
        return 0.5 * (np.absolute(X)**(2*H) + np.absolute(X2)**(2*H) - np.absolute(X - X2)**(2*H))

    def K_diag(self, X, presliced=None):
        return tf.cast(self.variance * tf.reshape(X, (-1,)), tf.float64)
"""
```
#custom Gaussian time series kernel logic
The autocovariance function is $\gamma_f(h) = \int_{-\pi}^{\pi} e^{ih\lambda} f(\lambda)\, d\lambda$, where $P^n_f$ is the distribution of $X_1, \ldots, X_n$ for a stationary mean-zero Gaussian time series $(X_t : t \in \mathbb{Z})$.
---
This is the final thing I have to implement for phase 3 to be complete. Unfortunately I will need to wait until gpflow is updated before I can make that final update.
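As a sanity check of the spectral-density formula above, the autocovariance can be computed numerically for a simple density. With the flat density $f(\lambda) = \sigma^2/(2\pi)$ (white noise), we should get $\gamma(0) = \sigma^2$ and $\gamma(h) = 0$ for integer $h \neq 0$; the choice of density here is purely illustrative:

```python
import numpy as np
from scipy.integrate import quad

def autocov_from_spectral_density(f, h):
    """gamma(h) = integral over [-pi, pi] of e^{i h lambda} f(lambda) d(lambda).

    Since f is real and even, the imaginary (sine) part vanishes, so we
    integrate only the cosine part.
    """
    real, _ = quad(lambda lam: np.cos(h * lam) * f(lam), -np.pi, np.pi)
    return real

sigma2 = 2.0
white_noise = lambda lam: sigma2 / (2 * np.pi)  # flat spectral density

print(round(autocov_from_spectral_density(white_noise, 0), 6))  # 2.0
print(round(autocov_from_spectral_density(white_noise, 3), 6))  # 0.0
```

A long-memory process would instead use a density with a pole at $\lambda = 0$, which is what makes the autocovariance decay slowly in $h$.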
```
#bijector=tfb.BatchNorm()
#kernel = tfk.
#tf.keras.layers.BatchNormalization(),
#tf.keras.layers.Dense(250, activation='sigmoid',kernel_initializer='ones', use_bias=False),
#datafour = x
#trainingtarget = y
#x = datafour
#y = trainingtarget
x_tst = x[189::]
x_range = 237
num_distributions_over_Functions = 1
#kernel = Brownian #tfp.positive_semidefinite_kernels.ExponentiatedQuadratic#MaternOneHalf()
model = tf.keras.Sequential([
tf.keras.Input(shape=(1,13), dtype=x.dtype),
tf.keras.layers.LSTM(25,kernel_initializer='ones', dtype = x.dtype, use_bias=False),
#tf.keras.layers.InputLayer(input_shape=(10),dtype=x.dtype),#put a 1 before the 9 later
tf.keras.layers.Dense(50,kernel_initializer='ones', use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(75,kernel_initializer='ones', use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(100,kernel_initializer='ones', use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(125,kernel_initializer='ones', use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(150,kernel_initializer='ones',use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(175,kernel_initializer='ones',use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(200,kernel_initializer='ones',use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(225,kernel_initializer='ones',use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(250,kernel_initializer='ones',use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(225,kernel_initializer='ones',use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(200,kernel_initializer='ones',use_bias=False),
#goal is to eventually replace the first dense layer with an LSTM layer
#tf.keras.layers.LSTM
#tf.keras.layers.TimeDistributed(Dense(vocabulary)))
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(150,kernel_initializer='ones',use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(125,kernel_initializer='ones', use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(100,kernel_initializer='ones',use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(75,kernel_initializer='ones', use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(50,kernel_initializer='ones',use_bias=False),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(25, kernel_initializer='ones',use_bias=False,),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(dtype=x.dtype),
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(0,x_range, num=1125,#num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.log(np.expm1(1.)).astype(x.dtype))),
event_shape=[num_distributions_over_Functions],jitter=1e-06
)
#in unconstrained thing replace astype with tf.dtype thing.
])
#AttributeError: 'numpy.dtype' object has no attribute 'as_numpy_dtype' when trying to implement custom kernel
#TypeError: linspace() missing 1 required positional argument: 'stop' This is the error to be resolved before I can add the inducing_indexpoints_initializer
import sherpa.algorithms.bayesian_optimization as bayesian_optimization
parameters = [sherpa.Continuous('lrinit', [0.01, 0.011], 'log')]
#sherpa.Continuous('lrdecay', [1e-2, 1e-7], 'log')]
alg = bayesian_optimization.GPyOpt(max_num_trials=50)#sherpa.algorithms.GPyOpt('GP', num_initial_data_points='infer',initial_data_points=[0.1,0.11,0.12], acquisition_type='MPI',verbosity=True)
study = sherpa.Study(parameters=parameters,
algorithm=alg,
lower_is_better=False)
batch_size =19
loss = lambda y, rv_y: rv_y.variational_loss(
    y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
num_iterations = 5
epochs = 20
for trial in study:
    lr = trial.parameters['lrinit']
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(1, 13), dtype=x.dtype),
        tf.keras.layers.LSTM(25, kernel_initializer='ones', dtype=x.dtype, use_bias=False),
        #tf.keras.layers.InputLayer(input_shape=(10),dtype=x.dtype),#put a 1 before the 9 later
        tf.keras.layers.Dense(50, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(75, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(100, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(125, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(150, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(175, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(200, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(225, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(250, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(225, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(200, kernel_initializer='ones', use_bias=False),
        #goal is to eventually replace the first dense layer with an LSTM layer
        #tf.keras.layers.LSTM
        #tf.keras.layers.TimeDistributed(Dense(vocabulary)))
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(150, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(125, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(100, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(75, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(50, kernel_initializer='ones', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(25, kernel_initializer='ones', use_bias=False),
        tfp.layers.VariationalGaussianProcess(
            num_inducing_points=num_inducing_points,
            kernel_provider=RBFKernelFn(dtype=x.dtype),
            inducing_index_points_initializer=tf.constant_initializer(
                np.linspace(0, x_range, num=1125,  #num_inducing_points,
                            dtype=x.dtype)[..., np.newaxis]),
            unconstrained_observation_noise_variance_initializer=(
                tf.constant_initializer(np.log(np.expm1(1.)).astype(x.dtype))),
            event_shape=[num_distributions_over_Functions], jitter=1e-06
        )
        #in unconstrained thing replace astype with tf.dtype thing.
    ])
    optimizer = tf.optimizers.Adam(learning_rate=lr)
    model.compile(optimizer=optimizer, loss=loss)
    for i in range(epochs):
        model.fit(x, y, epochs=epochs, verbose=True, validation_split=0.2)
        loss = model.evaluate(x[189::], y[189::])
        study.add_observation(trial=trial, iteration=i, objective=loss, context={'loss': loss})
        #training_error = model.fit(epochs=1)
    study.finalize(trial=trial)
#
# Do inference.
#Note: look into BATCHES and make sure they are fed in in order second potential problem to explore: It is possible I already looked into this possibility in phase 1 but make sure that it isnt using input data from the same day to create its predictions
batch_size =19
#use a different loss other than the variational gaussian loss
loss = lambda y, rv_y: rv_y.variational_loss(
    y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.011), loss=loss)#tf.optimizers.Adam(learning_rate=0.01)
model.fit(x, y,epochs=290, verbose=True,validation_split=0.2)
#model.predict(x)
#adjust it to 240 epochs if using pca first
#prediction = model.predict(x_tst)
# Make predictions.
#yhats = [model(x_tst) for _ in range(100)]
#
```
What this algorithm produces is a joint distribution describing the relationship between the input variables and economic growth. This joint distribution is actually a distribution over functions between input and output.
We then use it to form a predictive distribution and make point predictions. Although this can be used to make an overall predictive distribution, in reality it is far more useful for producing a distribution over every point in the function: in other words, a continuous set of changing likelihoods for our point predictions.
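The step from a predictive distribution to point predictions with pointwise uncertainty can be sketched with a plain Gaussian stand-in for the GP posterior; the mean and standard deviation arrays below are illustrative, not model output:

```python
import numpy as np
from scipy import stats

# Illustrative pointwise posterior: mean and std at each test input
mean = np.array([0.5, 0.7, 0.4])
std = np.array([0.1, 0.2, 0.15])

# Point predictions are the means; a 95% band comes from the Gaussian quantiles
lo, hi = stats.norm.interval(0.95, loc=mean, scale=std)

# Pointwise log-likelihood of observed values under the predictive distribution
observed = np.array([0.55, 0.65, 0.80])
pointwise_loglik = stats.norm.logpdf(observed, loc=mean, scale=std)

print(np.all(lo < mean) and np.all(mean < hi))  # True
```

The cell below does the analogous thing with the actual model output `yhat`, which is a TensorFlow Probability distribution rather than a tensor.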
```
yhat = model(x_tst)#Note that this is a distribution not a tensor
num_samples = 6 #note: num_samples refers to how many generated future evolutions of economic growth you want it to generate over the specified period of time
Models = []
#probability =
average = []
"""
note: the below code works for turning from a tensorflow distribution back to a tensor
a = tf.compat.v1.convert_to_tensor(
yhat,
dtype=None,
name=None,
preferred_dtype=None,
dtype_hint=None
)
"""
#plt.plot(a)
#plt.plot(yhats)#Note: yhat is supposedly a distribution and not a tensor explore the possibility that it is outputting probabilities and not tensors
for i in range(num_samples):
    #plt.plot(yhat)
    sample_ = yhat.sample().numpy()
    #Model = sample
    #sample_[..., 0].T,
    #print(model.summary)
    mean = yhat.mean().numpy()
    #print(mean)
    #variance = yhat.variance().numpy()
    #std = yhat.stddev().numpy()
    #print(yhat.sample().numpy)
    #print(len(r[0]))  # r is not defined until later in this cell
    plt.plot(sample_[..., 0].T,
             'r',
             linewidth=0.2,
             label='ensemble means' if i == 0 else None);
"""
(variational_loss,
variational_distributions) = tfp.sts.build_factored_variational_loss(
model=model, observed_time_series=observed_time_series,
init_batch_shape=[10])
"""
#a = yhat.sample_[..., 0].T
print("Predictions")
#print(predictions)
#plt.plot(predictions)
#forecast_dist = tfp.sts.forecast(model, observed_time_series,parameter_samples=sample_,num_steps_forecast=5)
print(sample_)
r = []
#r = pd.DataFrame(r)
for i in range(num_samples):
    sample_ = yhat.sample().numpy()
    #sns.kdeplot(sample_[..., 0].T)
    e = sample_[..., 0].T
    #print(len(e))
    r.append(e)
print(len(r[0]))
rfinal = (r[0]+r[1]+r[2]+r[3]+r[4]+r[5])/6
plt.plot(rfinal)
u1 = r[0]
u2 = r[1]
u3 = r[2]
u4 = r[3]
u5 = r[4]
u6 = r[5]
ufinal = ((u1 - rfinal) + (u2 - rfinal) + (u3 - rfinal) + (u4 - rfinal) + (u5 - rfinal) + (u6 - rfinal)) / 6
from shapely.geometry import LineString
#note: use shapely to fine intersection points
#first_line = LineString(np.column_stack((x, f)))
#second_line = LineString(np.column_stack((x, g)))
#intersection = first_line.intersection(second_line)
#this is for plotting the intersection points
#plt.plot(*LineString(intersection).xy, 'o')
#this is for getting the x and y of the intersection
#x, y = LineString(intersection).xy
print(ufinal)
print(len(ufinal))#note ufinal len is 48
plt.plot(y[189::])
print("average")
#st_dev = average.mean()#stddev()
#average = mean#.mean()#.numpy()
#plt.plot(sample_[20::])
print("samples")
#print(sample_.T[20::])
print(sample_)
print(sample_.shape)
#plt.plot(average[20::])
#print(len(average))  # Note: supposedly the average error rate for the Fed's current forecasting model, called GDPNow, is 0.56%
#print(std)
error = rfinal/y[189::] #if this was a standard normal distribution than 95% would be 0.252 and 0.968
print("error")
print(error)
print("average error")
print(np.sum(error)/len(error))
#c = np.corrcoef(rfinal, y[:-189:])[0, 1]
#print(c)
from scipy import signal
#average error rate is about 10%
r = []
#r = pd.DataFrame(r)
for i in range(num_samples):
    sample_ = yhat.sample().numpy()
    sns.kdeplot(sample_[..., 0].T)
    e = sample_[..., 0].T
    r.append(e)
#Model = sample
#sample_[..., 0].T,
#print(model.summary)
#print(yhat.log_prob(y[189::]))
mean = yhat.mean().numpy()
#print(mean)
#variance = yhat.variance().numpy()
#std = yhat.stddev().numpy()
#print(yhat.sample().numpy)
#you have to figure out what the x and y axis are before you present
#it appears from this plot of economic growth has a very low hurst exponent
print(r[0])
print(r[1])
rfinal = (r[0]+r[1]+r[2]+r[3]+r[4]+r[5])/6
One = r[0]
two = r[1]
Three = r[2]
four = r[3]
five = r[4]
pone = signal.fftconvolve(r[0],r[1],'same')
ptwo = signal.fftconvolve(r[2],r[3],'same')
pthree = signal.fftconvolve(r[4],r[5],'same')
pfinal = signal.fftconvolve(pone,ptwo)
pfinal = signal.fftconvolve(pfinal,pthree, 'same')
#yhat.prob()
#print(mean)
sns.kdeplot(rfinal)
tfp.stats.stddev(
    yhat,
    sample_axis=0,
    keepdims=False,
    name=None
)#yhat
#observed_time_series = x_tst
#co2_by_month_training_data = co2_by_month[:-num_forecast_steps] this has to be the format in which
#between 0.16 and 0.18
y = np.squeeze(y)
sns.kdeplot(y[189::])
model.summary()
#confidence and credible intervals
!pip install bayesint
from bayesint import rel_risk
for i in range(num_samples):
    #print("Credible interval")
    #print(rel_risk(sample_[..., 0].T))
    print(bs.bootstrap(sample_[..., 0].T, stat_func=bs_stats.mean))
print(y[189::])
#print("probability")
#print(yhat.log_prob([0,0.34]))
from sklearn.ensemble import AdaBoostRegressor
regr = AdaBoostRegressor(random_state=0, n_estimators=100)
x = np.squeeze(x)
regr.fit(x[189::], y[189::])
plt.plot(y[189::])
#plt.plt(48)
plt.plot(regr.predict(x[:48:]))
print(__doc__)
# Author: Vincent Dubourg <vincent.dubourg@gmail.com>
#         Jake Vanderplas <vanderplas@astro.washington.edu>
#         Jan Hendrik Metzen <jhm@informatik.uni-bremen.de>
# License: BSD 3 clause
import numpy as np
from matplotlib import pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
np.random.seed(1)
def f(x):
    """The function to predict."""
    return x * np.sin(x)
# ----------------------------------------------------------------------
# First the noiseless case
X = np.atleast_2d([1., 3., 5., 6., 7., 8.]).T
# Observations
y = f(X).ravel()
# Mesh the input space for evaluations of the real function, the prediction and
# its MSE
x = np.atleast_2d(np.linspace(0, 10, 1000)).T
# Instantiate a Gaussian Process model
kernel = C(1.0, (1e-3, 1e3)) * RBF(10, (1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9)
# Fit to data using Maximum Likelihood Estimation of the parameters
gp.fit(X, y)
# Make the prediction on the meshed x-axis (ask for MSE as well)
y_pred, sigma = gp.predict(x, return_std=True)
# Plot the function, the prediction and the 95% confidence interval based on
# the MSE
plt.figure()
plt.plot(x, f(x), 'r:', label=r'$f(x) = x\,\sin(x)$')
plt.plot(X, y, 'r.', markersize=10, label='Observations')
plt.plot(x, y_pred, 'b-', label='Prediction')
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.5, fc='b', ec='None', label='95% confidence interval')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.ylim(-10, 20)
plt.legend(loc='upper left')
# ----------------------------------------------------------------------
# Now the noisy case
X = np.linspace(0.1, 9.9, 20)
X = np.atleast_2d(X).T
# Observations and noise
y = f(X).ravel()
dy = 0.5 + 1.0 * np.random.random(y.shape)
noise = np.random.normal(0, dy)
y += noise
# Instantiate a Gaussian Process model
gp = GaussianProcessRegressor(kernel=kernel, alpha=dy ** 2,
n_restarts_optimizer=10)
# Fit to data using Maximum Likelihood Estimation of the parameters
gp.fit(X, y)
# Make the prediction on the meshed x-axis (ask for MSE as well)
y_pred, sigma = gp.predict(x, return_std=True)
# Plot the function, the prediction and the 95% confidence interval based on
# the MSE
plt.figure()
plt.plot(x, f(x), 'r:', label=r'$f(x) = x\,\sin(x)$')
plt.errorbar(X.ravel(), y, dy, fmt='r.', markersize=10, label='Observations')
plt.plot(x, y_pred, 'b-', label='Prediction')
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.5, fc='b', ec='None', label='95% confidence interval')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.ylim(-10, 20)
plt.legend(loc='upper left')
plt.show()
sns.kdeplot(regr.predict(x[:48:]))
def my_kernel(X, X2, H=0.85):
    """
    Custom fractional-Brownian-motion covariance with Hurst exponent H:
    k(s, t) = 1/2 * (|s|^(2H) + |t|^(2H) - |s - t|^(2H))
    """
    return 0.5 * (np.absolute(X) ** (2 * H) + np.absolute(X2) ** (2 * H)
                  - np.absolute(X - X2) ** (2 * H))
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.datasets import load_iris
import numpy as np
#y = y.ravel()
r, g = load_iris(return_X_y=True)
print(r)
print(g)
#x = pd.DataFrame(x)
#y = pd.DataFrame(y)
X = np.squeeze(X)
gpc = GaussianProcessRegressor(kernel=my_kernel,random_state=0).fit(X, trainingtarget.ravel())
a = gpc.predict(x[:2, :])  # GaussianProcessRegressor exposes predict, not predict_proba
print("gpc")
print(a)
#gpc = GaussianProcessClassifier().fit(x,y)
a = gpc.predict(x[220::])
b = gpc.log_marginal_likelihood()
#b = gpc.predict(x[220::])
#score = gpr.score(x,y)
#is the score this high because training and test aren't separated?
#print(b)
#print(gpc.score(x[220::],y[220::]))
#print(score)
print(-1*gpc.log_marginal_likelihood())
print(a)
plt.plot(a)
plt.plot(y[220::])
#model.fit(x, y,epochs=50, verbose=True,validation_split=0.2)
#List of stuff left to do
#1. average out all the various evolutions and see if you can create an additional composite line for economic growth. Bonus: see if you can do a weighted average so that for example stuff outside of the 95% confidence interval at that point is given less weight
#2. Allow it to predict the future and allow it to give an 95% confidence interval for that one prediction. Also make sure that the various possible evolutions of economic growth and the composite are part of that
#3. Figure out a way to convert the first Dense layer to an LSTM layer
#4. Easiest: make it have 2 stages with an exponential decay for the second-stage learning rate so the learning rate never becomes negative
modelOne = []
modelTwo = []
modelThree = []
modelFour = []
modelFive = []
modelSix = []
print(len(sample_))
#suspected current loss function
"""
vgp.variational_loss(
observations=y_train_batch,
observation_index_points=x_train_batch,
kl_weight=float(batch_size) / float(num_training_points_))
"""
#gpc.predict_proba(X[:2,:])
plt.plot(x_tst)
#second stage of Janus Prediction engine
time_series = np.linspace(1960.25,2016.25,224)
observed_time_series = tfp.sts.MaskedTimeSeries(
time_series=time_series, is_missing=None
)
"""
local_level_model = LocalLevelStateSpaceModel(
num_timesteps=50,
level_scale=0.5,
initial_state_prior=tfd.MultivariateNormalDiag(scale_diag=[1.]))
"""
ndims = 2
step_std = 1.0
noise_std = 5.0
predictions = tfp.sts.forecast(observed_time_series,parameter_samples=sample_,observed_time_series=observed_time_series,num_steps_forecast=5)
print(predictions)
plt.plot(predictions)
"""
#potential alternate loss function
#lambda y, rv_y: -rv_y.log_prob(y)
"""
lambda y, rv_y: -rv_y.kl_divergence(
y,
rv_y,
name='loss'
)
"""
loss = lambda y, rv_y: tfp.distributions.kl_divergence(
y,
kl,
allow_nan_stats=True,
name=None
)
#los = los*-1
one_step_predictive_dist = tfp.sts.one_step_predictive(
model, observed_time_series, parameter_samples=sample_)
"""
model = make_state_space_model(
num_timesteps,
param_vals=None,
initial_state_prior=None,
initial_step=0
)
tfp.sts.build_factored_variational_loss(
model,
observed_time_series,
init_batch_shape=(),
seed=None,
name=None
)
"""
model = tf.keras.Sequential([
tf.keras.Input(shape=(1,13), dtype=x.dtype),
tf.keras.layers.LSTM(8,kernel_initializer='ones', use_bias=False),
#tf.keras.layers.InputLayer(input_shape=(10),dtype=x.dtype),#put a 1 before the 9 later
#tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(400,kernel_initializer='ones', use_bias=False),#check what the shape is it should be 237,9,1 and put LSTM HERE later
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(800,kernel_initializer='ones',use_bias=False),
#goal is to eventually replace the first dense layer with an LSTM layer
#tf.keras.layers.LSTM
#tf.keras.layers.TimeDistributed(Dense(vocabulary)))
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10, kernel_initializer='ones',use_bias=False,),
tfp.layers.DenseFlipout(512, activation=tf.nn.relu),
#tfp.layers.DenseFlipout(1),
])
# Kalman filter
model = LinearGaussianStateSpaceModel(
num_timesteps=100,
transition_matrix=tfl.LinearOperatorIdentity(ndims),
transition_noise=tfd.MultivariateNormalDiag(
scale_diag=step_std**2 * tf.ones([ndims])),
observation_matrix=tfl.LinearOperatorIdentity(ndims),
observation_noise=tfd.MultivariateNormalDiag(
scale_diag=noise_std**2 * tf.ones([ndims])),
initial_state_prior=tfd.MultivariateNormalDiag(
scale_diag=tf.ones([ndims])))
"""
#First successful build using tf 2.0 with functional API
num_inducing_points = 45
inputs = tf.keras.Input(shape=(1,10))
first = tf.keras.layers.LSTM(8,kernel_initializer='ones', use_bias=False)(inputs)
first = tf.keras.layers.Dense(8,kernel_initializer='ones', use_bias=False)(first)
u = tf.keras.layers.BatchNormalization()(first)
u = tf.keras.layers.Dense(800,kernel_initializer='ones',use_bias=False)(u)
outputs = tf.keras.layers.Dense(10, activation='sigmoid')(u)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```
Notes on presentation:
- asks what diversity and inclusion are
- different priorities or miscommunication may be affected by background
- diversity: who is on the team
- culture plays a role in every attitude
- do I care about your unique perspective?
- do I understand differences?
- diversity is more than just race, ethnicity, age, gender, and national origin
- now asking what it means to be in a more diverse community
- showing pictures of diversity initiatives from companies
- now asking how to develop cultural competency
### Tutorial in hamiltorch for log probabilities
* For the corresponding blog post please see: https://adamcobb.github.io/journal/hamiltorch.html
* Bayesian neural networks are left to a different notebook
```
#setup notes by Hyunji
#install
# - git clone the following repo in google drive/Colab Notebook folder
# - go to browser google drive/Colab Notebook and open this notebook with colab (right click)
# -
# !pip3 install git+https://github.com/hyunjimoon/hamiltorch
# to edit the python files
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# add hamiltorch for edit purposes (RUN THIS CELL AFTER .py EDIT!)
import sys
sys.path.append('/content/drive/MyDrive/Colab Notebooks/hamiltorch')
import hamiltorch
#/content/drive/MyDrive/Colab Notebooks/hamiltorch import hamiltorch
import torch
#import hamiltorch
import matplotlib.pyplot as plt
%matplotlib inline
hamiltorch.set_random_seed(123)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(hamiltorch.__version__)
```
## Sampling a multivariate Gaussian
In `hamiltorch`, we have designed the samplers to receive a function handle `log_prob_func`, which the sampler will use to evaluate the log probability of each sample. A `log_prob_func` must take a 1-d vector of length equal to the number of parameters that are being sampled. For the example of our multivariate Gaussian distribution, we can define our `log_prob_func` as follows:
```
def log_prob(omega):
mean = torch.tensor([0.,0.,0.])
stddev = torch.tensor([.5,1.,2.])
return torch.distributions.MultivariateNormal(mean, torch.diag(stddev**2)).log_prob(omega).sum()
N = 400
step_size = .3
L = 5
```
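To make precise what `log_prob` computes, here is a dependency-free sketch of the same diagonal-Gaussian log density in plain Python, so the numbers can be checked by hand (this helper is an illustration, not part of `hamiltorch`):

```python
import math

def diag_gaussian_log_prob(omega, mean, std):
    # sum over dimensions of log N(omega_i | mean_i, std_i^2) --
    # the same quantity the torch-based log_prob above returns
    return sum(
        -0.5 * math.log(2 * math.pi * s ** 2) - (w - m) ** 2 / (2 * s ** 2)
        for w, m, s in zip(omega, mean, std)
    )

lp = diag_gaussian_log_prob([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.5, 1.0, 2.0])
```

At the origin the quadratic terms vanish and, since log 0.5 + log 1 + log 2 = 0, the value reduces to −(3/2)·log 2π.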
### Sample using standard HMC
* Initialise the parameters e.g. `params_init = torch.zeros(3)` and pass them into the `hamiltorch.sample()` function as `params_init=params_init`.
* Set the number of samples `num_samples=N` corresponding to the number of momentum resampling steps/the number of trajectories to sample.
* Set the step size and trajectory length via `step_size=step_size, num_steps_per_sample=L`.
```
# HMC
hamiltorch.set_random_seed(123)
params_init = torch.zeros(3)
params_hmc = hamiltorch.sample(log_prob_func=log_prob, params_init=params_init, num_samples=N,
step_size=step_size, num_steps_per_sample=L)
```
### Sample using the No-U-Turn Sampler (NUTS)
* As in Hoffman and Gelman 2011.
* This is set using the additional parameter `sampler=hamiltorch.Sampler.HMC_NUTS`.
* The step size is adapted with the objective of a desired acceptance rate `desired_accept_rate=0.8`.
* The step size is fixed after the burn stage `burn=burn` and we define `N_nuts = burn + N`
```
# HMC NUTS
hamiltorch.set_random_seed(123)
params_init = torch.zeros(3) + 5
burn=500
N_nuts = burn + N
params_hmc_nuts = hamiltorch.sample(log_prob_func=log_prob, params_init=params_init,
num_samples=N_nuts,step_size=step_size,num_steps_per_sample=L,
sampler=hamiltorch.Sampler.HMC_NUTS, burn=burn,
desired_accept_rate=0.8)
```
### Sample using implicit Riemannian manifold Hamiltonian Monte Carlo (RMHMC)
* As in Girolami and Calderhead 2011.
* Switch the sampler via setting `sampler=hamiltorch.Sampler.RMHMC` and the integrator via `integrator=hamiltorch.Integrator.IMPLICIT`.
* Limit the number of fixed point iterations in the generalised leapfrog via `fixed_point_max_iterations=1000` and set the convergence threshold for 'breaking out' of the while loop via `fixed_point_threshold=1e-05`.
```
# Implicit RMHMC
hamiltorch.set_random_seed(123)
params_init = torch.zeros(3)
params_irmhmc = hamiltorch.sample(log_prob_func=log_prob, params_init=params_init, num_samples=N,
step_size=step_size,num_steps_per_sample=L, sampler=hamiltorch.Sampler.RMHMC,
integrator=hamiltorch.Integrator.IMPLICIT, fixed_point_max_iterations=1000,
fixed_point_threshold=1e-05)
```
### Sample using explicit Riemannian manifold Hamiltonian Monte Carlo (RMHMC)
* As in Cobb et al. 2019.
* Switch the integrator to explicit via `integrator=hamiltorch.Integrator.EXPLICIT`. Note that the sampler is still set to RMHMC.
* Introduce and set the binding term via `explicit_binding_const=omega`. This can be subsequently optimised for the highest acceptance rate.
```
# Explicit RMHMC
hamiltorch.set_random_seed(123)
params_init = torch.zeros(3)
omega = 100.
params_ermhmc = hamiltorch.sample(log_prob_func=log_prob, params_init=params_init, num_samples=N,
step_size=step_size,num_steps_per_sample=L, sampler=hamiltorch.Sampler.RMHMC,
integrator=hamiltorch.Integrator.EXPLICIT, explicit_binding_const=omega)
# Hyperelliptical RMHMC
hamiltorch.set_random_seed(123)
params_init = torch.zeros(3)
omega = 100.
params_hermhmc = hamiltorch.sample(log_prob_func=log_prob, params_init=params_init, num_samples=N,
step_size=step_size, num_steps_per_sample=L, sampler=hamiltorch.Sampler.RMHMC,
integrator=hamiltorch.Integrator.EXPLICIT, explicit_binding_const=omega, metric = hamiltorch.Metric.HYPERELLIP)
```
### Convert samples to numpy arrays to plot using matplotlib
```
coords_hmc = torch.cat(params_hmc).reshape(len(params_hmc),-1).numpy()
coords_nuts = torch.cat(params_hmc_nuts).reshape(len(params_hmc_nuts),-1).numpy()
coords_i_rmhmc = torch.cat(params_irmhmc).reshape(len(params_irmhmc),-1).numpy()
coords_e_rmhmc = torch.cat(params_ermhmc).reshape(len(params_ermhmc),-1).numpy()
coords_h_e_rmhmc = torch.cat(params_hermhmc).reshape(len(params_hermhmc),-1).numpy()
xlim = [-5,5]
ylim = [-5,5]
fs=16
mean = torch.tensor([0.,0.,0.])
stddev = torch.tensor([.5,1.,2.])
fig, axs = plt.subplots(3, 1, figsize=(15,15))
axs[0].scatter(coords_hmc[:,0], coords_hmc[:,1],s=5,alpha=0.3,label='HMC')
axs[0].scatter(coords_nuts[:,0], coords_nuts[:,1],s=5,alpha=0.3,label='NUTS')
axs[0].scatter(coords_i_rmhmc[:,0], coords_i_rmhmc[:,1],s=5,alpha=0.3,label='Implicit RMHMC')
axs[0].scatter(coords_e_rmhmc[:,0], coords_e_rmhmc[:,1],s=5,alpha=0.3,label='Explicit RMHMC')
axs[0].scatter(coords_h_e_rmhmc[:,0], coords_h_e_rmhmc[:,1],s=5,alpha=0.3,label='Hyper Explicit RMHMC')
axs[0].scatter(mean[0],mean[1],marker = '*',color='C3',s=100,label='True Mean')
axs[0].legend(fontsize=fs)
axs[0].grid()
axs[0].set_xlim(xlim)
axs[0].set_ylim(ylim)
axs[1].scatter(coords_hmc[:,0], coords_hmc[:,2],s=5,alpha=0.3,label='HMC')
axs[1].scatter(coords_nuts[:,0], coords_nuts[:,2],s=5,alpha=0.3,label='NUTS')
axs[1].scatter(coords_i_rmhmc[:,0], coords_i_rmhmc[:,2],s=5,alpha=0.3,label='Implicit RMHMC')
axs[1].scatter(coords_e_rmhmc[:,0], coords_e_rmhmc[:,2],s=5,alpha=0.3,label='Explicit RMHMC')
axs[1].scatter(coords_h_e_rmhmc[:,0], coords_h_e_rmhmc[:,2],s=5,alpha=0.3,label='Hyper Explicit RMHMC')
axs[1].scatter(mean[0],mean[2],marker = '*',color='C3',s=100,label='True Mean')
axs[1].legend(fontsize=fs)
axs[1].grid()
axs[1].set_xlim(xlim)
axs[1].set_ylim(ylim)
axs[2].scatter(coords_hmc[:,1], coords_hmc[:,2],s=5,alpha=0.3,label='HMC')
axs[2].scatter(coords_nuts[:,1], coords_nuts[:,2],s=5,alpha=0.3,label='NUTS')
axs[2].scatter(coords_i_rmhmc[:,1], coords_i_rmhmc[:,2],s=5,alpha=0.3,label='Implicit RMHMC')
axs[2].scatter(coords_e_rmhmc[:,1], coords_e_rmhmc[:,2],s=5,alpha=0.3,label='Explicit RMHMC')
axs[2].scatter(coords_h_e_rmhmc[:,1], coords_h_e_rmhmc[:,2],s=5,alpha=0.3,label='Hyper Explicit RMHMC')
axs[2].scatter(mean[1],mean[2],marker = '*',color='C3',s=100,label='True Mean')
axs[2].legend(fontsize=fs)
axs[2].grid()
axs[2].set_xlim(xlim)
axs[2].set_ylim(ylim)
plt.tight_layout()
# plt.savefig('../../Gaussian_plots.png',bbox_inches='tight')
plt.show()
```
### KL divergence:
* Calculate the KL divergence as a measure of how well we have approximated the target distribution (the Gaussian).
```
p = torch.distributions.MultivariateNormal(mean, stddev.diag()**2)
q_hmc = torch.distributions.MultivariateNormal(torch.FloatTensor(coords_hmc.mean(0)),torch.diag(torch.FloatTensor(coords_hmc.var(0))))
q_nuts = torch.distributions.MultivariateNormal(torch.FloatTensor(coords_nuts.mean(0)),torch.diag(torch.FloatTensor(coords_nuts.var(0))))
q_i_rmhmc = torch.distributions.MultivariateNormal(torch.FloatTensor(coords_i_rmhmc.mean(0)),torch.diag(torch.FloatTensor(coords_i_rmhmc.var(0))))
q_e_rmhmc = torch.distributions.MultivariateNormal(torch.FloatTensor(coords_e_rmhmc.mean(0)),torch.diag(torch.FloatTensor(coords_e_rmhmc.var(0))))
q_h_e_rmhmc = torch.distributions.MultivariateNormal(torch.FloatTensor(coords_h_e_rmhmc.mean(0)),torch.diag(torch.FloatTensor(coords_h_e_rmhmc.var(0))))
print('HMC kl: ',torch.distributions.kl.kl_divergence(p, q_hmc))
print('NUTS kl: ',torch.distributions.kl.kl_divergence(p, q_nuts))
print('Implicit RMHMC kl: ',torch.distributions.kl.kl_divergence(p, q_i_rmhmc))
print('Explicit RMHMC kl: ',torch.distributions.kl.kl_divergence(p, q_e_rmhmc))
print('HyperEllip Explicit RMHMC kl: ',torch.distributions.kl.kl_divergence(p, q_h_e_rmhmc))
```
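For reference, the per-dimension closed form behind those diagonal-Gaussian KL values is $\mathrm{KL}(\mathcal{N}(\mu_1,\sigma_1^2)\,\|\,\mathcal{N}(\mu_2,\sigma_2^2)) = \log(\sigma_2/\sigma_1) + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}$. A small sketch in plain Python, independent of `torch.distributions`:

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    # KL( N(mu1, s1^2) || N(mu2, s2^2) ): one dimension of the
    # diagonal-Gaussian KL divergences printed above
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5
```

As a sanity check, the divergence is zero only when the two Gaussians coincide.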
Experiment until here (02.28)
# Sampling from a more complicated distribution: funnel distribution
* We now define the funnel distribution as in Neal 2003:
$$\prod_i\mathcal{N}(\mathbf{x}_i\vert 0, \exp\{-v\})\mathcal{N}(v\vert 0, 9). $$
* This is our new `log_prob_func`.
```
D = 10
def funnel_ll(w, dim=D):
v_dist = torch.distributions.Normal(0,3)
ll = v_dist.log_prob(w[0])
x_dist = torch.distributions.Normal(0,torch.exp(-w[0])**0.5)
ll += x_dist.log_prob(w[1:]).sum()
return ll
```
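A plain-Python mirror of `funnel_ll` makes it easy to sanity-check values of the log density without PyTorch (the helpers below are illustrations, not part of `hamiltorch`):

```python
import math

def normal_log_pdf(x, mu, std):
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mu) ** 2 / (2 * std ** 2)

def funnel_log_prob(w):
    # v = w[0] ~ N(0, 9); each x_i = w[i] for i >= 1 has variance exp(-v)
    v, xs = w[0], w[1:]
    return normal_log_pdf(v, 0.0, 3.0) + sum(
        normal_log_pdf(x, 0.0, math.exp(-v) ** 0.5) for x in xs
    )

lp = funnel_log_prob([0.0] * 11)  # D = 10 plus the v coordinate
```

At the origin every quadratic term vanishes, so the value is −5.5·log 2π − log 3.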
### Sample using standard HMC
* As we did above for the multivariate Gaussian.
```
# HMC
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.2
num_samples = 1000 # For results in plot num_samples = 10000
L = 25
params_hmc = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
step_size=step_size, num_steps_per_sample=L)
```
### Sample using the No-U-Turn Sampler (NUTS)
* Again, as we did above.
* Note that this log probability is badly behaved in certain parts of the parameter space, so you will see invalid log probabilities printed out as the model samples (especially in the burn-in stage).
* Do not worry about print statements of `Invalid log_prob`!
```
# HMC NUTS
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.01
num_samples = 1200 # For results in plot num_samples = 12000
L = 25
burn = 200 # For results in plot burn = 2000
params_hmc_nuts = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
step_size=step_size, num_steps_per_sample=L,desired_accept_rate=0.75,sampler=hamiltorch.Sampler.HMC_NUTS,burn=burn)
```
### Sample using implicit Riemannian manifold Hamiltonian Monte Carlo (RMHMC)
* We change sampler flag and integrator flag as before.
* For the funnel distribution our metric tensor is no longer guaranteed to be positive semi-definite (PSD) if we use the Hessian as above. Therefore we introduce a new flag and set it as `metric=hamiltorch.Metric.SOFTABS`. This forces our metric to be PSD as in Betancourt 2013.
* As is common in practice, we must often add jitter along the diagonal of the metric tensor to ensure we can invert it (it also allows us to differentiate through it using `torch.symeig`). We do this via `jitter=jitter`.
```
# Implicit RMHMC with SOFTABS
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.14
num_samples = 10 # For results in plot num_samples = 1000, but this takes a while! Setting to 100 is also reasonable.
L = 25
threshold = 1e-3
softabs_const=10**6
fixed_point_max_iterations=1000
jitter= 0.001
params_i_rmhmc = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
sampler=hamiltorch.Sampler.RMHMC, integrator=hamiltorch.Integrator.IMPLICIT,
metric=hamiltorch.Metric.SOFTABS, fixed_point_threshold=threshold, jitter=jitter,
num_steps_per_sample=L, step_size=step_size, softabs_const=softabs_const,
fixed_point_max_iterations=fixed_point_max_iterations)
```
### Sample using explicit Riemannian manifold Hamiltonian Monte Carlo (RMHMC)
* We use our faster integrator with the SOFTABS metric to get a similar result to the implicit integrator.
```
# Explicit RMHMC with SOFTABS
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.14
num_samples = 100 # For results in plot num_samples = 1000
L = 25
omega=10
softabs_const=10**6
jitter=0.001
params_e_rmhmc = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
sampler=hamiltorch.Sampler.RMHMC, integrator=hamiltorch.Integrator.EXPLICIT,
metric=hamiltorch.Metric.SOFTABS, jitter=jitter,
num_steps_per_sample=L, step_size=step_size, explicit_binding_const=omega,
softabs_const=softabs_const)
# Explicit RMHMC with HYPERELLIP
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.14
num_samples = 100 # For results in plot num_samples = 1000
L = 25
omega=10
softabs_const=10**6
jitter=0.001
params_e_rmhmc = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
sampler=hamiltorch.Sampler.RMHMC, integrator=hamiltorch.Integrator.EXPLICIT,
metric=hamiltorch.Metric.HYPERELLIP, jitter=jitter,
num_steps_per_sample=L, step_size=step_size, explicit_binding_const=omega,
) #softabs_const=softabs_const
```
### Convert to numpy arrays for plotting
```
coords_hmc = torch.cat(params_hmc).reshape(len(params_hmc),-1).numpy()
coords_hmc_nuts = torch.cat(params_hmc_nuts).reshape(len(params_hmc_nuts),-1).numpy()
coords_i_rmhmc = torch.cat(params_i_rmhmc).reshape(len(params_i_rmhmc),-1).numpy()
coords_e_rmhmc = torch.cat(params_e_rmhmc).reshape(len(params_e_rmhmc),-1).numpy()
# One that I made earlier!
params_i_rmhmc = torch.load('../../data/funnel/params_i_rmhmc_10D_funnel_1000.npy')
params_e_rmhmc = torch.load('../../data/funnel/params_e_rmhmc_10D_funnel_1000.npy')
params_hmc = torch.load('../../data/funnel/params_hmc_10D_funnel_10000.npy')
params_hmc_nuts = torch.load('../../data/funnel/params_hmc_nuts_10D_funnel_10000.npy')
coords_hmc = torch.cat(params_hmc).reshape(len(params_hmc),-1).numpy()
coords_hmc_nuts = torch.cat(params_hmc_nuts).reshape(len(params_hmc_nuts),-1).numpy()
coords_i_rmhmc = torch.cat(params_i_rmhmc).reshape(len(params_i_rmhmc),-1).numpy()
coords_e_rmhmc = torch.cat(params_e_rmhmc).reshape(len(params_e_rmhmc),-1).numpy()
xlim = [-4,4]
ylim = [0,7]#[-2,9]
text_x = -1.5
text_y = 8
font_size_text = 20
fs=17
vxx = torch.linspace(xlim[0],xlim[1],300)
p = torch.distributions.Normal(0,3)
v_pdf = torch.exp(p.log_prob(vxx))
fig, axs = plt.subplots(1, 4, figsize=(20,5), sharey=True)
axs[0].scatter(coords_hmc[:,1], coords_hmc[:,0],s=5,alpha=0.3,rasterized=True, color='C0', label='HMC')
l = axs[0].legend(loc=0,fontsize=fs)
l.legendHandles[0]._sizes = [100]
axs[0].grid()
axs[0].set_xlim(xlim)
axs[0].set_ylim(ylim)
axs[0].tick_params(axis='both', labelsize=fs)
axs[0].set_xlabel(r'$x_1$',fontsize=font_size_text)
axs[0].set_ylabel(r'$v$',fontsize=font_size_text,rotation=0,labelpad=30)
axs[1].scatter(coords_hmc_nuts[:,1], coords_hmc_nuts[:,0],s=5,alpha=0.3,label='NUTS',rasterized=True,color='C5')
l = axs[1].legend(loc=0,fontsize=fs)
l.legendHandles[0]._sizes = [100]
axs[1].grid()
axs[1].set_xlim(xlim)
axs[1].set_ylim(ylim)
axs[1].tick_params(axis='both', labelsize=fs)
axs[1].set_xlabel(r'$x_1$',fontsize=font_size_text)
axs[2].scatter(coords_i_rmhmc[:,1], coords_i_rmhmc[:,0],s=5,alpha=0.3,rasterized=True, color='C1',label='Implicit\nRMHMC')
l = axs[2].legend(loc=0,fontsize=fs)
l.legendHandles[0]._sizes = [100]
axs[2].grid()
axs[2].set_xlim(xlim)
axs[2].set_ylim(ylim)
axs[2].tick_params(axis='both', labelsize=fs)
axs[2].set_xlabel(r'$x_1$',fontsize=font_size_text)
axs[3].scatter(coords_e_rmhmc[:,1], coords_e_rmhmc[:,0],s=5,alpha=0.3,rasterized=True, color='C2', label='Explicit\nRMHMC')
l = axs[3].legend(loc=0,fontsize=fs)
l.legendHandles[0]._sizes = [100]
axs[3].grid()
axs[3].set_xlim(xlim)
axs[3].set_ylim(ylim)
axs[3].tick_params(axis='both', labelsize=fs)
axs[3].set_xlabel(r'$x_1$',fontsize=font_size_text)
plt.tight_layout()
# plt.savefig('../../data/funnel/funnel_hist_plots_scatter.pdf',bbox_inches='tight')
plt.show()
```
### Marginal distribution $p(v)$
We can also plot the marginal distributions of $v$ by representing them in histograms. We plot the known Gaussian distribution in each figure for comparison. The KL divergence is also included to measure how close the empirical distribution is to the true one.
```
p = torch.distributions.Normal(0,3)
q_hmc = torch.distributions.Normal(coords_hmc[:,0].mean(),coords_hmc[:,0].std())
q_hmc_nuts = torch.distributions.Normal(coords_hmc_nuts[:,0].mean(),coords_hmc_nuts[:,0].std())
q_i_rmhmc = torch.distributions.Normal(coords_i_rmhmc[:,0].mean(),coords_i_rmhmc[:,0].std())
q_e_rmhmc = torch.distributions.Normal(coords_e_rmhmc[:,0].mean(),coords_e_rmhmc[:,0].std())
kl_hmc = torch.distributions.kl.kl_divergence(p, q_hmc)
kl_hmc_nuts = torch.distributions.kl.kl_divergence(p, q_hmc_nuts)
kl_i_rmhmc = torch.distributions.kl.kl_divergence(p, q_i_rmhmc)
kl_e_rmhmc = torch.distributions.kl.kl_divergence(p, q_e_rmhmc)
print('HMC kl: ',kl_hmc)
print('NUTS HMC kl: ',kl_hmc_nuts)
print('Implicit RMHMC kl: ',kl_i_rmhmc)
print('Explicit RMHMC kl: ',kl_e_rmhmc)
xlim = [-9,9]
ylim = [0,.25]
text_x = -4.5
text_y = .233
font_size_text = 20
fs=17
vxx = torch.linspace(xlim[0],xlim[1],300)
p = torch.distributions.Normal(0,3)
v_pdf = torch.exp(p.log_prob(vxx))
fig, axs = plt.subplots(1, 4, figsize=(20,5),sharey=True)
axs[0].hist(coords_hmc[:,0], color='C0', bins=20,density=True, alpha=0.5, label='HMC',range=xlim)
axs[0].plot(vxx.numpy(), v_pdf.numpy(),'C3',label='$p(v)$')
axs[0].legend(loc=0,fontsize=fs)
axs[0].grid()
axs[0].set_xlim(xlim)
axs[0].text(text_x, text_y, "$\mathrm{D_{KL}} = $" + '{:.3f}'.format(kl_hmc), size=font_size_text, rotation=0.,
ha="center", va="center",
bbox=dict(boxstyle="round",
ec="k", # Outer colour
fc='w',
)
)
axs[0].set_ylim(ylim)
axs[0].tick_params(axis='both', labelsize=fs)
axs[0].set_xlabel(r'$v$',fontsize=font_size_text)
axs[0].set_ylabel(r'$p(v)$',fontsize=font_size_text,rotation=0,labelpad=30)
axs[1].hist(coords_hmc_nuts[:,0], color='C5',bins=20,density=True, alpha=0.5,label='NUTS',range=xlim)
axs[1].plot(vxx.numpy(), v_pdf.numpy(),'C3', label='$p(v)$')
axs[1].legend(loc=0,fontsize=fs)
axs[1].grid()
axs[1].set_xlim(xlim)
axs[1].text(text_x, text_y, "$\mathrm{D_{KL}} = $" + '{:.3f}'.format(kl_hmc_nuts), size=font_size_text, rotation=0.,
ha="center", va="center",
bbox=dict(boxstyle="round",
ec="k", # Outer colour
fc='w',
)
)
axs[1].set_ylim(ylim)
axs[1].tick_params(axis='both', labelsize=fs)
axs[1].set_xlabel(r'$v$',fontsize=font_size_text)
axs[2].hist(coords_i_rmhmc[:,0], color='C1',bins=20,density=True, alpha=0.5,label='Implicit\nRMHMC')
axs[2].plot(vxx.numpy(), v_pdf.numpy(),'C3', label='$p(v)$')
axs[2].legend(loc=1,fontsize=fs)
axs[2].grid()
axs[2].set_xlim(xlim)
axs[2].text(text_x, text_y, "$\mathrm{D_{KL}} = $" + '{:.3f}'.format(kl_i_rmhmc), size=font_size_text, rotation=0.,
ha="center", va="center",
bbox=dict(boxstyle="round",
ec="k", # Outer colour
fc='w',
)
)
axs[2].set_ylim(ylim)
axs[2].tick_params(axis='both', labelsize=fs)
axs[2].set_xlabel(r'$v$',fontsize=font_size_text)
axs[3].hist(coords_e_rmhmc[:,0], color='C2',bins=20,density=True, alpha=0.5, label='Explicit\nRMHMC')
axs[3].plot(vxx.numpy(), v_pdf.numpy(),'C3',label='$p(v)$')
axs[3].legend(loc=0,fontsize=fs)
axs[3].grid()
axs[3].set_xlim(xlim)
axs[3].text(text_x, text_y, "$\mathrm{D_{KL}} = $" + '{:.3f}'.format(kl_e_rmhmc), size=font_size_text, rotation=0.,
ha="center", va="center",
bbox=dict(boxstyle="round",
ec="k", # Outer colour
fc='w',
)
)
axs[3].set_ylim(ylim)
axs[3].tick_params(axis='both', labelsize=fs)
axs[3].set_xlabel(r'$v$',fontsize=font_size_text)
plt.tight_layout()
# plt.savefig('../../data/funnel/funnel_hist_plots_nuts.pdf',bbox_inches='tight')
plt.show()
```
### DEBUG MODE
* For `hamiltorch.sample()` we can pass `debug=True`. This is useful for checking how many iterations RMHMC takes to converge and also to look at the values of the Hamiltonian.
* Also, for NUTS, debug mode returns an EXTRA output corresponding to the adapted step size.
```
# HMC NUTS
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.01
num_samples = 6 #In paper: 12000
L = 25
burn = 3 #2000
############
debug = True
############
params_hmc_nuts, adapted_step_size = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init,
num_samples=num_samples, step_size=step_size,
num_steps_per_sample=L,desired_accept_rate=0.75,
sampler=hamiltorch.Sampler.HMC_NUTS,burn=burn, debug=debug)
```
* DEBUG Mode for implicit RMHMC.
```
# Implicit RMHMC with SOFTABS
hamiltorch.set_random_seed(123)
params_init = torch.ones(D + 1)
params_init[0] = 0.
step_size = 0.14
num_samples = 2
L = 25
threshold = 1e-3
softabs_const=10**6
fixed_point_max_iterations=1000
jitter= 0.001
############
debug = True
############
params_i_rmhmc = hamiltorch.sample(log_prob_func=funnel_ll, params_init=params_init, num_samples=num_samples,
sampler=hamiltorch.Sampler.RMHMC, integrator=hamiltorch.Integrator.IMPLICIT,
metric=hamiltorch.Metric.SOFTABS, fixed_point_threshold=threshold, jitter=jitter,
num_steps_per_sample=L, step_size=step_size, softabs_const=softabs_const,
fixed_point_max_iterations=fixed_point_max_iterations,debug=debug)
```
The print statements below show how many iterations each implicit Euler integrator took before breaking out of the loop:
```Converged (params), iterations: 3, params_diff: 0.00030622229678556323
Converged (momentum), iterations: 2, momenta_diff: 0.0001082777525880374```
Therefore here, the `params` fixed-point step took 3 iterations and the `momentum` took 2.
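The integrator's implicit equations are solved by exactly this kind of fixed-point loop. A generic sketch of the convergence check that produces those iteration counts (not hamiltorch's internal code):

```python
def fixed_point(g, x0, threshold=1e-3, max_iterations=1000):
    # iterate x <- g(x) until successive iterates differ by less than
    # `threshold`, mirroring the roles of `fixed_point_threshold` and
    # `fixed_point_max_iterations` above
    x = x0
    for i in range(1, max_iterations + 1):
        x_new = g(x)
        if abs(x_new - x) < threshold:
            return x_new, i
        x = x_new
    return x, max_iterations

# a contraction toward sqrt(2): x <- (x + 2/x) / 2
root, iters = fixed_point(lambda x: 0.5 * (x + 2.0 / x), 1.0)
```

With a well-behaved contraction the loop breaks out after only a handful of iterations, just as in the printout above.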
# Learning Virtual Values
In this tutorial, we will extend the ideas from the [previous tutorial](learning-position-auctions.ipynb). We will consider position auctions, like those found in paid search marketplaces, but focus on virtual value transformations rather than payment and allocation networks.
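As a closed-form reference point before any learning: for a value drawn from $U[lo, hi]$, Myerson's virtual value $\phi(v) = v - \frac{1-F(v)}{f(v)}$ simplifies to $2v - hi$. A hand-written sketch (not the `dmch` API):

```python
def virtual_value_uniform(v, lo=0.0, hi=1.0):
    # phi(v) = v - (1 - F(v)) / f(v); for U[lo, hi] this is
    # v - (hi - v) = 2 v - hi, independent of lo
    cdf_complement = (hi - v) / (hi - lo)
    density = 1.0 / (hi - lo)
    return v - cdf_complement / density
```

The optimal reserve price is the zero of $\phi$, i.e. $hi/2$ for $U[0, hi]$ — the kind of quantity a learned virtual value transformation has to discover.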
## Motivating example
Consider a two-bidder, two-slot position auction where the values for the two bidders are correlated. There is a signal $c\sim U[0,1]$, which we interpret as a _conversion rate_. The value of the item for bidder 1 is a random variable $v_1 = x_1 c$ where $x_1 \sim U[0,1]$, and similarly for bidder 2.
The first slot has a click-through-rate (quality) of 1. The second slot has a click-through-rate of 0.5. A bidder may purchase one slot only, so we can consider this a special case of a multi-item, unit-demand scenario.
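The correlation induced by the shared conversion rate can be verified by simulation; analytically it works out to $3/7 \approx 0.43$. A standalone sketch with no dmch or torch dependency:

```python
import random

random.seed(0)

def draw_pair():
    # the shared conversion-rate signal c couples the two bidders' values
    c = random.random()
    return random.random() * c, random.random() * c

pairs = [draw_pair() for _ in range(20000)]
v1 = [a for a, _ in pairs]
v2 = [b for _, b in pairs]

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

r = corr(v1, v2)  # close to the analytic 3/7
```

This correlation is exactly what a mechanism can exploit once the common signal is observed as context.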
## Architectures and supporting functions
In this tutorial, we will make use of Monotonic networks.
### Preliminaries
We will make heavy use of numpy, pandas, and pytorch.
```
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
device = torch.device("cuda:1") if torch.cuda.is_available() else torch.device('cpu')
```
We will also make use of matplotlib and seaborn for visualization:
```
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
### Common components
We add common `dmch` components:
```
import dmch
from dmch import Mechanism
from dmch import SequentialMechanism
from dmch import build_spa, create_spa_mechanism
```
Now we define the auction scenario:
```
# Number of bidders
bidders = 2
# Pr(click|position)
slot_weights = [1, 0.5]
# Number of slots
slots = len(slot_weights)
```
## GSP
For comparison, we define the GSP mechanism by using sequential second-price auctions (SPA):
```
def create_gsp_mechanism():
mbuilder = dmch.build_spa(bidders, context_features=1)
return mbuilder.build_sequential(slots,weights=slot_weights)
```
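For intuition about the baseline, here is a simplified next-price GSP computed by hand: the bidder in slot $i$ pays the $(i+1)$-th highest bid per click, scaled by the slot's click-through rate. This is a sketch of the pricing rule, not the `dmch.build_spa` implementation:

```python
def gsp_outcome(bids, slot_weights):
    # returns (winner_index, expected_payment) per slot under the
    # simplified next-price rule described above
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    outcome = []
    for slot, weight in enumerate(slot_weights):
        if slot >= len(order):
            break
        next_bid = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
        outcome.append((order[slot], weight * next_bid))
    return outcome
```

With bids `[3.0, 2.0]` and slot weights `[1, 0.5]`, bidder 0 wins the top slot and pays the second bid per click, while bidder 1 takes the last slot with no bid below to price it.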
## MyersonNet
A MyersonNet-style mechanism with a learned virtual value transformation is defined as follows:
```
def create_myerson_net(context_features=0, hidden_features=100, linear_functions=1, groups=1):
mbuilder = dmch.build_spa(bidders, context_features=context_features)
mbuilder.set_virtual_function(
hidden_features=hidden_features,
linear_functions=linear_functions,
groups=groups)
return mbuilder.build_sequential(slots,weights=slot_weights)
```
## Auction for the motivating example
The networks will train on data that is sampled from the value distribution, which is loaded into a `DataLoader`.
```
import torch.utils.data as data_utils
epochs = 500
sample_size = 2**15
batch_size = 2**12
independent_components = torch.rand(sample_size, bidders)
common_components = torch.rand(sample_size, 1)
inputs = independent_components * common_components
inputs_with_common = torch.cat([independent_components * common_components, common_components], dim=1)
inputs_loader=data_utils.DataLoader(
data_utils.TensorDataset(inputs),
batch_size=batch_size)
inputs_with_common_loader=data_utils.DataLoader(
data_utils.TensorDataset(inputs_with_common),
batch_size=batch_size)
```
Before training the networks, let's establish a GSP baseline.
```
gsp = create_gsp_mechanism()
gsp_report = pd.DataFrame(
    dmch.evaluate(
        gsp,
        inputs_loader,
        bidders,
        epochs=epochs,
        device=device,
        misreport_lr=1e-1,
        misreport_epochs=100))
```
We now create a simple MyersonNet instance.
```
myerson_net = create_myerson_net().to(device)
```
We loop over the data for a number of epochs and record traces of the networks learning.
```
myerson_net_report = pd.DataFrame(dmch.train(
    myerson_net,        # the mechanism
    inputs_loader,      # the bid inputs
    bidders,            # the number of bidders
    epochs=epochs,      # the total number of loops over the data
    device=device,      # the device
    rho=1e2,            # the rho parameter for the augmented Lagrangian method
    mechanism_lr=1e-3,  # the learning rate for the mechanism networks
    consider_dsic=False,
    consider_ir=False))
```
We can also test supplying MyersonNet with the common signal:
```
myerson_net_with_common = create_myerson_net(context_features=1).to(device)
myerson_net_with_common_report = pd.DataFrame(dmch.train(
    myerson_net_with_common,    # the mechanism
    inputs_with_common_loader,  # the bid inputs
    bidders,                    # the number of bidders
    epochs=epochs,              # the total number of loops over the data
    device=device,              # the device
    rho=1e2,                    # the rho parameter for the augmented Lagrangian method
    mechanism_lr=1e-3,          # the learning rate for the mechanism networks
    consider_dsic=False,
    consider_ir=False))
```
Let's review the revenue of the networks: _MyersonNet_ exceeds _GSP_ revenue, and _MyersonNet with common_ exceeds both. Adding the common signal as context allows MyersonNet to condition on that context, which reduces the task to finding an optimal reserve for two IID bidders drawn from U[0, common].
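As a back-of-the-envelope check (plain numpy, independent of `dmch`): for two IID bidders uniform on [0, c], the Myerson-optimal reserve is c/2, and a second-price auction with that context-aware reserve collects more revenue than one with a single context-blind reserve:

```python
import numpy as np

rng = np.random.default_rng(0)

def spa_revenue(values, reserve):
    """Second-price auction with reserve: the winner pays max(2nd bid, reserve);
    no sale when the top bid is below the reserve."""
    top2 = np.sort(values, axis=1)[:, -2:]          # columns: 2nd-highest, highest
    sold = top2[:, 1] >= reserve
    return np.where(sold, np.maximum(top2[:, 0], reserve), 0.0).mean()

c = rng.uniform(size=100_000)                        # common signal per auction
vals = rng.uniform(size=(100_000, 2)) * c[:, None]   # two IID U[0, c] bidders
rev_conditional = spa_revenue(vals, reserve=c / 2)   # context-aware reserve c/2
rev_fixed = spa_revenue(vals, reserve=0.25)          # one reserve for all contexts
```

Analytically, the conditional-reserve revenue is (5/12)·E[c] = 5/24 ≈ 0.208, matching the Monte Carlo estimate.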
```
fig, ax = plt.subplots(figsize=(8, 6))
ax.axhline(y=gsp_report.mean()[['revenue']].values, color='g', label='GSP')
ax.plot(myerson_net_report.groupby('epoch')[['revenue']].mean(), label='MyersonNet')
ax.plot(myerson_net_with_common_report.groupby('epoch')[['revenue']].mean(), label='MyersonNet with common')
ax.legend()

def plot_mechanism(mechanism, common=1):
    X, Y = np.meshgrid(
        np.arange(0.0, common, 0.01),
        np.arange(0.0, common, 0.01))
    n = X.shape[0] * X.shape[1]
    inputs = torch.cat(
        (torch.from_numpy(np.reshape(X, (n, 1))),
         torch.from_numpy(np.reshape(Y, (n, 1)))),
        dim=1).float().to(device)
    inputs = torch.cat((inputs, torch.zeros(n, 1).float().to(device)), dim=1)
    inputs[:, 2] = common
    allocation, payment = mechanism(inputs)
    allocation_levels = np.arange(0, 1.5, 0.01)
    bid_levels = np.arange(0, 1.0, 0.01)
    fig, axes = plt.subplots(nrows=2, ncols=bidders + 1, figsize=(20, 10))

    def plot_contour(tensor, axis_index, bidder_title, main_title, levels):
        for bidder in range(bidders):
            CS = axes[axis_index, bidder].tricontourf(
                inputs[:, 0].cpu().numpy(),
                inputs[:, 1].cpu().numpy(),
                tensor[:, bidder].detach().cpu().numpy(),
                levels=levels,
                cmap="RdBu_r",
                extend='both')
            fig.colorbar(CS, ax=axes[axis_index, bidder])
            axes[axis_index, bidder].set_title(bidder_title + str(bidder))
        CS = axes[axis_index, bidders].tricontourf(
            inputs[:, 0].cpu().numpy(),
            inputs[:, 1].cpu().numpy(),
            tensor.sum(dim=1).detach().cpu().numpy(),
            levels=levels,
            cmap="RdBu_r",
            extend='both')
        fig.colorbar(CS, ax=axes[axis_index, bidders])
        axes[axis_index, bidders].set_title(main_title)

    plot_contour(allocation, 0, 'Allocation to bidder ', 'Allocation to all bidders', allocation_levels)
    plot_contour(payment, 1, 'Payment from bidder ', 'Payment from all bidders', allocation_levels)

plot_mechanism(gsp, common=1)
plot_mechanism(myerson_net, common=1)
plot_mechanism(myerson_net_with_common, common=1)
plot_mechanism(gsp, common=0.66)
plot_mechanism(myerson_net, common=0.66)
plot_mechanism(myerson_net_with_common, common=0.66)
plot_mechanism(gsp, common=0.33)
plot_mechanism(myerson_net, common=0.33)
plot_mechanism(myerson_net_with_common, common=0.33)
```
```
import logging
import pandas as pd
import seaborn as sns
from scipy import stats
import divisivenormalization.utils as helpers
from divisivenormalization.data import Dataset, MonkeySubDataset
helpers.config_ipython()
logging.basicConfig(level=logging.INFO)
sns.set()
sns.set_style("ticks")
# adjust sns paper context rc parameters
font_size = 8
rc_dict = {
    "font.size": font_size,
    "axes.titlesize": font_size,
    "axes.labelsize": font_size,
    "xtick.labelsize": font_size,
    "ytick.labelsize": font_size,
    "legend.fontsize": font_size,
    "figure.figsize": (helpers.cm2inch(8), helpers.cm2inch(8)),
    "figure.dpi": 300,
    "pdf.fonttype": 42,
    "savefig.transparent": True,
    "savefig.bbox_inches": "tight",
}
sns.set_context("paper", rc=rc_dict)


class args:
    num_best = 10
    fname_best_csv = "df_best.csv"
    weights_path = "weights"
    train_logs_path = "train_logs"
    stim_full_size = 140  # full size of stimulus w/o subsampling and cropping
    stim_subsample = 2
    crop = 10
```
### Load data
```
results_df = pd.read_csv("results.csv")
# Save a simplified version of the csv file, sorted by validation set performance
df_plain = helpers.simplify_df(results_df)
df_plain.to_csv("results_plain.csv")
data_dict = Dataset.get_clean_data()
data = MonkeySubDataset(data_dict, seed=1000, train_frac=0.8, subsample=args.stim_subsample, crop=args.crop)
```
### Get and save FEV performance on test set
Use the 10 best models for analysis. As this operation requires model loading, we do it only if it
was not done before.
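For reference, FEV (fraction of explainable variance explained) discounts the observation noise from both the residual and the total variance. A hedged sketch of the usual formula (the exact implementation lives in the repository's model code and may differ in detail):

```python
import numpy as np

def fev(y_true, y_pred, noise_var):
    """FEV = 1 - (MSE - noise var) / (total var - noise var): the share of
    the explainable (non-noise) variance the model accounts for."""
    mse = np.mean((y_true - y_pred) ** 2)
    return 1.0 - (mse - noise_var) / (np.var(y_true) - noise_var)

rng = np.random.default_rng(0)
signal = rng.normal(size=10_000)                         # "true" responses
observed = signal + rng.normal(scale=0.5, size=10_000)   # trial noise, variance 0.25
score = fev(observed, signal, noise_var=0.25)            # ~1: noise floor reached
```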
```
try:
    df_best = pd.read_csv(args.fname_best_csv)
    logging.info("loaded data from " + args.fname_best_csv)
except FileNotFoundError:
    df_best = df_plain[0 : args.num_best].copy()
    fev_lst = []
    for i in range(args.num_best):
        run_no = df_best.iloc[i]["run_no"]
        logging.info("load run no " + str(run_no))
        model = helpers.load_dn_model(run_no, results_df, data, args.train_logs_path)
        fev = model.evaluate_fev_testset()
        fev_lst.append(fev)
        feve = model.evaluate_fev_testset_per_neuron()
        helpers.pkl_dump(feve, run_no, "feve.pkl", args.weights_path)
        with model.session.as_default():
            u = model.u.eval()
        helpers.pkl_dump(u, run_no, "u.pkl", args.weights_path)
    df_best["fev"] = fev_lst
    df_best.to_csv(args.fname_best_csv)
fev = df_best.fev.values * 100
print("Mean FEV", fev.mean())
print("SEM", stats.sem(fev, ddof=1))
print("max FEV", fev.max())
print("FEV of model with max correlation on validation set", fev[0])
```
# Talktorial 10
# Binding site similarity and off-target prediction
#### Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin
Angelika Szengel, Marvis Sydow and Dominique Sydow
**Note**: Please run this notebook cell by cell. Running all cells at once is also possible; however, a few PyMol images might not turn out as intended.
## Aim of this talktorial
In this talktorial, we use the structural similarity of whole proteins and binding sites to predict off-targets, i.e. proteins that are not intended targets of a drug, which may lead to unwanted side effects or enable desired alternate applications of a drug (drug repositioning).
We discuss the main steps for binding site comparison and implement a basic method, i.e. the geometrical variation between structures (the root mean square deviation of two structures).
## Learning goals
### Theory
* Off-target proteins
* Computational off-target prediction: binding site comparison
* Pairwise RMSD as simple measure for similarity
* Imatinib, a tyrosine kinase inhibitor
### Practical
* Load and visualize the ligand of interest (Imatinib/STI)
* Get all protein-STI complexes from the PDB
* Query the PDB
* Filter the PDB data set
* Save the filtered PDB IDs
* Visualize the PDB structures
* Align the PDB structures (whole protein)
* Get pairwise RMSD (whole protein)
* Align the PDB structures (binding site)
* Get pairwise RMSD (binding site)
## References
Binding site comparison:
* Binding site comparison reviews:
([<i>Curr. Comput. Aided Drug Des. </i> (2008), <b>4</b>, 209-20](https://www.eurekaselect.com/67606/article/how-measure-similarity-between-protein-ligand-binding-sites))
and
([<i>J. Med. Chem. </i> (2016), <b>9</b>, 4121-51](https://pubs.acs.org/doi/10.1021/acs.jmedchem.6b00078))
* Documentation on PyMol `align` command
([PyMolWiki: `align`](https://pymolwiki.org/index.php/Align))
* Wikipedia article on root mean square deviation (RMSD)
([Wikipedia: RMSD](https://en.wikipedia.org/wiki/Root-mean-square_deviation_of_atomic_positions)) and structural superposition ([Wikipedia: structural superposition](https://en.wikipedia.org/wiki/Structural_alignment))
* Structural superposition ([Book chapter: Algorithms, Applications, and Challenges of Protein Structure Alignment in *Advances in Protein Chemistry and Structural Biology* (2014), **94**, 121-75](https://www.sciencedirect.com/science/article/pii/B9780128001684000056?via%3Dihub))
Imatinib:
* Review on Imatinib
([<i>Nat. Rev. Clin. Oncol.</i> (2016), <b>13</b>, 431-46](https://www.nature.com/articles/nrclinonc.2016.41))
* Promiscuity of imatinib
([<i>J. Biol.</i> (2009), <b>8</b>, 10.1186/jbiol134](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2689438/))
* ChEMBL information on Imatinib
([ChEMBL: Imatinib](https://www.ebi.ac.uk/chembl/compound/inspect/CHEMBL941))
* PDB information on Imatinib
([PDB: STI](https://www3.rcsb.org/ligand/STI))
* Side effects of Imatinib
([Drugs.com: Imatinib](https://www.drugs.com/cdi/imatinib.html))
* Side effects of Imatinib
([<i>BMC Struct. Biol.</i> (2009), <b>9</b>, 10.1186/1472-6807-9-7](https://bmcstructbiol.biomedcentral.com/articles/10.1186/1472-6807-9-7))
## Theory
### Off-target proteins
An off-target can be any protein which interacts with a drug or (one of) its metabolite(s) without being the designated target protein.
The molecular reaction caused by the off-target can lead to unwanted side effects, ranging from a rather harmless to extremely harmful impact.
Off-targets mainly occur because on- and off-targets share similar structural motifs with each other in their binding site and therefore can bind similar ligands.
### Computational off-target prediction: binding site comparison
Computation-aided prediction of potential off-targets is aimed at minimizing the risk of developing potentially dangerous substances for medical treatment.
There are several algorithmic approaches to assess binding site similarity but they always consist of three main steps:
1. **Binding site encoding**: binding sites are encoded using different descriptor techniques and stored in a target database.
2. **Binding site comparison**: a query binding site is compared with the target database, using different similarity measures.
3. **Target ranking**: targets are ranked based on a suitable scoring approach.
For detailed information on different similarity measures and existing tools, we refer to two excellent reviews on binding site comparison ([<i>Curr. Comput. Aided Drug Des. </i> (2008), <b>4</b>, 209-20](https://www.eurekaselect.com/67606/article/how-measure-similarity-between-protein-ligand-binding-sites) and [<i>J. Med. Chem. </i> (2016), <b>9</b>, 4121-51](https://pubs.acs.org/doi/10.1021/acs.jmedchem.6b00078)).
<img src="images/binding_site_comparison_steps.png" align="above" alt="Image cannot be shown" width="700">
<div align="center"> Figure 1: Main steps of binding site comparison methods (figure by Dominique Sydow).</div>
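These three steps can be mocked up in a few lines of Python. The residue-type fingerprint below is a deliberately crude stand-in for real binding site descriptors; only the encode/compare/rank structure is the point:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode_site(residues):
    """Step 1: encode a binding site as a presence/absence fingerprint."""
    fp = np.zeros(len(AMINO_ACIDS), dtype=bool)
    for res in residues:
        fp[AMINO_ACIDS.index(res)] = True
    return fp

def tanimoto(a, b):
    """Step 2: similarity between two fingerprints."""
    return (a & b).sum() / (a | b).sum()

database = {"siteA": encode_site("KDEF"), "siteB": encode_site("WYC")}
query = encode_site("KDEG")
# Step 3: rank the database entries by similarity to the query
ranking = sorted(database, key=lambda k: tanimoto(query, database[k]), reverse=True)
```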
### Pairwise RMSD as simple measure for similarity
A simple and straightforward method for scoring the similarity is to use the calculated root mean square deviation (RMSD), which is the square root of the mean of the square of the distances between the atoms of two aligned structures ([Wikipedia: RMSD](https://en.wikipedia.org/wiki/Root-mean-square_deviation_of_atomic_positions)).
In order to find the respective atoms that are compared between two structures, they need to be aligned first based on sequence-based or sequence-independent alignment algorithms ([Book chapter: Algorithms, Applications, and Challenges of Protein Structure Alignment in *Advances in Protein Chemistry and Structural Biology* (2014), **94**, 121-75](https://www.sciencedirect.com/science/article/pii/B9780128001684000056?via%3Dihub)).
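As a self-contained illustration of the formula (independent of PyMol): superpose two coordinate sets with the Kabsch algorithm and compute the RMSD over the matched atoms. For a pure rigid-body motion the result is (numerically) zero:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Optimally superpose P onto Q (translation + rotation), return RMSD."""
    P = P - P.mean(axis=0)                    # center both coordinate sets
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)         # covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt       # optimal rotation
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

coords = np.random.default_rng(1).normal(size=(20, 3))
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
moved = coords @ rot.T + np.array([1.0, -2.0, 0.5])   # rigid motion only
```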
### Imatinib, a tyrosine kinase inhibitor
Kinases transfer a phosphate group from ATP to proteins, and thereby regulate various cellular processes such as signal transduction, metabolism, and protein regulation.
If these kinases are constitutively active (due to genomic mutations), they can distort regulation processes and cause cancer.
An example for cancer treatment is Imatinib ([<i>Nat. Rev. Clin. Oncol.</i> (2016), <b>13</b>, 431-46](https://www.nature.com/articles/nrclinonc.2016.41)), a small molecule tyrosine kinase inhibitor used to treat cancer, more specifically chronic myeloid leukaemia (CML) and gastrointestinal stromal tumour (GIST).
Imatinib was shown to be not entirely specific and to target tyrosine kinases other than its main target. This was used for drug repositioning, i.e. Imatinib was approved for alternate cancer types, ([<i>J. Biol.</i> (2009), <b>8</b>, 10.1186/jbiol134](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2689438/)), however can also show unwanted side effects such as signs of an allergic reaction, infection, bleeding, or headache ([Drugs.com: Imatinib](https://www.drugs.com/cdi/imatinib.html)).
## Practical
In the following, we will fetch and filter PDB structures that bind Imatinib. We will investigate the structure similarity of Imatinib-binding proteins (those with a solved protein structure).
The similarity measure used is a pairwise RMSD calculation (as a simple similarity measure), in order to show that this simple method can be used as an initial test for potential off-targets.
```
import os
import pprint
import pickle
import glob
import time
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import Draw, AllChem
#from rdkit.Chem import PyMol
IPythonConsole.ipython_useSVG=True
import nglview as nv
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
import seaborn as sns
from pypdb import *
from biopandas.pdb import PandasPdb
```
### Load and visualize the ligand of interest (Imatinib/STI)
The SMILES format for Imatinib (common abbreviation: STI) can be retrieved from e.g. the ChEMBL database
([ChEMBL: Imatinib](https://www.ebi.ac.uk/chembl/compound/inspect/CHEMBL941))
or the PDB database by its common abbreviation STI
([PDB: STI](https://www3.rcsb.org/ligand/STI)).
We simply copy the string from the "Isomeric SMILES" entry of the Chemical Component Summary table, and load the ligand here by hand.
```
sti = Chem.MolFromSmiles('CN1CCN(Cc2ccc(cc2)C(=O)Nc2ccc(C)c(Nc3nccc(n3)-c3cccnc3)c2)CC1')
Draw.MolToImage(sti)
```
In order to inspect the 3D structure of STI, we use the open source tool PyMol (see introduction in talktorial T8).
Before we can view STI in PyMol, we need to compute its 3D coordinates.
First, we add hydrogen atoms to the molecule, which are not always explicitly denoted in the SMILES format.
Second, we use the distance geometry to obtain initial coordinates for the molecule and optimize the structure of the molecule using the force field UFF (Universal Force Field).
```
sti_mol = Chem.AddHs(sti)
sti_mol
AllChem.EmbedMolecule(sti_mol)
AllChem.UFFOptimizeMolecule(sti_mol)
sti_mol
```
Now, we are ready to roll in nglview!
```
nv.show_rdkit(sti_mol)
```
### Get all protein-STI complexes from the PDB
We can look up Imatinib/STI in open databases like the PDB and search for proteins which are reported targets. In the PDB, Imatinib is usually abbreviated by STI. We will search for both terms and merge the results in the following.
#### Query the PDB
First, we retrieve all proteins from the PDB that bind the ligand of interest (STI).
```
search_dict = make_query('STI') # Query PDB for proteins bound to the ligand STI
found_pbd_ids = do_search(search_dict)
print(found_pbd_ids)
print("\nNumber of structures connected with STI in the PDB: " + str(len(found_pbd_ids)))
```
Note that the query results can differ depending on the term used for the query ligand (here: Imatinib).
```
search_dict2 = make_query('Imatinib') # Query PDB for proteins bound to the ligand Imatinib
found_pbd_ids2 = do_search(search_dict2)
print(found_pbd_ids2)
print("\nNumber of structures connected with Imatinib in the PDB: " + str(len(found_pbd_ids2)))
```
We merge both query results and keep only unique entries.
```
pdb_ids = list(set(found_pbd_ids + found_pbd_ids2))
print("Number of structures connected with STI/Imatinib in the PDB: " + str(len(pdb_ids)))
```
#### Filter the PDB data set
We retrieve meta information on the PDB structures using the `pypdb` function `get_entity_info`, in order to filter the data set based on the following criteria:
1. Filter by experimental method (`xray`).
2. Filter by resolution (equal or lower than 3 Å).
3. Retain only PDB structures with a single chain (for simplicity).
4. Retain only Imatinib-bound structures (e.g. some PDB structures are returned that are associated with Imatinib but not bound to it).
5. Retain only PDB IDs deposited before 2019 (data set resource at the time of the talktorial publication).
For more info on how to query the PDB see **talktorial 8**.
```
# Get meta information from the PDB
entity_info = []
for i in pdb_ids:
    entity_info.append(get_entity_info(i))
entity_info[0]
# Transform list to DataFrame
entity_info_pd = pd.DataFrame(entity_info)
a = [int(i.split(" ")[5]) for i in entity_info_pd["release_date"].tolist()]
entity_info_pd.head()
# 1. Filter by experimental method
entity_info_pd = entity_info_pd[entity_info_pd["Method"] == {'@name': 'xray'}]
# 2. Filter by resolution
entity_info_pd = entity_info_pd[entity_info_pd["resolution"].astype(float) <= 3.0]
# 3. Retain only structures with a single chain (for simplicity)
entity_info_pd = entity_info_pd[[type(i) == dict for i in entity_info_pd["Entity"]]]
entity_info_pd = entity_info_pd[[type(i["Chain"]) == dict for i in entity_info_pd["Entity"]]]
pdb_ids = entity_info_pd["structureId"].tolist()
print("Number of structures after filtering: " + str(len(pdb_ids)))
```
In the following, we will use a package called `BioPandas`, which provides useful functions to load molecular structures of biological macromolecules (from PDB and MOL2 files) in pandas DataFrames. We will use the `PandasPdb` object to facilitate our work with PDB files.
```
# 4. Retain only Imatinib-bound structures
def check_if_ligand_present(pdb_id, ligand_name):
    ppdb = PandasPdb().fetch_pdb(pdb_id)  # Fetch PDB (atom info, coordinates)
    return sum(ppdb.df["HETATM"]["residue_name"] == ligand_name) > 0  # Check for existence of STI entries
entity_info_pd = entity_info_pd[[check_if_ligand_present(pdb_id, "STI") for pdb_id in pdb_ids]] # Apply function
pdb_ids = entity_info_pd["structureId"].tolist()
print("Number of structures after filtering: " + str(len(pdb_ids)))
# 5. Retain only PDB IDs deposited before 2019
entity_info_pd = entity_info_pd[[int(i.split()[5]) < 2019 for i in entity_info_pd["release_date"].tolist()]]
pdb_ids = entity_info_pd["structureId"].tolist()
print("Number of structures after filtering: " + str(len(pdb_ids)))
# 6. After manual visual inspection remove 3GVU (contains 2x STI: not feasible for the automatic workflow below)
pdb_ids.remove("3GVU")
pdb_ids
#random.shuffle(pdb_ids) # In case you would like to change the order of IDs
```
#### Save the filtered PDB IDs
We save the PDB IDs of the filtered data set for further analysis (we will use PyMol python scripts later on that will process PDB IDs according to this file).
```
pickle.dump(pdb_ids, open("../data/T10/pdb_ids.p", "wb"))
```
### Visualize the PDB structures
First, we load all structures to PyMol for visual inspection of the 3D structure of the protein data set.
Besides the visualization here in this Jupyter notebook in form of fixed images of a PyMol frame, it is advised to also view and interact with the structures in 3D directly within the PyMol application, which should be opened and manipulated in parallel to this talktorial.
```
pdb_ids
w = nv.NGLWidget()
for pdb_id in pdb_ids:
w.add_pdbid(pdb_id)
w
```
Though this image is beautifully colorful and curly, it is not informative yet. We align the structures to each other in the next step.
### Align the PDB structures (whole protein)
PyMol offers different alignment methods suitable for different levels of sequence and structural similarity:
* The [`super`](https://pymolwiki.org/index.php/Super) command is preferred for proteins with *decent structural similarity* (sequence-independent).
* The [`align`](https://pymolwiki.org/index.php/Align) command is preferred for proteins with *decent sequence similarity* (sequence-dependent).
* The [`cealign`](https://pymolwiki.org/index.php/Cealign) command is very robust for proteins with *little to no sequence similarity* (twilight zone).
In this talktorial, we choose the `align` command to superimpose the structures (based on sequence) by minimizing their RMSD.
Note: This approach biases the analysis towards structures with similar sequences (the `align` command performs better for protein pairs with decent sequence similarity). For some comparisons with lower sequence similarity, the `super` or `cealign` command could be a better choice. For an automated workflow (where we do not know the sequence or structural similarity of the protein pairs), one solution could be to calculate the RMSD with all three methods and retain the best result for further analysis.
First, we show the alignment of all structures to the first structure in the list `pdb_ids`.
```
import MDAnalysis as mda
from MDAnalysis.analysis import align as mda_align
from biotite import sequence
from biotite.sequence import align

def get_sequence(protein):
    # protein - mda.AtomGroup
    # return string representation of protein
    seq = protein.residues.sequence()
    return str(seq.seq)

def align_sequences(s1, s2):
    # s1, s2 - string of sequence
    # returns biotite.Alignment (first/best)
    alignments = align.align_optimal(
        sequence.ProteinSequence(s1),
        sequence.ProteinSequence(s2),
        align.SubstitutionMatrix.std_protein_matrix(),
        gap_penalty=(-10, -1), terminal_penalty=True,
    )
    return alignments[0]

def get_aligned(ref, mobile):
    # ref, mobile - mda.AtomGroup
    # returns filtered versions of ref & mobile that are now sequence aligned
    s1, s2 = get_sequence(ref), get_sequence(mobile)
    a = align_sequences(s1, s2)
    # indices of residue alignment
    trace = a.trace
    # grab only rows where sequences are aligned
    trace = trace[~(trace == -1).any(axis=1)]
    aref = ref.residues[trace[:, 0]]
    amob = mobile.residues[trace[:, 1]]
    return aref.atoms, amob.atoms

def align_protein(ref, mobile):
    # sequence align, then shift mobile to align to ref
    aref, amob = get_aligned(ref, mobile)
    mda_align.alignto(amob, aref, strict=False, select='name CA')
    return aref, amob
# download pdbids (again) into MDAnalysis
structures = [mda.fetch_mmtf(pdb_id) for pdb_id in pdb_ids]
# strip solvent etc.
proteins = [s.select_atoms('protein') for s in structures]
# choose first protein as reference
ref = proteins[0]
# align all but the first protein to the first protein
for mobile in proteins[1:]:
    align_protein(ref, mobile)
view = nv.NGLWidget()
for protein in proteins:
view.add_component(protein)
view
```
One of the proteins, 3FW1, is poorly aligned in comparison to the others. We hide this protein in order to show the well-aligned proteins.
```
view = nv.NGLWidget()
for protein in proteins[1:]:
view.add_component(protein)
view
```
The structural alignment is high for many elements, e.g. helices, whereas it is lower or poor for others.
```
from MDAnalysis.analysis import rms
def calc_rmsd(A, B):
    # sequence alignment
    A, B = get_aligned(A, B)
    # select backbone
    A = A.select_atoms('name CA')
    B = B.select_atoms('name CA')
    A, B = mda_align.get_matching_atoms(A, B)
    return rms.rmsd(A.positions, B.positions, superposition=False)

rmsd_matrix = np.zeros((len(proteins), len(proteins)))
for i, A in enumerate(proteins):
    for j, B in enumerate(proteins):
        rmsd_matrix[i, j] = calc_rmsd(A, B)
rmsd_matrix
```
### Get pairwise RMSD (whole protein)
Since we could not find a function that returns the RMSD values from PyMol into this Jupyter notebook (this *is* possible within PyMol and within a PyMol python script), we call a PyMol python script here: the superposition and final RMSD refinement are performed for all protein pairs, and the resulting RMSD values are saved for further analysis.
First, we take a look at the python script for PyMol, which we will use for this task.
```
f = open("./pymol_scripts/pymol_align_proteins.py")
file_content = f.readlines()
for i in file_content:
    print(i, end="")
os.popen("python ./pymol_scripts/pymol_align_proteins.py")
```
This PyMol python script saves all pairwise RMSD results to a file, which we load now to the Jupyter notebook.
```
align_df_proteins = pickle.load(open("../data/T10/align_df_proteins.p", "rb"))
```
`align_df_proteins` is a DataFrame and contains for each pairwise comparison the return values of the `align` command in PyMol, i.e. a tuple with 7 items:
1. RMSD after refinement
2. Number of aligned atoms after refinement
3. Number of refinement cycles
4. RMSD before refinement
5. Number of aligned atoms before refinement
6. Raw alignment score
7. Number of residues aligned
We familiarize ourselves with this data structure for an example protein pair:
```
print("Structures in the data set:")
cols = align_df_proteins.columns
rows = align_df_proteins.index
print(cols)
print(rows)
example_col = cols[0]
example_row = rows[2]
print("\nExample align return values for the {}-{} pair: ".format(example_col, example_row))
print(align_df_proteins.loc[example_col, example_row])
print("\nRMSD value (in Angström) after refinement for the {}-{} pair: ".format(example_col, example_row))
print(align_df_proteins.loc[example_col, example_row][0])
```
Now, we extract the RMSD after refinement (or any other position of the `align` return value) for all pairs in form of a pandas DataFrame for further analysis.
Therefore, we define a function that takes as input the `align` function return values for all pairs `pymol_align_results` and the position of the return value of interest `position`.
```
def extract_align_info(align_df, position):
    if position in [0, 3, 5]:
        return align_df.applymap(lambda x: float(x[position]))
    elif position in [1, 2, 4, 6]:
        return align_df.applymap(lambda x: int(x[position]))
    else:
        print("Position not available.")
```
For example, we can check the number of aligned residues per protein pair (last position of the `align` return value).
```
extract_align_info(align_df_proteins, 6)
```
Here, we can see that for most protein pairs a large proportion of their residues could be aligned (EGFR sequence lengths range here from about 270 to 350). However, 3FW1 shows only low sequence alignment.
FYI: You can check the sequence alignment of all pairs here: `../data/T10/alignment/`. An example is shown below:
```
example_alignment_path = glob.glob("../data/T10/alignment/*")[3]
f = open(example_alignment_path)
file_content = f.readlines()
for i in file_content:
    print(i, end="")
```
In the next step, we generate a DataFrame for the RMSD values after refinement (first position of the `align` return value).
```
rmsd = extract_align_info(align_df_proteins, 0)
rmsd
```
We visualize the results of this RMSD refinement as heatmap.
```
sns.heatmap(rmsd, linewidths=1, annot=True, cbar_kws={"label": r"RMSD ($\AA$)"}, cmap="Blues")
plt.show()
```
We cluster the heatmap in order to see protein similarity based on the RMSD refinement.
```
def plot_clustermap(rmsd, title):
    g = sns.clustermap(rmsd, linewidths=1, annot=True, cbar_kws={"label": r"RMSD ($\AA$)"}, cmap="Blues")
    plt.setp(g.ax_heatmap.get_xticklabels(), rotation=0)
    plt.setp(g.ax_heatmap.get_yticklabels(), rotation=0)
    sns.set(font_scale=1.5)
    # Save plot - use bbox_inches to include text boxes:
    # https://stackoverflow.com/questions/44642082/text-or-legend-cut-from-matplotlib-figure-on-savefig?rq=1
    plt.savefig("../data/T10/bsc_{}.png".format(title), dpi=300, bbox_inches="tight", transparent=True)
    plt.show()

plot_clustermap(rmsd, "protein")
```
The RMSD comparison shows that one protein differs from the other proteins, i.e. 3FW1 (as already discussed based on the visual 3D inspection of the alignment and on the number of aligned residues).
Proteins are classified by the chemical reactions they catalyze with so called EC (Enzyme Commission) numbers, which we will use here to check the enzymatic groups the proteins belong to.
```
# Get EC numbers for PDB IDs from PDB
pdb_all_info = [get_all_info(pdb_id) for pdb_id in pdb_ids]
ec_numbers = [i["polymer"]["enzClass"]["@ec"] for i in pdb_all_info]
target_set = {"pdb_id": pdb_ids,
"ec_number": ec_numbers}
target_set = pd.DataFrame(target_set)
target_set
```
We can see that 3FW1, the human quinone reductase 2 (NQO2), belongs to EC class 1, i.e. oxidoreductases, whereas the other proteins belong to class 2.7, i.e. phosphorus transferases, which contain the tyrosine kinases (EC 2.7.10.2), the designated targets for Imatinib. 3FW1 is a reported off-target "with potential implications for drug design and treatment of chronic myelogenous leukemia in patients" ([<i>BMC Struct. Biol.</i> (2009), <b>9</b>, 10.1186/1472-6807-9-7](https://bmcstructbiol.biomedcentral.com/articles/10.1186/1472-6807-9-7)).
### Align PDB structures (binding sites)
So far we have used the whole protein structure for the alignment and RMSD refinement. However, the ligand binds only at the protein binding site and therefore the similarity of binding sites rather than of whole protein structures is a more putative basis for off-target prediction.
We define a binding site of a protein by selecting all residues that are within 10 Å of any ligand atom. These binding site residues are used for alignment and only their Cɑ atoms (protein backbone) are used for the RMSD refinement. Here, we show the alignment of all structures to the first structure in the list `pdb_ids`.
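The selection behind PyMol's `byres ... within 10 of ...` can be sketched in plain numpy: keep every residue that has at least one atom within the cutoff of any ligand atom (a simplified stand-in for the PyMol command; the function and example names are our own):

```python
import numpy as np

def residues_near_ligand(atom_coords, atom_resids, ligand_coords, cutoff=10.0):
    """Return ids of residues with >= 1 atom within `cutoff` (Angstrom)
    of any ligand atom (mimics PyMol's `byres ... within` selection)."""
    # pairwise distances, shape (n_atoms, n_ligand_atoms)
    d = np.linalg.norm(atom_coords[:, None, :] - ligand_coords[None, :, :], axis=-1)
    close = (d <= cutoff).any(axis=1)
    return sorted(set(atom_resids[close]))

atoms = np.array([[0., 0., 0.], [3., 0., 0.], [30., 0., 0.]])
resids = np.array([1, 2, 3])
ligand = np.array([[1., 0., 0.]])
near = residues_near_ligand(atoms, resids, ligand)
# residues 1 and 2 lie within 10 A of the ligand atom; residue 3 does not
```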
```
# Reinitialize PyMol
objPMV.server.do("reinitialize")
# Load proteins files
for pdb_id in pdb_ids:
    cmd = "fetch " + pdb_id
    objPMV.server.do(cmd)
# Show proteins as cartoon (may be necessary depending on your pymol version)
#objPMV.server.do('cmd.show("cartoon","all")')
# Hide objects
objPMV.server.do("hide polar_contacts")
# Set background to white
objPMV.server.do("bg_color white")
# Remove water and ions
objPMV.server.do("remove solvent")
# Align binding sites
immobile_pdb_id = pdb_ids[0]
for mobile_pdb_id in pdb_ids[1:]:
    # Select atoms within a certain radius of any atom of STI and extend selection to full residues
    objPMV.server.do("select mobile_bs, byres " + mobile_pdb_id + " within 10 of (" + mobile_pdb_id + " and resn STI)")
    objPMV.server.do("select immobile_bs, byres " + immobile_pdb_id + " within 10 of (" + immobile_pdb_id + " and resn STI)")
    # Perform alignment
    objPMV.server.do("align mobile_bs, immobile_bs")
# Center and zoom
objPMV.server.do("center all")
objPMV.server.do("zoom all")
objPMV.server.do("ray 400,400")
# Display PyMol frame in Jupyter notebook
objPMV.GetPNG(h=500)
```
### Get pairwise RMSD (binding sites)
As shown before for the alignment and RMSD refinement of the whole protein structures, we here run a PyMol python script that calculates the alignment and RMSD for all protein binding site pairs, as described above.
The PyMol commands used are explained within the PyMol python script.
```
f = open("./pymol_scripts/pymol_align_bindingsites.py")
file_content = f.readlines()
for i in file_content:
    print(i, end="")
```
We run the PyMol python script via the terminal (ligand-protein radius as input for binding site atom definition).
```
# Perform binding site comparison using the align function in PyMol
os.popen("python ./pymol_scripts/pymol_align_bindingsites.py 10")
```
We load the `align` DataFrame for binding site comparisons.
```
align_df_bindingsites = pickle.load(open("../data/T10/align_df_bindingsites.p", "rb"))
```
We extract the RMSD values from that DataFrame.
```
rmsd_bs = extract_align_info(align_df_bindingsites, 0)
extract_align_info(align_df_bindingsites, 6)
```
We show the clustered heatmap for the RMSD results.
```
# Show the pairwise RMSD values as clustered heatmap
plot_clustermap(rmsd_bs, "bs")
```
The RMSD values of the aligned binding sites show that 3FW1 (EC number 1.10.5.1) is dissimilar to the rest of the dataset (EC number 2.7); visual inspection in PyMol shows that STI binds to the surface of this protein. The pairs 1XBB-4CVS and 1XBB-3HEC also show dissimilarities, whereas the rest of the dataset shows low RMSD values.
RMSD values as calculated here depend on the residue selection (the binding site definition) and on the quality of the a priori sequence alignment.
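For reference, the RMSD itself is straightforward to compute once two structures are aligned; here is a minimal NumPy sketch (the coordinates are made up for illustration, not taken from the PDB entries above):

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """RMSD between two pre-aligned (N, 3) coordinate arrays (in Å)."""
    diff = coords_a - coords_b
    return np.sqrt((diff ** 2).sum() / len(coords_a))

# Toy example: three atoms, second structure shifted by 1 Å along z
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = a + np.array([0.0, 0.0, 1.0])
print(rmsd(a, b))  # 1.0
```

PyMol's `align` performs the superposition (and outlier rejection) before this kind of deviation measure is taken.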
```
# Clean up directory (remove PDB downloads and PyMol alignment files)
os.popen("rm ./*.cif")
os.popen("rm ../data/T10/alignment/*.aln")
```
## Discussion
In this talktorial, we have used sequence alignment and subsequent RMSD refinement of whole protein structures and binding sites to assess the similarity and dissimilarity of a set of Imatinib-binding proteins.
However, off-target prediction for Imatinib requires comparing the binding site of an intended Imatinib target (a tyrosine kinase) with a large database of resolved structures (the PDB).
Since this also entails comparing sequences with low similarity, more sophisticated methods should be invoked
that use a sequence-independent alignment algorithm and that include the physico-chemical properties of the binding site, enabling a more meaningful search.
## Quiz
1. Explain the terms on- and off-targets of a drug.
2. Explain why binding site similarity can be used to find off-targets to a query target.
3. Discuss how useful the RMSD value of (i) whole proteins and (ii) protein binding sites is for off-target prediction.
4. Think of alternative approaches to representing binding site information (how would you encode a binding site for binding site comparison?).
# Vladislav Abramov and Sergei Garshin DSBA182
## The Task
### What do we expect from this tutorial?
1. Fit a specific model of the chosen class. Don't just call `.fit`; write out the resulting equation!
2. Select a model automatically (using built-in model selection).
3. Plot the forecasts, with interval forecasts where available.
4. Compare a few (two or three) models of the given class using a rolling window.
5. Creativity, any extras, memes :)
### Model class to choose from: ETS, ARIMA, BATS + TBATS, PROPHET, random forest + feature engineering, GARCH, or propose your own
### Goal: when people ask "how do I fit ETS/ARIMA in Python?" a year from now, the answer should be "read the tutorials from our course!"
---
---
---
# Real Data Analysis with ARIMA models
Let's begin with collecting stock data
```
import pandas as pd
import yfinance as yf
from matplotlib import pyplot as plt
import numpy as np
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from pmdarima.arima import auto_arima, ARIMA, ADFTest
from sklearn.metrics import mean_squared_error
from math import sqrt
from tqdm import tqdm
from sklearn.metrics import r2_score
import warnings
warnings.filterwarnings('ignore')
def should_diff(data):
    adf_test = ADFTest(alpha = 0.05)
    return adf_test.should_diff(data)

def get_stock_data(ticker, start, end):
    tickerData = yf.Ticker(ticker)
    tickerDf = tickerData.history(period='1d', start = start, end = end)
    return tickerDf

def train_test_devision(n, data):
    train = data[:-n]
    test = data[-n:]
    return train, test

def differentiate_data(dataset, interval=1):
    diff = list()
    for i in range(interval, len(dataset)):
        value = dataset[i] - dataset[i - interval]
        diff.append(value)
    return diff

def autocorrelation_plot(data):
    data = np.array(data)**2
    plot_acf(data)
    plt.show()

def p_autocorrelation_plot(data):
    data = np.array(data)**2
    plot_pacf(data)
    plt.show()
data = get_stock_data('AAPL', '2015-1-1', '2021-2-1')
data.head(10)
```
---
Here we may observe the graph of the stock price of Apple Inc. over the period from 1 Jan 2015 to 1 Feb 2021
```
plt.plot(data['Close'])
plt.title('Close Stock Prices')
```
---
Looking at the graph, it is obvious that the data is not stationary and has a strong trend. However, let's confirm that the data is not stationary using an autocorrelation plot and the Augmented Dickey-Fuller test.
```
print('Should differentiate? :', should_diff(data['Close']))
print()
print('ACF of undifferentiated data')
autocorrelation_plot(data['Close'])
```
---
As we can see, we were right: the data is not stationary!
## Stationarity check & conversion of the data to stationarity
For now, let's difference our initial stock data to obtain a stationary series of deltas
```
X = pd.DataFrame()
X['Diff_Close'] = differentiate_data(data['Close'])
plt.plot(X['Diff_Close'])
plt.title('Stationary stock data plot')
```
As we may notice, we have removed the trend and made the data much more stationary than it was before. As the next step, let's check stationarity using the autocorrelation plot, the partial autocorrelation plot, and the Augmented Dickey-Fuller test again.
```
print('Should differentiate? :', should_diff(X['Diff_Close']))
print()
print('ACF of differentiated data')
autocorrelation_plot(X['Diff_Close'])
print('PACF of differentiated data')
p_autocorrelation_plot(X['Diff_Close'])
```
Wow! The data has become stationary! We may go further!
---
## Train / Test division
At this step we divide our data into two parts, train and test. Our model will use the training set to make predictions, which we then compare with the test set.
```
n = 50
train, test = train_test_devision(n, data['Close'])
fig, ax = plt.subplots()
ax.plot(train, label = 'Train Set')
ax.plot(test, label = 'Test Set')
fig.set_figheight(6)
fig.set_figwidth(10)
ax.legend()
```
---
# Manual Model
In this part we have decided to train an ARIMA(3,1,2) model, with p = 3 AR terms, d = 1 since one differencing is needed, and q = 2 MA terms
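For reference, a general ARIMA(p, d, q) model with d = 1 (so the model is fit on first differences) can be written as:

$$\Delta y_t = c + \sum_{i=1}^{p} \phi_i \, \Delta y_{t-i} + \sum_{j=1}^{q} \theta_j \, \varepsilon_{t-j} + \varepsilon_t, \qquad \Delta y_t = y_t - y_{t-1}$$

so the fitted ARIMA(3,1,2) has three AR coefficients $\phi_i$ and two MA coefficients $\theta_j$.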
```
X = data['Close'].values
size = len(train.values)
train, test = train.values, test.values
history = [x for x in train]
predictions, CI = [],[]
for t in tqdm(range(len(test))):
    model = ARIMA((3,1,2))
    model.fit(history)
    y_hat, conf_int = model.predict(n_periods = 1, return_conf_int = True, alpha=0.05)
    predictions.append(y_hat)
    CI.append(conf_int)
    obs = test[t]
    history.append(obs)
    # print('predicted=%f, expected=%f' % (y_hat, obs))
rmse = sqrt(mean_squared_error(test, predictions))
r_squared = r2_score(test, predictions)
print('Test RMSE: %.3f' % rmse)
print('Test R^2: %.3f' % r_squared)
fig, ax = plt.subplots(figsize=(15,8))
ax.plot(test, label = 'Test Set')
ax.plot(predictions, label = 'Prediction Set')
ax.set_title('ARIMA (3,1,2)')
ax.set_xlabel('Day')
ax.set_ylabel('Price')
ax.legend()
model.summary()
```
## The ARIMA equation we got
$\Delta y_t = -0.0090 \Delta y_{t-1} -0.1220 \Delta y_{t-2} -0.0377 \Delta y_{t-3} + \varepsilon_t -0.1042 \varepsilon_{t-1} -0.1690 \varepsilon_{t-2}$
where $\\ \Delta y_t = y_t - y_{t-1}$
As we may see, the model works pretty well
---
## Automatic choice of the model
In this section we would like to experiment with automatic parameter selection, which also includes seasonal components
```
n = 50
train, test = train_test_devision(n, data['Close'])
model = auto_arima(train, start_p=1, start_q=1,
max_p=3, max_q=3, m=12,
start_P=0, seasonal=True,
d=1, D=1, trace = True,
error_action='ignore',
suppress_warnings = True,
stepwise = True)
model.summary()
y_hat, conf_int = model.predict(n_periods = n, return_conf_int = True, alpha=0.05)
predictions = pd.DataFrame(y_hat, index = test.index, columns = ['Prediction'])
CI = pd.DataFrame({'CI lower': conf_int[:, 0], 'CI upper': conf_int[:, 1]}, index = test.index)
fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(20,8))
ax1.plot(train[1400:], label = 'Train Set')
ax1.plot(test, label = 'Test Set')
ax1.plot(predictions, label = 'Prediction Set')
ax1.plot(CI['CI lower'], label = 'CI lower', c = 'r')
ax1.plot(CI['CI upper'], label = 'CI upper', c = 'r')
ax1.set_title('Close look at the predictions')
ax1.set_xlabel('Date')
ax1.set_ylabel('Price')
ax1.legend()
ax2.plot(train[900:], label = 'Train Set')
ax2.plot(test, label = 'Test Set')
ax2.plot(predictions, label = 'Prediction Set')
ax2.plot(CI['CI lower'], label = 'CI lower', c = 'r')
ax2.plot(CI['CI upper'], label = 'CI upper', c = 'r')
ax2.set_title('Global look at the predictions')
ax2.set_xlabel('Date')
ax2.set_ylabel('Price')
ax2.legend()
```
To examine the results we have built two graphs; the left one gives a closer, more local view than the right one.
---
---
---
Unfortunately, as of Wednesday evening we did not manage to complete all the items or to describe our steps in detail. We kindly ask you to comment on the completed stages and to give us advice and guidance :)
[exercises](intro.ipynb)
```
import numpy as np
np.arange(6)
np.arange(0, 0.6, 0.1), np.arange(6) * 0.1 # two possibilities
np.arange(0.5, 1.1, 0.1), "<-- wrong result!"
np.arange(5, 11) * 0.1, "<-- that's right!"
np.linspace(0, 6, 7)
np.linspace(0, 6, 6, endpoint=False), np.linspace(0, 5, 6) # two possibilities
np.linspace(0, 0.6, 6, endpoint=False), np.linspace(0, 0.5, 6) # again two possibilities
np.linspace(0.5, 1.1, 6, endpoint=False), np.linspace(0.5, 1, 6) # and again ...
```
If the number of elements is known and the step size should be obtained automatically $\Rightarrow$ `np.linspace()`
If the step size is known and it's an integer and the number of elements should be obtained automatically $\Rightarrow$ `np.arange()`
If the step size is not an integer:
* If the step size is a fraction of integers, you can use `np.arange()` with integers and divide the result accordingly.
* If that's not feasible, calculate the expected number of elements beforehand and use `np.linspace()`
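A quick sanity check of these rules (plain NumPy; the naive float `arange` mirrors the "wrong result" shown above):

```python
import numpy as np

# Naive float step: rounding errors may add an unwanted extra element (~1.1)
naive = np.arange(0.5, 1.1, 0.1)

# Integer arange, then scale: exactly 6 elements from 0.5 to 1.0
scaled = np.arange(5, 11) / 10
print(len(scaled))  # 6
print(scaled[-1])   # 1.0
```

The scaled version is exact because the integers 5..10 are represented exactly and the single division introduces at most one rounding step per element.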
```
dur, amp, freq, fs = 1, 0.3, 500, 44100
t = np.arange(np.ceil(dur * fs)) / fs
y = amp * np.sin(2 * np.pi * freq * t)
```
alternative (but inferior) methods to get $t$:
```
t1 = np.arange(0, dur, 1/fs) # implicit rounding of dur!
t2 = np.arange(0, np.round(dur), 1/fs) # still problematic: arange with floats
# wrong if dur isn't an integer multiple of 1/fs:
t3 = np.linspace(0, dur, int(np.round(dur * fs)), endpoint=False)
```
Length of `y` must be *exactly* 44100 (using a half-open interval for $t$), not 44101 (which would be longer than 1 second).
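That count can be verified directly (same `dur` and `fs` values as above):

```python
import numpy as np

dur, fs = 1, 44100
t = np.arange(np.ceil(dur * fs)) / fs  # half-open interval [0, dur)
print(len(t))       # 44100
print(t[-1] < dur)  # True: the endpoint dur itself is excluded
```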
Plotting: 2 ways to zoom (there are probably more): draw a rectangle, drag with the right mouse button in pan/zoom mode.
Clicks? Because of discontinuities (also in the derivatives) $\Rightarrow$ Fade in/out! See [tools.fade()](tools.py).
```
import sounddevice as sd
import tools
def myplay(data):
    """Apply fade in/out and play with 44.1 kHz."""
    data = tools.fade(data, 2000, 5000)
    sd.play(data, 44100)
myplay(y)
def mysine(frequency, amplitude, duration):
    """Generate sine tone with the given parameters @ 44.1 kHz."""
    samplerate = 44100
    times = np.arange(np.ceil(duration * samplerate)) / samplerate
    return amplitude * np.sin(2 * np.pi * frequency * times)
z = mysine(440, 0.4, 3)
myplay(z)
%matplotlib
import matplotlib.pyplot as plt
def myplot(data):
    """Create a simple plot @ 44.1 kHz."""
    samplerate = 44100
    times = np.arange(len(data)) / samplerate
    plt.plot(times, data)
    plt.xlabel("Time / Seconds")
myplot(mysine(440, 0.4, 3))
import soundfile as sf
dur, amp = 1, 0.3
frequencies = 400, 500, 600 # Hz
fadetime = 2000 # samples
for freq in frequencies:
    sig = mysine(freq, amp, dur)
    sig = tools.fade(sig, fadetime)
    sf.write("sine_{}hz.wav".format(freq), sig, 44100)
from scipy import signal
f0, f1 = 100, 5000 # Hz
amp = 0.2
dur = 2 # seconds
fadetime = 2000 # samples
fs = 44100
t = np.arange(np.ceil(dur * fs)) / fs
for method in 'linear', 'log':
    sweep = amp * signal.chirp(t, f0, dur, f1, method)
    sweep = tools.fade(sweep, fadetime)
    sf.write('sweep_{}.wav'.format(method), sweep, fs)
sinetone = mysine(frequency=500, amplitude=0.3, duration=1.5)
noise = np.random.normal(scale=0.1, size=len(sinetone))
sine_plus_noise = sinetone + noise
myplay(sine_plus_noise)
myplot(sine_plus_noise)
dur = 2
amp = 0.2
two_sines = mysine(500, amp, dur) + mysine(507, amp, dur)
myplay(two_sines)
myplot(two_sines)
```
Two sine tones with similar frequencies create "beats", see <http://en.wikipedia.org/wiki/Beat_(acoustics)>.
The sum of these two tones is equivalent to an amplitude modulation with a carrier frequency of $\frac{f_1+f_2}{2}$ and a modulation frequency of $\frac{f_1-f_2}{2}$.
$$\cos(2\pi f_1t)+\cos(2\pi f_2t) = 2\cos\left(2\pi\frac{f_1+f_2}{2}t\right)\cos\left(2\pi\frac{f_1-f_2}{2}t\right)$$
We don't really *hear* the modulation frequency itself, we only hear the envelope of the modulation, therefore the *perceived* beat frequency is $f_{\text{beat}} = f_1-f_2$.
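The sum-to-product identity above is easy to verify numerically (a quick sanity check, not part of the original exercise; the frequencies match the two-sine example above):

```python
import numpy as np

f1, f2 = 500, 507  # Hz
t = np.linspace(0, 1, 1000)
lhs = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
rhs = 2 * np.cos(2 * np.pi * (f1 + f2) / 2 * t) * np.cos(2 * np.pi * (f1 - f2) / 2 * t)
print(np.allclose(lhs, rhs))  # True
```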
```
stereo_sines = np.column_stack([mysine(400, amp, dur), mysine(600, amp, dur)])
myplay(stereo_sines)
```
The first column should be the left channel!
```
dur, amp = 1, 0.3
freq = 500 # Hz
delay = 0.5 # ms
fs = 44100
t = np.arange(np.ceil(dur * fs)) / fs
times = np.column_stack((t, t - delay/1000))
sig = amp * np.sin(2 * np.pi * freq * times)
myplay(sig)
dur, amp = 0.5, 0.3
frequencies = 500, 1000, 2000 # Hz
delays = 0.6, 0.4, 0.2, 0, -0.2, -0.4, -0.6 # ms
fs = 44100
t = np.arange(np.ceil(dur * fs)) / fs
for f in frequencies:
    for delay in delays:
        times = np.column_stack((t, t - delay/1000))
        sig = amp * np.sin(2 * np.pi * f * times)
        myplay(sig)
        sd.wait()
```
This is supposed to illustrate [Lord Rayleigh's Duplex Theory](http://en.wikipedia.org/wiki/Interaural_time_difference#Duplex_theory) (at least the part about time differences).
```
dur, amp = 2, 0.3
frequencies = np.array([200, 400, 600, 800, 1000])
fs = 44100
t = np.arange(np.ceil(dur * fs)) / fs
t.shape = -1, 1
t
amplitudes = amp * 1 / np.arange(1, len(frequencies)+1)
amplitudes
five_sines = amplitudes * np.sin(2 * np.pi * frequencies * t)
five_sines.shape
sum_of_sines = five_sines.sum(axis=1)
myplot(sum_of_sines)
myplay(five_sines[:, [0, 1, 2, 3, 4]].sum(axis=1))
myplay(five_sines[:, [0, 1, 2, 3]].sum(axis=1))
myplay(five_sines[:, [0, 1, 2, 4]].sum(axis=1))
myplay(five_sines[:, [0, 1, 3, 4]].sum(axis=1))
myplay(five_sines[:, [0, 2, 3, 4]].sum(axis=1))
myplay(five_sines[:, [1, 2, 3, 4]].sum(axis=1))
```
<https://en.wikipedia.org/wiki/Harmonic_series_(music)>
```
f0 = 200 # Hz
partials = 20
frequencies = f0 * np.arange(1, partials + 1)
frequencies
amplitudes = amp * 1 / np.arange(1, len(frequencies)+1)
amplitudes
many_sines = amplitudes * np.sin(2 * np.pi * frequencies * t)
many_sines.shape
sawtooth = many_sines.sum(axis=1)
myplot(sawtooth)
myplay(sawtooth)
```
https://en.wikipedia.org/wiki/Sawtooth_wave
```
square = many_sines[:, ::2].sum(axis=1)
myplot(square)
myplay(square)
```
https://en.wikipedia.org/wiki/Square_wave
```
c = 343
samplerate = 44100
dur = 0.01
phat = 0.2
freq = 500
omega = 2 * np.pi * freq
kx = omega / c
x = 0
time = np.arange(np.ceil(dur * samplerate)) / samplerate
p = phat * np.exp(1j*(kx*x - omega*time))
plt.plot(time*1000, np.real(p))
plt.xlabel('$t$ / ms')
plt.ylabel(r'$\mathcal{R}\{p(x,t)\}$ / Pa')
plt.grid()
plt.title('$f = {}$ Hz, $T = {}$ ms'.format(freq, 1000/freq));
xrange = 3
dx = 0.001
time = 0
x = np.arange(np.ceil(xrange/dx)) * dx
p = phat * np.exp(1j*(kx*x - omega*time))
plt.plot(x*100, np.real(p))
plt.xlabel('$x$ / cm')
plt.ylabel(r'$\mathcal{R}\{p(x,t)\}$ / Pa')
plt.grid()
plt.title(r'$f = {}$ Hz, $\lambda = {}$ cm'.format(freq, c*100/freq));
```
<p xmlns:dct="http://purl.org/dc/terms/">
<a rel="license"
href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" />
</a>
<br />
To the extent possible under law,
<span rel="dct:publisher" resource="[_:publisher]">the person who associated CC0</span>
with this work has waived all copyright and related or neighboring
rights to this work.
</p>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
## Load data
We know a person's age and experience, and we want to be able to infer whether that person is a badass in their field or not.
```
df = pd.DataFrame({
'Age': [20,16.2,20.2,18.8,18.9,16.7,13.6,20.0,18.0,21.2,
25,31.2,25.2,23.8,23.9,21.7,18.6,25.0,23.0,26.2],
'Experience': [2.3,2.2,1.8,1.4,3.2,3.9,1.4,1.4,3.6,4.3,
4.3,4.2,3.8,3.4,5.2,5.9,3.4,3.4,5.6,6.3],
'Badass': [0,0,0,0,0,0,0,0,0,0,
1,1,1,1,1,1,1,1,1,1]
})
df
colors = np.full_like(df['Badass'], 'red', dtype='object')
colors[df['Badass'] == 1] = 'blue'
plt.scatter(df['Age'], df['Experience'], color=colors)
X = df.drop('Badass', axis=1).values
Y = df['Badass'].values
# Cas à prédire
x = [21.2, 4.3]
```
## Using sklearn
### Fit
```
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(C=1e20, solver='liblinear', random_state=0)
%time model.fit(X, Y)
print(model.intercept_, model.coef_)
```
### Plot Decision Boundary
<details>
<summary>Where does the equation come from? ↓</summary>
<img src="https://i.imgur.com/YxSDJZA.png?1">
</details>
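In case the image above does not load: with the fitted coefficients $\beta_0$ (intercept), $\beta_1$, and $\beta_2$, the 0.5-probability threshold corresponds to zero log-odds, which yields the straight line used in the plot:

$$\beta_0 + \beta_1 x_1 + \beta_2 x_2 = 0 \quad\Longrightarrow\quad x_2 = -\frac{\beta_1}{\beta_2}\, x_1 - \frac{\beta_0}{\beta_2}$$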
```
b0 = model.intercept_[0]
b1 = model.coef_[0][0]
b2 = model.coef_[0][1]
plt.scatter(df['Age'], df['Experience'], color=colors)
# Decision boundary (with threshold 0.5)
_X = np.linspace(df['Age'].min(), df['Age'].max(),10)
_Y = (-b1/b2)*_X + (-b0/b2)
plt.plot(_X, _Y, '-k')
# Plot using contour
_X1 = np.linspace(df['Age'].min(), df['Age'].max(),10)
_X2 = np.linspace(df['Experience'].min(), df['Experience'].max(),10)
xx1, xx2 = np.meshgrid(_X1, _X2)
grid = np.c_[xx1.ravel(), xx2.ravel()]
preds = model.predict_proba(grid)[:, 1].reshape(xx1.shape)
plt.scatter(df['Age'], df['Experience'], color=colors)
plt.contour(xx1, xx2, preds, levels=[.5], cmap="Greys", vmin=0, vmax=.6)
```
### Predict
```
print('Badass probability:', model.predict_proba([x])[0][1])
print('Prediction:', model.predict([x])[0])
```
## From scratch
### Fit
Source: https://github.com/martinpella/logistic-reg/blob/master/logistic_reg.ipynb
```
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss(h, y):
    return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()

def gradientDescent(X, y, theta, alpha, epochs, verbose=True):
    m = len(y)
    for i in range(epochs):
        h = sigmoid(X.dot(theta))
        gradient = (X.T.dot(h - y)) / m
        theta -= alpha * gradient
        if(verbose and i % 1000 == 0):
            z = np.dot(X, theta)
            h = sigmoid(z)
            print('loss:', loss(h, y))
    return theta
# Add intercept
m = len(X)
b = np.ones((m,1))
Xb = np.concatenate([b, X], axis=1)
# Fit
theta = np.random.rand(3)
theta = gradientDescent(Xb, Y, theta=theta, alpha=0.1, epochs=10000)
theta
```
### Plot
```
b0 = theta[0]
b1 = theta[1]
b2 = theta[2]
plt.scatter(df['Age'], df['Experience'], color=colors)
# Decision boundary (with threshold 0.5)
_X = np.linspace(df['Age'].min(), df['Age'].max(),10)
_Y = (-b1/b2)*_X + (-b0/b2)
plt.plot(_X, _Y, '-k')
```
### Predict
```
z = b0 + b1 * x[0] + b2 * x[1]
p = 1 / (1 + np.exp(-z))
print('Badass probability:', p)
print('Prediction:', (1 if p > 0.5 else 0))
```
# Creating Customer Segments
## Introduction
In this project, I analyze a dataset containing data on various customers' annual spending across diverse product categories, in order to uncover internal structure.
The dataset: [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Wholesale+customers).
I have excluded the features `'Channel'` and `'Region'`
# Import libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm as cm
from IPython.display import display # Allows the use of display() for DataFrames
%matplotlib inline
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace=True)
print("Wholesale customers dataset has {} samples with {} features each.".format(*data.shape))
```
## Data Exploration
```
display(data.describe())
```
- we can see that the mean is larger than the median in every category, so we can conclude that there is a heavy positive skew in the data
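The mean-versus-median heuristic is easy to illustrate on synthetic data (illustrative only; this is not the customer dataset):

```python
import numpy as np

rng = np.random.default_rng(42)
# Log-normal data is heavily right-skewed: a few huge values pull the mean up
x = rng.lognormal(mean=3, sigma=1, size=10_000)
print(x.mean() > np.median(x))  # True: the mean exceeds the median under positive skew
```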
### Selecting Samples
Understanding the customers and how their data is distributed can be aided by examining a few representative samples
```
# Select three indices to sample from the dataset
indices = [7,70,296]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop=True)
print("Chosen samples of wholesale customers dataset:")
display(samples)
```
### Feature Relevance
To better understand the customers, it is worth checking whether any one of the six categories is particularly relevant (or redundant) for understanding customer purchasing behavior.
```
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# Set random seed to allow reproducible results
np.random.seed(408)
# Make a copy of the DataFrame
new_data = data.copy()
new_data.drop(['Grocery'], axis = 1, inplace=True)
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(new_data, data['Grocery'],
test_size=0.25, random_state=408)
# Create a decision tree regressor
regressor = DecisionTreeRegressor()
regressor.fit(X_train,y_train)
# Report the score of the prediction using the testing set
score = regressor.score(X_test,y_test)
print ("The decision tree regressor's r2_score is: {:.4f}".format(score))
```
- I used the grocery feature as the target for the decision tree regressor; the resulting R² was 0.6819. Since `Grocery` can be predicted fairly well from the remaining features, I conclude that it is largely redundant for identifying customers' spending habits: its information is mostly contained in the other features.
### Visualize Feature Distributions
I created a scatter matrix in order to visualize every data properly
```
# Produce a scatter matrix for each pair of features in the data
pd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# A more colorful visualization that allows for more convenient intepretation
# Taken, with modifications, from http://datascience.stackexchange.com/questions/10459/calculation-and-visualization-of-correlation-matrix-with-pandas
fig = plt.figure()
fig.set_size_inches(10, 10)
ax1 = fig.add_subplot(111)
cmap = cm.get_cmap('jet', 30)
cax = ax1.imshow(data.corr(), interpolation="nearest", cmap=cmap)
ax1.grid(True)
plt.title('Customer Spending Feature Correlation')
labels=['Placeholder','Fresh','Milk','Grocery','Frozen','Deterg./Paper','Deli.']
ax1.set_xticklabels(labels, fontsize=12)
ax1.set_yticklabels(labels, fontsize=12)
cbar = fig.colorbar(cax, ticks= np.linspace(0.0,1.0,5))
plt.show()
# Precise numeric breakdown
display(data.corr())
```
###### Conclusions on Correlations
- The following pairs of features exhibit a high degree of correlation, in order from greatest to least: `Grocery` and `Detergents_Paper`; `Grocery` and `Milk`; and `Detergents_Paper` and `Milk`. To be more precise, the correlation coefficients for these pairs are 0.924641, 0.728335, and 0.661816 respectively. This supports my previous claim that these three features are not orthonormal and individually significant in predicting customer spending habits. A new insight that this analysis does give is that while the features are highly correlated with each other, they are not particularly correlated with the other three features (`Fresh`, `Frozen`, and `Delicatessen`). This is especially true for `Detergents_Paper` and `Grocery`. This indicates that instead of discarding the three features outright, they should be "repackaged" into a new one that captures the information they contain. This new feature could maintain the predictive strength that they possess as a group, without being redundant.
- The data for all features exhibits extreme positive skew, shown in the distribution plots. This was also evident in the median and mean calculations, which show mean values far in excess of the median for each feature - a tell-tale indicator of positive skew. The interpretation is that the majority of the samples consists of minor customers that purchase relatively small amounts of product, with a few major customers that buy in bulk spread throughout. My intuition tells me that the former group are mostly small businesses such as restaurants and convenience stores, while the latter are perhaps big-box retail warehouses.
## Data Preprocessing
### Feature Scaling
scaling the features using logarithm
```
# Scale the data using the natural logarithm
log_data = np.log(data)
# Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
pd.plotting.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# Numeric visualization
display(data.corr())
```
### Observation
After applying a natural logarithm scaling to the data, the distribution of each feature appears much more normal. Correlations between various pairs of features is still clearly evident after the log transform.
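A synthetic sanity check of that observation (assuming log-normal-like spending data; these are not the actual customer figures):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=3, sigma=1, size=10_000)  # heavy positive skew by construction

def skewness(a):
    """Sample skewness (third standardized moment)."""
    a = np.asarray(a, dtype=float)
    return ((a - a.mean()) ** 3).mean() / a.std() ** 3

print(skewness(x) > 1)                 # strongly right-skewed before the transform
print(abs(skewness(np.log(x))) < 0.2)  # roughly symmetric afterwards
```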
```
# Display the log-transformed sample data
display(log_samples)
```
### Outlier Detection
```
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():

    # Calculate Q1 (25th percentile of the data) for the given feature
    # Calculate Q3 (75th percentile of the data) for the given feature
    Q1, Q3 = np.percentile(log_data[feature], 25), np.percentile(log_data[feature], 75)

    # Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
    step = 1.5 * (Q3 - Q1)

    # Display the outliers
    print("Data points considered outliers for the feature '{}':".format(feature))
    display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# Select the indices for data points to be removed
outliers = [65,66,75,128,154]
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop=True)
```
###### Outliers talks
- There were five data points that were flagged as outliers: 65, 66, 75, 128, and 154. I decided to remove them, as the dataset will be used later in a clustering algorithm and outliers would heavily impact the results.
## Feature Transformation
In this section I use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, I am looking for which compound combinations of features best describe customers.
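A minimal sketch of the mechanics (synthetic data, not the customer dataset; uses the same scikit-learn `PCA` class as in the code below):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, 1] = 3 * X[:, 0] + 0.1 * rng.normal(size=200)  # two strongly correlated columns

pca = PCA().fit(X)
ratios = pca.explained_variance_ratio_
# Components come sorted by the variance they explain, and the ratios sum to 1
print(bool(np.all(np.diff(ratios) <= 0)))  # True
print(abs(ratios.sum() - 1.0) < 1e-9)      # True
```

Because columns 0 and 1 are nearly collinear, the first component absorbs most of their shared variance, exactly the kind of "repackaging" discussed for `Grocery`, `Milk`, and `Detergents_Paper` above.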
### Visualization code
```
def pca_results(good_data, pca):
    '''
    Create a DataFrame of the PCA results
    Includes dimension feature weights and explained variance
    Visualizes the PCA results
    '''

    # Dimension indexing
    dimensions = ['Dimension {}'.format(i) for i in range(1,len(pca.components_)+1)]

    # PCA components
    components = pd.DataFrame(np.round(pca.components_, 4), columns = good_data.keys())
    components.index = dimensions

    # PCA explained variance
    ratios = pca.explained_variance_ratio_.reshape(len(pca.components_), 1)
    variance_ratios = pd.DataFrame(np.round(ratios, 4), columns = ['Explained Variance'])
    variance_ratios.index = dimensions

    # Create a bar plot visualization
    fig, ax = plt.subplots(figsize = (14,8))

    # Plot the feature weights as a function of the components
    components.plot(ax = ax, kind = 'bar');
    ax.set_ylabel("Feature Weights")
    ax.set_xticklabels(dimensions, rotation=0)

    # Display the explained variance ratios
    for i, ev in enumerate(pca.explained_variance_ratio_):
        ax.text(i-0.40, ax.get_ylim()[1] + 0.05, "Explained Variance\n %.4f"%(ev))

    # Return a concatenated DataFrame
    return pd.concat([variance_ratios, components], axis = 1)
def cluster_results(reduced_data, preds, centers, pca_samples, border_data):
    '''
    Visualizes the PCA-reduced cluster data in two dimensions
    Adds cues for cluster centers and student-selected sample data
    '''

    predictions = pd.DataFrame(preds, columns = ['Cluster'])
    plot_data = pd.concat([predictions, reduced_data], axis = 1)

    # Generate the cluster plot
    fig, ax = plt.subplots(figsize = (14,8))

    # Color map
    cmap = cm.get_cmap('gist_rainbow')

    # Color the points based on assigned cluster
    for i, cluster in plot_data.groupby('Cluster'):
        cluster.plot(ax = ax, kind = 'scatter', x = 'Dimension 1', y = 'Dimension 2', \
                     color = cmap((i)*1.0/(len(centers)-1)), label = 'Cluster %i'%(i), s=30);

    # Plot centers with indicators
    for i, c in enumerate(centers):
        ax.scatter(x = c[0], y = c[1], color = 'white', edgecolors = 'black', \
                   alpha = 1, linewidth = 2, marker = 'o', s=200);
        ax.scatter(x = c[0], y = c[1], marker='$%d$'%(i), alpha = 1, s=100);

    # Plot transformed sample points
    ax.scatter(x = pca_samples[:,0], y = pca_samples[:,1], \
               s = 150, linewidth = 4, color = 'black', marker = 'x');
    ax.scatter(x = border_data[:,0], y = border_data[:,1], \
               color = 'green', edgecolors = 'black', marker = '*',
               alpha = 1, s=80)

    # Set plot title
    ax.set_title("Cluster Learning on PCA-Reduced Data - Centroids Marked by Number\nTransformed Sample Data Marked by Black Cross");
def biplot(good_data, reduced_data, pca):
    '''
    Produce a biplot that shows a scatterplot of the reduced
    data and the projections of the original features.

    good_data: original data, before transformation.
               Needs to be a pandas dataframe with valid column names
    reduced_data: the reduced data (the first two dimensions are plotted)
    pca: pca object that contains the components_ attribute

    return: a matplotlib AxesSubplot object (for any additional customization)

    This procedure is inspired by the script:
    https://github.com/teddyroland/python-biplot
    '''

    fig, ax = plt.subplots(figsize = (14,8))

    # scatterplot of the reduced data
    ax.scatter(x=reduced_data.loc[:, 'Dimension 1'], y=reduced_data.loc[:, 'Dimension 2'],
               facecolors='b', edgecolors='b', s=70, alpha=0.5)

    feature_vectors = pca.components_.T

    # we use scaling factors to make the arrows easier to see
    arrow_size, text_pos = 7.0, 8.0,

    # projections of the original features
    for i, v in enumerate(feature_vectors):
        ax.arrow(0, 0, arrow_size*v[0], arrow_size*v[1],
                 head_width=0.2, head_length=0.2, linewidth=2, color='red')
        ax.text(v[0]*text_pos, v[1]*text_pos, good_data.columns[i], color='black',
                ha='center', va='center', fontsize=18)

    ax.set_xlabel("Dimension 1", fontsize=14)
    ax.set_ylabel("Dimension 2", fontsize=14)
    ax.set_title("PC plane with original feature projections.", fontsize=16);
    return ax
def channel_results(reduced_data, outliers, pca_samples):
    '''
    Visualizes the PCA-reduced cluster data in two dimensions using the full dataset
    Data is labeled by "Channel" and cues added for student-selected sample data
    '''

    # Check that the dataset is loadable
    try:
        full_data = pd.read_csv("customers.csv")
    except:
        print("Dataset could not be loaded. Is the file missing?")
        return False

    # Create the Channel DataFrame
    channel = pd.DataFrame(full_data['Channel'], columns = ['Channel'])
    channel = channel.drop(channel.index[outliers]).reset_index(drop = True)
    labeled = pd.concat([reduced_data, channel], axis = 1)

    # Generate the cluster plot
    fig, ax = plt.subplots(figsize = (14,8))

    # Color map
    cmap = cm.get_cmap('gist_rainbow')

    # Color the points based on assigned Channel
    labels = ['Hotel/Restaurant/Cafe', 'Retailer']
    grouped = labeled.groupby('Channel')
    for i, channel in grouped:
        channel.plot(ax = ax, kind = 'scatter', x = 'Dimension 1', y = 'Dimension 2', \
                     color = cmap((i-1)*1.0/2), label = labels[i-1], s=30);

    # Plot transformed sample points
    for i, sample in enumerate(pca_samples):
        ax.scatter(x = sample[0], y = sample[1], \
                   s = 200, linewidth = 3, color = 'black', marker = 'o', facecolors = 'none');
        ax.scatter(x = sample[0]+0.25, y = sample[1]+0.3, marker='$%d$'%(i), alpha = 1, s=125);

    # Set plot title
    ax.set_title("PCA-Reduced Data Labeled by 'Channel'\nTransformed Sample Data Circled");
    plt.axvline(x=1.8, color='r')
    plt.axvline(x=-3, color='r')
```
### PCA
After removing the outliers, I can now apply PCA to the remaining dataset in order to discover which dimensions of the data best maximize the variance of the features involved.
```
from sklearn.decomposition import PCA
# Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA()
pca.fit(good_data)
# Transform the sample log-data using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = pca_results(good_data, pca)
```
###### PCA Analysis
- The first two principal components explain a combined 70.68% of the variation in the dataset
- PC 1, composed primarily of `Detergents_Paper`, `Grocery`, and `Milk`, represents prepacked goods that can easily be stocked on shelves and resold to consumers. This feature contains items that are meant to be used or consumed in isolation. The name for this feature could be "Packaged Goods." As products in `Fresh` and `Frozen` are commonly not consumed separately but instead jointly with other products, they have the same weight sign in this feature. PC 2, which consists of `Fresh`, `Frozen`, and `Delicatessen`, represents food items that can be consumed either in isolation or in combination with one another. If one had to apply a name to this feature, it would be "Meals." For example, a turkey sandwich with fries is a meal that uses items from each of the three categories in the feature. Because `Milk`, `Grocery`, and `Detergents_Paper` (in the form of a napkin) are typically included in meals as well, all six original features have the same weight sign here.
- After the second principal component, interpretation becomes more confounded. A pattern that emerges however is that with each successive component, aspects of the data that the new features embody become more and more specific. As such, PC 1 is the most general and PC 6 is the most specific. The most outstanding aspect of the third component is the heavy negative weight assigned to `Fresh`. Perhaps this feature constitutes foods that have longer shelf lives than those captured in the second component. Even for those customers who order heavily in the "Meal" category defined previously, it is likely not the case that ALL of their orders need to be consumed immediately. Some can be stocked away and used at later dates. Cured meats (`Delicatessen`) and frozen pastries (`Frozen`), for example, would likely fall into such a new category. The negative weight on `Fresh`, which includes foods with very short shelf lives, makes sense with this interpretation as well. In the fourth component, `Frozen` stands tall above all the rest in its positive weight, while `Delicatessen` nearly balances it on the negative side. My interpretation of this feature is that it represents cheap food that can be stocked in refrigerators for long periods of time. In a sense, the fourth feature takes the third one and specializes it further. Whereas foods in `Delicatessen` are typically expensive and even luxurious sometimes, those in `Frozen` are usually processed and cheap. The first four principal components encapsulate variation in `Frozen` so well that it receives nearly no weight in the fifth and sixth components, as expected.
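The explained-variance figures discussed above can be read straight off a fitted PCA object. A minimal sketch, using a synthetic six-feature table as a stand-in for `good_data` (the real customer data is assumed, not reproduced here):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-in for the six log-transformed spending features
X = rng.lognormal(size=(200, 6))

pca = PCA().fit(X)
# Cumulative share of variance captured by the first k components
cum = np.cumsum(pca.explained_variance_ratio_)
for i, c in enumerate(cum, start=1):
    print("PC 1-{}: {:.2%} of variance".format(i, c))
```

On the actual dataset, the first two entries of `cum` would sum to the 70.68% quoted above.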
### Observation
The code below shows how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points.
```
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
```
### Dimensionality Reduction
```
# Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2)
pca.fit(good_data)
# Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# Transform the sample log-data using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns=['Dimension 1', 'Dimension 2'])
```
### Observation
The code below shows how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions.
```
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns=['Dimension 1', 'Dimension 2']))
```
## Visualizing a Biplot
```
# Create a biplot
biplot(good_data, reduced_data, pca)
```
### Biplot Observations
With the original feature projection in red, it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the upper left corner of the figure will likely correspond to a customer that spends a lot on `'Milk'`, `'Grocery'` and `'Detergents_Paper'`, but not so much on the other product categories.
- In order from greatest to least, `Detergents_Paper`, `Grocery`, and `Milk` have the strongest correlation with the first principal component. In fact, `Detergents_Paper` is nearly parallel with the new axis, indicating almost perfect correlation. These results also make sense in terms of the pca_results plot as well as the correlation matrix displayed earlier, which shows significant correlation between the three features.
- `Fresh`, `Frozen`, and `Delicatessen` are most strongly correlated with the second principal component, though not to the same extent as the features in the first component. `Fresh` is clearly the most dominant feature, while `Frozen` and `Delicatessen` are roughly equal but point in different directions. This observation also agrees with the pca_results plot that I obtained earlier, which shows `Fresh` with the highest weight and the other two features tied for second and third.
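The biplot arrows correspond to the rows of `pca.components_`: column *j* of that matrix gives the weights of feature *j* on each principal axis. A small sketch of how to inspect them (synthetic data stands in for the customer table; the feature names are the ones used throughout this report):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))  # stand-in for the six spending features
features = ['Fresh', 'Milk', 'Grocery', 'Frozen',
            'Detergents_Paper', 'Delicatessen']

pca = PCA(n_components=2).fit(X)
# Rows of components_ are the PC axes; columns are per-feature weights.
# The biplot arrow for feature j is (components_[0, j], components_[1, j]).
for name, w1, w2 in zip(features, pca.components_[0], pca.components_[1]):
    print("{:<17s} PC1 weight {:+.3f}  PC2 weight {:+.3f}".format(name, w1, w2))
```

A feature whose arrow is nearly parallel to an axis, as `Detergents_Paper` is to PC 1 above, has almost all of its weight on that component.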
## Clustering
In this section, I employ the Gaussian Mixture Model clustering algorithm to identify the various customer segments inherent in the data. I then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
###### The Case for Gaussian Mixture Models over K-Means
- Between K-Means and Gaussian Mixture Models, K-Means is simpler and easier to implement, even for someone without a background in machine learning. By grouping data into globular shapes, K-Means produces clusters that often make intuitive sense. Of course, if the clusters are not structured in this manner, then this benefit will be outweighed by K-Means' inability to capture alternative structures. K-Means establishes "hard clustering," which makes for easier interpretation as well: each data point belongs to one and only one cluster center. Through applying PCA and selecting a small number of clusters, K-Means can be computationally very fast as well. Time complexity scales linearly with the number of data points given a fixed cluster parameter.
- As mentioned, K-Means cannot model clusters that do not conform to spherical shapes. This is where the main advantage of Gaussian Mixture Models becomes apparent. GMMs can model clusters that take a variety of shapes, including those that are elongated or elliptical. GMM also allows mixed cluster membership by computing the probabilities that each data point belongs to each cluster. These two features make GMM a "soft clustering" approach that provides flexibility not granted in K-Means.
- Based on what I have observed, I've decided to use Gaussian Mixture Models to attempt to cluster the data. GMM assumes that the data is generated by a mixture of Gaussian distributions (the clusters) and assigns each data point to a cluster based on probability. After log-transforming the data, the feature distributions appear approximately normal, which satisfies the Gaussian assumption of GMM. Looking at the biplot, I can discern two rough clusters, but they are far from non-overlapping. The boundary between the two clusters is ambiguous at best, which makes a strong case for GMM. To demonstrate this, I found 31 points in the data that GMM could not definitively classify into one of two clusters. The meaning of "definitive" here is subjective, but I define it as a point having greater than 60% probability of belonging to one of the clusters. I marked these points as green stars in the `cluster_results` plot as well. This clear distinction between "strong" and "weak" cluster members is not possible with K-Means, which makes it appear as though all cluster members belong equally, betraying the truth of the data. This reinforces why I chose to use GMM instead.
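The "definitive membership" count described above comes from `predict_proba`. A minimal sketch of the idea, with two deliberately overlapping synthetic blobs standing in for the PCA-reduced customer data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Two overlapping blobs stand in for the PCA-reduced data.
X = np.vstack([rng.normal(-1, 1.2, size=(200, 2)),
               rng.normal(+1, 1.2, size=(200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
probs = gmm.predict_proba(X)
# "Ambiguous" = no cluster reaches the 60% threshold used above.
ambiguous = (probs.max(axis=1) < 0.6).sum()
print(ambiguous, "points below the 60% membership threshold")
```

K-Means offers no analogue of `probs`; every point would be assigned with implicit full confidence.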
### Creating Clusters
```
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
# Apply clustering algorithm to the reduced data
clusterer = GaussianMixture(n_components=2, n_init=10, random_state=42)
clusterer.fit(reduced_data)
# Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# Find the cluster centers
centers = clusterer.means_
# Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data,preds)
print ('Silhouette score using GMM: {:.3f}'.format(score))
```
The table below displays the silhouette scores associated with various GMM clustering setups:
| No. of Clusters | Silhouette Score |
| :---------------: | :---------------------: |
| 2 | 0.422 |
| 3 | 0.394 |
| 4 | 0.316 |
| 5 | 0.278 |
| 8 | 0.333 |
| 12 | 0.301 |
| 20 | 0.314 |
| 50 | 0.307 |
- `n_components=2` appears to be the optimal cluster size parameter for GMM with its silhouette score of 0.422. Furthermore, increasing the number of clusters does not appear to have a positive and consistent effect on the score, so two clusters were chosen as the optimal number of clusters.
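The table above can be generated with a simple sweep over `n_components`. A sketch of that loop, run here on synthetic two-blob data rather than `reduced_data`:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 0.8, size=(150, 2)),
               rng.normal(+2, 0.8, size=(150, 2))])

scores = {}
for k in (2, 3, 4, 5):
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=42).fit(X)
    scores[k] = silhouette_score(X, gmm.predict(X))
    print("k={}: silhouette {:.3f}".format(k, scores[k]))
```

On data with two underlying groups, `k=2` wins just as it does in the table.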
### Cluster Visualization
```
# Added some functionality to display points that don't neatly fall into either cluster.
probs = pd.DataFrame(clusterer.predict_proba(reduced_data))
border_indexes = np.array(probs[(probs[0] >= 0.4) & (probs[0] <= 0.6)].index)
border_data = reduced_data.values[border_indexes]
# Display the results of the clustering from implementation
cluster_results(reduced_data, preds, centers, pca_samples, border_data)
```
### Data Recovery
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to *the average customer of that segment*. Since the data is currently reduced in dimension and scaled by a logarithm, I recover the representative customer spending from these data points by applying the inverse transformations.
```
# Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
```
###### Cluster Interpretations
- Comparing with the medians in the earlier statistical description, I can say that the true means that were extracted are much more reasonable than the previous means that were found prior to data processing. Below I compare the true means to the original quartiles to determine which product categories are most important to each segment. This helps to refine the kinds of customers that each segment contains so that I can speculate what each cluster represents. I note that `Delicatessen` is not very useful in demarcating the customer segments.
- Segment 0 represents small businesses. This category includes restaurants and farmers' markets who mainly order fresh and frozen food over the other product categories. Unlike Segment 1, they do not order large enough volumes of any product that they could reasonably resell directly to customers. Members of this segment are not bulk buyers, but instead use what they buy for their immediate needs. They buy small quantities of goods, use them as required, and likely need replenishing on a regular basis. Checking the true sales means of Segment 0 against the quartiles for each product, it can be seen that the average customer in this segment places in Q3, Q2, Q2, Q3, Q2, and Q2 for `Fresh`, `Milk`, `Grocery`, `Frozen`, `Detergents_Paper`, `Delicatessen` respectively. This further supports the assertion that `Fresh` and `Frozen` are the most valuable categories for customers in Segment 0. These two categories are relatively more important to restaurants and farmers' markets compared to other kinds of establishments, giving additional support to the existence of this cluster.
- Segment 1 represents bulk buyers. Supermarkets, convenience stores, and retail warehouses would fall into this category. Their average orders in `Milk`, `Grocery`, and `Detergents_Paper` are so many multiples higher than those of Segment 0 that one can assume they are buying enough to resell directly to their customers. These customers order moderate to large quantities of detergents, cleaners, groceries, and other goods, and stock them for extended periods of time. They likely do not have an urgent need for their products at any particular moment in time. Comparing this segment's true sales means to their respective quartiles shows that a typical customer's orders place in Q2, Q3, Q3, Q2, Q3, and Q2 for `Fresh`, `Milk`, `Grocery`, `Frozen`, `Detergents_Paper`, `Delicatessen` respectively. This demonstrates that `Milk`, `Grocery`, and `Detergents_Paper` are the categories most integral to Segment 1. All of these products can be purchased in large quantities, stocked easily on shelves, and sold to consumers, validating the retail aspect of this cluster.
###### Sample Descriptions
```
# Display the predictions
for i, pred in enumerate(sample_preds):
print ("Sample point", i, "predicted to be in Cluster", pred)
```
- To help me classify the samples into the clusters generated by GMM, I devised the simple method outlined below. Each table displays the absolute differences between the true centers of both segments and the purchases of each sample. I take the segment with the lowest distances to be the one to which each sample belongs. According to this, Sample 0 should belong to Segment 1, while both Samples 1 and 2 should belong to Segment 0. This is consistent with the predictions generated above.
***Sample 0 Difference Table (True Centers - Customer Purchases)***
|| Fresh | Milk | Grocery | Frozen | Detergents_Paper | Delicatessen
|:---------:| :---------: | :---------: | :----------: | :---------: | :---------: | :---------:
| **Segment 0** | 1233 | 2904 | 6737 | 389 | 2984 | 1854
| **Segment 1**| 3263 | 1391 | 129 | 633 | 275 | 1621
***Sample 1 Difference Table (True Centers - Customer Purchases)***
|| Fresh | Milk | Grocery | Frozen | Detergents_Paper | Delicatessen
|:---------:| :---------: | :----------: | :---------: | :---------: | :---------: | :---------:
|**Segment 0**| 7893 | 15 | 513 | 8585 | 221 | 653
|**Segment 1**| 12389 | 4310 | 6353 | 9607 | 2930 | 420
***Sample 2 Difference Table (True Centers - Customer Purchases)***
|| Fresh | Milk | Grocery | Frozen | Detergents_Paper | Delicatessen
|:---------:| :---------: | :---------: | :---------: | :---------: | :---------: | :---------:
|**Segment 0**| 10275 | 748 | 954 | 987 | 373 | 186
|**Segment 1**| 14771 | 5043 | 5912 | 2009 | 2336 | 47
###### Handling New Customers
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a ***customer segment*** it best identifies with (depending on the clustering algorithm applied), *'customer segment'* can be considered as an **engineered feature** for the data. Consider the situation where the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor could classify each new customer to a ***customer segment*** to determine the most appropriate delivery service. The details for how this could be handled are explained below.
- The wholesale distributor could modify the customer segment data by adding a new column labeled "Segment." In this column, customers would be labeled "Segment 0" or "Segment 1." To determine the labels, one could first apply PCA and GMM as I have done above and classify each customer using GMM's prediction as the target label. At this point, the distributor could use the data plus labels with two dimensions or the original six for training various supervised learning algorithms. I would recommend starting with decision trees or random forests, as it appears that the data is approximately separable with vertical and horizontal hyperplanes. A variety of supervised algorithms should then be trained, cross-validated, and tested on the original data using optimization methods such as grid search and randomized search. When an optimal model is discovered, the distributor can provide it with data on the ten new customers and allow it to classify them into the two segments. He or she can also visualize the new customers by plotting them on the PCA feature axes to see if their labels are reasonable. Once again, the closer the new customers are to the cluster centers, the more confident one can be in the labels that the model assigns.
### Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the `'Channel'` and `'Region'` features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the `'Channel'` feature into the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
The code block below assigns a label to each data point, either `'HoReCa'` (Hotel/Restaurant/Cafe) or `'Retail'`, in the reduced space. In addition, the sample points are circled in the plot to identify their labeling.
```
# Display the clustering results based on 'Channel' data
channel_results(reduced_data, outliers, pca_samples)
```
```
import pandas as pd
from datetime import datetime
import numpy as np
STATUS_LOG_PATH = '../../data/status_log.csv'
TICKETS_PATH = '../../data/tickets.csv'
TICKETS_FOLDER_PATH = '../../data/tickets/'
FAQ_QUESTIONS_PATH = '../../data/FAQ_questions.txt'
DATA_PATH = '../../data/tickets_postprp.pkl'
DE_FLAT_PATH = '../../data/de_flat.csv'
df = pd.read_csv(STATUS_LOG_PATH)
df.head()
df_quit = df[df["Status Text"] == "Quittiert"].reset_index()
df_neu = df[df["Status Text"] == "Neu"].reset_index()
df_quit = df_quit[df_quit["ID"].isin(df_neu["ID"].to_list())]
df_quit.shape
df_neu = df_neu[df_neu["ID"].isin(df_quit["ID"].to_list())]
df_neu.shape
df_neu["time"] = df_neu["Datum"] + " " + df_neu["Uhrzeit"]
df_quit["time"] = df_quit["Datum"] + " " + df_quit["Uhrzeit"]
df_neu["timeStart"] = df_neu["time"].apply(lambda s: datetime.strptime(s, '%Y.%m.%d %H:%M:%S'))
df_quit["timeFinish"] = df_quit["time"].apply(lambda s: datetime.strptime(s, '%Y.%m.%d %H:%M:%S'))
df_quit.drop(["index", "Datum", "Uhrzeit", "Geändert Von", "Status ID", "Status Text", "time"], axis=1, inplace=True)
df_neu.drop(["index", "Datum", "Uhrzeit", "Geändert Von", "Status ID", "Status Text", "time"], axis=1, inplace=True)
df = pd.merge(df_neu, df_quit, on='ID')
df["Time Taken"] = df["timeFinish"] - df["timeStart"]
df["Time Taken"] = df["Time Taken"].apply(lambda t: (t.total_seconds())/3600)
df
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(10,5))
plt.xlim(0,2000)
plt.xlabel('Time in hours')
plt.ylabel('Tickets')
sns.histplot(df['Time Taken'],bins=500,kde=False)
plt.show()
df_tickets = pd.read_csv(TICKETS_PATH)
df_tickets = df_tickets[df_tickets["Angelegt Von"] != "SOLMAN_BTC "]
df_tickets.shape
df = df[df["ID"].isin(df_tickets["ID"])]
df.shape
plt.figure(figsize=(10,5))
plt.xlim(0,2000)
plt.xlabel('Time in hours')
plt.ylabel('Tickets')
sns.histplot(df['Time Taken'],bins=500,kde=False)
plt.show()
df_tickets["Kategorie ID"].replace({" ": "AUTOMATED"}, inplace=True)
df_tickets[df_tickets["Meldender"] == " "]
```
# Data cleaning and encoding non-numeric values
```
df = pd.read_pickle(DATA_PATH)
df["kategorie_id"] = df["kategorie_id"].astype('category')
df["kategorie_id"] = df["kategorie_id"].cat.codes
df["unterkategorie_id"] = df["unterkategorie_id"].astype('category')
df["unterkategorie_id"] = df["unterkategorie_id"].cat.codes
df["meldender"] = df["meldender"].astype('category')
df["meldender"] = df["meldender"].cat.codes
df["angelegt_von"] = df["angelegt_von"].astype('category')
df["angelegt_von"] = df["angelegt_von"].cat.codes
df["auftraggeber"] = df["auftraggeber"].astype('category')
df["auftraggeber"] = df["auftraggeber"].cat.codes
df
df = df.drop(columns=['id', 'beschreibung','kategorietext',
'unterkategorietext','bearbeiter', 'editors',
'num_editors', 'time_start', 'status', 'time_finish',
'num_messages', 'first_date', 'last_date', 'description', 'answer',
'initial_message', 'internal_note', 'similarity', 'differences',
'language', 'faq_index_max_sim',
'faq_index_min_dif'])
df.reset_index()
df
df_emb = pd.DataFrame(data=np.stack(df.embeddings.to_numpy()))
result = pd.concat([df, df_emb], axis=1, join="inner")
result = result.drop(columns=["embeddings"])
result
result.to_csv(DE_FLAT_PATH)
```
# Training
```
from google.colab import drive
drive.mount('/content/drive')
from sklearn import preprocessing, utils
from keras.models import Sequential
from keras.layers import Dense
import pandas as pd
import numpy as np
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping
from keras.metrics import MeanAbsoluteError, MeanAbsolutePercentageError
from sklearn.model_selection import train_test_split
df = pd.read_csv(DE_FLAT_PATH)
df = df.drop(columns=["Unnamed: 0", "index"])
df = df.fillna(0)
dfLabel = df['time_taken']
df = df.drop(['time_taken'], axis=1)
scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))  # feature_range must be a two-element (min, max) tuple
col_names = df.columns
d = scaler.fit_transform(df)
df = pd.DataFrame(d, columns=col_names)
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.decomposition import PCA
pca = PCA(0.95)
pca.fit(df)
df = pca.transform(df)
print(df.shape)
poly_reg = PolynomialFeatures(degree=2)
df = poly_reg.fit_transform(df)
X_train, X_test, y_train, y_test = train_test_split(df, dfLabel, test_size=0.30, random_state=42)
pol_reg = Lasso(alpha=0.155)
pol_reg.fit(X_train, y_train)
pol_reg.score(X_train, y_train)
pol_reg.score(X_test, y_test)
```
### **Deep Learning**
```
model = Sequential()
model.add(Dense(2048, activation="relu"))
model.add(Dense(1024, activation="relu"))
model.add(Dense(2048, activation="softplus"))
model.add(Dense(512, activation="relu"))
model.add(Dense(1024, activation="selu"))
model.add(Dense(256, activation="relu"))
model.add(Dense(256, activation="softplus"))
model.add(Dense(126, activation="relu"))
model.add(Dense(126, activation="tanh"))
model.add(Dense(32, activation="relu"))
model.add(Dense(64, activation="softplus"))
model.add(Dense(16, activation="relu"))
model.add(Dense(32, activation="tanh"))
model.add(Dense(16, activation="relu"))
model.add(Dense(32, activation="relu"))
model.add(Dense(1))
callback = EarlyStopping(monitor='loss', patience=20)
model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=0.0001), metrics=[MeanAbsoluteError()])
model.fit(df, dfLabel, epochs=5000, shuffle=True, batch_size=500, callbacks=[callback])
```
# Lecture 1: Interview Style of FLAG
### Decode Ways
https://leetcode.com/problems/decode-ways/description/
```
class Solution(object):
def numDecodings(self, s):
"""
:type s: str
:rtype: int
"""
n = len(s)
if n == 0 or s[0] == '0':
return 0
fn_2, fn_1, fn = 1, 1, 1
for i in range(1, n):
if s[i] == '0':
if int(s[i-1]) > 2 or int(s[i-1]) == 0:
return 0
else:
fn = fn_2
else:
fn = fn_1
if 10 < int(s[i-1] + s[i]) < 27:
fn += fn_2
fn_2, fn_1 = fn_1, fn
return fn
class Solution(object):
def numDecodings(self, s):
"""
:type s: str
:rtype: int
"""
v, w, p = 0, int(s > ''), ''
for c in s:
v, w, p = w, (c > '0') * w + (9 < int(p + c) < 27) * v, c
return w
```
### Decode Ways II
https://leetcode.com/problems/decode-ways-ii/description/
```
class Solution(object):
def numDecodings(self, s):
"""
:type s: str
:rtype: int
"""
Mod = 10**9 + 7
e0, e1, e2 = 1, 0, 0
for c in s:
if c == '*':
f0 = 9 * e0 + 9 * e1 + 6 * e2
f1 = e0
f2 = e0
else:
f0 = (c > '0') * e0 + e1 + (c < '7') * e2
f1 = (c == '1') * e0
f2 = (c == '2') * e0
e0, e1, e2 = f0 % Mod, f1, f2
return e0
```
• How to recognize a dynamic programming problem?
– Optimization, or counting the number of ways; if you don't need to print all the paths, it's most likely DP
– Check the data range
• Steps for solving a DP problem:
0. Consider the last step (how to decompose the problem, how to make the final move): the source of intuition and insight
1. Define the state
2. Write the transition equation
3. Initial conditions and boundary cases
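The four-step recipe above, applied to a minimal warm-up (climbing stairs, used here purely as an illustration, not one of the lecture problems):

```python
def climb_stairs(n):
    # State: f[i] = number of ways to reach step i.
    # Transition: f[i] = f[i-1] + f[i-2]  (the last move is 1 or 2 steps).
    # Initial conditions: f[0] = f[1] = 1.
    # Rolling variables keep only the last two states (the space trick
    # used in the decode-ways solutions above).
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

print(climb_stairs(5))  # 8
```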
### Rectangle Overlap
http://www.lintcode.com/en/problem/rectangle-overlap/
```
## consider it from the opposite way
class Solution:
"""
@param: l1: top-left coordinate of first rectangle
@param: r1: bottom-right coordinate of first rectangle
@param: l2: top-left coordinate of second rectangle
@param: r2: bottom-right coordinate of second rectangle
@return: true if they are overlap or false
"""
def doOverlap(self, l1, r1, l2, r2):
# write your code here
if l1.x > r2.x or l2.x > r1.x:
return False
elif l1.y < r2.y or r1.y > l2.y:
return False
else:
return True
```
## Abbreviation
### Check Word Abbreviation
http://www.lintcode.com/en/problem/check-word-abbreviation/
https://leetcode.com/problems/valid-word-abbreviation/description/
* Corner case: c == '0'
```
class Solution:
"""
@param: word: a non-empty string
@param: abbr: an abbreviation
@return: true if string matches with the given abbr or false
"""
def validWordAbbreviation(self, word, abbr):
# write your code here
i, n = 0, '0'
for c in abbr:
if c.isdigit():
if c == n:
return False
n += c
else:
i += int(n)
if i >= len(word) or word[i] != c:
return False
i += 1
n = '0'
return len(word[i:]) == int(n)
```
### Words Abbreviation
http://www.lintcode.com/en/problem/words-abbreviation/
* glist[abrr] = list of words
* if len(glist[abrr]) > 1 --> solve until len == 1, record dic[word] = abbr
* return map(dic.get, dict): map(func, *iterables)
* traverse a dict: for key, values in dic.items():
```
import collections
class Solution:
"""
@param: dict: an array of n distinct non-empty strings
@return: an array of minimal possible abbreviations for every word
"""
def wordsAbbreviation(self, dict):
# write your code here
self.dic = {}
self.solve(dict, 0)
return map(self.dic.get, dict)
def abbr(self, word, size):
if len(word) - size <= 3:
return word
else:
return word[: size + 1] + str(len(word) - size - 2) + word[-1]
def solve(self, dict, size):
glist = collections.defaultdict(list)
for word in dict:
glist[self.abbr(word, size)].append(word)
for abbr, words in glist.items():
if len(words) == 1:
self.dic[words[0]] = abbr
else:
self.solve(words, size+1)
dict = ["like", "god", "internal", "me", "internet", "interval", "intension", "face", "intrusion"]
dic = {"like": "l2e", "face": "f2e"}
list(map(dic.get, dict))
dic.get("like")
def abbr(word, size):
if len(word) - size <= 3:
return word
return word[:size+1] + str(len(word) - size - 2) + word[-1]
glist = collections.defaultdict(list)
for word in dict:
glist[abbr(word, 0)].append(word)
glist
for abbr, words in glist.items():
print(abbr, words)
Solu = Solution()
Solu.wordsAbbreviation(dict)
list(Solu.wordsAbbreviation(dict))
list(Solu.wordsAbbreviation(dict))
```
### Generalized Abbreviation
https://leetcode.com/problems/generalized-abbreviation/description/
* nice problem!
* a letter can be represented as a word or a number, so totally 2^n possibilities.
```
class Solution(object):
def generateAbbreviations(self, word):
"""
:type word: str
:rtype: List[str]
"""
def helper(word, pos, count, cur, result):
if pos == len(word):
result.append(cur + (str(count) if count > 0 else ''))
else:
helper(word, pos + 1, count + 1, cur, result)
helper(word, pos + 1, 0, cur + (str(count) if count > 0 else '') + word[pos], result)
result = []
helper(word, 0, 0, '', result)
return result
```
### Unique Word Abbreviation
https://leetcode.com/problems/unique-word-abbreviation/description/
* check the last two lines!!
```
class ValidWordAbbr(object):
def __init__(self, dictionary):
"""
:type dictionary: List[str]
"""
self.dic = collections.defaultdict(set)
for word in dictionary:
self.dic[self.abbr(word)].add(word)
def abbr(self, word):
if len(word) <= 2:
return word
else:
return word[0] + str(len(word) -2) + word[-1]
def isUnique(self, word):
"""
:type word: str
:rtype: bool
"""
abbr = self.abbr(word)
if abbr not in self.dic:
return True
else:
return len(self.dic[abbr]) == 1 and word in self.dic[abbr]
# Your ValidWordAbbr object will be instantiated and called as such:
# obj = ValidWordAbbr(dictionary)
# param_1 = obj.isUnique(word)
```
### Minimum Unique Word Abbreviation
https://leetcode.com/problems/minimum-unique-word-abbreviation/description/
* use bit manipulation to record the differences!
* recover the abbreviation from bit record
* optimization
* min(abbrs, key=lambda x: len(x))
* do not forget to change num in abbr function: num >>= 1
```
class Solution(object):
def minAbbreviation(self, target, dictionary):
"""
:type target: str
:type dictionary: List[str]
:rtype: str
"""
def abbr(target, num):
cur, count = '', 0
for c in target:
if num & 1 == 1:
if count:
cur += str(count)
count = 0
cur += c
else:
count += 1
num >>= 1
if count:
cur += str(count)
return cur
m, diffs = len(target), []
for word in dictionary:
if len(word) != m:
continue
bits = 0
for i, c in enumerate(word):
if target[i] != c:
bits += 2 ** i
diffs += [bits]
if not diffs:
return str(m)
abbrs = []
for i in range(2 ** m):
if all(i & d for d in diffs):
abbrs.append(abbr(target, i))
return min(abbrs, key=lambda x: len(x))
import operator
word = '1e4'
target = '1r2'
any(map(operator.eq, word, target))
num = 5
num >>= 1
num
num >> 1
num
diffs = []
for i in [1,2,3]:
diffs += i,
diffs
?all
```
# Lecture 2: Simulation Algorithms & String Manipulation Skills
### Sliding window
http://www.lintcode.com/zh-cn/problem/sliding-window-average-from-data-stream/
https://leetcode.com/problems/moving-average-from-data-stream/description/
```
## straightforward
class MovingAverage:
"""
@param: size: An integer
"""
def __init__(self, size):
# do intialization if necessary
self.idx = 0
self.sum = 0
self.list = []
self.size = size
"""
@param: val: An integer
@return:
"""
def next(self, val):
# write your code here
self.sum += val
self.idx += 1
if self.idx <= self.size:
self.list += [self.sum]
return float(self.list[-1]) / self.idx
else:
j = self.idx % self.size - 1
prev = self.list[j]
self.list[j] = self.sum
return float(self.list[j] - prev) / self.size
# Your MovingAverage object will be instantiated and called as such:
# obj = MovingAverage(size)
# param = obj.next(val)
```
* the usage of collections, deque, deque.popleft(), deque.append(), float()
* moving sum to save memory
* modulo to wrap the index & add new values
• How to compute sums quickly? Prefix-sum array (with a dummy 0)
• How to save storage space? Linked list / rolling variables
• Trick for writing the rolling version: write the program first, then add the rolling at the end
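The prefix-sum trick with a dummy leading 0, sketched for fixed-size window sums (a hypothetical helper, not part of the lintcode stub):

```python
def window_sums(nums, k):
    # Prefix array with a dummy 0, so prefix[j] - prefix[i] = sum(nums[i:j]).
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)
    return [prefix[i + k] - prefix[i] for i in range(len(nums) - k + 1)]

print(window_sums([1, 2, 3, 4, 5], 3))  # [6, 9, 12]
```

The queue solution below achieves the same effect with O(size) memory by rolling the sum instead of storing all prefixes.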
```
## data structure: queue
class MovingAverage(object):
def __init__(self, size):
"""
Initialize your data structure here.
:type size: int
"""
self.queue = collections.deque(maxlen=size)
self.size = size
self.sum = 0
def next(self, val):
"""
:type val: int
:rtype: float
"""
self.sum += val
if len(self.queue) == self.size:
self.sum -= self.queue.popleft()
self.queue.append(val)
return float(self.sum) / len(self.queue)
# Your MovingAverage object will be instantiated and called as such:
# obj = MovingAverage(size)
# param_1 = obj.next(val)
import collections
collections.deque.popleft
```
### One Edit Distance
https://leetcode.com/problems/one-edit-distance/description/
```
class Solution(object):
def isOneEditDistance(self, s, t):
"""
:type s: str
:type t: str
:rtype: bool
"""
m, n = len(s), len(t)
if abs(m - n) > 1:
return False
elif m < n:
return self.isOneEditDistance(t, s)
else:
for i in range(n):
if s[i] != t[i]:
if m == n:
return s[i+1:] == t[i+1:]
else:
return s[i+1:] == t[i:]
return m != n
s = '12dere'
n = len(s)
s[n:]
```
### Read N Characters Given Read4 (not clear)
https://leetcode.com/problems/read-n-characters-given-read4/description/
```
# The read4 API is already defined for you.
# @param buf, a list of characters
# @return an integer
# def read4(buf):
class Solution(object):
def read(self, buf, n):
"""
:type buf: Destination buffer (List[str])
:type n: Maximum number of characters to read (int)
:rtype: The number of characters read (int)
"""
idx = 0
while True:
buf4 = [""]*4
curr = min(read4(buf4), n-idx)
for i in range(curr):
buf[idx] = buf4[i]
idx += 1
if curr != 4 or idx == n:
return idx
buf4 = [""]*4
buf4
```
### Read Characters From File - multiple calls
https://leetcode.com/problems/read-n-characters-given-read4-ii-call-multiple-times/description/
* init function: def __init__(self):
* difference between append and extend
Tricky points:
• If only 3 characters are requested but read4 fetched 4, how do we keep the extra one for the next call?
• What if read4 returns fewer than 4 characters (i.e., we reached the end of the input before filling all 4)?
```
# The read4 API is already defined for you.
# @param buf, a list of characters
# @return an integer
# def read4(buf):
class Solution(object):
def __init__(self):
self.queue = []
def read(self, buf, n):
"""
:type buf: Destination buffer (List[str])
:type n: Maximum number of characters to read (int)
:rtype: The number of characters read (int)
"""
idx = 0
while True:
buf4 = [""]*4
l = read4(buf4)
self.queue.extend(buf4)
curr = min(len(self.queue), n-idx)
for i in range(curr):
buf[idx] = self.queue.pop(0)
idx += 1
if curr == 0:
break
return idx
buf4 = [""]*4
queue1, queue2 = [], []
queue1.append(buf4)
queue2.extend(buf4)
queue1, queue2
```
### Strings Serialization
http://www.lintcode.com/zh-cn/problem/strings-serialization/
• abc def -> abc:;def:;
• ab:c def -> ab::c:;def:;
• ab:;c def -> ab::;c:;def:;
• ':' is the escape character, so the delimiter is ':;' and a literal ':' is encoded as '::'
```
class Solution:
"""
@param: strs: a list of strings
@return: encodes a list of strings to a single string.
"""
def encode(self, strs):
# write your code here
res = ''
for s in strs:
for c in s:
if c == ":":
res += "::"
else:
res += c
res += ":;"
return res
"""
@param: str: A string
    @return: decodes a single string to a list of strings
"""
def decode(self, str):
# write your code here
res = []
s = ''
i = 0
while i < len(str) - 1:
if str[i] == ":":
if str[i+1] == ";":
res += [s]
s = ''
else:
s += str[i+1]
i += 2
else:
s += str[i]
i += 1
return res
```
### System Longest File Path
https://leetcode.com/problems/longest-absolute-file-path/description/
* splitlines() and lstrip('\t')
```
len("dir/subdir2/subsubdir2/file2.ext")
s = "dir\n\tsubdir1\n\tsubdir2\n\t\tfile.ext"
s.splitlines()
line = '\t\tfile.ext'
line.lstrip('\t')
class Solution(object):
def lengthLongestPath(self, input):
"""
:type input: str
:rtype: int
"""
maxlen = 0
pathlen = {0:0}
for line in input.splitlines():
name = line.lstrip('\t')
depth = len(line) - len(name)
if '.' in name:
maxlen = max(maxlen, pathlen[depth] + len(name))
else:
pathlen[depth+1] = pathlen[depth] + len(name) + 1
return maxlen
```
### Roman to Integer
https://leetcode.com/problems/roman-to-integer/description/
```
class Solution(object):
def romanToInt(self, s):
"""
:type s: str
:rtype: int
"""
roman = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
if len(s) == 0:
return 0
res = roman[s[-1]]
idx = len(s) - 2
while idx >= 0:
if roman[s[idx]] < roman[s[idx+1]]:
res -= roman[s[idx]]
else:
res += roman[s[idx]]
idx -= 1
return res
```
### Integer to Roman
https://leetcode.com/problems/roman-to-integer/description/
```
class Solution(object):
def parse(self, digit, index):
nums = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V", 6: "VI", 7: "VII", 8: "VIII", 9: "IX"}
roman = {
"I": ["I", "X", "C", "M"],
"V": ["V", "L", "D", "?"],
"X": ["X", "C", "M", "?"]
}
s = nums[digit]
return s.replace("X", roman["X"][index]).replace("V", roman["V"][index]).replace("I", roman["I"][index])
def intToRoman(self, num):
"""
:type num: int
:rtype: str
"""
ans = ""
index = 0
while num > 0:
digit = num % 10
if digit != 0:
ans = self.parse(digit, index) + ans
num = int(num / 10)
index += 1
return ans
# # solution 1
# nums = [
# ["I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX"],
# ["X", "XX", "XXX", "XL", "L", "LX", "LXX", "LXXX", "XC"],
# ["C", "CC", "CCC", "CD", "D", "DC", "DCC", "DCCC", "CM"],
# ["M", "MM", "MMM"]
# ]
# ans = ""
# row = 0
# while num > 0:
# digit = num % 10
# if digit != 0:
# ans = nums[row][digit-1] + ans
# num = num / 10
# row += 1
# return ans
```
### Find the Celebrity
https://leetcode.com/problems/find-the-celebrity/description/
```
# The knows API is already defined for you.
# @param a, person a
# @param b, person b
# @return a boolean, whether a knows b
# def knows(a, b):
class Solution(object):
def findCelebrity(self, n):
"""
:type n: int
:rtype: int
"""
candidate = 0
for i in range(1, n):
if knows(candidate, i):
candidate = i
for i in range(candidate):
if not knows(i, candidate) or knows(candidate, i):
return -1
for i in range(candidate+1, n):
if not knows(i, candidate) or knows(candidate, i):
return -1
return candidate
```
# Lecture 3: Basic Algorithms & Data Structure I
### Missing Ranges
https://leetcode.com/problems/missing-ranges/description/
```
nums = [0, 1, 3, 50, 75]
for k, v in enumerate(nums):
print(k, v, nums[k])
class Solution(object):
def findMissingRanges(self, nums, lower, upper):
"""
:type nums: List[int]
:type lower: int
:type upper: int
:rtype: List[str]
"""
res = []
start = lower
        for k, v in enumerate(nums): # intervals at the two bounds + intervals found while scanning the middle
if v < start:
continue
elif v == start:
start += 1
continue
res.append(self.addRange(start, v-1))
start = v + 1
if start <= upper: # corner case
res.append(self.addRange(start, upper))
return res
    def addRange(self, start, end): # a helper function keeps the code concise
return str(start) if start == end else str(start) + "->" + str(end)
```
### Merge Intervals
https://leetcode.com/problems/merge-intervals/description/
```
starts = [2, 1, 15, 8]
idx = sorted(range(len(starts)), key=lambda x: starts[x])
idx
intervals = [[8,10], [2,6]]
intervals.sort()
intervals
# Definition for an interval.
# class Interval(object):
# def __init__(self, s=0, e=0):
# self.start = s
# self.end = e
class Solution(object):
def merge(self, intervals):
"""
:type intervals: List[Interval]
:rtype: List[Interval]
"""
intervals = sorted(intervals, key=lambda x: x.start) # sort skill !!!
res = []
for interval in intervals:
if len(res) == 0 or res[-1].end < interval.start: # when to add interval
res.append(interval)
else:
res[-1].end = max(res[-1].end, interval.end)
return res
```
### Insert Interval
https://leetcode.com/problems/insert-interval/description/
```
# Definition for an interval.
# class Interval(object):
# def __init__(self, s=0, e=0):
# self.start = s
# self.end = e
class Solution(object):
def insert(self, intervals, newInterval):
"""
:type intervals: List[Interval]
:type newInterval: Interval
:rtype: List[Interval]
"""
intervals = intervals + [newInterval]
intervals = sorted(intervals, key=lambda x: x.start)
res = []
for interval in intervals:
if len(res) == 0 or res[-1].end < interval.start:
res.append(interval)
else:
res[-1].end = max(res[-1].end, interval.end)
return res
```
### First Position Unique Character
http://www.lintcode.com/zh-cn/problem/first-position-unique-character/
```
dic = {'1':[1,5,8], '0':[2,4,3]}
for key, value in dic.items():
print(key, value)
class Solution:
"""
@param: s: a string
@return: it's index
"""
def firstUniqChar(self, s):
# write your code here
dic = {}
for c in s:
dic[c] = dic.get(c, 0) + 1
for i in range(len(s)):
if dic[s[i]] == 1:
return i
return -1
```
### Substring Anagrams
http://www.lintcode.com/zh-cn/problem/substring-anagrams/
https://leetcode.com/problems/find-all-anagrams-in-a-string/description/
```
ord('a'), chr(97)
abs([1,2])
from collections import Counter
Counter('aba')
dic = {'a': 2, 'b': 1}
del dic['a']
dic
# class Solution(object):
# def findAnagrams(self, s, p):
# """
# :type s: str
# :type p: str
# :rtype: List[int]
# """
# m, n = len(s), len(p)
# res = []
# if m < n:
# return res
# pc = collections.Counter(p)
# sc = collections.Counter(s[:n-1])
# for i in range(m-n+1):
# sc[s[n-1+i]] += 1
# if sc == pc:
# res.append(i)
# sc[s[i]] -= 1
# if sc[s[i]] == 0:
# del sc[s[i]]
# return res
class Solution:
"""
@param: s: a string
@param: p: a string
@return: a list of index
"""
def findAnagrams(self, s, p):
# write your code here
m, n = len(s), len(p)
res = []
if m < n:
return res
det = [0 for i in range(26)]
for i in range(n):
jp = ord(p[i]) - ord('a')
det[jp] -= 1
js = ord(s[i]) - ord('a')
det[js] += 1
sum = 0
for d in det:
sum += abs(d)
if sum == 0:
res.append(0)
for i in range(0, m-n):
jl = ord(s[i]) - ord('a')
sum -= abs(det[jl])
det[jl] -= 1
sum += abs(det[jl])
jr = ord(s[i+n]) - ord('a')
sum -= abs(det[jr])
det[jr] += 1
sum += abs(det[jr])
if sum == 0:
res.append(i+1)
return res
```
### Word Abbreviation Set
http://www.lintcode.com/zh-cn/problem/word-abbreviation-set/
https://leetcode.com/problems/unique-word-abbreviation/description/
```
set().union('he'), set().union(['he'])
len(set().union('he'))
def getAbbr(word):
if len(word) > 2:
return word[0] + str(len(word) - 2) + word[-1]
return word
d = collections.defaultdict(set)
for word in ['aa', 'bb', 'cc', 'cc','bb']:
d[getAbbr(word)].add(word)
d
collections.defaultdict?
class ValidWordAbbr(object):
def __init__(self, dictionary):
"""
:type dictionary: List[str]
"""
self.dic = collections.defaultdict(set)
for word in dictionary:
self.dic[self.getAbbr(word)].add(word)
def getAbbr(self, word):
return word if len(word) < 3 else word[0] + str(len(word)-2) + word[-1]
def isUnique(self, word):
"""
:type word: str
:rtype: bool
"""
abbr = self.getAbbr(word)
if abbr not in self.dic:
return True
else:
return len(self.dic[abbr]) == 1 and word in self.dic[abbr]
# class ValidWordAbbr(object):
# def __init__(self, dictionary):
# """
# :type dictionary: List[str]
# """
# self.dic = {}
# for word in dictionary:
# abrr = self.abrr(word)
# self.dic[abrr] = self.dic.get(abrr, set()).union([word])
# def abrr(self, word):
# return word if len(word) < 3 else word[0] + str(len(word)-2) + word[-1]
# def isUnique(self, word):
# """
# :type word: str
# :rtype: bool
# """
# abrr = self.abrr(word)
# if abrr in self.dic:
# if len(self.dic[abrr]) > 1 or word not in self.dic[abrr]:
# return False
# return True
# # Your ValidWordAbbr object will be instantiated and called as such:
# # obj = ValidWordAbbr(dictionary)
# # param_1 = obj.isUnique(word)
```
### Longest Consecutive Sequence
https://leetcode.com/problems/longest-consecutive-sequence/description/
```
class Solution(object):
def longestConsecutive(self, nums):
"""
:type nums: List[int]
:rtype: int
"""
dic = {}
for n in nums:
dic[n] = False
maxlen = 0
for n in nums:
if dic[n]:
continue
length = 1
left, right = n - 1, n + 1
while left in dic:
length += 1
dic[left] = True
left -= 1
while right in dic:
length += 1
dic[right] = True
right += 1
maxlen = max(maxlen, length)
return maxlen
```
### Load Balancer
http://www.lintcode.com/en/problem/load-balancer/
```
a = set().union('he')
a = [0, 1, 2]
a.pop()
a
import random
for i in range(15):
print(random.randint(0, 5 - 1))
class LoadBalancer:
def __init__(self):
        # do initialization if necessary
self.ids = []
self.index = {}
"""
@param: server_id: add a new server to the cluster
@return: nothing
"""
def add(self, server_id):
# write your code here
self.ids.append(server_id)
self.index[server_id] = len(self.ids) - 1
"""
@param: server_id: server_id remove a bad server from the cluster
@return: nothing
"""
def remove(self, server_id):
# write your code here
idx = self.index[server_id]
del self.index[server_id]
last_id = self.ids[-1]
self.index[last_id] = idx
self.ids[idx] = last_id
self.ids.pop()
"""
@return: pick a server in the cluster randomly with equal probability
"""
def pick(self):
# write your code here
import random
idx = random.randint(0, len(self.ids)-1)
return self.ids[idx]
```
# Lecture 4: Basic Algorithms & Data Structure II
### Binary Search
### Convert BST to Greater Tree
https://leetcode.com/problems/convert-bst-to-greater-tree/description/
```
# Definition for a binary tree node.
# class TreeNode(object):
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution(object):
def convertBST(self, root):
"""
:type root: TreeNode
:rtype: TreeNode
"""
self.sum = 0
self.dfs(root)
return root
def dfs(self, root):
if root is None:
return
if root.right:
self.dfs(root.right)
self.sum += root.val
root.val = self.sum
if root.left:
self.dfs(root.left)
```
### Inorder Successor in BST
https://leetcode.com/problems/inorder-successor-in-bst/description/
* [Depth First Traversals](http://www.geeksforgeeks.org/tree-traversals-inorder-preorder-and-postorder/):
(a) Inorder (Left, Root, Right) : 4 2 5 1 3
(b) Preorder (Root, Left, Right) : 1 2 4 5 3
(c) Postorder (Left, Right, Root) : 4 5 2 3 1
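The three orders above can be verified with a short sketch (the `Node` class is an assumption, built to match the tree behind those sequences: 1 has children 2 and 3, and 2 has children 4 and 5):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(r):
    return inorder(r.left) + [r.val] + inorder(r.right) if r else []

def preorder(r):
    return [r.val] + preorder(r.left) + preorder(r.right) if r else []

def postorder(r):
    return postorder(r.left) + postorder(r.right) + [r.val] if r else []

# The tree used in the listed sequences.
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
```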
```
# Definition for a binary tree node.
# class TreeNode(object):
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution(object):
def inorderSuccessor(self, root, p):
"""
:type root: TreeNode
:type p: TreeNode
:rtype: TreeNode
"""
succ = None
while root:
if root.val <= p.val:
root = root.right
else:
succ = root
root = root.left
return succ
```
### Binary Tree Upside Down
https://leetcode.com/problems/binary-tree-upside-down/description/
```
# Definition for a binary tree node.
# class TreeNode(object):
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution(object):
def upsideDownBinaryTree(self, root):
"""
:type root: TreeNode
:rtype: TreeNode
"""
if root is None or root.left is None:
return root
new_root = self.upsideDownBinaryTree(root.left)
root.left.left = root.right
root.left.right = root
root.left = None
root.right = None
return new_root
```
### Find Leaves of Binary Tree
https://leetcode.com/problems/find-leaves-of-binary-tree/description/
```
# Definition for a binary tree node.
# class TreeNode(object):
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution(object):
def findLeaves(self, root):
"""
:type root: TreeNode
:rtype: List[List[int]]
"""
result = []
self.dfs(root, result)
return result
def dfs(self, root, result):
if root is None:
return 0
level = max(self.dfs(root.left, result), self.dfs(root.right, result)) + 1
if level > len(result):
result.append([])
result[level-1].append(root.val)
return level
```
### Binary Tree Vertical Order Traversal
https://leetcode.com/problems/binary-tree-vertical-order-traversal/description/
```
a = []
a.append((1,2))
a
# Definition for a binary tree node.
# class TreeNode(object):
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution(object):
def verticalOrder(self, root):
"""
:type root: TreeNode
:rtype: List[List[int]]
"""
result = collections.defaultdict(list)
level = [(root, 0)]
while level:
next_level = []
for node, order in level:
if node:
result[order].append(node.val)
next_level.append((node.left, order-1))
next_level.append((node.right, order+1))
level = next_level
return [result[i] for i in sorted(result)]
```
# Lecture 5: How to Implement Search Problem Effectively
1. BFS is the right tool when the goal is a shortest path — e.g. the fewest steps or fewest swaps — because the first solution BFS encounters is necessarily the closest to the root, so the search can stop as soon as one is found. DFS is a poor fit here: the first solution DFS finds need not be the closest, and only after searching the whole space can the nearest one be picked out (iterative deepening, ID-DFS, can make up for this weakness).
2. On memory, DFS has the advantage: it does not need to keep the frontier of visited states, whereas BFS must record them, typically in a queue.
3. DFS is better for enumerating all solutions: since everything must be visited anyway, BFS's "nearest solution first" property buys nothing, and DFS carries less bookkeeping.
4. Rule of thumb: use BFS whenever it applies; fall back to DFS when it does not.
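Point 1 can be made concrete with a minimal BFS on an unweighted graph (a sketch under assumptions: the adjacency-dict representation and the function name are mine): the first time the goal is dequeued, its distance is already optimal, so the search stops immediately.

```python
from collections import deque

def shortest_path_len(graph, start, goal):
    # BFS visits nodes in order of distance from start, so the first
    # time goal comes off the queue we can return right away.
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, dist + 1))
    return -1  # goal unreachable
```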
## BFS
### Template
```
## version 1: traversal
queue = [start]
while queue:
node = queue.pop(0)
new_node = node + 1 # or other conditions
queue.append(new_node)
## version 2: length of shortest path
length, level = 0, [start]
while level:
new_level = []
for node in level:
new_node = node + 1
new_level.append(new_node)
length += 1
level = new_level
```
#### Surrounded Regions
https://leetcode.com/problems/surrounded-regions/description/
Tips:
• When expanding in several directions on a grid, matrix, or board, dx/dy direction arrays make the code much easier to write
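In isolation the dx/dy trick looks like this (a minimal sketch; `neighbours` is a hypothetical helper): pairing the two arrays lets one loop cover all four directions instead of spelling each case out by hand.

```python
DX = [1, -1, 0, 0]
DY = [0, 0, 1, -1]

def neighbours(i, j, m, n):
    # In-bounds neighbours of cell (i, j) on an m x n grid.
    return [(i + dx, j + dy)
            for dx, dy in zip(DX, DY)
            if 0 <= i + dx < m and 0 <= j + dy < n]
```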
```
any?
any([[]])
row = 'XXSi'
['XO'[c == 'S'] for c in row]
board = ["XXXX","XOOX","XXOX","XSXX"]
board[:] = [['XO'[c == 'S'] for c in row] for row in board]
board
board = [list("XXXX"),list("XOOX"),list("XXOX"),list("XSXX")]
for row in board:
for i, c in enumerate(row):
row[i] = 'XO'[c == 'S']
board
class Solution(object):
def solve(self, board):
"""
:type board: List[List[str]]
:rtype: void Do not return anything, modify board in-place instead.
"""
if not any(board):
return
m, n = len(board), len(board[0])
save = [ij for k in range(max(m,n)) for ij in ((0, k), (m-1, k), (k, 0), (k, n-1))]
while save:
i, j = save.pop()
if 0 <= i < m and 0 <= j < n and board[i][j] == 'O':
board[i][j] = 'S'
save += (i+1, j), (i-1, j), (i, j+1), (i, j-1)
for row in board:
for i, c in enumerate(row):
if c == 'S':
row[i] = 'O'
else:
row[i] = 'X'
# for row in board:
# for i, c in enumerate(row):
# row[i] = 'XO'[c == 'S']
# board[:] = [['XO'[c == 'S'] for c in row] for row in board]
c = Solution()
board = [list("XXXX"),list("XOOX"),list("XXOX"),list("XOXX")] # board is a list of list !!!
c.solve(board)
board
```
#### Nearest Exit
https://leetcode.com/problems/walls-and-gates/description/
Tips:
– Multi-source, single sink --> single source, multi-sink: a standard shortest-path transformation
– Multi-source, multi-sink --> single source, multi-sink (add a super source; another standard transformation)
– BFS finds shortest paths in graphs whose edges all have length 1 (such as this grid)
```
class Solution(object):
def wallsAndGates(self, rooms):
"""
:type rooms: List[List[int]]
:rtype: void Do not return anything, modify rooms in-place instead.
"""
if not any(rooms): return
IFN = 2147483647
m, n = len(rooms), len(rooms[0])
queue = [(i, j) for i, row in enumerate(rooms) for j, c in enumerate(row) if c == 0]
while queue:
i, j = queue.pop(0)
for x, y in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
if 0 <= x < m and 0 <= y < n and rooms[x][y] == IFN:
rooms[x][y] = rooms[i][j] + 1
queue.append((x, y))
```
## DFS
### Template
```
## version 1: traversal
stack = [start]
while stack:
node = stack.pop()
new_node = node + 1 # or other conditions
stack.append(new_node)
## version 2: Divide & Conquer
def dfs(root): # ex: binary tree
## null or leaf
if root is None:
## do something and return;
## Divide
left = dfs(root.left);
right = dfs(root.right);
## Conquer
result = Merge(left, right)
return result
```
#### Letter Combinations of a Phone Number
https://leetcode.com/problems/letter-combinations-of-a-phone-number/description/
```
digits ='23'
chr = ["", "", "abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"]
res = []
for i in range(len(digits)):
num = int(digits[i])
temp = []
for c in chr[num]:
if len(res) == 0:
temp += [c]
else:
for r in res:
temp += [r + c]
res[:] = temp
res
class Solution(object):
def letterCombinations(self, digits):
"""
:type digits: str
:rtype: List[str]
"""
n, res = len(digits), []
if n == 0: return res
letters = ['', '', 'abc', 'def', 'ghi', 'jkl', 'mno', 'pqrs', 'tuv', 'wxyz']
for i in range(n):
num = int(digits[i])
temp = []
for c in letters[num]:
if len(res) == 0:
temp.append(c)
else:
for j in range(len(res)):
temp.append(res[j]+c)
res[:] = temp
return res
# Idea:
# • Basic enumeration DFS (understand it via the execution process of method 1)
#   – For an input digit string of length n, make n stages of choices: DFS goes n levels deep
#   – At each stage, enumerate one letter mapped to that position's digit
#   – Stop once every combination has been enumerated
class Solution(object):
def letterCombinations(self, digits):
"""
:type digits: str
:rtype: List[str]
"""
def dfs(num, string, res):
if num == n:
res.append(string)
return
for c in letters[int(digits[num])]:
dfs(num+1, string+c, res)
n, res = len(digits), []
if n == 0: return res
letters = ['', '', 'abc', 'def', 'ghi', 'jkl', 'mno', 'pqrs', 'tuv', 'wxyz']
dfs(0, '', res)
return res
```
#### Factorization
http://www.lintcode.com/zh-cn/problem/factorization/
* Note:
(1) the way to add path
(2) the way to copy temp in the last problem
```
a = []
a.append([1,2,3])
a.append([2,3])
a
class Solution:
"""
@param: n: An integer
@return: a list of combination
"""
def getFactors(self, n):
# write your code here
def dfs(start, remain):
if remain == 1:
if len(path) > 1:
                    result.append(path[:])  # copy! appending path itself would alias the list we keep mutating
return
for i in range(start, remain):
if i > int(remain / i):
break
if remain % i == 0:
path.append(i)
dfs(i, int(remain/i))
path.pop()
path.append(remain)
dfs(remain, 1)
path.pop()
# print(result)
result, path = [], []
dfs(2, n)
return result
c = Solution()
c.getFactors(8)
a = [[2, 2, 2]]
a.append([2,4])
a += [[1]]
a
# Idea:
# • DFS:
#   – Expansion: enumerate the factor placed at each position
#     • Start each enumeration from the previously chosen factor start, so factors come out in nondecreasing order
#   – Exit condition: when remain, n divided by all factors chosen so far, equals 1
# • How to record state:
#   – start and remain go into the DFS parameters (method 1)
#   – path, the list of chosen factors, is a manually maintained stack in a member/closure variable (method 3)
class Solution:
"""
@param: n: An integer
@return: a list of combination
"""
def getFactors(self, n):
# write your code here
def dfs(start, remain):
if remain == 1:
if len(path) > 1:
result.append(path[:])
return
for i in range(start, int(remain**0.5) + 1):
if remain % i == 0:
path.append(i)
                    dfs(i, remain // i)
path.pop()
path.append(remain)
dfs(start, 1)
path.pop()
path, result = [], []
dfs(2, n)
return result
```
#### Word Squares
https://leetcode.com/problems/word-squares/description/
```
alist = ['a1', 'a2', 'a3']
blist = ['b1', 'b2', 'b3']
for i, (a, b) in enumerate(zip(alist, blist)):
print(i, a, b)
class Solution(object):
def wordSquares(self, words):
"""
:type words: List[str]
:rtype: List[List[str]]
"""
n, res = len(words[0]), []
starts = collections.defaultdict(list)
for i in range(1, n):
for word in words:
starts[word[:i]].append(word)
def dfs(square):
k = len(square)
if k == n:
res.append(square)
return
start = ''
for i in range(k):
start += square[i][k]
for word in starts[start]:
dfs(square + [word])
for word in words:
dfs([word])
return res
```
#### Expression Add Operators
https://leetcode.com/problems/expression-add-operators/description/
```
class Solution(object):
def addOperators(self, num, target):
"""
:type num: str
:type target: int
:rtype: List[str]
"""
n, res = len(num), []
def dfs(left, temp, cur, last):
if len(left) == 0:
if cur == target:
res.append(temp[:])
return
for i in range(1, len(left)+1):
val = left[:i]
if i == 1 or (i > 1 and left[0] != '0'):
dfs(left[i:], temp + '+' + val, cur + int(val), int(val))
dfs(left[i:], temp + '-' + val, cur - int(val), -int(val))
dfs(left[i:], temp + '*' + val, cur -last + last*int(val), last*int(val))
for i in range(1, n+1):
if i == 1 or (i > 1 and num[0] != '0'):
dfs(num[i:], num[:i], int(num[:i]), int(num[:i]))
return res
```
# Lecture 6: Math, Computational Graphic, Bit Operation
### Search a 2D Matrix (Binary search!!!)
https://leetcode.com/problems/search-a-2d-matrix/description/
```
# binary search template
while start + 1 < end:
    mid = start + (end - start) // 2
if ....
class Solution(object):
def searchMatrix(self, matrix, target):
"""
:type matrix: List[List[int]]
:type target: int
:rtype: bool
"""
if len(matrix) == 0 or len(matrix[0]) == 0:
return False
m, n = len(matrix), len(matrix[0])
start, end = 0, m*n - 1
while start + 1 < end:
            mid = start + (end - start) // 2
            i = mid // n
j = mid % n
if matrix[i][j] == target:
return True
elif matrix[i][j] > target:
end = mid - 1
else:
start = mid + 1
        i = start // n
j = start % n
if matrix[i][j] == target:
return True
        i = end // n
j = end % n
if matrix[i][j] == target:
return True
return False
```
### Search a 2D Matrix II
https://leetcode.com/problems/search-a-2d-matrix-ii/description/
```
class Solution(object):
def searchMatrix(self, matrix, target):
"""
:type matrix: List[List[int]]
:type target: int
:rtype: bool
"""
if len(matrix) == 0 or len(matrix[0]) == 0:
return False
m, n = len(matrix), len(matrix[0])
r, c = 0, n-1
while r < m and c >= 0:
if matrix[r][c] == target:
return True
elif matrix[r][c] > target:
c -= 1
else:
r += 1
return False
```
### Rotate Image
https://leetcode.com/problems/rotate-image/description/
```
for i in range(-1):
print(i)
class Solution(object):
def rotate(self, matrix):
"""
:type matrix: List[List[int]]
:rtype: void Do not return anything, modify matrix in-place instead.
"""
m, n = len(matrix), len(matrix[0])
for i in range(m):
for j in range(i):
matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]
for i in range(m):
            for j in range(n // 2):
matrix[i][j], matrix[i][n-1-j] = matrix[i][n-1-j], matrix[i][j]
```
### Sparse Matrix Multiplication
https://leetcode.com/problems/sparse-matrix-multiplication/description/
```
class Solution(object):
def multiply(self, A, B):
"""
:type A: List[List[int]]
:type B: List[List[int]]
:rtype: List[List[int]]
"""
m, n, l = len(A), len(B), len(B[0])
dicA, dicB = {}, {}
for i in range(m):
for j in range(n):
if A[i][j] != 0:
dicA[i] = dicA.get(i, []) + [(j, A[i][j])]
for i in range(n):
for j in range(l):
if B[i][j] != 0:
dicB[i] = dicB.get(i, []) + [(j, B[i][j])]
C = [[0 for j in range(l)] for i in range(m)]
for i in dicA:
for a in dicA[i]:
j = a[0]
if j in dicB:
for b in dicB[j]:
k = b[0]
C[i][k] += a[1]*b[1]
return C
# # faster
# m, n, l = len(A), len(B), len(B[0])
# C = [[0 for j in range(l)] for i in range(m)]
# for i in range(m):
# for j in range(n):
# if A[i][j] != 0:
# for k in range(l):
# C[i][k] += A[i][j]*B[j][k]
# return C
```
## Bit operation
#### Big Integer Addition
http://www.lintcode.com/zh-cn/problem/big-integer-addition/
https://leetcode.com/problems/add-strings/description/
* ord(num1[i]) - ord('0') is faster than int(num1[i])
* do not forget i -= 1
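For digit characters the `ord` trick is exactly equivalent to `int()`, just without building an `int` from a one-character string each time (a minimal check; `digit` is a hypothetical helper name):

```python
def digit(c):
    # ASCII digits are consecutive, so subtracting ord('0') yields the value.
    return ord(c) - ord('0')
```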
```
class Solution(object):
def addStrings(self, num1, num2):
"""
:type num1: str
:type num2: str
:rtype: str
"""
i, j = len(num1) - 1, len(num2) - 1
carry, res = 0, ''
while i >= 0 or j >= 0:
if i >= 0:
carry += ord(num1[i]) - ord('0')
i -= 1
if j >= 0:
carry += ord(num2[j]) - ord('0')
j -= 1
res = str(carry % 10) + res
            carry //= 10
return res if carry == 0 else str(carry) + res
```
#### Add Binary
https://leetcode.com/problems/add-binary/description/
```
class Solution(object):
def addBinary(self, a, b):
"""
:type a: str
:type b: str
:rtype: str
"""
i, j = len(a) - 1, len(b) - 1
res, carry = '', 0
while i >= 0 or j >= 0:
if i >= 0:
if a[i] == '1':
carry += 1
i -= 1
if j >= 0:
if b[j] == '1':
carry += 1
j -= 1
res = str(carry % 2) + res
            carry //= 2
return res if carry == 0 else '1' + res
```
#### Add Two Numbers
http://www.lintcode.com/en/problem/add-two-numbers/
https://leetcode.com/problems/add-two-numbers/description/
```
# Definition for singly-linked list.
# class ListNode(object):
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution(object):
def addTwoNumbers(self, l1, l2):
"""
:type l1: ListNode
:type l2: ListNode
:rtype: ListNode
"""
dummy = ListNode(0)
p = dummy
carry = 0
while True:
if l1 is not None:
carry += l1.val
l1 = l1.next
if l2 is not None:
carry += l2.val
l2 = l2.next
p.val = carry % 10
            carry //= 10
if l1 is not None or l2 is not None or carry != 0:
temp = ListNode(0)
p.next = temp
p = p.next
else:
break
return dummy
```
#### Add Two Numbers II
https://leetcode.com/problems/add-two-numbers-ii/description/
* stack
* reverse a linked list !!!
```
a = [1,2,3]
a.pop()
a
a.pop(0)
a.pop()
# Definition for singly-linked list.
# class ListNode(object):
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution(object):
def addTwoNumbers(self, l1, l2):
"""
:type l1: ListNode
:type l2: ListNode
:rtype: ListNode
"""
nums1, nums2 = [], []
while l1 is not None:
nums1.append(l1.val)
l1 = l1.next
while l2 is not None:
nums2.append(l2.val)
l2 = l2.next
res = ListNode(0)
carry = 0
while len(nums1) > 0 or len(nums2) > 0:
if len(nums1) > 0:
carry += nums1.pop()
if len(nums2) > 0:
carry += nums2.pop()
res.val = carry % 10
            carry //= 10
p = ListNode(carry)
p.next = res
res = p
return res if res.val != 0 else res.next
# Definition for singly-linked list.
# class ListNode(object):
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution(object):
def addTwoNumbers(self, l1, l2):
"""
:type l1: ListNode
:type l2: ListNode
:rtype: ListNode
"""
l1 = self.reverseList(l1)
l2 = self.reverseList(l2)
dummy = ListNode(0)
p = dummy
carry = 0
while True:
if l1 is not None:
carry += l1.val
l1 = l1.next
if l2 is not None:
carry += l2.val
l2 = l2.next
p.val = carry % 10
            carry //= 10
if l1 is not None or l2 is not None or carry != 0:
p.next = ListNode(0)
p = p.next
else:
break
return self.reverseList(dummy)
def reverseList(self, l):
prev = None
while l:
curr = l
l = l.next
curr.next = prev
prev = curr
return prev
```
#### Big Integer multiplication
http://www.lintcode.com/en/problem/big-integer-multiplication/
https://leetcode.com/problems/multiply-strings/description/
```
'0'*0
class Solution(object):
def multiply(self, num1, num2):
"""
:type num1: str
:type num2: str
:rtype: str
"""
len1, len2 = len(num1), len(num2)
num3 = [0] * (len1 + len2)
for i in range(len1-1, -1, -1):
carry = 0
for j in range(len2-1, -1, -1):
product = num3[i + j + 1] + carry + (ord(num1[i]) - ord('0')) * (ord(num2[j]) - ord('0'))
num3[i + j + 1] = product % 10
                carry = product // 10
num3[i] = carry
res = ''
i = 0
while i < len1 + len2 - 1 and num3[i] == 0:
i += 1
while i < len1 + len2:
res += str(num3[i])
i += 1
return res
```
### Pow(x, n)
https://leetcode.com/problems/powx-n/description/
```
n = -3
-n
class Solution(object):
def myPow(self, x, n):
"""
:type x: float
:type n: int
:rtype: float
"""
if n == 0:
return 1
if n < 0:
x = 1.0 / x
n = -n
res = 1
temp = x
while n != 0:
if n % 2 == 1:
res *= temp
temp *= temp
            n //= 2
return res
```
### Reverse Linked List
https://leetcode.com/problems/reverse-linked-list/description/
* do not mess up the order !!!
```
# Definition for singly-linked list.
# class ListNode(object):
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution(object):
def reverseList(self, head):
"""
:type head: ListNode
:rtype: ListNode
"""
prev = None
while head is not None:
curr = head
            head = head.next # do not mess up the order !!!
curr.next = prev
prev = curr
return prev
```
### Reverse Linked List II
https://leetcode.com/problems/reverse-linked-list-ii/description/
```
# Definition for singly-linked list.
# class ListNode(object):
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution(object):
def reverseBetween(self, head, m, n):
"""
:type head: ListNode
:type m: int
:type n: int
:rtype: ListNode
"""
dummy = ListNode(0)
dummy.next = head
mth_prev = self.find_kth(dummy, m-1)
mth = mth_prev.next
nth = self.find_kth(dummy, n)
nth_next = nth.next
nth.next = None
self.reverse(mth)
mth_prev.next = nth
mth.next = nth_next
return dummy.next
def reverse(self, head):
prev = None
while head is not None:
curr = head
head = head.next
curr.next = prev
prev = curr
return prev
def find_kth(self, head, k):
for i in range(k):
if head is None:
return None
head = head.next
return head
```
### Remove Nth Node From End of List
https://leetcode.com/problems/remove-nth-node-from-end-of-list/description/
```
data_dir = '../data/'
src_file = 'Head4-Serialized-Def-ELVA.PILOT.POST-TEST.csv'
data_file = 'EN-QA-Head4-Serialized-Def-ELVA.PILOT.POST-TEST.csv'
def_file = 'Questin_ID_Definition.csv'
from qa_serializer_lang_selector import qa_serializer_lang_selector
q = qa_serializer_lang_selector(data_dir)
q.serialize_record(src_file, r'Definition')
q.select_lang([1], r'Definition').to_csv(data_dir + data_file, encoding= 'latin1')
def remove_non_extracted_stop_word(df_ac, stop_words):
stop_words_buf = stop_words[:]
for x in stop_words:
if not df_ac.columns.isin([x]).any():
print('Remove Stop Word: ', x)
stop_words_buf.remove(x)
return stop_words_buf
from basic_nlp import fex_basic_nlp
pipeline=['pos', 'lemma', 'synset', 'hype', 'hypo']
bnlqd = fex_basic_nlp(data_file, data_dir)
bnlqd.nlp_run(pipeline[0])
bnlqd.nlp_run(pipeline[1])
bnlqd.df_ac_lemma.to_csv(data_dir + 'Lemma-' + data_file, encoding= 'latin1')
bnlqd.nlp_run(pipeline[2])
bnlqd.df_ac_synset.to_csv(data_dir + 'Synset-' + data_file , encoding= 'latin1')
bnlqd.nlp_run(pipeline[3])
bnlqd.df_ac_hypernyms.to_csv(data_dir + 'Hypernyms-' + data_file, encoding= 'latin1')
bnlqd.nlp_run(pipeline[4])
bnlqd.df_ac_hyponyms.to_csv(data_dir + 'Hyponyms-' + data_file, encoding= 'latin1')
bnlpd = fex_basic_nlp(def_file, data_dir, 'Definition')
bnlpd.nlp_run(pipeline[0])
bnlpd.df_ac_pos.to_csv(data_dir + 'POS-P-' + data_file, encoding= 'latin1')
bnlpd.nlp_run(pipeline[1])
bnlpd.df_ac_lemma.to_csv(data_dir + 'Lemma-P-' + data_file, encoding= 'latin1')
from bi_trigram import bi_trigram
btgqd = bi_trigram(data_file, data_dir)
btgqd.nlp_run(r'bigram')
btgqd.nlp_run(r'trigram')
from oanc_lemma_frequency import odi_oanc_lemma_frequency
stop_words = ['a', 'be', 'to', 'and', 'or']
stop_words_d = remove_non_extracted_stop_word(bnlqd.df_ac_lemma, stop_words)
oanc_shelve = r'../../plimac3/Resource/OANC/ANC-all-lemma-04262014.db'
oalqd = odi_oanc_lemma_frequency(data_file, oanc_shelve, None, data_dir, stop_words_d)
oalqd.oanc_lemma_frequency('Lemma-' + data_file, r'Student_Question_Index', r'Pre_Col_Name')
from overlapping import odi_overlapping
stop_words_hy = ['be']
stop_words_hy_d = remove_non_extracted_stop_word(bnlqd.df_ac_lemma, stop_words_hy)
ovlqd = odi_overlapping(data_file, r'Questin_ID_Definition.csv', data_dir, stop_words_d)
ovlqd.count_overlapping('Lemma-' + data_file, r'Student_Question_Index',
                        r'Pre_Col_Name', r'Question_ID', r'Question_ID_Sec',
                        'Lemma-P-' + data_file, r'Question_ID', r'Question_ID_Sec')
ovlqd.count_overlapping_synset('Synset-' + data_file)
ovlqd.count_overlapping_hypernyms('Hypernyms-' + data_file, stop_words_hy_d)
ovlqd.count_overlapping_hyponyms('Hyponyms-' + data_file, stop_words_hy_d)
import pandas as pd
import ac_bi_trigram_pmi_distribution as gpmd
def bi_trigram_pmi_distribution(csv_file_pmi_sum_t, data_dir, num_clm_in_q, df_ac_gram,
                                gram=r'bigram', pmi_frq_min=2, decimal_places=4):
    df_ac_pmi_sum_t = pd.read_csv(data_dir + csv_file_pmi_sum_t, encoding='latin1')
    if gram == r'bigram':
        sum_clm = 'Bigram_sum'
    else:
        sum_clm = 'Trigram_sum'
    df_ac_pmi_gram = df_ac_pmi_sum_t[df_ac_pmi_sum_t[sum_clm] >= pmi_frq_min]
    df_ac_pmi_dist_gram = gpmd.ac_bi_trigram_pmi_distribution(df_ac_gram,
        num_clm_in_q + 1, df_ac_pmi_gram, gram, decimal_places)
    return df_ac_pmi_dist_gram
df_ac_pmi_dist_bigram = bi_trigram_pmi_distribution('PMI-Sum-T-Bigram-Def-PRE.csv', data_dir,
                                                    bnlqd.num_clm_in, btgqd.df_ac_bigram, r'bigram')
df_ac_pmi_dist_bigram
df_ac_pmi_dist_trigram = bi_trigram_pmi_distribution('PMI-Sum-T-Trigram-Def-PRE.csv', data_dir,
                                                     bnlqd.num_clm_in, btgqd.df_ac_trigram, r'trigram')
df_ac_pmi_dist_trigram
import numpy as np
import ac_aggregate_plim as agpl
import ac_aggregate_item_level_plim as agpi
def aggregate_plim(bnlqd, oalqd, ovlqd, df_ac_pmi_dist_bigram, df_ac_pmi_dist_trigram, bnlpd,
                   specific_count_lemmas=None, stop_words_pos=None,
                   task_name='Definition', decimal_places=4):
    stem_identifier = task_name + '-Question'
    option_identifier = task_name + '-Answer'
    df_ac_lemma_buf = bnlqd.df_ac_lemma.copy()
    if specific_count_lemmas is not None:
        for x in specific_count_lemmas:
            if not bnlqd.df_ac_lemma.columns.isin([x]).any():
                df_ac_lemma_buf[x] = 0
    df_ac_oanc_lemma_freq = oalqd.df_ac_oanc_lemma_freq_q.drop([oalqd.question_id_clm,
                                oalqd.stem_option_name_clm], axis=1)
    df_ac_overlapping_lemma = ovlqd.df_ac_overlapping_lemma.drop([oalqd.question_id_clm,
                                oalqd.stem_option_name_clm], axis=1)
    df_ac_overlapping_synset = ovlqd.df_ac_overlapping_syn_lemma.drop([oalqd.question_id_clm,
                                oalqd.stem_option_name_clm], axis=1)
    df_ac_overlapping_hyper = ovlqd.df_ac_overlapping_hyper_lemma.drop([oalqd.question_id_clm,
                                oalqd.stem_option_name_clm], axis=1)
    df_ac_overlapping_hypo = ovlqd.df_ac_overlapping_hypo_lemma.drop([oalqd.question_id_clm,
                                oalqd.stem_option_name_clm], axis=1)
    df_ac_pmi_dist_bigram = df_ac_pmi_dist_bigram.iloc[:, oalqd.num_clm_in_q:]
    df_ac_pmi_dist_bigram['Cntnt_Bigram'] = df_ac_pmi_dist_bigram['Cntnt_Bigram'].fillna('')
    df_ac_pmi_dist_bigram['PMI_Bigram_SD'] = df_ac_pmi_dist_bigram['PMI_Bigram_SD'].fillna(0.0)
    df_ac_pmi_dist_bigram = df_ac_pmi_dist_bigram.fillna(-10.0)
    df_ac_pmi_dist_trigram = df_ac_pmi_dist_trigram.iloc[:, oalqd.num_clm_in_q:]
    df_ac_pmi_dist_trigram['Cntnt_Trigram'] = df_ac_pmi_dist_trigram['Cntnt_Trigram'].fillna('')
    df_ac_pmi_dist_trigram['PMI_Trigram_SD'] = df_ac_pmi_dist_trigram['PMI_Trigram_SD'].fillna(0.0)
    df_ac_pmi_dist_trigram = df_ac_pmi_dist_trigram.fillna(-10.0)
    if specific_count_lemmas is None:
        df_ac_lemma = None
    else:
        df_ac_lemma = df_ac_lemma_buf
    df_ac_aggregate = agpl.ac_aggregate_plim(bnlqd.df_ac_pos, oalqd.num_clm_in_q + 1,
        df_ac_overlapping_lemma, df_ac_overlapping_synset,
        None, df_ac_oanc_lemma_freq, oalqd.stem_option_name_clm, stem_identifier,
        list(oalqd.df_ac_in_q.columns), stop_words_pos, df_ac_lemma,
        specific_count_lemmas, bnlpd.df_ac_pos, ovlqd.passage_name_clm_q,
        ovlqd.passage_sec_clm_q, ovlqd.passage_name_clm_p, ovlqd.passage_sec_clm_p,
        bnlpd.num_clm_in + 1, decimal_places,
        df_ac_overlapping_hyper, df_ac_overlapping_hypo,
        df_ac_bigram_pmi_distribution=df_ac_pmi_dist_bigram,
        df_ac_trigram_pmi_distribution=df_ac_pmi_dist_trigram)
    key_dummy = r'Key_Dummy'
    row_lgth = df_ac_aggregate.shape[0]
    df_key_dummy = pd.DataFrame(np.empty((row_lgth, 1), dtype=object),
                                df_ac_aggregate.index, [key_dummy])
    df_key_dummy = df_key_dummy.fillna(option_identifier)
    df_ac_aggregate[key_dummy] = df_key_dummy[key_dummy]
    return df_ac_aggregate
def aggregate_item_level_plim(df_ac_aggregate, task_name='Definition', cntnt_clm='Content', decimal_places=4):
    stem_identifier = task_name + '-Question'
    # Note: relies on the module-level oalqd object for the stem/option column name.
    df_ac_aggregate_item_level = agpi.ac_aggregate_item_level_plim(df_ac_aggregate,
        r'Key_Dummy', oalqd.stem_option_name_clm, stem_identifier,
        None, decimal_places, cntnt_clm)
    return df_ac_aggregate_item_level
task_name = 'Definition'
stop_words_pos = None
specific_count_lemmas = [r'dk', r'nr']
df_ac_aggregate = aggregate_plim(bnlqd, oalqd, ovlqd, df_ac_pmi_dist_bigram, df_ac_pmi_dist_trigram,
bnlpd, specific_count_lemmas, stop_words_pos, task_name)
df_ac_aggregate.to_csv(data_dir + 'Aggregate_EN-QA-Head4-Serialized-Def-ELVA.PILOT.POST-TEST.csv', encoding= 'latin1')
df_ac_aggregate_item_level = aggregate_item_level_plim(df_ac_aggregate, task_name)
df_ac_aggregate_item_level.to_csv(data_dir + 'Key-Stem-Passage-Aggregate_EN-QA-Head4-Serialized-Def-ELVA.PILOT.POST-TEST.csv', encoding= 'latin1')
```
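For reference, the pointwise mutual information underlying the bigram/trigram features computed above can be estimated directly from raw counts. The following is a minimal, self-contained sketch (our own mini-example, not the `ac_bi_trigram_pmi_distribution` implementation):

```python
import math
from collections import Counter

def bigram_pmi(tokens):
    """PMI(w1, w2) = log2( p(w1, w2) / (p(w1) * p(w2)) ), estimated from one token stream."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())
    pmi = {}
    for (w1, w2), c in bigrams.items():
        p_joint = c / n_bi
        p_indep = (unigrams[w1] / n_uni) * (unigrams[w2] / n_uni)
        pmi[(w1, w2)] = math.log2(p_joint / p_indep)
    return pmi

tokens = ['the', 'cell', 'membrane', 'protects', 'the', 'cell']
scores = bigram_pmi(tokens)
```

A positive score means the pair co-occurs more often than its unigram frequencies alone would predict, which is what the pipeline's PMI-based features measure at scale.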
# Speedup Training Using GPUs
In this tutorial, you will learn:
* How to copy graph and feature data to GPU.
* How to train a GNN model on GPU.
```
import dgl
import torch
import torch.nn as nn
import torch.nn.functional as F
import itertools
```
## Copy graph and feature data to GPU
We first load Zachary's Karate Club graph and the node labels from the previous sessions.
```
from tutorial_utils import load_zachery
# ----------- 0. load graph -------------- #
g = load_zachery()
print(g)
```
Right now the graph and all its feature data are stored on the CPU. Use the `to` API to copy them to another device.
```
print('Current device:', g.device)
g = g.to('cuda:0')
print('New device:', g.device)
```
Verify that features are also copied to GPU.
```
print(g.ndata['club'].device)
print(g.ndata['club_onehot'].device)
```
## Create a GNN model on GPU
This step is the same as creating a CNN or RNN model on a GPU. In PyTorch, one can use the `to` API to do so.
```
# ----------- 1. node features -------------- #
node_embed = nn.Embedding(g.number_of_nodes(), 5) # Every node has an embedding of size 5.
# Copy node embeddings to GPU
node_embed = node_embed.to('cuda:0')
inputs = node_embed.weight # Use the embedding weight as the node features.
nn.init.xavier_uniform_(inputs)
```
The community label is stored in the `'club'` node feature (0 for instructor, 1 for club president). Only nodes 0 and 33 are labeled.
```
labels = g.ndata['club']
labeled_nodes = [0, 33]
print('Labels', labels[labeled_nodes])
```
### Define a GraphSAGE model
Our model consists of two layers, each computes new node representations by aggregating neighbor information. The equations are:
$$
h_{\mathcal{N}(v)}^k\leftarrow \text{AGGREGATE}_k\{h_u^{k-1},\forall u\in\mathcal{N}(v)\}
$$
$$
h_v^k\leftarrow \sigma\left(W^k\cdot \text{CONCAT}(h_v^{k-1}, h_{\mathcal{N}(v)}^k) \right)
$$
DGL provides implementations of many popular neighbor aggregation modules. They can all be invoked with one line of code. See the full list of supported [graph convolution modules](https://docs.dgl.ai/api/python/nn.pytorch.html#module-dgl.nn.pytorch.conv).
```
from dgl.nn import SAGEConv
# ----------- 2. create model -------------- #
# build a two-layer GraphSAGE model
class GraphSAGE(nn.Module):
    def __init__(self, in_feats, h_feats, num_classes):
        super(GraphSAGE, self).__init__()
        self.conv1 = SAGEConv(in_feats, h_feats, 'mean')
        self.conv2 = SAGEConv(h_feats, num_classes, 'mean')

    def forward(self, g, in_feat):
        h = self.conv1(g, in_feat)
        h = F.relu(h)
        h = self.conv2(g, h)
        return h

# Create the model with given dimensions
# input layer dimension: 5, node embeddings
# hidden layer dimension: 16
# output layer dimension: 2, the two classes, 0 and 1
net = GraphSAGE(5, 16, 2)
```
Copy the network to GPU
```
net = net.to('cuda:0')
# ----------- 3. set up loss and optimizer -------------- #
# in this case, the loss is computed inside the training loop
optimizer = torch.optim.Adam(itertools.chain(net.parameters(), node_embed.parameters()), lr=0.01)
# ----------- 4. training -------------------------------- #
all_logits = []
for e in range(100):
    # forward
    logits = net(g, inputs)
    # compute loss
    logp = F.log_softmax(logits, 1)
    loss = F.nll_loss(logp[labeled_nodes], labels[labeled_nodes])
    # backward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    all_logits.append(logits.detach())
    if e % 5 == 0:
        print('In epoch {}, loss: {}'.format(e, loss))
# ----------- 5. check results ------------------------ #
pred = torch.argmax(logits, axis=1)
print('Accuracy', (pred == labels).sum().item() / len(pred))
```
**What if the graph and its feature data cannot fit into one GPU's memory?**
* Instead of running a GNN on the full graph, run it on sampled subgraphs until it converges.
* Issue different samples to different GPUs for even more acceleration.
* Partition the graph across multiple machines and train in a distributed fashion.

Our later sessions will cover each of these methods.
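The first approach can be illustrated without a GPU at all. The sketch below is our own toy code (the function name and fan-out value are assumptions, not DGL's sampling API); it shows the core idea behind neighbor sampling, where each seed node keeps only a bounded number of neighbors per step:

```python
import random

def sample_neighbors(adj, seeds, fanout, rng=None):
    """Return edges (src, dst) of a sampled 1-hop subgraph around `seeds`."""
    rng = rng or random.Random(0)
    edges = []
    for v in seeds:
        nbrs = adj.get(v, [])
        keep = nbrs if len(nbrs) <= fanout else rng.sample(nbrs, fanout)
        edges.extend((u, v) for u in keep)  # messages flow src -> dst
    return edges

# Toy graph: node 0 has five neighbors, node 3 has one.
adj = {0: [1, 2, 3, 4, 5], 3: [0]}
batch = sample_neighbors(adj, seeds=[0, 3], fanout=2)
```

Because each minibatch touches at most `fanout * len(seeds)` edges, one GPU only ever holds a small subgraph regardless of the full graph's size.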
### Introduction
An example of implementing the Metapath2Vec representation learning algorithm using components from the `stellargraph` and `gensim` libraries.
**References**
**1.** Metapath2Vec: Scalable Representation Learning for Heterogeneous Networks. Yuxiao Dong, Nitesh V. Chawla, and Ananthram Swami. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 135–144, 2017. ([link](https://ericdongyx.github.io/papers/KDD17-dong-chawla-swami-metapath2vec.pdf))
**2.** Distributed representations of words and phrases and their compositionality. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. In Advances in Neural Information Processing Systems (NIPS), pp. 3111-3119, 2013. ([link](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf))
**3.** Gensim: Topic modelling for humans. ([link](https://radimrehurek.com/gensim/))
**4.** Social Computing Data Repository at ASU [http://socialcomputing.asu.edu]. R. Zafarani and H. Liu. Tempe, AZ: Arizona State University, School of Computing, Informatics and Decision Systems Engineering. 2009.
```
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
import os
import networkx as nx
import numpy as np
import pandas as pd
from stellargraph.data.loader import load_dataset_BlogCatalog3
%matplotlib inline
```
### Load the dataset
The dataset is the BlogCatalog3 network.
It can be downloaded from [here.](http://socialcomputing.asu.edu/datasets/BlogCatalog3)
The following is the description of the dataset from the publisher [4]:
> This is the data set crawled from BlogCatalog ( http://www.blogcatalog.com ). BlogCatalog is a social blog directory website. This contains the friendship network crawled and group memberships. For easier understanding, all the contents are organized in CSV file format.
The statistics of this network are,
- Number of bloggers : 10,312
- Number of friendship pairs: 333,983
- Number of groups: 39
We assume that the dataset file `BlogCatalog-dataset.zip` has been downloaded and unzipped in the directory
`~/data`
and that the data in `csv` format (the files `edges.csv`, `nodes.csv`, `groups.csv`, and `group-edges.csv`) can be found in the directory
`~/data/BlogCatalog-dataset/data/`
```
dataset_location = os.path.expanduser("~/data/BlogCatalog-dataset/data")
g_nx = load_dataset_BlogCatalog3(location=dataset_location)
print("Number of nodes {} and number of edges {} in graph.".format(g_nx.number_of_nodes(), g_nx.number_of_edges()))
```
### The Metapath2Vec algorithm
The Metapath2Vec algorithm introduced in [1] is a 2-step representation learning algorithm. The two steps are:
1. Use uniform random walks to generate sentences from a graph. A sentence is a list of node IDs. The set of all sentences makes a corpus. The random walk is driven by a metapath that defines the node type order by which the random walker explores the graph.
2. The corpus is then used to learn an embedding vector for each node in the graph. Each node ID is considered a unique word/token in a dictionary that has size equal to the number of nodes in the graph. The Word2Vec algorithm [2] is used for calculating the embedding vectors.
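To make step 1 concrete, here is a minimal, self-contained sketch of a metapath-driven walk on a toy heterogeneous graph (our own illustration; the real `stellargraph` walker is more general). The walker may only move to neighbors whose type matches the next entry in the schema:

```python
import random

def metapath_walk(adj, node_type, start, metapath, rng=random.Random(42)):
    """One metapath-driven random walk; returns a list of node IDs (a 'sentence').

    adj: dict node -> list of neighbors
    node_type: dict node -> type string
    metapath: node types the walk must follow, e.g. ["user", "group", "user"]
    """
    assert node_type[start] == metapath[0]
    walk = [start]
    for t in metapath[1:]:
        # Candidates: neighbors of the current node with the required type.
        cands = [u for u in adj[walk[-1]] if node_type[u] == t]
        if not cands:  # the walk terminates early if no valid neighbor exists
            break
        walk.append(rng.choice(cands))
    return walk

# Tiny heterogeneous graph: two users sharing one group.
adj = {'u1': ['g1'], 'u2': ['g1'], 'g1': ['u1', 'u2']}
node_type = {'u1': 'user', 'u2': 'user', 'g1': 'group'}
sentence = metapath_walk(adj, node_type, 'u1', ['user', 'group', 'user'])
```

The resulting sentences are what gets fed to Word2Vec in step 2.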
### Corpus generation using random walks
The `stellargraph` library provides an implementation for uniform, first order, random walks as required by Metapath2Vec. The random walks have fixed maximum length and are controlled by the list of metapath schemas specified in parameter `metapaths`.
A metapath schema defines the type of node that the random walker is allowed to transition to from its current location. In the `stellargraph` implementation of metapath-driven random walks, the metapath schemas are given as a list of node types under the assumption that the input graph is not a multi-graph, i.e., two nodes are only connected by one edge type.
See [1] for a detailed description of metapath schemas and metapath-driven random walks.
For the **BlogCatalog3** dataset we use the following 3 metapaths.
- "user", "group", "user"
- "user", "group", "user", "user"
- "user", "user"
```
from stellargraph.data import UniformRandomMetaPathWalk
from stellargraph import StellarGraph
# Create the random walker
rw = UniformRandomMetaPathWalk(StellarGraph(g_nx))
# specify the metapath schemas as a list of lists of node types.
metapaths = [
    ["user", "group", "user"],
    ["user", "group", "user", "user"],
    ["user", "user"],
]
walks = rw.run(nodes=list(g_nx.nodes()),  # root nodes
               length=100,                # maximum length of a random walk
               n=1,                       # number of random walks per root node
               metapaths=metapaths)       # the metapaths
print("Number of random walks: {}".format(len(walks)))
```
### Representation Learning using Word2Vec
We use the Word2Vec [2] implementation in the free Python library gensim [3] to learn representations for each node in the graph.
We set the dimensionality of the learned embedding vectors to 128 as in [1].
```
from gensim.models import Word2Vec
model = Word2Vec(walks, size=128, window=5, min_count=0, sg=1, workers=2, iter=1)
model.wv.vectors.shape # 128-dimensional vector for each node in the graph
```
### Visualise Node Embeddings
We retrieve the Word2Vec node embeddings that are 128-dimensional vectors and then we project them down to 2 dimensions using the [t-SNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) algorithm.
```
# Retrieve node embeddings and corresponding subjects
node_ids = model.wv.index2word # list of node IDs
node_embeddings = model.wv.vectors # numpy.ndarray of size number of nodes times embeddings dimensionality
node_targets = [ g_nx.node[node_id]['label'] for node_id in node_ids]
```
Transform the embeddings to 2d space for visualisation
```
transform = TSNE  # alternatively: PCA
trans = transform(n_components=2)
node_embeddings_2d = trans.fit_transform(node_embeddings)
# draw the points
label_map = { l: i for i, l in enumerate(np.unique(node_targets))}
node_colours = [ label_map[target] for target in node_targets]
plt.figure(figsize=(20,16))
plt.axes().set(aspect="equal")
plt.scatter(node_embeddings_2d[:,0],
node_embeddings_2d[:,1],
c=node_colours, alpha=0.3)
plt.title('{} visualization of node embeddings'.format(transform.__name__))
plt.show()
```
### Downstream task
The node embeddings calculated using Metapath2Vec can be used as feature vectors in a downstream task such as node attribute inference (e.g., inferring the gender or age attribute of 'user' nodes), community detection (e.g., clustering of 'user' nodes based on the similarity of their embedding vectors), and link prediction (e.g., prediction of friendship relation between 'user' nodes).
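As a rough sketch of the node attribute inference case (using synthetic embeddings and labels rather than the BlogCatalog data; this is our illustration, not part of the original notebook), even a simple nearest-centroid classifier works directly on the embedding vectors when the embedding geometry separates the classes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for learned embeddings: two clusters in 128-d space,
# e.g. 'user' nodes with attribute A vs. attribute B.
emb_a = rng.normal(loc=+1.0, scale=0.3, size=(50, 128))
emb_b = rng.normal(loc=-1.0, scale=0.3, size=(50, 128))
X = np.vstack([emb_a, emb_b])
y = np.array([0] * 50 + [1] * 50)

# Nearest-centroid classification: assign each node to the closest class mean.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y).mean()
```

In practice one would replace `X` with the `node_embeddings` array computed above and `y` with the known attribute values, and use a held-out split to evaluate.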