## Node Classification on Citation Network
As a start, we present an end-to-end example demonstrating how GraphScope processes a node classification task on a citation network by combining analytical, interactive, and graph neural network computation.
In this example, we use the [ogbn-mag](https://ogb.stanford.edu/docs/nodeprop/#ogbn-mag) dataset. ogbn-mag is a heterogeneous network composed of a subset of the Microsoft Academic Graph. It contains four types of entities (i.e., papers, authors, institutions, and fields of study), as well as four types of directed relations connecting two entities.
Given the heterogeneous ogbn-mag data, the task is to predict the class of each paper. We apply both attribute and structural information to classify papers. In the graph, each paper node contains a 128-dimensional word2vec vector representing its content, which is obtained by averaging the embeddings of the words in its title and abstract. The embeddings of individual words are pre-trained. The structural information is computed on-the-fly.
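The averaging step behind that 128-dimensional content feature can be sketched as follows. The toy vocabulary and the `paper_feature` helper are illustrative assumptions for this tutorial, not part of the actual ogbn-mag preprocessing pipeline:

```python
import numpy as np

# Hypothetical pre-trained embedding table: word -> 128-d vector.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=128) for w in ["graph", "neural", "network"]}

def paper_feature(words, embeddings, dim=128):
    """Average the embeddings of the known words in a title/abstract."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

feat = paper_feature(["graph", "neural", "unknown"], vocab)
```

Out-of-vocabulary words are simply skipped here; the real preprocessing may handle them differently.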
This tutorial has the following steps:
- Querying graph data with Gremlin;
- Running graph analytical algorithms;
- Running graph-based machine learning tasks.
```
# Install graphscope package if you are NOT in the Playground
!pip3 install graphscope
# Import the graphscope module
import graphscope
graphscope.set_option(show_log=True) # enable logging
# Load the obgn_mag dataset as a graph
from graphscope.dataset import load_ogbn_mag
graph = load_ogbn_mag()
```
## Interactive query with Gremlin
In this example, we launch an interactive query and use graph traversal to count the number of papers two given authors have co-authored. To simplify the query, we assume the authors can be uniquely identified by IDs `2` and `4307`, respectively.
```
# Get the entrypoint for submitting Gremlin queries on graph g.
interactive = graphscope.gremlin(graph)
# Count the number of papers two authors (with id 2 and 4307) have co-authored.
papers = interactive.execute(
    "g.V().has('author', 'id', 2).out('writes').where(__.in('writes').has('id', 4307)).count()"
).one()
print("result", papers)
```
## Graph analytics with analytical engine
Continuing our example, we run graph algorithms on the graph to generate structural features. Below, we first derive a subgraph by extracting publications within a specific time range from the entire graph (using Gremlin!), and then run k-core decomposition and triangle counting to generate the structural features of each paper node.
```
# Extract a subgraph of publications within a time range.
sub_graph = interactive.subgraph(
    "g.V().has('year', inside(2014, 2020)).outE('cites')"
)
# Project the subgraph to simple graph by selecting papers and their citations.
simple_g = sub_graph.project(vertices={"paper": []}, edges={"cites": []})
# compute the kcore and triangle-counting.
kc_result = graphscope.k_core(simple_g, k=5)
tc_result = graphscope.triangles(simple_g)
# Add the results as new columns to the citation graph.
sub_graph = sub_graph.add_column(kc_result, {"kcore": "r"})
sub_graph = sub_graph.add_column(tc_result, {"tc": "r"})
```
## Graph neural networks (GNNs)
Then, we use the generated structural features together with the original features to train a learning model with the learning engine.
In our example, we train a GCN model to classify the nodes (papers) into 349 categories,
each of which represents a venue (e.g. pre-print and conference).
```
# Define the features for learning;
# we choose the original 128-dimensional feature plus the k-core and triangle-count results as new features.
paper_features = []
for i in range(128):
    paper_features.append("feat_" + str(i))
paper_features.append("kcore")
paper_features.append("tc")

# Launch a learning engine. Here we split the dataset: 75% as train, 10% as validation and 15% as test.
lg = graphscope.graphlearn(
    sub_graph,
    nodes=[("paper", paper_features)],
    edges=[("paper", "cites", "paper")],
    gen_labels=[
        ("train", "paper", 100, (0, 75)),
        ("val", "paper", 100, (75, 85)),
        ("test", "paper", 100, (85, 100)),
    ],
)
# Then we define the training process, using the internal GCN model.
from graphscope.learning.examples import GCN
from graphscope.learning.graphlearn.python.model.tf.trainer import LocalTFTrainer
from graphscope.learning.graphlearn.python.model.tf.optimizer import get_tf_optimizer


def train(config, graph):
    def model_fn():
        return GCN(
            graph,
            config["class_num"],
            config["features_num"],
            config["batch_size"],
            val_batch_size=config["val_batch_size"],
            test_batch_size=config["test_batch_size"],
            categorical_attrs_desc=config["categorical_attrs_desc"],
            hidden_dim=config["hidden_dim"],
            in_drop_rate=config["in_drop_rate"],
            neighs_num=config["neighs_num"],
            hops_num=config["hops_num"],
            node_type=config["node_type"],
            edge_type=config["edge_type"],
            full_graph_mode=config["full_graph_mode"],
        )

    trainer = LocalTFTrainer(
        model_fn,
        epoch=config["epoch"],
        optimizer=get_tf_optimizer(
            config["learning_algo"],
            config["learning_rate"],
            config["weight_decay"],
        ),
    )
    trainer.train_and_evaluate()
# Hyperparameters config.
config = {
    "class_num": 349,     # output dimension
    "features_num": 130,  # 128 dimensions + kcore + triangle count
    "batch_size": 500,
    "val_batch_size": 100,
    "test_batch_size": 100,
    "categorical_attrs_desc": "",
    "hidden_dim": 256,
    "in_drop_rate": 0.5,
    "hops_num": 2,
    "neighs_num": [5, 10],
    "full_graph_mode": False,
    "agg_type": "gcn",  # mean, sum
    "learning_algo": "adam",
    "learning_rate": 0.01,
    "weight_decay": 0.0005,
    "epoch": 5,
    "node_type": "paper",
    "edge_type": "cites",
}
# Start training and evaluating.
train(config, lg)
```
```
from IPython.core.debugger import set_trace
import gzip
import struct
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

# Pre-requirement: MNIST data files stored in a local directory under $folder/mnist/,
# after being downloaded from http://yann.lecun.com/exdb/mnist/
class MnistInput:
    def __init__(self, data, folder):
        if data == "train":
            zX = folder + "/vae/data/" + 'train-images-idx3-ubyte.gz'
            zy = folder + "/vae/data/" + 'train-labels-idx1-ubyte.gz'
        elif data == "test":
            zX = folder + "/vae/data/" + 't10k-images-idx3-ubyte.gz'
            zy = folder + "/vae/data/" + 't10k-labels-idx1-ubyte.gz'
        else:
            raise ValueError("Incorrect data input")
        self.zX = zX
        self.zy = zy

    def read(self, num):
        zX = self.zX
        zy = self.zy
        with gzip.open(zX) as fX, gzip.open(zy) as fy:
            magic, nX, rows, cols = struct.unpack(">IIII", fX.read(16))
            magic, ny = struct.unpack(">II", fy.read(8))
            if nX != ny:
                raise ValueError("Inconsistent data and label files")
            img_size = cols * rows
            if num <= 0 or num > nX:
                num = nX
            for i in range(num):
                X = struct.unpack("B" * img_size, fX.read(img_size))
                X = np.array(X).reshape(rows, cols)
                y, = struct.unpack("B", fy.read(1))
                yield (X, y)
class MNIST:
    def __init__(self, nn, folder="../convolution-network"):
        self.nn = nn
        self.train_input = MnistInput("train", folder)
        self.test_input = MnistInput("test", folder)

    def train(self, n_sample):
        i = 1
        for X, y in self.train_input.read(n_sample):
            X = X / 255
            if self.nn.type == "MLP" or self.nn.type == "RNN":
                X = X.reshape(-1, 1)
            else:
                X = np.array([X])
            # print("Training: ", i); i += 1
            # if i == 100 or i == 200: set_trace()
            self.nn.train_1sample(X, y)

    def test(self, n_sample):
        correct = 0
        total = 0
        for X, y in self.test_input.read(n_sample):
            aX = X / 255
            if self.nn.type == "MLP" or self.nn.type == "RNN":
                aX = aX.reshape(-1, 1)
            else:
                aX = np.array([aX])
            predict = self.nn.predict_1sample(aX)
            correct += 1 * (predict == y)
            total += 1
            # print("\nPredict {} to be {}:".format(y, predict))
            # plt.imshow(X, cmap=mpl.cm.Greys)
            # plt.show()
        accuracy = correct / total
        return accuracy
```
# Resample Data
## Pandas Resample
You've learned about bucketing data into different periods of time, like months. Let's see how it's done. We'll start with an example series of days.
```
import numpy as np
import pandas as pd
dates = pd.date_range('10/10/2018', periods=11, freq='D')
close_prices = np.arange(len(dates))
close = pd.Series(close_prices, dates)
close
```
Let's say we want to bucket these days into 3 day periods. To do that, we'll use the [DataFrame.resample](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.resample.html) function. The first parameter in this function is a string called `rule`, which is a representation of how to resample the data. This string representation is made using an offset alias. You can find a list of them [here](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases). To create 3 day periods, we'll set `rule` to "3D".
```
close.resample('3D')
```
This returns a `DatetimeIndexResampler` object. It's an intermediate object similar to the `GroupBy` object. Just like group by, it breaks the original data into groups. That means, we'll have to apply an operation to these groups. Let's make it simple and get the first element from each group.
```
close.resample('3D').first()
```
You might notice that this is the same as `.iloc[::3]`
```
close.iloc[::3]
```
So, why use the `resample` function instead of `.iloc[::3]` or the `groupby` function?
The `resample` function shines when handling time and/or date specific tasks. In fact, you can't use this function if the index isn't a [time-related class](https://pandas.pydata.org/pandas-docs/version/0.21/timeseries.html#overview).
```
try:
# Attempt resample on a series without a time index
pd.Series(close_prices).resample('W')
except TypeError:
print('It threw a TypeError.')
else:
print('It worked.')
```
One of the resampling tasks it can help with is resampling over periods, like weeks. Let's resample `close` from its daily frequency to weeks. We'll use the "W" offset alias, which stands for weeks.
```
pd.DataFrame({
'days': close,
'weeks': close.resample('W').first()})
```
The weeks offset considers the start of a week to be a Monday. Since 2018-10-10 is a Wednesday, the first group only contains the first 5 items. There are offsets that handle more complicated problems, like filtering for holidays. For now, we'll only worry about resampling for days, weeks, months, quarters, and years. The frequency you want the data to be in will depend on how often you'll be trading. If you're making trade decisions based on reports that come out at the end of the year, you might only care about a frequency of years or months.
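If a Monday-to-Sunday week doesn't suit your data, anchored offset aliases let you choose the weekday each bin ends on. A small sketch (the `close` series from above is redefined here so the snippet is self-contained):

```python
import numpy as np
import pandas as pd

dates = pd.date_range('10/10/2018', periods=11, freq='D')
close = pd.Series(np.arange(len(dates)), dates)

# 'W' is shorthand for 'W-SUN': each bin ends on a Sunday.
sun_weeks = close.resample('W').first()
# 'W-WED' shifts the anchor so each bin ends on a Wednesday instead.
wed_weeks = close.resample('W-WED').first()
```

With the Wednesday anchor, 2018-10-10 (a Wednesday) closes the first bin on its own, so the grouping changes even though the data is identical.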
## OHLC
Now that you've seen how Pandas resamples time series data, we can apply this to Open, High, Low, and Close (OHLC). Pandas provides the [`Resampler.ohlc`](https://pandas.pydata.org/pandas-docs/version/0.21.0/generated/pandas.core.resample.Resampler.ohlc.html#pandas.core.resample.Resampler.ohlc) function, which converts any resampling frequency to OHLC data. Let's get the weekly OHLC.
```
close.resample('W').ohlc()
```
Can you spot a potential problem with that? It has to do with resampling data that has already been resampled.
We're getting the OHLC from close data. If we want OHLC data from already resampled data, we should resample the first price from the open data, resample the highest price from the high data, etc.
To get the weekly closing prices from `close`, you can use the [`Resampler.last`](https://pandas.pydata.org/pandas-docs/version/0.21.0/generated/pandas.core.resample.Resampler.last.html#pandas.core.resample.Resampler.last) function.
```
close.resample('W').last()
```
## Quiz
Implement the `days_to_weeks` function to resample OHLC price data to weekly OHLC price data. You can find more Resampler functions [here](https://pandas.pydata.org/pandas-docs/version/0.21.0/api.html#id44) for calculating high and low prices.
```
import quiz_tests
def days_to_weeks(open_prices, high_prices, low_prices, close_prices):
    """Converts daily OHLC prices to weekly OHLC prices.

    Parameters
    ----------
    open_prices : DataFrame
        Daily open prices for each ticker and date
    high_prices : DataFrame
        Daily high prices for each ticker and date
    low_prices : DataFrame
        Daily low prices for each ticker and date
    close_prices : DataFrame
        Daily close prices for each ticker and date

    Returns
    -------
    open_prices_weekly : DataFrame
        Weekly open prices for each ticker and date
    high_prices_weekly : DataFrame
        Weekly high prices for each ticker and date
    low_prices_weekly : DataFrame
        Weekly low prices for each ticker and date
    close_prices_weekly : DataFrame
        Weekly close prices for each ticker and date
    """
    weekly_open = open_prices.resample('W').first()
    weekly_high = high_prices.resample('W').max()
    weekly_low = low_prices.resample('W').min()
    weekly_close = close_prices.resample('W').last()
    return weekly_open, weekly_high, weekly_low, weekly_close
quiz_tests.test_days_to_weeks(days_to_weeks)
```
## Quiz Solution
If you're having trouble, you can check out the quiz solution [here](resample_data_solution.ipynb).
# Average data on group level
We average each z-scored time series, weighted with the design (rest is inverted before averaging). We then average over all patients of one group. This excludes patients deemed inconclusive by the 2D-LI method performed in the previous step.
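As a schematic illustration of that design-weighted averaging, consider the following toy sketch. The array shapes and the alternating block design are assumptions made purely for demonstration; the actual time series and design come from this project's earlier preprocessing steps:

```python
import numpy as np

# Toy z-scored time series (timepoints x voxels) and a block design:
# +1 marks task volumes, -1 marks rest, so rest is inverted by the weighting.
rng = np.random.default_rng(0)
ts = rng.normal(size=(8, 3))
design = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Design-weighted mean per voxel: a task-minus-rest contrast.
mean_map = (ts * design[:, None]).mean(axis=0)
```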
The results of this notebook will not be used in later analyses, as we later use leave-one-out. They only serve the purpose of visualizing the whole group. The companion notebook to this one is #7, where we show the same maps, but on a surface rather than in volume space.
### import modules
```
import glob
import pickle
import shutil
import pandas as pd
from scipy import stats
from nilearn import input_data, image, plotting
import matplotlib.pyplot as plt
```
### get data
```
data_df = pd.read_csv('../data/interim/csv/info_epi_zscored_zdiff_summarymaps_2dpredclean_df.csv',
                      index_col=[0, 1],
                      header=0)
data_df.tail()
```
### count wadas
```
data_df.groupby(level=1).count().mean(axis=1)
```
### get the conclusive data
```
is_conclusive = data_df.loc[:, 'pred'] != 'inconclusive'
conc_df = data_df[is_conclusive]
conc_df.shape
```
### accuracy of 2D-prediction
Just a quick sanity check if this still works. In [Wegrzyn et al. 2019](https://dx.doi.org/10.3389/fneur.2019.00655) it was 85% for a sample with 62% typical.
```
(conc_df.index.get_level_values(1) == conc_df['pred']).mean().round(2)
```
### extract data
```
masker = input_data.NiftiMasker('../data/external/MNI152_T1_2mm_brain_mask.nii.gz').fit()
plotting.plot_roi(masker.mask_img_);
```
### collect data
```
def make_group_df(data_df, metric='z-scored', masker=masker):
    """Get data of all conclusive patients and put it into one big table."""
    # only conclusive patients
    is_conclusive = data_df.loc[:, 'pred'] != 'inconclusive'
    conc_data_df = data_df[is_conclusive]
    ims = conc_data_df.loc[:, 'meanMap_%s' % metric]
    data = masker.transform(ims)
    df = pd.DataFrame(data)
    df.index = pd.MultiIndex.from_tuples(conc_data_df.index)
    return df
mean_df = make_group_df(data_df)
mean_df.tail()
mean_df.groupby(level=1).count().mean(axis=1)
```
### group averages
Average all patients belonging to one wada group
```
group_mean_df = mean_df.groupby(level=1).mean()
group_mean_df
```
Store the group averages to file
```
for i in group_mean_df.index:
    data = group_mean_df.loc[i, :]
    im = masker.inverse_transform(data)
    im.to_filename('../data/processed/nii/zOrig_%s.nii.gz' % i)
```
### t-maps
```
group_df = mean_df.reorder_levels([1,0])
group_df = group_df.sort_index()
for group in group_df.index.levels[0]:
    this_group_df = group_df.loc[group]
    t, p = stats.ttest_1samp(this_group_df, 0)
    t_im = masker.inverse_transform(t)
    t_im.to_filename('../data/processed/nii/tOrig_%s.nii.gz' % group)
```
### same thing, for L-R diff images
```
mean_diff_df = make_group_df(data_df,'z-scored-diff')
group_mean_diff_df = mean_diff_df.groupby(level=1).mean()
group_mean_diff_df
for i in group_mean_diff_df.index:
    data = group_mean_diff_df.loc[i, :]
    im = masker.inverse_transform(data)
    im.to_filename('../data/processed/nii/zDiff_%s.nii.gz' % i)
```
### t-maps
```
group_diff_df = mean_diff_df.reorder_levels([1,0])
group_diff_df = group_diff_df.sort_index()
for group in group_diff_df.index.levels[0]:
    this_group_df = group_diff_df.loc[group]
    t, p = stats.ttest_1samp(this_group_df, 0)
    t_im = masker.inverse_transform(t)
    t_im.to_filename('../data/processed/nii/tDiff_%s.nii.gz' % group)
```
### re-load and show the generated files
```
file_list = glob.glob('../data/processed/nii/z*.nii.gz')
file_list.sort()
file_list
thresh = 0
n_ims = len(file_list)
fig = plt.figure(figsize=(16, 20))
for i, im in enumerate(file_list):
    ax = plt.subplot2grid((n_ims, 10), (i, 0), colspan=8)
    ax = plotting.plot_stat_map(
        im,
        display_mode='x',
        cut_coords=(-55, -45, -5, 5, 45, 55),
        threshold=thresh,
        title=im,
        colorbar=False,
        axes=ax)
    ax = plt.subplot2grid((n_ims, 10), (i, 8), colspan=2)
    ax = plotting.plot_stat_map(
        im,
        display_mode='y',
        cut_coords=([15]),
        threshold=thresh,
        colorbar=True,
        axes=ax)
plt.show()
```
### show t-maps
```
file_list = glob.glob('../data/processed/nii/t*.nii.gz')
file_list.sort()
file_list
```
Copy to reports dir for later use
```
[shutil.copy2(f,'../reports/nii/') for f in file_list]
thresh_list = [1.67,4.09,2.12,1.67,4.09,2.12]
n_ims = len(file_list)
fig = plt.figure(figsize=(16, 20))
for i, (im, thresh) in enumerate(zip(file_list, thresh_list)):
    ax = plt.subplot2grid((n_ims, 10), (i, 0), colspan=8)
    ax = plotting.plot_stat_map(
        im,
        display_mode='x',
        cut_coords=(-55, -45, -5, 5, 45, 55),
        threshold=thresh,
        title='%s %.2f' % (im, thresh),
        colorbar=False,
        axes=ax)
    ax = plt.subplot2grid((n_ims, 10), (i, 8), colspan=2)
    ax = plotting.plot_stat_map(
        im,
        display_mode='y',
        cut_coords=([15]),
        threshold=thresh,
        colorbar=True,
        axes=ax)
plt.show()
```
### summary
The figure illustrates the prototypical activity of each group, for the original data and for the L-R flipped data. In the flipped images, we see that differences between groups are emphasized.
**************
< [Previous](05-mw-identify-inconclusive.ipynb) | [Contents](00-mw-overview-notebook.ipynb) | [Next >](07-mw-group-surface-plots.ipynb)
# Epidemiological analysis Lab Notebook
(to edit this notebook and the associated python files, `git checkout` the corresponding commit by its hash, e.g. `git checkout 422024d`)
```
from IPython.display import display, Markdown
from datetime import datetime
cur_datetime = datetime.now()
display(Markdown(f'# {cur_datetime.strftime("%d/%b/%Y %H:%M")}'))
```
# Compartmental models dynamics on SD
In this notebook, we'll explore and visualize the behavior of multiple epidemiological models using a dynamical-systems approach. The models were written in cadCAD, a library for Complex Adaptive Dynamics simulations which allows you to mix and prototype
different modelling paradigms in a reproducible and consistent manner.
Unlike other data-processing methods, e.g. machine learning models, where the objective is to predict outputs based on previous information, working with cadCAD allows us to pursue a different goal. As it works with dynamic simulations, we aim to create a range of possible future scenarios for a disease's spread according to the behavior of its epidemiological parameters.
```
%%capture
%matplotlib inline
# Dependencies
from time import time
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# Experiments
import run
```
The models are simulated within a range of 100 days, i.e., each simulation has 100 timesteps.
```
# Run all experiments. Typical run duration on a Core-i3 laptop is about 2-3 min.
# Tweak the prey_predator_abm/sim_params.py file if you want it to take longer (or not).
start_time = time()
experiments = run.run()
simulation = experiments['simulation']
subset = experiments['subset']
end_time = time()
print("Execution in {:.1f}s".format(end_time - start_time))
```
# SIR model
In order to create a mathematical representation of an infectious disease, we will use compartmental models. These models assign the system's population to compartments with labels that represent the different states individuals can be in, and the order of the labels shows the flow patterns between these compartments. As described in Wikipedia,
> "[Compartmental] models try to predict things such as how a disease spreads, or the total number infected, or the duration of an epidemic, and to estimate various epidemiological parameters such as the reproductive number. Such models can show how different public health interventions may affect the outcome of the epidemic, e.g., what the most efficient technique is for issuing a limited number of vaccines in a given population."
You can find more information about compartmental models on [Wikipedia's page](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology).
The simplest compartmental model is SIR, which consists of three compartments:
- **S**usceptible: The number of individuals at risk of contracting the disease;
- **I**nfectious: The number of individuals who have been infected and are capable of infecting susceptible individuals;
- **R**ecovered: The number of individuals who have been infected and have recovered from the disease.
As SIR is a very simple model, **it considers that the disease's death rate is negligible**.
The SIR model can be expressed by the following ordinary differential equations:
\begin{aligned}{\frac {d}{dt}Susceptible}&= - \beta * Infected * {\frac {Susceptible}{Total Population}}
\\{\frac{d}{dt}Infected}&=\beta * Infected * \frac {Susceptible}{Total Population} -\gamma * Infected
\\{\frac {d}{dt}Recovered}&=\gamma *Infected\end{aligned}
Where the parameters are:
- **Infection rate** ($\beta$): expected amount of people an infected person infects per day
- **Recovering rate** ($\gamma$): the proportion of infected recovering per day ($\gamma$ = 1 / recovery time in days)
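The ODEs above can be integrated directly. The following sketch uses `scipy.integrate.odeint` with the same parameter values used later in this notebook's SIR simulation (β = 0.4, γ = 0.07, initial populations 990/10/0); it is a stand-in for the cadCAD implementation, not the notebook's actual code:

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma, N):
    """Right-hand side of the SIR equations."""
    S, I, R = y
    dS = -beta * I * S / N
    dI = beta * I * S / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

N = 1000
y0 = (990, 10, 0)              # susceptible, infected, recovered
t = np.linspace(0, 100, 101)   # 100 days, one sample per day
sol = odeint(sir, y0, t, args=(0.4, 0.07, N))
S, I, R = sol.T
```

Note that the three derivatives sum to zero, so the total population is conserved; with R₀ ≈ 5.7 the epidemic takes off and nearly the whole population ends up in the recovered compartment.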
The ratio between $\beta$ and $\gamma$ results in the **basic reproductive number ($R₀$)**, a very important parameter that determines **how contagious a disease is**. As said in Wikipedia,
> "[The reproductive number] of an infection can be thought of as the expected number of cases directly generated by one case in a population where all individuals are susceptible to infection. The definition describes the state where no other individuals are infected or immunized (naturally or through vaccination)."
You can find more information about $R₀$ on [Wikipedia's page](https://en.wikipedia.org/wiki/Basic_reproduction_number).
It's a challenging task to estimate a disease's reproductive number for many reasons, but mostly because of disease underreporting. The Covid-19 pandemic is a good example of it: preliminary studies reported an average $R₀$ of around 3.3 [[1]](https://academic.oup.com/jtm/article/27/2/taaa021/5735319), but more recent ones estimated it to be approximately 2.7 [[2]](https://www.sciencedirect.com/science/article/pii/S0140673620302609?via%3Dihub) or even as high as 5.7 [[3]](https://wwwnc.cdc.gov/eid/article/26/7/20-0282_article), which shows massive variability. It also varies drastically by location because, as said before, exogenous factors play an important role in infectious disease behavior.
It's also important to note that the reproductive number is **time variant**, as the infection rate can decrease over time. If it weren't, epidemics wouldn't come to an end, as the disease's $R₀$ wouldn't reach a value lower than 1. This fact is what allows us to create different epidemic scenarios, as we can **create different $R₀$ trends and analyze how each of them impacts the population**.
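To illustrate the point about a time-varying reproductive number, here is a small forward-Euler sketch in which the infection rate β decays over time; the decay rate and all other numbers are illustrative assumptions, not fitted values:

```python
import numpy as np

N, gamma = 1000.0, 0.07
S, I, R = 990.0, 10.0, 0.0
r0_track = []
for day in range(200):
    beta = 0.4 * np.exp(-0.02 * day)  # infection rate declines over time
    r0_track.append(beta / gamma)     # instantaneous R0 = beta / gamma
    new_inf = beta * I * S / N        # daily new infections
    new_rec = gamma * I               # daily recoveries
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

# Once R0 drops below 1, each case produces fewer than one new case,
# so the infected population can only shrink and the epidemic dies out.
```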
In the first model built, the definition of the parameters is up to the user and they are not time dependent, i.e., we have a deterministic and time-invariant system. This means that if we run a simulation with the same initial populations and parameters 100 times, we will get 100 identical results.
Evidently, this is not how the real world works, and it would not allow us to reach our goal of creating multiple spread scenarios, so it's quite important that stochasticity is added to the model.
## SIR model stock and flow diagram
Before properly building the model, it's essential to graphically represent it so that our problem is conceptually understood.
The following stock and flow diagram shows how the information flows through the SIR model stocks of populations.

## SIR model mechanism
As we are initially building a model with considerably low complexity, we will not consider exogenous processes (at least for now). One example of an exogenous process that impacts the population distribution between compartments is the availability of hospital beds and respirators. If the health system in a certain location is overwhelmed, the disease's death rate tends to increase.
In this model, the behaviors of the SIR model are simply going to be **infected and recovered growth**, as those are the two phenomena controlled by us (as seen in the equations, the susceptible population only changes by infected growth, so there's no behavior related to susceptible growth).
The infected growth causes a decrease in the susceptible population, as previously susceptible individuals become infected; for obvious reasons, it also increases the infected population.
Similarly, the recovered growth causes a decrease in the infected population, as previously infected individuals become recovered, and it increases the recovered population.

## Simulation results
The model's simulation was created given the following parameters and initial state variables:
- **Infection rate** ( $\beta$ ): 0.4 (this means an infected individual infects another individual every 2.5 days);
- **Recovering rate** ($\gamma$): 0.07 (as 𝛾 = 1 / recovery time, this parameter is calculated using an average recovery time of 14 days. So, $\gamma$ = 1 / 14 $\approx$ 0.07);
- **Reproductive number** ($R₀$): $\beta$ / $\gamma$ $\approx$ 5.71;
- **Susceptible population**: 990;
- **Infected population**: 10;
- **Recovered population**: 0.
When plotting a graph of the population evolution over time, some relevant observations can be made:
- From around timesteps 10 to 30, the susceptible population decreases drastically until it almost reaches zero;
- The infected population grows until it reaches its peak around timestep 20. Afterwards, it decreases until it reaches zero;
- The recovered population doesn't grow at such high rates in the beginning, which is explained by the average recovering time used to calculate $\gamma$. From timestep 20, it acquires a logarithmic tendency, until all the population is considered recovered.
The system's behavior is close to expected, as susceptible population's highest **decrease rates** match infected population's highest **increase rates** and recovered population gets close to its peak around 20 days after infected's peak.
```
sir = experiments[simulation == 0]
plt.rcParams["figure.figsize"]=20,5
fig, ax = plt.subplots()
ax.plot(sir['timestep'], sir['susceptible'], label='susceptible')
ax.plot(sir['timestep'], sir['infected'], label='infected')
ax.plot(sir['timestep'], sir['recovered'], label='recovered')
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Put a title on the plot
plt.title("Population evolution over time in SIR model ($R₀$ = 5.71)", fontsize=16)
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xlabel("Time (days)")
ax.set_ylabel("People")
plt.show()
```
# SEIR model
The SIR model can give us a decent analysis of the behavior of an infectious disease, but its simplicity limits it. Many infections have a significant incubation period, during which individuals have been infected but neither show symptoms nor are capable of infecting other individuals. Because of that, the **SEIR** model can represent them in a better way.
The SEIR model is described by the following equations:
\begin{aligned}{\frac {d}{dt}Susceptible}&= - \beta * Infected * {\frac {Susceptible}{Total Population}}
\\{\frac {d}{dt}Exposed}&=\beta * Infected * {\frac {Susceptible}{Total Population}} - \delta * Exposed
\\{\frac{d}{dt}Infected}&=\delta * Exposed - \gamma * Infected
\\{\frac {d}{dt}Recovered}&=\gamma*Infected
\end{aligned}
Where the parameters are:
- **Infection rate** ($\beta$): expected amount of people an infected person infects per day
- **Recovering rate** ($\gamma$): the proportion of infected people recovering per day ($\gamma$ = 1 / recovery time in days)
- **Exposure rate** ($\delta$): expected rate that exposed people turn into infected ($\delta$ = 1 / incubation period in days)
As seen above, SEIR also considers the disease's death rate negligible. It's also notable that its equations are very similar to SIR's. Besides adding an equation that represents the exposed population, SEIR also differs in the infected population equation.
While SIR considered the difference between **the amount of individuals infected by an infected person** and the amount of infected individuals recovering per day, SEIR considers the difference between **the amount of exposed individuals turned into infected** and the amount of infected individuals recovering per day.
When conceptually analyzing the meaning of each compartment, it becomes very simple to understand this change in the equation.
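The SEIR equations can be integrated the same way as SIR. This sketch uses `scipy.integrate.odeint` with the parameter values of this notebook's SEIR simulation (β = 1, γ = 0.25, δ = 1/3, initial populations 990/10/0/0); again, it is an illustrative stand-in, not the cadCAD implementation:

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, gamma, delta, N):
    """Right-hand side of the SEIR equations."""
    S, E, I, R = y
    dS = -beta * I * S / N
    dE = beta * I * S / N - delta * E
    dI = delta * E - gamma * I
    dR = gamma * I
    return dS, dE, dI, dR

N = 1000
y0 = (990, 10, 0, 0)           # susceptible, exposed, infected, recovered
t = np.linspace(0, 100, 101)   # 100 days, one sample per day
sol = odeint(seir, y0, t, args=(1.0, 0.25, 1 / 3, N))
S, E, I, R = sol.T
```

The incubation lag shows up directly: the infected curve peaks after the exposed curve, since individuals must pass through compartment E first.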
## SEIR model stock and flow diagram
Differently from SIR model, the population in SEIR doesn't flow directly from the susceptible stock to the infected one. Firstly, the exposed growth makes them flow to the exposed stock; after the incubation period, the infected growth makes them flow to the infected stock and, after the recovery time, the population finally flows to the recovered stock.

## SEIR model mechanism
As seen in the equations and the stock and flow diagram, the difference between both models' mechanisms is that SEIR has an exposed growth behavior that triggers a decrease in the susceptible population and an increase in the exposed population. Besides that, infected growth causes a decrease in the exposed population, differently from SIR, where it decreased the susceptible one. Exogenous processes still weren't added to the model.

## Simulation results
The model's simulation was created given the following parameters and initial state variables:
- **Infection rate** ( $\beta$ ): 1 (this means an infected individual infects another individual once a day);
- **Recovering rate** ( $\gamma$ ): 0.25 (differently from SIR model, this parameter is calculated using an average recovery time of 4 days. So, $\gamma$ = 1 / 4 = 0.25);
- **Reproductive number** ($R₀$): $\beta$ / $\gamma$ = 4;
- **Exposure rate** ($\delta$): 0.333 (as $\delta$ = 1 / incubation period in days, this parameter is calculated using an average incubation period of 3 days. So, $\delta$ = 1 / 3 $\approx$ 0.333);
- **Susceptible population**: 990;
- **Exposed population**: 10;
- **Infected population**: 0;
- **Recovered population**: 0.
As we can see, the parameters are very different from those used in the SIR simulation. This was done in order to see the difference these parameters make on the system's behavior. As said at the beginning of the notebook, the differential of using cadCAD is the ability to create varied scenarios of a system.
The graph shows some notable differences from the one shown for the SIR model:
- The susceptible population decreases at a low rate for slightly longer than in SIR, beginning to decrease considerably around the 15th day;
- The exposed population reaches its highest increase rates approximately at the same time that the susceptible population begins to decrease, but because of the high $\delta$ used, the curve is quite flat;
- The infected population shows a very similar behavior to the exposed one, with a little "delay" (justified by the incubation period) and a higher peak (justified by the recovering rate being lower than the exposure rate). It is also much flatter than the one presented in SIR, which shows the influence of $R₀$ on the system's behavior;
- The recovered population reaches its peak considerably earlier than in SIR, with a much higher growth rate between timesteps 15 and 30.
```
seir = experiments[simulation == 1]
fig, ax = plt.subplots()
plt.rcParams["figure.figsize"]=20,5
ax.plot(seir['timestep'], seir['exposed'], label='exposed')
ax.plot(seir['timestep'], seir['susceptible'], label='susceptible')
ax.plot(seir['timestep'], seir['infected'], label='infected' )
ax.plot(seir['timestep'], seir['recovered'], label='recovered')
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Put a title on the plot
plt.title("Population evolution over time in SEIR model ($R₀$ = 4.0)", fontsize=16)
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xlabel("Time (days)")
ax.set_ylabel("People")
plt.show()
```
# SEIRD model
As we know, some diseases also have a significant death rate, such as measles, Ebola and SARS. Because of that, SIR and SEIR models can be considerably inaccurate when representing them. The SEIRD model represents them better, as it places individuals who died of the disease in compartment **D**.
The SEIRD model is based on the following equations:
\begin{aligned}{\frac {d}{dt}Susceptible}&= - \beta * Infected * {\frac {Susceptible}{Total Population}}
\\{\frac {d}{dt}Exposed}&=\beta * Infected * {\frac {Susceptible}{Total Population}} - \delta * Exposed
\\{\frac{d}{dt}Infected}&=\delta * Exposed - (1 - \alpha) * \gamma * Infected - \alpha * \rho * Infected
\\{\frac {d}{dt}Recovered}&= (1 - \alpha) * \gamma *Infected
\\{\frac {d}{dt}Dead}&=\alpha * \rho * Infected\end{aligned}
Where the parameters are:
- **Infection rate** ($\beta$): expected amount of people an infected person infects per day
- **Recovering rate** ($\gamma$): the proportion of infected people recovering per day ($\gamma$ = 1 / recovery time in days)
- **Exposure rate** ($\delta$): expected rate that exposed people turn into infected ($\delta$ = 1 / incubation period in days)
- **Death proportion rate** ($\rho$): rate at which infected people die per day ($\rho$ = 1 / days from infection until death)
- **Death rate** ($\alpha$): the probability an infected person has of dying
We can again see that the equations are similar to those of the previous models. Besides adding an equation for the dead population, the differences appear in the infected and recovered equations, where the **death rate is now considered**.
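A single integration step of these equations can be written directly in Python (an illustrative forward-Euler sketch; the notebook's cadCAD model splits these flows into separate behaviors and mechanisms):

```python
def seird_step(S, E, I, R, D, beta, gamma, delta, alpha, rho):
    """Advance the SEIRD equations above by one day (forward Euler)."""
    N = S + E + I + R + D                      # total population (assumed to include the dead)
    newly_exposed = beta * I * S / N
    newly_infected = delta * E
    newly_recovered = (1 - alpha) * gamma * I  # survivors recover...
    newly_dead = alpha * rho * I               # ...while a fraction alpha dies
    return (S - newly_exposed,
            E + newly_exposed - newly_infected,
            I + newly_infected - newly_recovered - newly_dead,
            R + newly_recovered,
            D + newly_dead)
```

Iterating `seird_step` advances the system one day at a time while keeping the total population constant.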
## SEIRD model stock and flow diagram
The stock and flow diagram shows that, in SEIRD model, the population in the infected stock can flow to two population stocks (dead or recovered).

## SEIRD model mechanism
The difference between the SEIR and SEIRD mechanisms is simply the addition of a death-growth behavior that increases the dead population and decreases the infected population. It's interesting to observe that in SEIRD we have a mechanism (the infected population) that is triggered by three different behaviors, which shows cadCAD's potential for adding complexity to a system.

## Simulation results
We will introduce parameter sweeping in the SEIRD model. This feature will allow us to analyze two scenarios of the disease's spread based on two different reproductive numbers. As the recovering rate is a disease characteristic, it tends to remain constant, while the infection rate depends on several factors other than the disease itself (e.g. social isolation policies). Because of that, in order to get different values for $R₀$, we have to work with different values for $\beta$.
The first model's simulation was created given the following parameters and initial state variables:
- **Infection rate** ( $\beta$ ): 0.4;
- **Recovering rate** ( $\gamma$ ): 0.25;
- **Reproductive number** ($R₀$): $\beta$ / $\gamma$ = 1.6;
- **Exposure rate** ($\delta$): 0.333;
- **Death proportion rate** ($\rho$): 0.111 (as $\rho$ = 1 / days from infection until death, this means the average time from infection until death is 9 days);
- **Death rate** ($\alpha$): 0.01 (this means 1% of infected people will die);
- **Susceptible population**: 9990;
- **Exposed population**: 100;
- **Infected population**: 0;
- **Recovered population**: 0;
- **Dead population**: 0.
When comparing it to SEIR's simulation, we can observe that the change in the reproductive number drastically changes the population curves. The exposed and infected populations took much longer to grow and were much flatter. Similarly, susceptible and recovered populations also took longer to respectively decrease and increase.
Although we aren't considering the death rate as negligible anymore, the dead population grew very little in the first 100 days, which makes sense when we look at the $\rho$ and $\alpha$ values used, which are considerably low.
It's also notable that, unlike in the previous simulations, at the end of the first 100 days there are people who were never exposed or infected, so the susceptible population stays above zero and the recovered population stays below the total (in the other simulations, the entire population had recovered after the first 100 days).
This simulation may represent the spread of a disease such as **common flu** considerably well, as its mean $R₀$ is around 1.3 [[4]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2715422/).
```
seird_1 = experiments[(simulation == 2) & (subset == 0)]
fig, ax = plt.subplots()
plt.rcParams["figure.figsize"]=20,5
ax.plot(seird_1['timestep'], seird_1['exposed'], label='exposed')
ax.plot(seird_1['timestep'], seird_1['susceptible'], label='susceptible')
ax.plot(seird_1['timestep'], seird_1['infected'], label='infected' )
ax.plot(seird_1['timestep'], seird_1['recovered'], label='recovered')
ax.plot(seird_1['timestep'], seird_1['dead'], label='dead')
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Put a title on the plot
plt.title("Population evolution over time in SEIRD model ($R₀$ = 1.6)", fontsize=16)
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xlabel("Time (days)")
ax.set_ylabel("People")
plt.show()
```
For the second SEIRD simulation, the only parameter changed was the infection rate and, consequently, the reproductive number:
- **Infection rate** ($\beta$): 0.2 (this means an individual infects another individual every 5 days);
- **Reproductive number** ($R₀$): $\beta$ / $\gamma$ = 0.2/0.25 = 0.8.
This change allows us to clearly note the relevance of $R₀$ in an infectious disease's spread. When the previous value is halved, the populations change very little from their initial states. The infected population's peak is so low that we can't even see it on the graph. Consequently, the massive majority of the population remains susceptible for the first 100 days and will probably remain so, since the infected population is negligible at that time.
This simulation also shows the importance of keeping a disease's $R₀$ lower than 1, as that is what makes it unable to spread in a population.
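This threshold can be illustrated with the classic final-size relation for SIR-family models, $s_\infty = e^{-R_0(1 - s_\infty)}$, where $s_\infty$ is the fraction of the population still susceptible when the epidemic ends (a standard analytical result, not something computed in this notebook):

```python
import math

def final_susceptible_fraction(r0, iterations=200):
    """Solve s = exp(-r0 * (1 - s)) by fixed-point iteration."""
    s = 0.5
    for _ in range(iterations):
        s = math.exp(-r0 * (1 - s))
    return s

low = final_susceptible_fraction(0.8)   # R0 < 1: essentially nobody gets infected
high = final_susceptible_fraction(1.6)  # R0 > 1: most of the population is hit
```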
```
seird_2 = experiments[(simulation == 2) & (subset == 1)]
fig, ax = plt.subplots()
plt.rcParams["figure.figsize"]=20,5
ax.plot(seird_2['timestep'], seird_2['exposed'], label='exposed')
ax.plot(seird_2['timestep'], seird_2['susceptible'], label='susceptible')
ax.plot(seird_2['timestep'], seird_2['infected'], label='infected' )
ax.plot(seird_2['timestep'], seird_2['recovered'], label='recovered')
ax.plot(seird_2['timestep'], seird_2['dead'], label='dead')
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Put a title on the plot
plt.title("Population evolution over time in SEIRD model ($R₀$ = 0.8)", fontsize=16)
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
# Put labels on axis
ax.set_xlabel("Time (days)")
ax.set_ylabel("People")
plt.show()
```
# Covid-19 SEIR model with stochastic parameters (SEIRBayes)
Now that we have explored the dynamics of an epidemic, we can try to bring our model closer to how the real world works. For this, as said before, we will add stochasticity to its parameters. The method chosen to do this is **Bayesian inference**. As explained on Wikipedia,
> "Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data."
You can read more about Bayesian inference on [Wikipedia's page](https://en.wikipedia.org/wiki/Bayesian_inference).
In our system's case, the epidemiological parameters become variables that have a probability of assuming certain values, while the probability distribution (i.e., **the function that gives the probability of each value's occurrence**) varies as the model evolves.
Thus, the outcomes of our system won't be curves that assume a specific value at each timestep anymore; instead, they will be curves with **a range of possible values over time**. This can be done quite easily in cadCAD with the **Monte Carlo method**.
At each Monte Carlo run, the epidemiological parameters will change according to a random seed specified in the script. This will allow us to have **a stochastic model while keeping it reproducible**. The support script used to create a probability distribution for each population was taken from the [Covid-19 analysis repository](https://github.com/3778/COVID-19) of **3778**, a Brazilian tech company that uses data science to develop healthcare solutions.
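The conversion from an interval to log-normal parameters can be sketched as below, under the assumption that the bounds delimit a 95% central interval (this illustrates the idea, but is not necessarily the exact method used in the 3778 script):

```python
import math
import random

lower, upper = 2.5, 4.0            # e.g. bounds for R0
z = 1.96                           # approx. normal quantile for a 95% central interval
mu = (math.log(lower) + math.log(upper)) / 2
sigma = (math.log(upper) - math.log(lower)) / (2 * z)

rng = random.Random(42)            # fixed seed: stochastic yet reproducible
r0_samples = [rng.lognormvariate(mu, sigma) for _ in range(10)]
```

Re-running with the same seed reproduces the exact same draws, which is what keeps the Monte Carlo simulation reproducible.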
In order to get a plot that shows the probability distribution of each population over time, we will need to do some data processing. The functions `take_max()`, `take_min()` and `take_mean()` are used, respectively, to get the maximum, minimum and mean values of each state variable over time. Afterwards, `stochastic_plot()` plots the mean lines and shades the range of each population over time.
```
def clean_df(df):
df.reset_index(inplace=True)
df.drop(['index'], axis=1, inplace=True)
def take_max(df):
max_ = list()
for row in df:
max_.append(row.max())
return max_
def take_min(df):
min_ = list()
for row in df:
min_.append(row.min())
return min_
def take_mean(df):
mean_ = list()
for row in df:
mean_.append(row.sum()/len(row))
return mean_
def stochastic_plot(dfs):
susceptible = []
exposed = []
infected = []
recovered = []
for df in dfs:
clean_df(df)
susceptible.append(df['susceptible'].values)
exposed.append(df['exposed'].values)
infected.append(df['infected'].values)
recovered.append(df['recovered'].values)
susceptible = np.transpose(susceptible)
exposed = np.transpose(exposed)
infected = np.transpose(infected)
recovered = np.transpose(recovered)
raw_data = {'susceptible': list(susceptible[:]),
'exposed': list(exposed[:]),
'infected': list(infected[:]),
'recovered': list(recovered[:])
}
result = pd.DataFrame(raw_data,
columns=['susceptible',
'exposed',
'infected',
'recovered'])
result['timestep'] = dfs[0]['timestep']
plt.rcParams["figure.figsize"]=20,5
fig, ax = plt.subplots()
ax.plot(result['timestep'], take_mean(result['susceptible']), label='susceptible')
ax.fill_between(result['timestep'],
take_min(result['susceptible']),
take_max(result['susceptible']), alpha=0.2)
ax.plot(result['timestep'], take_mean(result['exposed']), label='exposed')
ax.fill_between(result['timestep'],
take_min(result['exposed']),
take_max(result['exposed']), alpha=0.2)
ax.plot(result['timestep'], take_mean(result['infected']), label='infected')
ax.fill_between(result['timestep'],
take_min(result['infected']),
take_max(result['infected']), alpha=0.2)
ax.plot(result['timestep'], take_mean(result['recovered']), label='recovered')
ax.fill_between(result['timestep'],
take_min(result['recovered']),
take_max(result['recovered']), alpha=0.2)
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.ticklabel_format(style='plain')
# Put a title on the plot
plt.title("Population evolution over time in SEIR model (stochastic simulation)", fontsize=16)
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xlabel("Time (days)")
ax.set_ylabel("People")
plt.show()
```
## Simulation results
As the model equations and mechanisms remain the same as those of the deterministic SEIR model, we will move straight to its results. Since our parameters are now random variables, four inputs will be given for each in order to characterize its distribution: **lower bound**, **upper bound**, **probability density** and **distribution family**. The probability density used was the same for all variables (0.95), as well as the distribution family (log-normal distribution). For the first tests, we decided to run **2 simulations with 10 Monte Carlo runs each**. Besides that, **the recovering ($\gamma$) and exposure ($\delta$) rate parameters were replaced by the recovering and incubation periods**, which are simply their reciprocals. The bounds and initial state variables of the first simulation are listed below:
- **Infectious period ($1/\gamma$)**: Lower bound: 7.0; upper bound: 14.0;
- **Incubation period ($1/\delta$)**: Lower bound: 4.1; upper bound: 7.0;
- **Reproductive number ($R₀$)**: Lower bound: 2.5; upper bound: 4;
- **Susceptible population**: 999990;
- **Exposed population**: 10;
- **Infected population**: 10;
- **Recovered population**: 0;
```
from stochastic_seir.config import MONTE_CARLO_RUNS
dataframe = experiments[simulation == 3]
dfs = np.array_split(dataframe, MONTE_CARLO_RUNS)
stochastic_plot(dfs)
```
The only change made for the second simulation was to $R₀$'s bounds. As $\delta$ and $\gamma$ are more stable parameters (they differ from one disease to another, but usually vary little for the same disease in different scenarios), keeping them at the same values makes sense, since our main objective is to analyze how political and geographical factors may affect a disease's spread.
The bounds of the second simulation are listed below:
- **Infectious period ($1/\gamma$)**: Lower bound: 7.0; upper bound: 14.0;
- **Incubation period ($1/\delta$)**: Lower bound: 4.1; upper bound: 7.0;
- **Reproductive number ($R₀$)**: Lower bound: 4; upper bound: 6;
- **Susceptible population**: 999990;
- **Exposed population**: 10;
- **Infected population**: 10;
- **Recovered population**: 0.
By analyzing both simulations, we can clearly see that the different probability distributions affected not only the population means over time, but also their spread, since the interval chosen for $R₀$ in the second simulation had a lower relative amplitude than the one in the first.
```
dataframe = experiments[simulation == 4]
dfs = np.array_split(dataframe, MONTE_CARLO_RUNS)
stochastic_plot(dfs)
```
# MNIST Image Classification with TensorFlow on Cloud AI Platform
This notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras).
## Learning objectives
1. Understand how to build a Dense Neural Network (DNN) for image classification
2. Understand how to use dropout (DNN) for image classification
3. Understand how to use Convolutional Neural Networks (CNN)
4. Know how to deploy and use an image classification model using Google Cloud's [Vertex AI](https://cloud.google.com/vertex-ai/)
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/2_mnist_models.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
First things first. Configure the parameters below to match your own Google Cloud project details.
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Here we'll show the currently installed version of TensorFlow
import tensorflow as tf
print(tf.__version__)
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn" # "linear", "cnn", "dnn_dropout", or "dnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.6" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
```
## Building a dynamic model
In the previous notebook, <a href="1_mnist_linear.ipynb">1_mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.
The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.
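The resulting layout looks roughly like this (the trainer files are written in the cells below; `test.py` and the `Dockerfile` appear later in the lab):

```
mnist_models/
├── Dockerfile
└── trainer/
    ├── __init__.py
    ├── task.py
    ├── util.py
    ├── model.py
    └── test.py
```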
Let's start with the trainer file first. This file parses command line arguments to feed into the model.
```
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
    model_layers = model.get_layers(args.model_type)
    image_model = model.build_model(model_layers, output_path)
    model_history = model.train_and_evaluate(
        image_model, args.epochs, args.steps_per_epoch, output_path)
if __name__ == '__main__':
main()
```
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
```
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
```
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The file below has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers` and compile it in `build_model`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.
**TODO 1**: Define the Keras layers for a DNN model
**TODO 2**: Define the Keras layers for a dropout model
**TODO 3**: Define the Keras layers for a CNN model
Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
```
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dense(nclasses),
Softmax()
],
'dnn_dropout': [
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dropout(dropout_rate),
Dense(nclasses),
Softmax()
],
'cnn': [
Conv2D(num_filters_1, kernel_size=kernel_size_1,
activation='relu', input_shape=(WIDTH, HEIGHT, 1)),
MaxPooling2D(pooling_size_1),
Conv2D(num_filters_2, kernel_size=kernel_size_2,
activation='relu'),
MaxPooling2D(pooling_size_2),
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dropout(dropout_rate),
Dense(nclasses),
Softmax()
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
```
## Local Training
With everything set up, let's run the code locally to test it. Some of the previous tests have been copied over into a testing script, `mnist_models/trainer/test.py`, to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch, respectively.
Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
```
!python3 -m mnist_models.trainer.test
```
Now that we know that our models are working as expected, let's run them on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can first run the code as a python module locally using the command line.
The cell below transfers some of our variables to the command line and creates a job directory whose name includes a timestamp.
```
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
```
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
```
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
```
## Training on the cloud
Since we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
```
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
```
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
```
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
```
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our docker image using the `master-image-uri` flag.
```
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
```
The AI Platform job can take around 10 minutes to complete. Enable the **AI Platform Training & Prediction API**, if required.
```
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
```
Can't wait to see the results? Run the code below and copy the output into the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) to follow.
## Deploying and predicting with model
Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The command below uses the Keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.
Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
```
%%bash
gcloud config set ai_platform/region global
```

```
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.6
```
To predict with the model, let's take one of the example images.
**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
```
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
```
Finally, we can send it to the prediction service. The output will have a 1 at the index of the digit the model predicts. Congrats! You've completed the lab!
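Such a one-hot-style response can be decoded with a simple argmax (the values here are illustrative, not actual service output):

```python
# One row of predictions over the 10 digit classes (illustrative values)
prediction = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
predicted_digit = max(range(len(prediction)), key=lambda i: prediction[i])
print(predicted_digit)
```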
```
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
```
Copyright 2022 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Calibration Procedure
* Compute center offset:
- Set $\lambda_{\rm center}$ to set of known spectral lines
- Measure pixel position of each:
- average each to determine central pixel $n_o$
| $\lambda_{\rm center}$ | Pixel |
| ----------------------: |:------:|
| 0 nm | 5.2 |
| 445 nm | 6.22 |
| 901 nm | 3.1 |
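The central pixel $n_o$ is simply the average of the measured positions in the table:

```python
pixel_positions = [5.2, 6.22, 3.1]   # measured pixel positions from the table above
n0 = sum(pixel_positions) / len(pixel_positions)
print(n0)
```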
* Compute spectrometer calibration angles/length ($\ f_L, \delta, \gamma$)
* Move known spectral line $\lambda_o$ to left and right sides of detector
* record $\lambda_{\rm center}$ and pixel position for each
* Compute best fit of $\ f_{\rm calib}$
| $\lambda_o$ | Side | $\lambda_{\rm center}$| Pixel |
| ------------- | ---- |:----------------------|-------:|
| 809.4 nm | R |729.4910 nm |508 |
| 809.4 nm | L |899.5830 nm | 4 |
| ... | ... | ... |... |
# Optimization Function
Optimize for 3 parameters:
* $f_L$: Focal length of spectrometer
* $\delta$: Detector angle (The angle of the image plane relative to the plane perpendicular to the spectrograph focal axis at the center of the image plane)
* $\gamma$: inclusion angle
From the experiment:
* $n = n_{px} - n_o$: Pixel from central pixel
* $\lambda_{\rm center}$: Wavelength of center pixel
* $\lambda_p$: Wavelength of pixel $n$
Fixed Constants:
* $m$: Diffraction order (typically one)
* $x_{\rm pixel}$: pixel size
- $d_{grating}$: Grating pitch (1 / (grooves/mm))
The residual is a function of ($\lambda_{\rm center}$, $\lambda_p$, $n$, $f_L$, $\delta$, $\gamma$).
We measure the pixel position ($n$) of a known wavelength ($\lambda_p$) for multiple peaks and spectrometer positions and find the best-fit parameters $f_L, \delta, \gamma$:
$$ \lambda_p = f_{\rm calib} ( n, \lambda_{\rm center},
\underbrace{m, x_{\rm pixel}, d_{\rm grating}}_{\rm spec\ params},
\overbrace{f_L,\ \ \delta,\ \ \gamma}^{\rm Calibration\ params} ) $$
$$ \lambda_p = \frac{d_{\rm grating}}{m} \cdot \left[ \sin( \psi - \frac{\gamma}{2}) + \sin(\psi+\frac{\gamma}{2} + \eta) \right]$$
Where
$$ \psi = \arcsin \left[ \frac{ m\ \lambda_{\rm center} } { 2\ d_{\rm grating} \cos(\frac{\gamma}{2})} \right] $$
$$ \eta = \arctan \left[ \frac{ n\ x_{pixel} \cos{\delta}} {f_L + n\ x_{pixel} \sin(\delta)} \right]$$
$$n = n_{px} - n_o$$
```
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from pprint import pprint
%matplotlib notebook
def wl_p_calib(px, n0, wl_center, m_order, d_grating, x_pixel, f, delta, gamma):
#consts
#d_grating = 1./150. #mm
#x_pixel = 16e-3 # mm
#m_order = 1 # diffraction order, unitless
n = px - n0
psi = np.arcsin( m_order* wl_center / (2*d_grating*np.cos(gamma/2.)))
eta = np.arctan(n*x_pixel*np.cos(delta) / (f+n*x_pixel*np.sin(delta)))
return ((d_grating/m_order)
*(np.sin(psi-0.5*gamma)
+ np.sin(psi+0.5*gamma+eta)))
from scipy.optimize import least_squares
def fit_residual(
# optimization parameters
opt_params,
# other params and data
px, n0, wl_center, m_order, d_grating, x_pixel,
wl_actual
):
(f, delta, gamma,) = opt_params
wl_model = wl_p_calib(px, n0, wl_center, m_order, d_grating, x_pixel, f, delta, gamma)
return wl_model - wl_actual
initial_guess = (300, 0, 0)
# Template for the fit call. `A` below is a placeholder for the measurement
# arrays (pixel positions, center wavelengths and known line wavelengths)
# supplied in the grating-specific calibration cells further down; note that
# `fit_residual` also needs `wl_actual`.
# kwargs = dict(
#     px=A,
#     n0=256,
#     wl_center=A,
#     m_order=1,
#     d_grating=1/150.,
#     x_pixel=16e-3,
#     wl_actual=A,
# )
# result = least_squares(fit_residual, initial_guess, kwargs=kwargs)
```
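A useful sanity check of the calibration formula: at the central pixel ($n = 0$) we have $\eta = 0$, and the expression collapses to $\lambda_p = \lambda_{\rm center}$ for any $\gamma$. A self-contained pure-Python version (mirroring `wl_p_calib` above, with illustrative parameter values; all lengths in mm):

```python
import math

def wl_p(n, wl_center, m_order=1, d_grating=1 / 150., x_pixel=16e-3,
         f=300.0, delta=0.0, gamma=0.5):
    # Same formula as wl_p_calib; lengths in mm (wavelengths are nm * 1e-6)
    psi = math.asin(m_order * wl_center / (2 * d_grating * math.cos(gamma / 2)))
    eta = math.atan(n * x_pixel * math.cos(delta)
                    / (f + n * x_pixel * math.sin(delta)))
    return (d_grating / m_order) * (math.sin(psi - gamma / 2)
                                    + math.sin(psi + gamma / 2 + eta))

center = wl_p(0, 500e-6)        # n = 0 should recover the 500 nm center wavelength
off_center = wl_p(100, 500e-6)  # pixels to one side map to longer wavelengths
```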
# CL calibration
# grating 3 (150 g/mm Bz 500)
```
# grating 3 (150 g/mm Bz 500)
wl_center_data = np.array([
[0, 814],
[253.652, 813.5],
[435.833, 815.25],
[546.074, 815.25],
[696.543, 813.5],
[763.511, 813.75],
[912.297, 815.0],
])
n0 = np.mean(wl_center_data[:,1])
n0
plt.figure(1)
plt.plot(wl_center_data[:,0], wl_center_data[:,1])
plt.axhline(n0)
print(wl_center_data[:,0] - 250)
print(wl_center_data[:,0] + 250)
dispersion_data = np.array([
#wl_actual, wl_center, pixel
[253.652, 503, 84],
[435.833, 685, 79],
[546.074, 796, 75],
[696.543, 946, 75.5],
[763.511, 1013, 73],
[912.297, 1162, 68],
[253.652, 3, 1546],
[435.833, 185, 1549],
[546.074, 296, 1548],
[696.543, 446, 1554.5],
[763.511, 513, 1556],
[912.297, 662, 1556],
[ 0.000, 0, 814],
[253.652, 253.652, 813.5],
[435.833, 435.833, 815.25],
[546.074, 546.074, 815.25],
[696.543, 696.543, 813.5],
[763.511, 763.511, 813.75],
[912.297, 912.297, 815.0],
])
initial_guess = (300,1,0)
kwargs = dict(
px=dispersion_data[:,2],
n0=np.mean(wl_center_data[:,1]),
wl_center=dispersion_data[:,1]*1e-6,
m_order=1,
d_grating=1/150.,
x_pixel=16e-3,
wl_actual=dispersion_data[:,0]*1e-6
)
result = least_squares(fit_residual, initial_guess, kwargs=kwargs)
result.x
kwargs = dict(
px=dispersion_data[:,2],
#px=wl_center_data[:,1],
n0=np.mean(wl_center_data[:,1]),
wl_center=dispersion_data[:,1]*1e-6,
#wl_center=wl_center_data[:,0]*1e-6,
m_order=1,
d_grating=1/150.,
x_pixel=16e-3,
#wl_actual=dispersion_data[:,0]*1e-6,
f = result.x[0],
delta = result.x[1],
gamma = result.x[2],
)
wl_p_calib(**kwargs)*1e6 - dispersion_data[:,0]
#wl_p_calib(**kwargs)*1e6 - wl_center_data[:,0]
wl_p_calib(px=815, n0=n0, wl_center=300e-6, m_order=1, d_grating=1/150., x_pixel=16e-3, f=300, delta=0, gamma=0)
# grating 1 (300 g/mm Bz 500)
wl_center_data = np.array([
[0, 825],
[253.652, 825.5],
[435.833, 826.5],
[546.074, 825.5],
[696.543, 826.75],
[763.511, 825.5],
[912.297, 827.0],
])
n0 = np.mean(wl_center_data[:,1])
n0
plt.figure(2)
plt.plot(wl_center_data[:,0], wl_center_data[:,1])
plt.axhline(n0)
print(wl_center_data[:,0] - 120)
print(wl_center_data[:,0] + 130)
dispersion_data = np.array([
#wl_actual, wl_center, pixel
[253.652, 384, 55],
[435.833, 566, 49.5],
[546.074, 676, 47.5],
[696.543, 826, 41],
[763.511, 894, 34.5],
[912.297, 1042, 28.5],
[253.652, 134, 1526],
[435.833, 316, 1535],
[546.074, 426, 1542],
[696.543, 577, 1544.5],
[763.511, 644, 1547.5],
[912.297, 792, 1558.5],
[0, 0, 825],
[253.652, 253.652, 825.5],
[435.833, 435.833, 826.5],
[546.074, 546.074, 825.5],
[696.543, 696.543, 826.75],
[763.511, 763.511, 825.5],
[912.297, 912.297, 827.0],
])
initial_guess = (300,1,0.1)
kwargs = dict(
px=dispersion_data[:,2],
n0=np.mean(wl_center_data[:,1]),
wl_center=dispersion_data[:,1]*1e-6,
m_order=1,
d_grating=1/300.,
x_pixel=16e-3,
wl_actual=dispersion_data[:,0]*1e-6
)
result = least_squares(fit_residual, initial_guess, kwargs=kwargs)
result.x
kwargs = dict(
px=dispersion_data[:,2],
#px=wl_center_data[:,1],
n0=np.mean(wl_center_data[:,1]),
wl_center=dispersion_data[:,1]*1e-6,
#wl_center=wl_center_data[:,0]*1e-6,
m_order=1,
d_grating=1/300.,
x_pixel=16e-3,
#wl_actual=dispersion_data[:,0]*1e-6,
f = result.x[0],
delta = result.x[1],
gamma = result.x[2],
)
wl_p_calib(**kwargs)*1e6 - dispersion_data[:,0]
#wl_p_calib(**kwargs)*1e6 - wl_center_data[:,0]
```
# Old Version
```
from lmfit import Parameters, minimize
offset_data = np.array([
[0, 5.1940],
[435.8, 441.0860],
[546.1, 551.3630],
[610.8, 616.2860],
[809.4, 814.4940],
])
plt.plot(offset_data[:,0], offset_data[:,1], 'x-')
wl_offset = np.average(offset_data[:,0]-offset_data[:,1])
print(wl_offset, "nm")
```
Dispersion measurements (peak wavelength [nm], side, pixel, spectrometer reading [nm]):
809.4, R, 508, 729.4910
809.4, L, 004, 899.5830
610.8, R, 508, 531.0920
610.8, L, 004, 701.9610
435.8, R, 508, 354.9880
435.8, L, 004, 526.9880
```
D = dispersion_data = np.array([
[809.4, 508, 729.4910],
[809.4, 4, 899.5830],
[610.8, 508, 531.0920],
[610.8, 4, 701.9610],
[435.8, 508, 354.9880],
[435.8, 4, 526.9880]
])
data = dict(
wl = 1e-6*(D[:,2] + wl_offset),
n = D[:,1] - 256,
wl_p = 1e-6*D[:,0],
)
data
def wl_p_func(wl_center, n, f, delta, gamma):
#consts
d_grating = 1./150. #mm
x_pixel = 16e-3 # mm
m_order = 1 # diffraction order, unitless
psi = np.arcsin( m_order* wl_center / (2*d_grating*np.cos(gamma/2)))
eta = np.arctan(n*x_pixel*np.cos(delta) / (f+n*x_pixel*np.sin(delta)))
return ((d_grating/m_order)
*(np.sin(psi-0.5*gamma)
+ np.sin(psi+0.5*gamma+eta)))
def residual(wl_center, wl_p, n, f, delta, gamma):
#print 'wl_center', wl_center.shape
#print 'psi', psi.shape
#print 'eta', eta.shape
residual = -wl_p + wl_p_func(wl_center, n, f, delta, gamma)
return residual
def residual_lmfit(params, x, data):
f = params['f'].value
delta = params['delta'].value
gamma = params['gamma'].value
wl = data['wl']
wl_p = data['wl_p']
n = data['n']
return residual(wl, wl_p, n, f, delta, gamma)
params = Parameters()
params.add('f', value=300, vary=True)#, min=280, max=320, vary=True)
params.add('delta', value=0)#, min=-np.pi/8, max=np.pi/8, vary=True)
params.add('gamma', value=np.pi/6)#, min=0, max=np.pi/4, vary=True)
result = minimize(residual_lmfit, params, args=(0,data,))
print(result.success, result.message)
pprint(params.values())
180*0.0233/np.pi
f_array = np.linspace(280,320, 20)
delta = 0
gamma = np.pi/6.
plt.plot(f_array,
[np.sum((residual(data['wl'], data['wl_p'], data['n'],
f, delta, gamma))**2)
for f in f_array])
f = params['f'].value
delta = params['delta'].value
gamma = params['gamma'].value
pixels = np.arange(-256,256)
#print(wl_p_func(750e-6, pixels, f, delta, gamma)*1e6)
plt.plot(pixels, wl_p_func(750e-6, pixels, f, delta, gamma)*1e6)
plt.figure()
plt.plot(pixels[:-1], np.diff(wl_p_func(750e-6, pixels, f, delta, gamma)*1e6))
```
# Normal distribution
Gaussian, the bell curve
* symmetric
* mean = median = mode
* continuous variables
Examples:
* height and weight of a population
* skull size of newborns
* blood pressure
$$ p(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right] $$
```
import matplotlib.pyplot as plt
SMALL_SIZE = 12
MEDIUM_SIZE = 14
BIGGER_SIZE = 16
# Font Sizes
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rc('figure', figsize = (8, 6)) # Figure Size
import numpy as np
from scipy.stats import norm
import math
mu = 0
variance = 1
sigma = np.sqrt(variance)
x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)
p = 1/np.sqrt(2*math.pi*sigma**2)*np.exp(-(x-mu)**2/(2*sigma**2))
plt.plot(x, norm.pdf(x, mu, sigma), label='scipy')
plt.plot(x, p, 'k:', lw=5, label='formula')
plt.legend();
x = np.linspace(-5, 5, 100)
mu = 0
sigma = 1
plt.plot(x,norm.pdf(x, mu, sigma), 'k:', lw=5,
label=r'$\mu={}, \sigma={}$'.format(mu,sigma))
mu = 1
sigma = 1.5
plt.plot(x,norm.pdf(x, mu, sigma), 'r-.', lw=2,
label=r'$\mu={}, \sigma={}$'.format(mu,sigma))
mu = -1
sigma = .75
plt.plot(x,norm.pdf(x, mu, sigma), 'g', lw=3,
label=r'$\mu={}, \sigma={}$'.format(mu,sigma))
plt.legend();
```
Z-score
$$ z_i = \frac{x_i-\bar{x}}{\sigma} $$
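As a quick numeric check, the z-score formula above can be applied directly with NumPy; `scipy.stats.zscore` follows the same population-standard-deviation convention by default:

```python
import numpy as np
from scipy.stats import zscore

x = np.array([2., 4., 4., 4., 5., 5., 7., 9.])
z_manual = (x - x.mean()) / x.std()  # x.std() uses ddof=0 (population std)
z_scipy = zscore(x)                  # same convention by default

print(z_manual)  # standardized data: mean 0, std 1
```

The standardized values have mean 0 and standard deviation 1, which makes observations from different scales directly comparable.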
# Distributions
* Continuous variables
  * Normal
  * Exponential
  * Gamma
* Discrete variables
  * Binomial
  * Poisson
# Binomial distribution
* discrete variables
* two possible outcomes per trial, success occurring with probability $p$
* a fixed number of independent trials
Examples:
* die rolls
* coin flips
$$ p(k|n,p) = \frac{n!}{k!(n-k)!}p^k(1-p)^{n-k} $$
```
from math import factorial
from scipy.stats import binom
n = 50
p = 0.5
k_vec = np.arange(1,n+1) # target, starts at 1 goes to n, all possible outcomes
def compute_binomial_prob(n,k,p):
return factorial(n)/(factorial(k)*factorial(n-k)) * p**k * (1-p)**(n-k)
P_vec = [compute_binomial_prob(n,k,p) for k in k_vec]
plt.plot(k_vec, binom.pmf(k_vec, n, p), 'r', label='scipy')
plt.plot(k_vec, P_vec, 'k:', lw=5, label='formula')
plt.legend();
```
Ex: https://towardsdatascience.com/fun-with-the-binomial-distribution-96a5ecabf65b
```
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Input variables
# Number of trials
trials = 1000
# Number of independent experiments in each trial
n = 10
# Probability of success for each experiment
p = 0.5
# Function that runs our coin toss trials (Monte Carlo)
# heads is a list of the number of successes from each trial of n experiments
def run_binom(trials, n, p):
heads = []
for i in range(trials):
        tosses = [np.random.random() for _ in range(n)]
        heads.append(len([t for t in tosses if t < p]))  # success with probability p
return heads
# Run the function
heads = run_binom(trials, n, p)
# Plot the results as a histogram
fig, ax = plt.subplots(figsize=(14,7))
ax = sns.distplot(heads, bins=11, label='simulation results')
ax.set_xlabel("Number of Heads",fontsize=16)
ax.set_ylabel("Frequency",fontsize=16);
# Plot the actual binomial distribution as a sanity check
from scipy.stats import binom
x = range(0,11)
plt.plot(x, binom.pmf(x, n, p), 'ro', label='actual binomial distribution')
plt.vlines(x, 0, binom.pmf(x, n, p), colors='r', lw=5, alpha=0.5)
plt.legend()
plt.show()
# Probability of getting 6 heads
runs = 10000
prob_6 = sum([1 for i in np.random.binomial(n, p, size=runs) if i==6])/runs
print('The probability of 6 heads is: ' + str(prob_6))
# Call Center Simulation
# Number of employees to simulate
employees = 100
# Cost per employee
wage = 200
# Number of independent calls per employee
n = 50
# Probability of success for each call
p = 0.04
# Revenue per call
revenue = 100
# Binomial random variables of call center employees
conversions = np.random.binomial(n, p, size=employees)
# Print some key metrics of our call center
print('Average Conversions per Employee: ' + str(round(np.mean(conversions), 2)))
print('Standard Deviation of Conversions per Employee: ' + str(round(np.std(conversions), 2)))
print('Total Conversions: ' + str(np.sum(conversions)))
print('Total Revenues: ' + str(np.sum(conversions)*revenue))
print('Total Expense: ' + str(employees*wage))
print('Total Profits: ' + str(np.sum(conversions)*revenue - employees*wage))
# Number of days to simulate
sims = 1000
sim_conversions = [np.sum(np.random.binomial(n, p, size=employees)) for i in range(sims)]
sim_profits = np.array(sim_conversions)*revenue - employees*wage
# Call Center Simulation (Higher Conversion Rate)
# Number of employees to simulate
employees = 100
# Cost per employee
wage = 200
# Number of independent calls per employee
n = 55
# Probability of success for each call
p = 0.05
# Revenue per call
revenue = 100
# Binomial random variables of call center employees
conversions_up = np.random.binomial(n, p, size=employees)
# Simulate 1,000 days for our call center
# Number of days to simulate
sims = 1000
sim_conversions_up = [np.sum(np.random.binomial(n, p, size=employees)) for i in range(sims)]
sim_profits_up = np.array(sim_conversions_up)*revenue - employees*wage
# Plot and save the results as a histogram
fig, ax = plt.subplots(figsize=(14,7))
ax = sns.distplot(sim_profits, bins=20, label='original call center simulation results')
ax = sns.distplot(sim_profits_up, bins=20, label='improved call center simulation results', color='red')
ax.set_xlabel("Profits",fontsize=16)
ax.set_ylabel("Frequency",fontsize=16)
plt.legend();
```
Ex: https://cmdlinetips.com/2018/03/probability-distributions-in-python/
https://www.probabilisticworld.com/discrete-probability-distributions-overview/
```
# for inline plots in jupyter
%matplotlib inline
# import matplotlib
import matplotlib.pyplot as plt
# import seaborn
import seaborn as sns
# settings for seaborn plotting style
sns.set(color_codes=True)
# settings for seaborn plot sizes
sns.set(rc={'figure.figsize':(4.5,3)})
```
# Uniform distribution
$$ P(x;n) = \frac{1}{n} $$
```
# import uniform distribution
from scipy.stats import uniform
# random numbers from uniform distribution
# Generate 10 numbers from 0 to 10
n = 10000
a = 0
b = 10
data_uniform = uniform.rvs(size=n, loc = a, scale=b) #random variables
ax = sns.distplot(data_uniform,
bins=100,
kde=False, # density plot
color='skyblue',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Uniform ', ylabel='Frequency');
```
# Normal distribution
$$ p(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right] $$
```
from scipy.stats import norm
# generate random numbers from N(0,1)
data_normal = norm.rvs(size=10000,loc=0,scale=1) # loc = mean, scale = std
ax = sns.distplot(data_normal,
bins=100,
kde=False,
color='skyblue',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Normal', ylabel='Frequency');
```
# Bernoulli distribution
Discrete. Outcome: 0 or 1.
\begin{equation}
P(x;p) = \begin{cases}
p &\text{if $x=1$}\\
1-p &\text{if $x=0$}
\end{cases}
\end{equation}
```
# import bernoulli
from scipy.stats import bernoulli
# generate bernoulli
data_bern = bernoulli.rvs(size=10000,p=0.3)
ax= sns.distplot(data_bern,
kde=False,
color="skyblue",
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Bernoulli', ylabel='Frequency');
```
# Binomial distribution
Discrete.
Gives the number of successes in $n$ independent Bernoulli trials.
$$ p(k|n,p) = \frac{n!}{k!(n-k)!}p^k(1-p)^{n-k} $$
\begin{equation}
p(x; p,n) = \binom{n}{x}p^{x}(1-p)^{n-x}
\end{equation}
```
from scipy.stats import binom
binom.rvs(n=10,p=0.5) # sucesses from n=10 Bernoulli trials with p=0.5
data_binom = binom.rvs(n=10,p=0.5,size=10000)
ax = sns.distplot(data_binom,
kde=False,
color='skyblue',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Binomial', ylabel='Frequency');
```
# Poisson distribution
Counts the number of times an event happens in a fixed time interval
* rate of occurrence ($\mu$)
$$ P(x;\mu)=\frac{\mu^x e^{-\mu}}{x!} $$
```
from scipy.stats import poisson
data_poisson = poisson.rvs(mu=3, size=10000)
ax = sns.distplot(data_poisson,
kde=False,
color='green',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Poisson', ylabel='Frequency');
```
# Beta distribution
* Continuous
* a distribution over probabilities (support on $[0, 1]$)
```
from scipy.stats import beta
data_beta = beta.rvs(1, 1, size=10000) # ~ uniform
ax = sns.distplot(data_beta,
kde=False,
bins=100,
color='skyblue',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Beta(1,1)', ylabel='Frequency');
data_beta_a10b1 = beta.rvs(10, 1, size=10000) #skewed right
ax = sns.distplot(data_beta_a10b1,
kde=False,
bins=50,
color='skyblue',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Beta(10,1)', ylabel='Frequency');
data_beta_a1b10 = beta.rvs(1, 10, size=10000) # skewed leftt
ax = sns.distplot(data_beta_a1b10,
kde=False,
bins=100,
color='skyblue',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Beta(1,10)', ylabel='Frequency');
data_beta_a10b10 = beta.rvs(10, 10, size=10000) # ~ normal
ax = sns.distplot(data_beta_a10b10,
kde=False,
bins=100,
color='skyblue',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Beta(10,10)', ylabel='Frequency');
```
# Gamma distribution
```
from scipy.stats import gamma
data_gamma = gamma.rvs(a=5, size=10000)
ax = sns.distplot(data_gamma,
kde=False,
bins=100,
color='skyblue',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Gamma', ylabel='Frequency');
help(gamma)
```
# Lognormal distribution
```
from scipy.stats import lognorm
data_lognorm = lognorm.rvs(0.2, size=10000)
ax = sns.distplot(data_lognorm,kde=False,
bins=100,
color='skyblue',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Log Normal', ylabel='Frequency');
```
# Negative Binomial distribution
* discrete
```
from scipy.stats import nbinom
data_nbinom = nbinom.rvs(10, 0.5, size=10000)
ax = sns.distplot(data_nbinom,
kde=False,
color='skyblue',
hist_kws={"linewidth": 15,'alpha':1})
ax.set(xlabel='Negative Binomial', ylabel='Frequency');
```
# PDF and CDF
PDF: Probability Density Function
* relative likelihood of each value
* non-negative
* integrates to 1
CDF: Cumulative Distribution Function
* cumulative (non-decreasing)
* ranges from 0 to 1
Survivor function = 1 - CDF
ECDF
* empirical estimator of the CDF
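A minimal sketch tying these ideas together for a standard normal sample (the `ecdf` helper here is hand-rolled for illustration, not a library function):

```python
import numpy as np
from scipy.stats import norm

def ecdf(data):
    """Empirical CDF: sorted values vs. fraction of points <= each value."""
    xs = np.sort(data)
    ys = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ys

sample = norm.rvs(size=1000, random_state=0)
xs, ys = ecdf(sample)

# Survivor function = 1 - CDF; scipy exposes it directly as .sf
print(norm.cdf(0) + norm.sf(0))  # 1.0
```

Plotting `xs` against `ys` gives the familiar staircase curve that approaches the theoretical CDF as the sample grows.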
```
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 16})
import numpy as np
import torch
import os
import networkx as nx
co_dir = "../results/2021-09-07_21-01_dist_mnist_complete"
cy_dir = "../results/2021-09-05_14-27_dist_mnist_v3"
r3_dir = "../results/2021-09-07_20-11_dist_mnist_random3"
r8_dir = "../results/2021-09-07_19-01_dist_mnist_random8"
solo_dir = "../results/2021-09-04_16-01_dist_mnist_v3"
co_dinno = torch.load(os.path.join(co_dir, "dinno_results.pt"), map_location=torch.device("cpu"))
co_dsgt = torch.load(os.path.join(co_dir, "dsgt_results.pt"), map_location=torch.device("cpu"))
co_dsgd = torch.load(os.path.join(co_dir, "dsgd_results.pt"), map_location=torch.device("cpu"))
cy_dinno = torch.load(os.path.join(cy_dir, "dinno_results.pt"), map_location=torch.device("cpu"))
cy_dsgt = torch.load(os.path.join(cy_dir, "dsgt_results.pt"), map_location=torch.device("cpu"))
cy_dsgd = torch.load(os.path.join(cy_dir, "dsgd_results.pt"), map_location=torch.device("cpu"))
r3_dinno = torch.load(os.path.join(r3_dir, "dinno_results.pt"), map_location=torch.device("cpu"))
r3_dsgt = torch.load(os.path.join(r3_dir, "dsgt_results.pt"), map_location=torch.device("cpu"))
r3_dsgd = torch.load(os.path.join(r3_dir, "dsgd_results.pt"), map_location=torch.device("cpu"))
r8_dinno = torch.load(os.path.join(r8_dir, "dinno_results.pt"), map_location=torch.device("cpu"))
r8_dsgt = torch.load(os.path.join(r8_dir, "dsgt_results.pt"), map_location=torch.device("cpu"))
r8_dsgd = torch.load(os.path.join(r8_dir, "dsgd_results.pt"), map_location=torch.device("cpu"))
results_solo = torch.load(os.path.join(solo_dir, "solo_results.pt"), map_location=torch.device("cpu"))
co_G = nx.read_gpickle(os.path.join(co_dir, "graph.gpickle"))
co_fied = nx.linalg.algebraicconnectivity.algebraic_connectivity(co_G)
cy_G = nx.read_gpickle(os.path.join(cy_dir, "graph.gpickle"))
cy_fied = nx.linalg.algebraicconnectivity.algebraic_connectivity(cy_G)
r3_G = nx.read_gpickle(os.path.join(r3_dir, "graph.gpickle"))
r3_fied = nx.linalg.algebraicconnectivity.algebraic_connectivity(r3_G)
r8_G = nx.read_gpickle(os.path.join(r8_dir, "graph.gpickle"))
r8_fied = nx.linalg.algebraicconnectivity.algebraic_connectivity(r8_G)
def plot_vals(ax, aca, acb, acc, results_solo, title):
t = torch.arange(aca.shape[0]) * 20
cent_acc = 0.985
ax.plot(t, cent_acc * torch.ones_like(t), c=cent_color,
linewidth=2, linestyle=":", label="Centralized")
for i in range(9):
ax.plot(t, results_solo[i]["validation_accuracy"] * np.ones_like(t),
linewidth=2, color=solo_color, linestyle=":")
ax.plot(t, results_solo[9]["validation_accuracy"] * np.ones_like(t),
linewidth=2, color=solo_color, label="Individual", linestyle=":")
ax.plot(t, torch.mean(aca, dim=1), c=dinno_color, linewidth=2, label="DiNNO")
ax.fill_between(t, torch.amax(aca, dim=1), torch.amin(aca, dim=1),
color=dinno_color, alpha=0.5, zorder=3)
ax.plot(t, torch.mean(acb, dim=1), c=dsgt_color, linewidth=2, label="DSGT")
ax.fill_between(t, torch.amax(acb, dim=1), torch.amin(acb, dim=1),
color=dsgt_color, alpha=0.5, zorder=2)
ax.plot(t, torch.mean(acc, dim=1), c=dsgd_color, linewidth=2, label="DSGD")
ax.fill_between(t, torch.amax(acc, dim=1), torch.amin(acc, dim=1),
color=dsgd_color, alpha=0.5, zorder=1)
ax.set_ylim((0.0, 1.0))
ax.set_title(title)
ax.set_xlabel("Communication Round")
ax.grid(zorder=0)
dinno_color="darkorange"
dsgt_color="limegreen"
dsgd_color="purple"
cent_color="indigo"
solo_color="cornflowerblue"
(fig, axs) = plt.subplots(ncols=4, figsize=(20, 5), tight_layout=True)
co_aca = torch.stack(co_dinno["top1_accuracy"])
co_acb = torch.stack(co_dsgt["top1_accuracy"])
co_acc = torch.stack(co_dsgd["top1_accuracy"])
cy_aca = torch.stack(cy_dinno["top1_accuracy"])
cy_acb = torch.stack(cy_dsgt["top1_accuracy"])
cy_acc = torch.stack(cy_dsgd["top1_accuracy"])
r3_aca = torch.stack(r3_dinno["top1_accuracy"])
r3_acb = torch.stack(r3_dsgt["top1_accuracy"])
r3_acc = torch.stack(r3_dsgd["top1_accuracy"])
r8_aca = torch.stack(r8_dinno["top1_accuracy"])
r8_acb = torch.stack(r8_dsgt["top1_accuracy"])
r8_acc = torch.stack(r8_dsgd["top1_accuracy"])
title = "Complete, Fiedler = {:.1f}".format(co_fied)
plot_vals(axs[0], co_aca, co_acb, co_acc, results_solo, title)
axs[0].set_ylabel("Top1 Validation Accuracy")
axs[0].legend(loc=4)
title = "Cycle, Fiedler = {:.1f}".format(cy_fied)
plot_vals(axs[1], cy_aca, cy_acb, cy_acc, results_solo, title)
title = "Random 1, Fiedler = {:.1f}".format(r3_fied)
plot_vals(axs[2], r3_aca, r3_acb, r3_acc, results_solo, title)
title = "Random 2, Fiedler = {:.1f}".format(r8_fied)
plot_vals(axs[3], r8_aca, r8_acb, r8_acc, results_solo, title)
fig.savefig("mnist_four.png", dpi=1000)
ag_dinno = torch.vstack([v[1].reshape(1, -1) for v in r3_dinno["consensus_error"]])
ag_dsgt = torch.vstack([v[1].reshape(1, -1) for v in r3_dsgt["consensus_error"]])
ag_dsgd = torch.vstack([v[1].reshape(1, -1) for v in r3_dsgd["consensus_error"]])
(fig, ax) = plt.subplots(figsize=(10, 8), tight_layout=True)
ax.plot(ag_dinno[1:, :9], c=dinno_color, zorder=3)
ax.plot(ag_dinno[1:, 9], c=dinno_color, label="DiNNO", zorder=3)
ax.plot(ag_dsgt[1:, :9], c=dsgt_color, zorder=2)
ax.plot(ag_dsgt[1:, 9], c=dsgt_color, label="DSGT", zorder=2)
ax.plot(ag_dsgd[1:, :9], c=dsgd_color, zorder=1)
ax.plot(ag_dsgd[1:, 9], c=dsgd_color, label="DSGD", zorder=1)
ax.set_yscale("log")
ax.grid()
ax.set_xlabel("Communication Round")
ax.set_ylabel("Node Disagreement")
ax.legend(loc=1)
fig.savefig("mnist_agree.svg")
```
```
%matplotlib inline
```
# Pyplot tutorial
An introduction to the pyplot interface.
Intro to pyplot
===============
:mod:`matplotlib.pyplot` is a collection of functions
that make matplotlib work like MATLAB.
Each ``pyplot`` function makes
some change to a figure: e.g., creates a figure, creates a plotting area
in a figure, plots some lines in a plotting area, decorates the plot
with labels, etc.
In :mod:`matplotlib.pyplot` various states are preserved
across function calls, so that it keeps track of things like
the current figure and plotting area, and the plotting
functions are directed to the current axes (please note that "axes" here
and in most places in the documentation refers to the *axes*
`part of a figure <figure_parts>`
and not the strict mathematical term for more than one axis).
<div class="alert alert-info"><h4>Note</h4><p>the pyplot API is generally less-flexible than the object-oriented API.
Most of the function calls you see here can also be called as methods
from an ``Axes`` object. We recommend browsing the tutorials and
examples to see how this works.</p></div>
Generating visualizations with pyplot is very quick:
```
import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4])
plt.ylabel('some numbers')
plt.show()
```
You may be wondering why the x-axis ranges from 0-3 and the y-axis
from 1-4. If you provide a single list or array to
`~.pyplot.plot`, matplotlib assumes it is a
sequence of y values, and automatically generates the x values for
you. Since python ranges start with 0, the default x vector has the
same length as y but starts with 0. Hence the x data are
``[0, 1, 2, 3]``.
`~.pyplot.plot` is a versatile function, and will take an arbitrary number of
arguments. For example, to plot x versus y, you can write:
```
plt.plot([1, 2, 3, 4], [1, 4, 9, 16])
```
Formatting the style of your plot
---------------------------------
For every x, y pair of arguments, there is an optional third argument
which is the format string that indicates the color and line type of
the plot. The letters and symbols of the format string are from
MATLAB, and you concatenate a color string with a line style string.
The default format string is 'b-', which is a solid blue line. For
example, to plot the above with red circles, you would issue
```
plt.plot([1, 2, 3, 4], [1, 4, 9, 16], 'ro')
plt.axis([0, 6, 0, 20])
plt.show()
```
See the `~.pyplot.plot` documentation for a complete
list of line styles and format strings. The
`~.pyplot.axis` function in the example above takes a
list of ``[xmin, xmax, ymin, ymax]`` and specifies the viewport of the
axes.
If matplotlib were limited to working with lists, it would be fairly
useless for numeric processing. Generally, you will use `numpy
<http://www.numpy.org>`_ arrays. In fact, all sequences are
converted to numpy arrays internally. The example below illustrates
plotting several lines with different format styles in one function call
using arrays.
```
import numpy as np
# evenly sampled time at 200ms intervals
t = np.arange(0., 5., 0.2)
# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
```
Plotting with keyword strings
=============================
There are some instances where you have data in a format that lets you
access particular variables with strings. For example, with
`numpy.recarray` or `pandas.DataFrame`.
Matplotlib allows you to provide such an object with
the ``data`` keyword argument. If provided, then you may generate plots with
the strings corresponding to these variables.
```
data = {'a': np.arange(50),
'c': np.random.randint(0, 50, 50),
'd': np.random.randn(50)}
data['b'] = data['a'] + 10 * np.random.randn(50)
data['d'] = np.abs(data['d']) * 100
plt.scatter('a', 'b', c='c', s='d', data=data)
plt.xlabel('entry a')
plt.ylabel('entry b')
plt.show()
```
Plotting with categorical variables
===================================
It is also possible to create a plot using categorical variables.
Matplotlib allows you to pass categorical variables directly to
many plotting functions. For example:
```
names = ['group_a', 'group_b', 'group_c']
values = [1, 10, 100]
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.bar(names, values)
plt.subplot(132)
plt.scatter(names, values)
plt.subplot(133)
plt.plot(names, values)
plt.suptitle('Categorical Plotting')
plt.show()
```
Controlling line properties
===========================
Lines have many attributes that you can set: linewidth, dash style,
antialiased, etc; see `matplotlib.lines.Line2D`. There are
several ways to set line properties
* Use keyword args::
plt.plot(x, y, linewidth=2.0)
* Use the setter methods of a ``Line2D`` instance. ``plot`` returns a list
of ``Line2D`` objects; e.g., ``line1, line2 = plot(x1, y1, x2, y2)``. In the code
below we will suppose that we have only
one line so that the list returned is of length 1. We use tuple unpacking with
``line,`` to get the first element of that list::
line, = plt.plot(x, y, '-')
line.set_antialiased(False) # turn off antialiasing
* Use `~.pyplot.setp`. The example below
uses a MATLAB-style function to set multiple properties
on a list of lines. ``setp`` works transparently with a list of objects
or a single object. You can either use python keyword arguments or
MATLAB-style string/value pairs::
lines = plt.plot(x1, y1, x2, y2)
# use keyword args
plt.setp(lines, color='r', linewidth=2.0)
# or MATLAB style string value pairs
plt.setp(lines, 'color', 'r', 'linewidth', 2.0)
Here are the available `~.lines.Line2D` properties.
====================== ==================================================
Property Value Type
====================== ==================================================
alpha float
animated [True | False]
antialiased or aa [True | False]
clip_box a matplotlib.transform.Bbox instance
clip_on [True | False]
clip_path a Path instance and a Transform instance, a Patch
color or c any matplotlib color
contains the hit testing function
dash_capstyle [``'butt'`` | ``'round'`` | ``'projecting'``]
dash_joinstyle [``'miter'`` | ``'round'`` | ``'bevel'``]
dashes sequence of on/off ink in points
data (np.array xdata, np.array ydata)
figure a matplotlib.figure.Figure instance
label any string
linestyle or ls [ ``'-'`` | ``'--'`` | ``'-.'`` | ``':'`` | ``'steps'`` | ...]
linewidth or lw float value in points
marker [ ``'+'`` | ``','`` | ``'.'`` | ``'1'`` | ``'2'`` | ``'3'`` | ``'4'`` ]
markeredgecolor or mec any matplotlib color
markeredgewidth or mew float value in points
markerfacecolor or mfc any matplotlib color
markersize or ms float
markevery [ None | integer | (startind, stride) ]
picker used in interactive line selection
pickradius the line pick selection radius
solid_capstyle [``'butt'`` | ``'round'`` | ``'projecting'``]
solid_joinstyle [``'miter'`` | ``'round'`` | ``'bevel'``]
transform a matplotlib.transforms.Transform instance
visible [True | False]
xdata np.array
ydata np.array
zorder any number
====================== ==================================================
To get a list of settable line properties, call the
`~.pyplot.setp` function with a line or lines as argument
.. sourcecode:: ipython
In [69]: lines = plt.plot([1, 2, 3])
In [70]: plt.setp(lines)
alpha: float
animated: [True | False]
antialiased or aa: [True | False]
...snip
Working with multiple figures and axes
======================================
MATLAB, and :mod:`.pyplot`, have the concept of the current figure
and the current axes. All plotting functions apply to the current
axes. The function `~.pyplot.gca` returns the current axes (a
`matplotlib.axes.Axes` instance), and `~.pyplot.gcf` returns the current
figure (a `matplotlib.figure.Figure` instance). Normally, you don't have to
worry about this, because it is all taken care of behind the scenes. Below
is a script to create two subplots.
```
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure()
plt.subplot(211)
plt.plot(t1, f(t1), 'bo', t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
```
The `~.pyplot.figure` call here is optional because a figure will be created
if none exists, just as an axes will be created (equivalent to an explicit
``subplot()`` call) if none exists.
The `~.pyplot.subplot` call specifies ``numrows,
numcols, plot_number`` where ``plot_number`` ranges from 1 to
``numrows*numcols``. The commas in the ``subplot`` call are
optional if ``numrows*numcols<10``. So ``subplot(211)`` is identical
to ``subplot(2, 1, 1)``.
You can create an arbitrary number of subplots
and axes. If you want to place an axes manually, i.e., not on a
rectangular grid, use `~.pyplot.axes`,
which allows you to specify the location as ``axes([left, bottom,
width, height])`` where all values are in fractional (0 to 1)
coordinates. See :doc:`/gallery/subplots_axes_and_figures/axes_demo` for an example of
placing axes manually and :doc:`/gallery/subplots_axes_and_figures/subplot_demo` for an
example with lots of subplots.
You can create multiple figures by using multiple
`~.pyplot.figure` calls with an increasing figure
number. Of course, each figure can contain as many axes and subplots
as your heart desires::
import matplotlib.pyplot as plt
plt.figure(1) # the first figure
plt.subplot(211) # the first subplot in the first figure
plt.plot([1, 2, 3])
plt.subplot(212) # the second subplot in the first figure
plt.plot([4, 5, 6])
plt.figure(2) # a second figure
plt.plot([4, 5, 6]) # creates a subplot() by default
plt.figure(1) # figure 1 current; subplot(212) still current
plt.subplot(211) # make subplot(211) in figure1 current
plt.title('Easy as 1, 2, 3') # subplot 211 title
You can clear the current figure with `~.pyplot.clf`
and the current axes with `~.pyplot.cla`. If you find
it annoying that states (specifically the current image, figure and axes)
are being maintained for you behind the scenes, don't despair: this is just a thin
stateful wrapper around an object oriented API, which you can use
instead (see :doc:`/tutorials/intermediate/artists`)
If you are making lots of figures, you need to be aware of one
more thing: the memory required for a figure is not completely
released until the figure is explicitly closed with
`~.pyplot.close`. Deleting all references to the
figure, and/or using the window manager to kill the window in which
the figure appears on the screen, is not enough, because pyplot
maintains internal references until `~.pyplot.close`
is called.
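As a minimal sketch of this pattern (the figure contents here are arbitrary), closing each figure as soon as it has served its purpose keeps pyplot's internal bookkeeping empty:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

# Create a number of figures, closing each one explicitly so that
# pyplot drops its internal reference and the memory can be reclaimed.
for i in range(5):
    plt.figure()
    plt.plot([1, 2, 3])
    plt.close()  # close the current figure; plt.close('all') closes every figure

# After the loop, no figures remain registered with pyplot.
print(plt.get_fignums())
```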
Working with text
=================
`~.pyplot.text` can be used to add text in an arbitrary location, and
`~.pyplot.xlabel`, `~.pyplot.ylabel` and `~.pyplot.title` are used to add
text in the indicated locations (see :doc:`/tutorials/text/text_intro` for a
more detailed example)
```
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
# the histogram of the data
n, bins, patches = plt.hist(x, 50, density=True, facecolor='g', alpha=0.75)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title('Histogram of IQ')
plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
plt.axis([40, 160, 0, 0.03])
plt.grid(True)
plt.show()
```
All of the `~.pyplot.text` functions return a `matplotlib.text.Text`
instance. Just as with lines above, you can customize the properties by
passing keyword arguments into the text functions or using `~.pyplot.setp`::
    t = plt.xlabel('my data', fontsize=14, color='red')
These properties are covered in more detail in :doc:`/tutorials/text/text_props`.
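A short sketch of both styles applied to one `Text` instance (the property values are arbitrary):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

plt.plot([1, 2, 3])

# Set properties at creation time via keyword arguments...
t = plt.xlabel('my data', fontsize=14, color='red')

# ...or change them afterwards with setp on the returned Text instance.
plt.setp(t, color='blue')
```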
Using mathematical expressions in text
--------------------------------------
matplotlib accepts TeX equation expressions in any text expression.
For example, to write the expression $\sigma_i=15$ in the title,
you can write a TeX expression surrounded by dollar signs::

    plt.title(r'$\sigma_i=15$')
The ``r`` preceding the title string is important -- it signifies
that the string is a *raw* string and not to treat backslashes as
python escapes. matplotlib has a built-in TeX expression parser and
layout engine, and ships its own math fonts -- for details see
:doc:`/tutorials/text/mathtext`. Thus you can use mathematical text across platforms
without requiring a TeX installation. For those who have LaTeX and
dvipng installed, you can also use LaTeX to format your text and
incorporate the output directly into your display figures or saved
postscript -- see :doc:`/tutorials/text/usetex`.
Annotating text
---------------
The uses of the basic `~.pyplot.text` function above
place text at an arbitrary position on the Axes. A common use for
text is to annotate some feature of the plot, and the
`~.pyplot.annotate` method provides helper
functionality to make annotations easy. In an annotation, there are
two points to consider: the location being annotated represented by
the argument ``xy`` and the location of the text ``xytext``. Both of
these arguments are ``(x, y)`` tuples.
```
ax = plt.subplot()
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
line, = plt.plot(t, s, lw=2)
plt.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
             arrowprops=dict(facecolor='black', shrink=0.05),
             )
plt.ylim(-2, 2)
plt.show()
```
In this basic example, both the ``xy`` (arrow tip) and ``xytext``
locations (text location) are in data coordinates. There are a
variety of other coordinate systems one can choose -- see
`annotations-tutorial` and `plotting-guide-annotation` for
details. More examples can be found in
:doc:`/gallery/text_labels_and_annotations/annotation_demo`.
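For instance, the text location can be given in axes-fraction coordinates while the arrow tip stays in data coordinates (the positions below are arbitrary):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs headless
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0.0, 5.0, 0.01)
plt.plot(t, np.cos(2 * np.pi * t), lw=2)

# xy (the arrow tip) is in data coordinates; xytext is interpreted as a
# fraction of the axes: (0.8, 0.9) means 80% across and 90% up the axes.
ann = plt.annotate('local max', xy=(2, 1),
                   xytext=(0.8, 0.9), textcoords='axes fraction',
                   arrowprops=dict(facecolor='black', shrink=0.05))
```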
Logarithmic and other nonlinear axes
====================================
:mod:`matplotlib.pyplot` supports not only linear axis scales, but also
logarithmic and logit scales. This is commonly used if data spans many orders
of magnitude. Changing the scale of an axis is easy::

    plt.xscale('log')
An example of four plots with the same data and different scales for the y axis
is shown below.
```
# Fixing random state for reproducibility
np.random.seed(19680801)
# make up some data in the open interval (0, 1)
y = np.random.normal(loc=0.5, scale=0.4, size=1000)
y = y[(y > 0) & (y < 1)]
y.sort()
x = np.arange(len(y))
# plot with various axes scales
plt.figure()
# linear
plt.subplot(221)
plt.plot(x, y)
plt.yscale('linear')
plt.title('linear')
plt.grid(True)
# log
plt.subplot(222)
plt.plot(x, y)
plt.yscale('log')
plt.title('log')
plt.grid(True)
# symmetric log
plt.subplot(223)
plt.plot(x, y - y.mean())
plt.yscale('symlog', linthresh=0.01)
plt.title('symlog')
plt.grid(True)
# logit
plt.subplot(224)
plt.plot(x, y)
plt.yscale('logit')
plt.title('logit')
plt.grid(True)
# Adjust the subplot layout, because the logit one may take more space
# than usual, due to y-tick labels like "1 - 10^{-3}"
plt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.25,
                    wspace=0.35)
plt.show()
```
It is also possible to add your own scale, see `adding-new-scales` for
details.
# Prepare and Deploy a TensorFlow Model to AI Platform for Online Serving
This Notebook demonstrates how to prepare a TensorFlow 2.x model and deploy it for serving with AI Platform Prediction. This example uses the pretrained [ResNet V2 50](https://tfhub.dev/google/imagenet/resnet_v2_50/classification/4) image classification model from [TensorFlow Hub](https://tfhub.dev/) (TF Hub).
The Notebook covers the following steps:
1. Downloading and running the ResNet module from TF Hub
2. Creating serving signatures for the module
3. Exporting the model as a SavedModel
4. Deploying the SavedModel to AI Platform Prediction
5. Validating the deployed model
## Setup
This Notebook was tested on **AI Platform Notebooks** using the standard TF 2.2 image.
### Import libraries
```
import base64
import os
import json
import requests
import time
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
from typing import List, Optional, Text, Tuple
```
### Configure GCP environment settings
```
PROJECT_ID = 'jk-mlops-dev' # Set your project Id
BUCKET = 'labs-workspace' # Set your bucket name
REGION = 'us-central1' # Set your region for deploying the model
MODEL_NAME = 'resnet_50'
MODEL_VERSION = 'v1'
GCS_MODEL_LOCATION = 'gs://{}/models/{}/{}'.format(BUCKET, MODEL_NAME, MODEL_VERSION)
THUB_MODEL_HANDLE = 'https://tfhub.dev/google/imagenet/resnet_v2_50/classification/4'
IMAGENET_LABELS_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'
IMAGES_FOLDER = 'test_images'
!gcloud config set project $PROJECT_ID
```
### Create a local workspace
```
LOCAL_WORKSPACE = '/tmp/workspace'
if tf.io.gfile.exists(LOCAL_WORKSPACE):
    print("Removing previous workspace artifacts...")
    tf.io.gfile.rmtree(LOCAL_WORKSPACE)

print("Creating a new workspace...")
tf.io.gfile.makedirs(LOCAL_WORKSPACE)
```
## 1. Loading and Running the ResNet Module
### 1.1. Download and instantiate the model
```
os.environ["TFHUB_DOWNLOAD_PROGRESS"] = 'True'
local_savedmodel_path = hub.resolve(THUB_MODEL_HANDLE)
print(local_savedmodel_path)
!ls -la {local_savedmodel_path}
model = hub.load(THUB_MODEL_HANDLE)
```
The expected input to most TF Hub TF2 image classification models, including the ResNet V2 50 model used here, is a rank-4 tensor conforming to the following tensor specification: `tf.TensorSpec([None, height, width, 3], tf.float32)`. For the ResNet V2 50 model, the expected image size is `height x width = 224 x 224`. The color values for all channels are expected to be normalized to the [0, 1] range.
The output of the model is a batch of logits vectors. The indices into the logits are the `num_classes = 1001` classes from the ImageNet dataset. The mapping from indices to class labels can be found in the [labels file](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt), with class 0 for "background", followed by 1000 actual ImageNet classes.
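As a rough sketch of what that input specification implies for preprocessing, here is NumPy code with random pixel values standing in for decoded JPEGs (the batch size of 2 is arbitrary):

```python
import numpy as np

# A fake batch of two 224 x 224 RGB images with uint8 pixel values,
# standing in for real decoded JPEGs.
raw_batch = np.random.randint(0, 256, size=(2, 224, 224, 3), dtype=np.uint8)

# Normalize all color channels to the [0, 1] range the model expects.
model_input = raw_batch.astype(np.float32) / 255.0
```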
We will now test the model on a couple of JPEG images.
### 1.2. Display sample images
```
image_list = [tf.io.read_file(os.path.join(IMAGES_FOLDER, image_path))
              for image_path in os.listdir(IMAGES_FOLDER)]
ncolumns = len(image_list) if len(image_list) < 4 else 4
nrows = -(-len(image_list) // ncolumns)  # ceiling division so every image gets a cell
fig, axes = plt.subplots(nrows=nrows, ncols=ncolumns, figsize=(10,10))
for axis, image in zip(axes.flat[0:], image_list):
    decoded_image = tf.image.decode_image(image)
    axis.set_title(decoded_image.shape)
    axis.imshow(decoded_image.numpy())
```
### 1.3. Preprocess the testing images
The images need to be preprocessed to conform to the format expected by the ResNet V2 50 model.
```
def _decode_and_scale(image, size):
    image = tf.image.decode_image(image, expand_animations=False)
    image_height = image.shape[0]
    image_width = image.shape[1]
    crop_size = tf.minimum(image_height, image_width)
    offset_height = ((image_height - crop_size) + 1) // 2
    offset_width = ((image_width - crop_size) + 1) // 2
    image = tf.image.crop_to_bounding_box(image, offset_height, offset_width, crop_size, crop_size)
    image = tf.cast(tf.image.resize(image, [size, size]), tf.uint8)
    return image
size = 224
raw_images = tf.stack(image_list)
preprocessed_images = tf.map_fn(lambda x: _decode_and_scale(x, size), raw_images, dtype=tf.uint8)
preprocessed_images = tf.image.convert_image_dtype(preprocessed_images, tf.float32)
print(preprocessed_images.shape)
```
### 1.4. Run inference
```
predictions = model(preprocessed_images)
predictions
```
The model returns a batch of arrays with logits. This is not a very user-friendly output, so we will convert it to a list of ImageNet class labels.
```
labels_path = tf.keras.utils.get_file(
    'ImageNetLabels.txt',
    IMAGENET_LABELS_URL)
imagenet_labels = np.array(open(labels_path).read().splitlines())
```
We will display the five highest-ranked labels for each image.
```
for prediction in list(predictions):
    decoded = imagenet_labels[np.argsort(prediction.numpy())[::-1][:5]]
    print(list(decoded))
```
## 2. Create Serving Signatures
The inputs and outputs of the model as used during model training may not be optimal for serving. For example, in a typical training pipeline, feature engineering is performed as a separate step preceding model training and hyperparameter tuning. When serving the model, it may be more optimal to embed the feature engineering logic into the serving interface rather than require a client application to preprocess data.
The ResNet V2 50 model from TF Hub is optimized for recomposition and fine-tuning. Since there are no serving signatures in the model's metadata, it cannot be served with TF Serving as is.
```
list(model.signatures)
```
To make it servable, we need to add a serving signature(s) describing the inference method(s) of the model.
We will add two signatures:
1. **The default signature** - This will expose the default predict method of the ResNet V2 50 model.
2. **Pre/post-processing signature** - Since the default interface requires relatively complex image preprocessing to be performed by a client, we will also expose an alternative signature that embeds the preprocessing and postprocessing logic. It accepts raw unprocessed images and returns the list of ranked class labels and the associated label probabilities.
The signatures are created by defining a custom module class derived from the `tf.Module` base class that encapsulates our ResNet model and extends it with a method implementing the image preprocessing and output postprocessing logic. The default method of the custom module is mapped to the default method of the base ResNet module to maintain the analogous interface.
The custom module will be exported as `SavedModel` that includes the original model, the preprocessing logic, and two serving signatures.
This technique can be generalized to other scenarios where you need to extend a TensorFlow model and you have access to the serialized `SavedModel` but you don't have access to the Python code implementing the model.
#### 2.1. Define the custom serving module
```
LABELS_KEY = 'labels'
PROBABILITIES_KEY = 'probabilities'
NUM_LABELS = 5
class ServingModule(tf.Module):
    """
    A custom tf.Module that adds image preprocessing and output postprocessing to
    a base TF 2 image classification model from TF Hub.
    """

    def __init__(self, base_model, input_size, output_labels):
        super(ServingModule, self).__init__()
        self._model = base_model
        self._input_size = input_size
        self._output_labels = tf.constant(output_labels, dtype=tf.string)

    def _decode_and_scale(self, raw_image):
        """
        Decodes, crops, and resizes a single raw image.
        """
        image = tf.image.decode_image(raw_image, dtype=tf.dtypes.uint8, expand_animations=False)
        image_shape = tf.shape(image)
        image_height = image_shape[0]
        image_width = image_shape[1]
        crop_size = tf.minimum(image_height, image_width)
        offset_height = ((image_height - crop_size) + 1) // 2
        offset_width = ((image_width - crop_size) + 1) // 2
        image = tf.image.crop_to_bounding_box(image, offset_height, offset_width, crop_size, crop_size)
        image = tf.image.resize(image, [self._input_size, self._input_size])
        image = tf.cast(image, tf.uint8)
        return image

    def _preprocess(self, raw_inputs):
        """
        Preprocesses raw inputs as sent by the client.
        """
        # A mitigation for https://github.com/tensorflow/tensorflow/issues/28007
        with tf.device('/cpu:0'):
            images = tf.map_fn(self._decode_and_scale, raw_inputs, dtype=tf.uint8)
        images = tf.image.convert_image_dtype(images, tf.float32)
        return images

    def _postprocess(self, model_outputs):
        """
        Postprocesses outputs returned by the base model.
        """
        probabilities = tf.nn.softmax(model_outputs)
        indices = tf.argsort(probabilities, axis=1, direction='DESCENDING')
        return {
            LABELS_KEY: tf.gather(self._output_labels, indices, axis=-1)[:, :NUM_LABELS],
            PROBABILITIES_KEY: tf.sort(probabilities, direction='DESCENDING')[:, :NUM_LABELS]
        }

    @tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], tf.float32)])
    def __call__(self, x):
        """
        A pass-through to the base model.
        """
        return self._model(x)

    @tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
    def predict_labels(self, raw_images):
        """
        Preprocesses inputs, calls the base model,
        and postprocesses outputs from the base model.
        """
        # Call the preprocessing handler
        images = self._preprocess(raw_images)
        # Call the base model
        logits = self._model(images)
        # Call the postprocessing handler
        outputs = self._postprocess(logits)
        return outputs


serving_module = ServingModule(model, 224, imagenet_labels)
```
#### 2.2. Test the custom serving module
```
predictions = serving_module.predict_labels(raw_images)
predictions
```
## 3. Save the custom serving module as `SavedModel`
```
model_path = os.path.join(LOCAL_WORKSPACE, MODEL_NAME, MODEL_VERSION)
default_signature = serving_module.__call__.get_concrete_function()
preprocess_signature = serving_module.predict_labels.get_concrete_function()
signatures = {
    'serving_default': default_signature,
    'serving_preprocess': preprocess_signature
}
tf.saved_model.save(serving_module, model_path, signatures=signatures)
```
### 3.1. Inspect the `SavedModel`
```
!saved_model_cli show --dir {model_path} --tag_set serve --all
```
### 3.2. Test loading and executing the `SavedModel`
```
model = tf.keras.models.load_model(model_path)
model.predict_labels(raw_images)
```
### 3.3 Copy the model to Google Cloud Storage
```
!gsutil cp -r {model_path} {GCS_MODEL_LOCATION}
!gsutil ls {GCS_MODEL_LOCATION}
```
## License
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
<a href="https://colab.research.google.com/github/SAK90/HousingPrices/blob/main/first_project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
    if not os.path.isdir(housing_path):
        os.makedirs(housing_path)
    tgz_path = os.path.join(housing_path, "housing.tgz")
    urllib.request.urlretrieve(housing_url, tgz_path)
    housing_tgz = tarfile.open(tgz_path)
    housing_tgz.extractall(path=housing_path)
    housing_tgz.close()

fetch_housing_data()
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
    csv_path = os.path.join(housing_path, "housing.csv")
    return pd.read_csv(csv_path)
housing_data = load_housing_data()
housing_data.head()
housing_data.describe()
housing_data.isnull().sum()
import matplotlib.pyplot as plt
housing_data.hist(bins = 50, figsize = (20, 15))
plt.show()
housing_data.plot(kind="scatter", x="longitude", y="latitude", alpha = 0.1)
housing_data = load_housing_data()
housing_data.head()
correlation_matrix = housing_data.corr()
correlation_matrix["median_house_value"].sort_values(ascending = False)
housing_data.plot(kind = "scatter", y = "median_house_value", x = "median_income", alpha = 0.1)
plt.show()
housing_data.isnull().sum()
median = housing_data["total_bedrooms"].median()
housing_data["total_bedrooms"].fillna(median, inplace = True)
Y = housing_data.iloc[:, 8]
housing_data.drop("median_house_value", inplace = True, axis = 'columns')
housing_data.drop("ocean_proximity", inplace = True, axis = 'columns')
X = housing_data
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state = 1)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X_train, Y_train)
from sklearn.metrics import mean_squared_error
import numpy as np
X_train_prediction = lin_reg.predict(X_train)
lin_mean_square_error = mean_squared_error(X_train_prediction, Y_train)
lin_root_mean_square_error = np.sqrt(lin_mean_square_error)
print(lin_root_mean_square_error)
from sklearn.tree import DecisionTreeRegressor
dec_tree_reg = DecisionTreeRegressor()
dec_tree_reg.fit(X_train, Y_train)
X_train_prediction_tree = dec_tree_reg.predict(X_train)
tree_mean_square_error = mean_squared_error(X_train_prediction_tree, Y_train)
tree_root_mean_square_error = np.sqrt(tree_mean_square_error)
print(tree_root_mean_square_error)
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(X_train, Y_train)
X_train_prediction_forest = forest_reg.predict(X_train)
forest_mean_square_error = mean_squared_error(X_train_prediction_forest, Y_train)
forest_root_mean_square_error = np.sqrt(forest_mean_square_error)
print(forest_root_mean_square_error)
def display_score(scores):
    print("Scores:", scores)
    print("Mean:", scores.mean())
    print("Standard deviation:", scores.std())
from sklearn.model_selection import cross_val_score
lin_reg_scores = cross_val_score(lin_reg, X_train, Y_train, scoring="neg_mean_squared_error", cv=10)
lin_reg_rmse_scores = np.sqrt(-lin_reg_scores)
display_score(lin_reg_rmse_scores)
dec_tree_scores = cross_val_score(dec_tree_reg, X_train, Y_train, scoring="neg_mean_squared_error", cv=10)
dec_tree_rmse_scores = np.sqrt(-dec_tree_scores)
display_score(dec_tree_rmse_scores)
forest_reg_scores = cross_val_score(forest_reg, X_train, Y_train, scoring="neg_mean_squared_error", cv=10)
forest_reg_rmse_scores = np.sqrt(-forest_reg_scores)
display_score(forest_reg_rmse_scores)
```
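Note that all of the RMSE values above are computed on the *training* data, which flatters models that overfit (the decision tree's near-zero training error is typically a symptom of exactly that). A sketch of the held-out evaluation pattern is shown below, using synthetic regression data so that it runs standalone; with the variables above you would instead call `lin_reg.predict(X_test)` and score it against `Y_test`.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the housing features and target
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = LinearRegression()
model.fit(X_train, y_train)

# Score on the *test* split, which the model never saw during fitting
train_rmse = np.sqrt(mean_squared_error(y_train, model.predict(X_train)))
test_rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"train RMSE: {train_rmse:.2f}  test RMSE: {test_rmse:.2f}")
```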
# Video Classification
Video classification is one of the many tasks in the field of _video understanding_, technologies that automatically extract information from video. You can read more about the great, wide world of video understanding in our blog post [An Introduction to Video Understanding: Capabilities and Applications](https://blog.fastforwardlabs.com/2021/12/14/an-introduction-to-video-understanding-capabilities-and-applications.html).
The goal of this notebook is to provide an introduction to video classification, including datasets and models.
The notebook consists of three main parts:
- Setting up: Installation of the necessary packages, including Tensorflow, and importing the relevant libraries.
- The Data: An exploration of the [Kinetics Human Action Video Dataset](https://deepmind.com/research/open-source/kinetics) for action recognition.
- The Model: Experimentation with a pretrained version of the [I3D video classification model](https://deepmind.com/research/open-source/i3d-model) for action recognition, hosted on the Tensorflow Model Hub.
## Setting up
Install the necessary packages.
```
%%capture
#hides cell output. disable for logs.
!pip install -r requirements.txt
from collections import defaultdict
import os
import glob
import numpy as np
import pandas as pd
import tensorflow as tf
from IPython import display
```
In order to make it easier to work with the dataset and model in this notebook, we created a small library of helper functions and classes. Here, we import the good bits.
```
from vidbench.data.load import KineticsLoader
from vidbench.models import I3DLoader
from vidbench.predict import predict, store_results_in_dataframe, compute_accuracy, evaluate
from vidbench.visualize import make_video_table
from vidbench.data.process import resample_video, load_and_resize_video, video_acceptable
```
## The Data
In this notebook we make use of the DeepMind [Kinetics dataset](https://arxiv.org/abs/1705.06950), which consists of thousands of YouTube videos focused on human actions and interactions. While there are several versions of this dataset, we'll focus primarily on the Kinetics 400 dataset, which contains 400 human action classes, ranging from human-object interactions, like playing instruments, to human-human interactions, like shaking hands. Each video clip is approximately ten seconds long and has been sourced from a unique YouTube video.
As of November 2021, the video files that make up the Kinetics datasets are stored at https://s3.amazonaws.com/kinetics/ as described in this [repository](https://github.com/cvdfoundation/kinetics-dataset). The Kinetics 400 contains a total of 306,245 video clips with at least 400 clips in each class. These are distributed among three splits: training, test and validation. Each split consists of thousands of videos in `.mp4` format.
| Dataset split | Clips per class | Total Videos |
|---------------|-----------------|--------------|
| train | 250-1000 | 246245 |
| test | 100 | 40000 |
| val | 50 | 20000 |
For each dataset split, videos are grouped into a series of directories and each directory is packaged as a `tar.gz` file that needs to be unpacked. Each of these `tar.gz` files contains about 1000 video clips. In this notebook we'll explore a handful of videos from the validation set and we created a `KineticsLoader` class to handle downloading and unpacking these video files. While this class is designed to handle the full validation (or test, or train) set, we can also use it to explore a small portion of the videos, which we'll demonstrate in the following cells.
```
# this class handles the infrastructure to support downloading, unpacking, and pre-processing videos
loader = KineticsLoader(version="400", split="val")
```
If you're exploring this notebook after performing automatic setup through the CML AMP interface, then we've already downloaded a chunk of videos to explore. If not, running the cell below will initiate a download and unpacking of (at least) 500 videos (recall that they are grouped in chunks of approximately 1000).
```
loader.download_n_videos(500)
```
With the code above we've downloaded just 1000 videos from the validation set (out of 20,000 available videos). We could download the entire validation set but that would require at least 30-50 GBs of storage! Since our goal in this notebook is simply to explore the video classification capability, 1000 videos is more than enough to explore with. This AMP also includes a benchmarking script that will allow the user to evaluate a model on _all_ videos in any of the data splits discussed above (more on this towards the end of the notebook).
We've also downloaded the ground truth labels for _all_ 20K videos in the validation set. These are stored in a Pandas DataFrame which you can see below.
```
ground_truth_labels_df = pd.read_csv(f"{loader.data_dir}/val.csv")
ground_truth_labels_df
```
As you can see, this DataFrame has nearly 20K rows - one for each video clip in the validation set. We need to filter this DataFrame to just those videos that we've actually downloaded. Our `KineticsLoader` keeps track of which videos we've downloaded. We can see all available pathnames with the following line. We're only showing the top 20 but there are 1000 video pathnames available.
```
loader.video_filenames[:20]
```
Next, we'll filter the ground truth DataFrame to include only those videos that have already been downloaded and are available locally.
```
available_youtube_ids = []
for filename in loader.video_filenames:
    pathname, vidname = os.path.split(filename)
    available_youtube_ids.append(vidname[:11])  # youtube IDs are 11 characters long
available_videos_metadata_df = ground_truth_labels_df[ground_truth_labels_df['youtube_id'].isin(available_youtube_ids)]
available_videos_metadata_df
```
Much better! Now we have a DataFrame with only 1000 rows -- one for each locally available video clip. What did we end up downloading? Let's take a look at what classes we have.
```
available_videos_metadata_df.label.value_counts()
```
Out of 400 unique classes, we have videos from 360 of those. Most classes only have one representative video clip, though we do have a handful of video clips from classes like "massaging legs," and "scrambling eggs."
Video classification is computationally expensive, so in the following cell we take a small, random sample of videos to explore for the remainder of this notebook. Because this is a random sample, each time you run this cell you'll get a new set of 8 videos to play with!
```
NUM_VIDEOS = 8
video_sample = available_videos_metadata_df.sample(NUM_VIDEOS)
video_sample
```
There we go! We've selected a manageable batch of just 8 videos to examine. And we can see at a glance which classes our model will be attempting to predict.
### Visualize some videos
So what kind of videos are we dealing with? In the cell below we display our video clip sample along with their ground truth class labels. As you play each video, notice that each is only approximately ten seconds long.
```
video_html = make_video_table(loader.data_dir, video_sample['youtube_id'].values, video_sample['label'].values)
display.HTML(video_html)
```
### Pre-processing
Now that we have a sense of what kind of videos we're working with, let's start classifying them! But before we do that, we have another step to perform -- preprocessing.
The YouTube videos in the Kinetics dataset are all in `.mp4` format but TensorFlow models do not recognize this! We must convert the videos into a format that our TF model can work with. This requires two steps:
1. Convert the `.mp4` format to a more appropriate data structure, like NumPy arrays
2. Resize the video dimensions to work within model specifications
#### Video resizing
While the first step is likely self-explanatory, the second step deserves some attention. Those with experience working with pre-trained image classification models are likely already familiar with the idea that these models require image inputs of a specific height and width. These requirements are determined during model training and set limits on how large (or small) an image must be in pixels in order for the model to process that image. Video classification is no different in this respect, but comes with a third dimension of complexity: time.
Let's take a look at the dimensions of the videos in our sample batch. The following cells will read an `.mp4` video clip into a numpy array and print the shape of that array to the screen. The shape tuple has the following format:
`(number of frames, height in pixels, width in pixels, number of color channels)`
```
def load_video(path):
    """Convert video to a NumPy array."""
    import cv2
    cap = cv2.VideoCapture(path)
    frames = []
    try:
        while True:
            ret, frame = cap.read()  # frame is in BGR format
            if not ret:
                break
            frames.append(frame)
    finally:
        cap.release()
    return np.array(frames).astype("float32") / 255.0
video_paths = [glob.glob(f"{pathname}/{yt_id}*")[0] for yt_id in video_sample['youtube_id'].values]

for video_path in video_paths:
    video_np = load_video(video_path)
    print(video_np.shape)
```
As we can see, there's quite a bit of variation among our video batch that isn't really detectable when we viewed the raw video clips earlier. While each video is exactly 10 seconds long, some have 300 frames and others have only 127 frames. Some have small spatial dimensions (240 x 320) while others are quite large (720 x 1280). The only thing common to all videos is that they each have three color channels, the familiar RGB system.
This has implications for how we sample and process these video clips for model consumption. The model we'll use in this notebook requires spatial dimensions of (224 x 224) so we'll need to resize and crop each frame of each video to these dimensions.
But how do we deal with the temporal dimension, i.e., the number of frames? That depends on how we want to use our video classification model. The model has no specified limit on the number of frames it can accept (note, however, that more frames translate to longer processing time, which can become very computationally expensive!). If we send the model only one video clip at a time, we can feed it the full number of frames for inference. However, it's usually faster to send the model a batch rather than sending each video separately. In that case, we need to deal with the variation in frame counts across these videos, because we cannot create a NumPy array in which one of the dimensions varies!
#### Video Resampling
The crux of the issue lies in the fact that different cameras have different frame capture rates. Higher-quality videos have more frames per second (FPS) than lower-quality videos, leading to a situation in which a collection of 10-second videos can nevertheless have different numbers of frames. Below we show a toy example of two videos that are each 10 seconds long, but the top one has fewer frames than the bottom one over that time span due to having a lower FPS.

In order to create a batch of video clips, we must resample the clips so that each has the same number of frames. This can be accomplished either by _upsampling_ videos with low FPS, or _downsampling_ videos with high FPS. A simple way to perform upsampling is to duplicate certain frames throughout the length of the video. In the figure below, we see Video 1 has frames added to match the number of frames contained in Video 2. These frames are duplicates of existing frames in Video 1 (which is why some of the frames are repeated colors).
<img src="images/upsample.gif">
In contrast, downsampling involves removing frames periodically throughout the video. This time, we remove a sample of unique frames from Video 2 so that it has the same number of frames as Video 1.
<img src = "images/downsample.gif">
In either case, resampling should be done in such a way that the frames we duplicate or remove are equally dispersed throughout the duration of the 10 second clip in order to capture as much of the original motion as possible.
In a pinch we could simply grab a fixed-size chunk of consecutive frames (from the beginning, middle, or end) from each video clip. The problem here is that this chunk may only represent a portion of the full 10 second interval. The model would then attempt to infer on a batch of videos in which one of them has frames representing close to the full 10 seconds, while another has frames representing only a fraction of the time. This could increase the difficulty of the model to properly classify the latter video.
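The uniform resampling described above can be sketched with NumPy as follows. This helper is illustrative only, with made-up frame counts and tiny spatial dimensions; the library's actual implementation is the `resample_video` function imported earlier from `vidbench.data.process`.

```python
import numpy as np

def resample_frames(video, num_frames):
    """Resample a (frames, height, width, channels) video to num_frames by
    picking evenly spaced frame indices: frames are duplicated when
    upsampling and dropped when downsampling, dispersed across the clip."""
    n = video.shape[0]
    indices = np.linspace(0, n - 1, num_frames).round().astype(int)
    return video[indices]

# Two fake clips with tiny spatial dimensions to keep the example light:
# a "low FPS" clip with 90 frames and a "high FPS" clip with 300 frames.
short_clip = np.zeros((90, 8, 8, 3), dtype=np.float32)
long_clip = np.zeros((300, 8, 8, 3), dtype=np.float32)

# After resampling both to 128 frames, they stack cleanly into one batch.
batch = np.stack([resample_frames(short_clip, 128),
                  resample_frames(long_clip, 128)])
```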
#### Load videos into NumPy arrays
Luckily, our `KineticsLoader` class has a `load_videos` method, similar to the one above, that also handles these pre-processing steps. Here's what we do under the hood:
1. First, a central square proportional to the size of the video is cropped
2. The cropped portion is resized to 224x224 pixels
3. Videos are resampled
- If the user provides `num_frames`, all videos are resampled to have this many frames (upsampling those that have lower FPS, downsampling those with higher FPS)
- If `num_frames` is not provided, the algorithm determines which video in the batch has the fewest frames (lowest FPS) and all videos are downsampled to match this frame count
Again, we note that video clips with more total frames will take longer to process through the model. For the current dataset, `num_frames = 128` is a reasonable choice. For reference, the maximum number of frames that we've seen in the dataset is 300 (30 FPS).
```
def load_videos(youtube_ids, num_frames):
    """Create a NumPy array holding a batch of videos from a list of YouTube ids."""
    # we created this list of video pathnames in an earlier cell
    global video_paths
    video_batch = []
    for video_path in video_paths:
        # crop, resize, and add a leading batch dimension to each clip
        video_np = load_and_resize_video(video_path, resize_type="crop")
        video_np = np.expand_dims(video_np, axis=0)
        video_batch.append(video_np)
    # resample so every clip in the batch has exactly num_frames frames
    video_batch_resampled = []
    for video in video_batch:
        resampled_video = resample_video(video, num_frames)
        video_batch_resampled.append(resampled_video)
    return np.concatenate(video_batch_resampled, axis=0).astype("float32")
num_frames = 128
videos_tensor = load_videos(video_sample['youtube_id'].values, num_frames=num_frames)
videos_tensor.shape
```
We now have a batch of 8 video clips in a format that our model will understand.
It's time to classify!
## The Model
In this notebook we make use of the Inflated 3D ConvNet (I3D) video classification model. This architecture was introduced in 2017 and provided state-of-the-art results for video action classification on multiple datasets. You can read more about this model in the original paper, [Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset](https://arxiv.org/abs/1705.07750).
Since its inception, multiple pre-trained versions of the I3D model have become publicly available on the [TensorFlow Model Hub](https://www.tensorflow.org/hub). The original version was pre-trained on the Kinetics 400 dataset, which we explored above. Another version was trained on the Kinetics 600 dataset. We created an `I3DLoader` class to handle model loading from the TF Model Hub. You can choose either the `kinetics-400` or `kinetics-600` version.
The biggest difference between these two models is the number of classes they predict. I3D trained on Kinetics 400 predicts, you guessed it, 400 different classes, while its counterpart (I3D trained on Kinetics 600) predicts 600 classes. While we didn't discuss the Kinetics 600 dataset in detail, it is essentially a superset of the Kinetics 400 dataset -- it includes all 400 labels from that dataset, plus an additional 200 unique classes. Either can be used to run inference on our sample of videos, but keep in mind that with more classes come more challenges in predicting the correct class -- there are simply more options for the model to choose from.
Below we load up the I3D model trained on the Kinetics 400 dataset.
```
i3d400 = I3DLoader(trained_on='kinetics-400')
```
Let's look at what kinds of predictions this model can make.
```
i3d400.labels[:20]
```
### Get model predictions
Now that we have data in the proper format, ground truth labels, and a model loaded and ready to go -- it's time to make predictions! This notebook was developed on CPUs, so the following cell can take some time to run. One way to speed up inference is to resample the videos to each have fewer frames, as we discussed above. The downside is that performance will likely degrade, since the model will have fewer frames on which to base a prediction.
```
scores, predictions, _, _ = predict(videos_tensor, i3d400, verbose=False)
```
The output of our `predict` function includes the probabilities associated with each of the 400 labels and the 400 labels themselves for each video clip in our batch, sorted in descending order so that the most likely classes (highest probabilities) are at the top of the list.
```
predictions
scores
```
Let's store our results as a Pandas DataFrame to make them easier to work with. This dataframe keeps only the top five model predictions for each video. The `Video_Id` index refers to the numbering of the videos in our visualization section above.
```
# collect metadata and model results
results = defaultdict(list)
results['YouTube_Id'] = video_sample['youtube_id'].values
results['Ground_Truth'] = video_sample['label'].values
for s, p in zip(scores, predictions):
    results['scores'].append(list(s[:5]))
    results['preds'].append(list(p[:5]))
results_df = store_results_in_dataframe(results)
results_df
```
## Evaluating the model
So how well did our model do? It may seem natural to consider the model's accuracy using only its top prediction for each video, and we'll look at that first. However, when working with hundreds of classes, subtleties arise. For example, some classes could be easily confused -- "catching or throwing a softball" and "catching or throwing a baseball" are both classes, but it may be difficult for the model to discern the type of ball in a low quality video, or if the ball only takes up a small handful of pixels in a wide-shot video. Additionally, videos can contain more than one action -- "texting" while "driving a car" (don't do that!) or "hula hooping" while "playing ukulele". The Kinetics 400 dataset only provides a single ground-truth label for each video, rather than an exhaustive list of annotations. For this reason, the authors recommend evaluating model performance on top-5 accuracy, rather than top-1.
### Visualize videos again
Let's first examine the model's top-1 accuracy by considering our video visualization. Our visualization helper function can also accept and display the model's top prediction beneath each video. If the model's prediction does not match the ground truth label, the text will display in red.
```
video_html = make_video_table(loader.data_dir, results_df['YouTube_Id'], results_df['Ground_Truth'], results_df['pred_1'])
display.HTML(video_html)
```
While it may seem alarming that many of these texts are red, consider them in light of the discussion above -- how many of these top-1 labels might be a reasonable description of the video clip, even if they don't match the ground truth label? Are any of these labels possible points of confusion, or are there multiple actions in the scene?
### Model accuracy on our very small sample
Finally, let's consider the top-5 accuracy. In this case, we score the model as being "correct" if the ground truth label is within the model's top-5 predictions for a given video.
```
accuracy_top_1 = compute_accuracy(results_df, num_top_classes = 1)
accuracy_top_5 = compute_accuracy(results_df, num_top_classes = 5)
```
As we can see, the model's accuracy improves when we consider the top-5 predictions. While this approach doesn't make sense in every circumstance, due to the nature of the Kinetics dataset and its annotations, it is certainly valid in this case.
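The top-k scoring rule described above is straightforward to express in code. The function below is an illustrative stand-in, not the actual implementation of our `compute_accuracy` helper: a video counts as correct when its ground-truth label appears anywhere among the model's k highest-probability labels.

```python
def top_k_accuracy(ground_truth, predictions, k=5):
    """Fraction of examples whose true label is among the top-k predicted labels.

    ground_truth: list of labels, one per video
    predictions:  list of label lists, each sorted by descending probability
    """
    hits = sum(1 for truth, preds in zip(ground_truth, predictions)
               if truth in preds[:k])
    return hits / len(ground_truth)

truth = ["hula hooping", "driving car", "texting"]
preds = [["hula hooping", "playing ukulele"],
         ["texting", "driving car"],
         ["playing drums", "writing"]]
print(top_k_accuracy(truth, preds, k=1))  # 1/3 of videos correct at top-1
print(top_k_accuracy(truth, preds, k=2))  # 2/3 of videos correct at top-2
```

Notice how the second video flips from incorrect to correct as k grows -- exactly the effect we saw between our top-1 and top-5 scores.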
## Evaluating over many videos
So far, everything we've done has been to explore the capability of video classification with a small, concrete example. However, in practice, there are several models and many datasets that one might consider when building a real-world video classification application. In that case, one will need to evaluate different models over various datasets in order to gauge which is most appropriate for the application in question. This is a model benchmarking job. To that end, we've created some additional utilities to facilitate model evaluation over a much larger portion of the Kinetics datasets. Included in this AMP is a benchmarking script that can be automated via the CML Jobs abstraction (or by a simple bash script, if that's your style). Here we provide a quick example of the core utilities contained within that script -- namely, the ability to load, pre-process, and batch over a larger portion of videos.
We break this task into two parts: load and cache videos, and evaluation. The first step is performed by our `KineticsLoader` class, and does essentially the same steps as our `load_videos` function above. Specifically, this will load the raw `mp4` format into numpy arrays and perform essential pre-processing, such as cropping and resizing to I3D specifications. Once loaded, these video examples are cached (saved to disk) in their numpy format so that they can be reused in any downstream evaluation process.
```
num_videos = 64
loader.load_and_cache_video_examples(num_videos)
```
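The cache-on-disk step could look something like the following sketch (the helper name, paths, and the injected `load_fn` are assumptions for illustration; the real `KineticsLoader` internals may differ): each pre-processed video is saved as a `.npy` file, so later evaluation runs can skip decoding the raw `mp4`s entirely.

```python
import os
import numpy as np

def load_cached_video(cache_dir, youtube_id, load_fn):
    """Load a pre-processed video from cache, creating the cache entry on a miss."""
    os.makedirs(cache_dir, exist_ok=True)
    cache_path = os.path.join(cache_dir, f"{youtube_id}.npy")
    if os.path.exists(cache_path):
        return np.load(cache_path)      # cache hit: skip mp4 decoding entirely
    video_np = load_fn(youtube_id)      # cache miss: decode and pre-process
    np.save(cache_path, video_np)
    return video_np
```

On a second call with the same id, `load_fn` is never invoked -- the expensive decode/crop/resize work is paid only once per video.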
The second step is an evaluation function that encapsulates all of the prediction steps we performed above. This function accepts a model and data loader class and infers on the requested number of videos, grouping them into the given `batch_size` after resampling to `num_frames` number of frames for each video. The results are stored in a Pandas DataFrame so that we can consider the overall model performance.
```
results_df = evaluate(
i3d400,
loader,
num_videos=num_videos,
batch_size=8,
top_n_results=5,
num_frames=100,
savefile="small_sample_results_i3d.csv"
)
results_df
accuracy_top_1 = compute_accuracy(results_df, num_top_classes = 1)
accuracy_top_5 = compute_accuracy(results_df, num_top_classes = 5)
```
To perform evaluation on larger datasets, please check [scripts/evaluate.py](scripts/evaluate.py). Instructions can be found in the [scripts/README.md](scripts/README.md) file.
**If this documentation includes code, including but not limited to, code examples, Cloudera makes this available to you under the terms of the Apache License, Version 2.0, including any required notices. A copy of the Apache License Version 2.0 can be found here.**
## Import packages
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import savgol_filter
import cline_analysis as ca
import pandas as pd
import seaborn as sns
import datetime
import os
from scipy.signal import medfilt
import functools
from scipy.optimize import bisect
from scipy import stats
sns.set_style("whitegrid")
sns.set_style("ticks")
%matplotlib qt
%config InlineBackend.figure_format = 'svg'
plt.matplotlib.rcParams['svg.fonttype'] = 'svgfont' # fonts will be recognized by Adobe Illustrator
```
## Load data
```
dirname = '/Users/zoltan/Dropbox/Channels/Fluvial/Purus_2/csv_files/'
fnames,clxs,clys,rbxs,lbxs,rbys,lbys,curvatures,ages,widths,dates = ca.load_data(dirname)
fnames
dates
```
## Get migration rate
```
ts1 = 0 # first timestep
ts2 = 1 # second timestep
d = dates[ts2]-dates[ts1]
years = d.days/365.0
x = np.array(clxs[ts1])
y = np.array(clys[ts1])
xn = np.array(clxs[ts2])
yn = np.array(clys[ts2])
migr_rate, migr_sign, p, q = ca.get_migr_rate(x,y,xn,yn,years,0)
migr_rate = medfilt(savgol_filter(migr_rate,11,3),kernel_size=5) # smoothing
curv,s = ca.compute_curvature(x,y)
curv = medfilt(savgol_filter(curv,71,3),kernel_size=5) # smoothing
# set intervals affected by cutoffs to NaN - specific to Purus 2 river segment
migr_rate[:1507] = np.NaN
plt.figure()
plt.plot(migr_rate)
```
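`ca.get_migr_rate` comes from the accompanying `cline_analysis` module and is not reproduced here. As a rough, hedged sketch of the underlying idea only (nearest-point correspondence is our simplification; the actual implementation also returns correspondence index arrays `p` and `q` between the two centerlines, which we use below):

```python
import numpy as np

def naive_migr_rate(x, y, xn, yn, years):
    """Nearest-point migration rate (m/yr) from an old to a new centerline.

    A deliberate simplification: real centerline correspondence is more
    involved than a nearest-point match.
    """
    old = np.column_stack([x, y])
    new = np.column_stack([xn, yn])
    # distance from every old point to every new point; take the minimum per old point
    d = np.sqrt(((old[:, None, :] - new[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1) / years

# two parallel straight "centerlines" 10 m apart, 5 years elapsed
x = np.linspace(0, 100, 50); y = np.zeros(50)
rate = naive_migr_rate(x, y, x, y + 10.0, years=5.0)
print(rate.mean())  # 2.0 m/yr everywhere
```

The sign of the migration (which bank the channel moved toward) is what `migr_sign` captures in the real routine; this sketch only yields magnitudes.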
## Read 'valid' inflection points and corresponding points of zero migration from CSV file
```
df = pd.read_csv('Purus_2_LT05_L1TP_233065_19870726_20170211_01_T1_inflection_and_zero_migration_indices.csv')
LZC = np.array(df['index of inflection point'])
LZM = np.array(df['index of zero migration'])
# indices of bends affected by low erodibility and cutoffs (these have been picked manually)
erodibility_inds = [12,28,40,42,79,91]
cutoff_inds = [0,1,2,3,4,5,64,65,83]
```
## Plot curvature and migration rate series side-by-side
```
# plot curvature and migration rate along the channel
W = np.nanmean(widths[0]) # mean channel width
fig, ax1 = plt.subplots(figsize=(25,4))
plt.tight_layout()
y1 = 0.6
y2 = 0.0
y3 = -0.87
y4 = -1.5
for i in range(0,len(LZC)-1,2):
    xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
    ycoords = [y1,y1,y2,y3,y4,y4,y3,y2]
    ax1.fill(xcoords,ycoords,color=[0.85,0.85,0.85],zorder=0)
deltas = 25.0
ax1.fill_between(s, 0, curv*W)
ax2 = ax1.twinx()
ax2.fill_between(s, 0, migr_rate, facecolor='green')
ax1.plot([0,max(s)],[0,0],'k--')
ax2.plot([0,max(s)],[0,0],'k--')
ax1.set_ylim(y4,y1)
ax2.set_ylim(-15,40)
ax1.set_xlim(s[LZC[0]],s[-1])
for i in erodibility_inds:
    xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
    ycoords = [y1,y1,y2,y3,y4,y4,y3,y2]
    ax1.fill(xcoords,ycoords,color=[1.0,0.85,0.85],zorder=0)
for i in cutoff_inds:
    xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
    ycoords = [y1,y1,y2,y3,y4,y4,y3,y2]
    ax1.fill(xcoords,ycoords,color=[0.85,1.0,0.85],zorder=0)
for i in range(len(LZC)-1):
    if np.sum(np.isnan(migr_rate[LZM[i]:LZM[i+1]]))>0:
        xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
        ycoords = [y1,y1,y2,y3,y4,y4,y3,y2]
        ax1.fill(xcoords,ycoords,color='w')
for i in range(len(LZC)-1):
    if np.sum(np.isnan(migr_rate[LZM[i]:LZM[i+1]]))>0:
        xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
        ycoords = [35,35,20.7145,0,-15,-15,0,20.7145]
        ax2.fill(xcoords,ycoords,color='w')
for i in range(0,len(LZC)-1,2):
    ax1.text(s[LZC[i]],0.5,str(i),fontsize=12)
# plot curvature and migration rate along the channel
W = np.nanmean(widths[0]) # mean channel width
xlim = s[LZC[0]]
frame_no = 1
while xlim<=s[-1]:
    fig, ax1 = plt.subplots(figsize=(10,5))
    y1 = 0.6
    y2 = 0.0
    y3 = -0.87
    y4 = -1.5
    for i in range(0,len(LZC)-1,2):
        xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
        ycoords = [y1,y1,y2,y3,y4,y4,y3,y2]
        ax1.fill(xcoords,ycoords,color=[0.85,0.85,0.85],zorder=0)
    deltas = 25.0
    ax1.fill_between(s, 0, curv*W)
    ax2 = ax1.twinx()
    ax2.fill_between(s, 0, migr_rate, facecolor='green')
    ax1.plot([0,max(s)],[0,0],'k--')
    ax2.plot([0,max(s)],[0,0],'k--')
    ax1.set_ylim(y4,y1)
    ax2.set_ylim(-15,40)
    # ax1.set_xlim(s[LZC[0]],s[-1])
    ax1.set_xlim(xlim,xlim+50000)
    ax1.set_xlabel('along-channel distance (m)',fontsize=12)
    ax1.set_ylabel('dimensionless curvature',fontsize=12)
    ax2.set_ylabel('migration rate (m/yr)',fontsize=12)
    for i in erodibility_inds:
        xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
        ycoords = [y1,y1,y2,y3,y4,y4,y3,y2]
        ax1.fill(xcoords,ycoords,color=[1.0,0.85,0.85],zorder=0)
    for i in cutoff_inds:
        xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
        ycoords = [y1,y1,y2,y3,y4,y4,y3,y2]
        ax1.fill(xcoords,ycoords,color=[0.85,1.0,0.85],zorder=0)
    for i in range(len(LZC)-1):
        if np.sum(np.isnan(migr_rate[LZM[i]:LZM[i+1]]))>0:
            xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
            ycoords = [y1,y1,y2,y3,y4,y4,y3,y2]
            ax1.fill(xcoords,ycoords,color='w')
    for i in range(len(LZC)-1):
        if np.sum(np.isnan(migr_rate[LZM[i]:LZM[i+1]]))>0:
            xcoords = [s[LZC[i]],s[LZC[i+1]],s[LZC[i+1]],s[LZM[i+1]],s[LZM[i+1]],s[LZM[i]],s[LZM[i]],s[LZC[i]]]
            ycoords = [35,35,20.7145,0,-15,-15,0,20.7145]
            ax2.fill(xcoords,ycoords,color='w')
    for i in range(0,len(LZC)-1,2):
        if (s[LZC[i]]>xlim) & (s[LZC[i]]<xlim+50000-100):
            ax1.text(s[LZC[i]],0.5,str(i),fontsize=12)
    fname = '/Users/zoltan/Dropbox/Python/Channels/movie_frames/'+'purus_curv_migr_rate%03d.png'%frame_no
    fig.savefig(fname) #, bbox_inches='tight')
    plt.close(fig)
    xlim = xlim+2000
    frame_no = frame_no+1
xlim
```
## Estimate lag between curvature and migration rate
```
window_length = 500
time_shifts = ca.get_time_shifts(migr_rate,curv,window_length)
plt.figure()
plt.plot(time_shifts);
# average lag
25.0*np.round(np.mean(time_shifts))
# average lag estimated from distances between inflection points and points of zero migration
# (this is what was used in the paper)
np.mean(25.0*(LZM-LZC))
```
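`ca.get_time_shifts` is part of the `cline_analysis` module and its windowed implementation is not shown here; the following is a hedged sketch of one way such a lag could be estimated (the function name and the brute-force shift search are our own assumptions): the migration-rate series is shifted against the curvature series, and the shift that maximizes their correlation is recorded.

```python
import numpy as np

def estimate_lag(a, b, max_lag):
    """Return the lag (in samples) by which b trails a, chosen to maximize correlation."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        # align the two series under the candidate shift
        if lag >= 0:
            x, y = a[:len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[:lag]
        corr = np.corrcoef(x, y)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# a synthetic signal and a copy delayed by 7 samples
t = np.linspace(0, 20, 500)
sig = np.sin(t)
delayed = np.roll(sig, 7)
print(estimate_lag(sig, delayed, max_lag=20))  # recovers the 7-sample delay
```

Multiplying the recovered sample lag by the 25 m along-channel spacing converts it into a distance, as in the cell above.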
## Estimate friction factor Cf
```
# first we need a continuous channel segment (e.g., no NaNs due to cutoffs)
q=np.array(q)
p=np.array(p)
i1 = 1507
i2 = len(x)-1
i1n = p[np.where(q==i1)[0][0]]
i2n = p[np.where(q==i2)[0][0]]
xt = x[i1:i2]
yt = y[i1:i2]
xnt = xn[i1n:i2n]
ynt = yn[i1n:i2n]
plt.figure()
plt.plot(xt,yt)
plt.plot(xnt,ynt)
plt.axis('equal')
migr_rate_t, migr_sign_t, pt, qt = ca.get_migr_rate(xt,yt,xnt,ynt,years,0)
plt.figure()
plt.plot(migr_rate_t);
# this might take a while to run
kl = 20.0 # preliminary kl value (guesstimate)
k = 1
D = (W/18.8)**0.7092 # depth in meters (from width)
dx,dy,ds,s = ca.compute_derivatives(xt,yt)
curv_t, s = ca.compute_curvature(xt,yt)
curv_t = medfilt(savgol_filter(curv_t,71,3),kernel_size=5) # smoothing
migr_rate_t = medfilt(savgol_filter(migr_rate_t,71,3),kernel_size=5)
get_friction_factor_1 = functools.partial(ca.get_friction_factor,curvature=curv_t,migr_rate=migr_rate_t,
kl=kl,W=W, k=k, D=D, s=s)
Cf_opt = bisect(get_friction_factor_1, 0.0002, 0.1)
print(Cf_opt)
Cf_opt = 0.0036111328125
```
## Estimate migration rate constant kl
```
# minimize the error between actual and predicted migration rates (using the 75th percentile)
errors = []
curv_t, s = ca.compute_curvature(xt,yt)
for i in np.arange(10,30):
    print(i)
    R1 = ca.get_predicted_migr_rate(curv_t,W=W,k=1,Cf=Cf_opt,D=D,kl=i,s=s)
    errors.append(np.abs(np.percentile(np.abs(R1),75)-np.percentile(np.abs(migr_rate_t[1:-1]),75)))
plt.figure()
plt.plot(np.arange(10,30),errors);
kl_opt = 22.0 # the error is at minimum for kl = 22.0
1212/25 # average lag in meters divided by the 25 m sampling interval gives the lag in samples (~48)
```
## Plot actual migration rate against nominal migration rate
```
# kernel density and scatterplot of actual vs. nominal migration rate
w = np.nanmedian(widths[0])
curv_nodim = w*curv*kl_opt
lag = 48
plt.figure(figsize=(8,8))
sns.kdeplot(curv_nodim[:-lag][np.isnan(migr_rate[lag:])==0], migr_rate[lag:][np.isnan(migr_rate[lag:])==0],
n_levels=20,shade=True,cmap='Blues',shade_lowest=False)
plt.scatter(curv_nodim[:-lag][::20],migr_rate[lag:][::20],c='k',s=15)
max_x = 15
plt.xlim(-max_x,max_x)
plt.ylim(-max_x,max_x)
plt.plot([-max_x,max_x],[-max_x,max_x],'k--')
plt.xlabel('nominal migration rate (m/year)', fontsize=14)
plt.ylabel('actual migration rate (m/year)', fontsize=14)
# get correlation coefficient for relationship between curvature and migration rate
slope, intercept, r_value, p_value, slope_std_error = stats.linregress(curv_nodim[:-lag][np.isnan(migr_rate[lag:])==0],
                                                                       migr_rate[lag:][np.isnan(migr_rate[lag:])==0])
print(r_value)
print(r_value**2)
print(p_value)
# number of data points used in analysis
len(curv_nodim[:-lag][np.isnan(migr_rate[lag:])==0])
# compute predicted migration rates
D = (w/18.8)**0.7092 # depth in meters (from width)
dx,dy,ds,s = ca.compute_derivatives(x,y)
R1 = ca.get_predicted_migr_rate(curv,W=w,k=1,Cf=Cf_opt,D=D,kl=kl_opt,s=s)
# plot actual and predicted migration rates
plt.figure()
plt.plot(s,migr_rate)
plt.plot(s,R1,'r')
# get correlation coefficient for relationship between actual and predicted migration rate
m_nonan = migr_rate[(np.isnan(R1)==0)&(np.isnan(migr_rate)==0)]
R_nonan = R1[(np.isnan(R1)==0)&(np.isnan(migr_rate)==0)]
slope, intercept, r_value, p_value, slope_std_error = stats.linregress(R_nonan,m_nonan)
print(r_value)
print(r_value**2)
print(p_value)
# 90th percentile of migration rate
np.percentile(np.abs(m_nonan),90)
# plot actual vs. predicted migration rate
max_m = 15
plt.figure(figsize=(8,8))
sns.kdeplot(R_nonan,m_nonan,n_levels=10,shade=True,cmap='Blues',shade_lowest=False)
plt.plot([-max_m,max_m],[-max_m,max_m],'k--')
plt.scatter(R_nonan[::20],m_nonan[::20],c='k',s=15)
plt.xlim(-max_m,max_m)
plt.ylim(-max_m,max_m)
plt.xlabel('predicted migration rate (m/year)', fontsize=14)
plt.ylabel('actual migration rate (m/year)', fontsize=14)
# add points affected by cutoffs and low erodibility
for i in erodibility_inds:
    plt.scatter(R1[LZC[i]:LZC[i+1]][::10],migr_rate[LZC[i]:LZC[i+1]][::10],c='r')
for i in cutoff_inds:
    plt.scatter(R1[LZC[i]:LZC[i+1]][::10],migr_rate[LZC[i]:LZC[i+1]][::10],c='g')
```
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
def split_sequence(sequence, n_steps):
    X, y = list(), list()
    for i in range(len(sequence)):
        end_ix = i + n_steps
        if end_ix > len(sequence) - 1:
            break
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)
raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]
n_steps = 3
X, y = split_sequence(raw_seq, n_steps)
for i in range(len(X)):
    print(X[i], y[i])
model = keras.models.Sequential()
model.add(keras.layers.LSTM(50,activation='relu', input_shape=(3,1)))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam',loss='mse')
X = X.reshape(-1,3,1)
model.fit(X,y,epochs=200)
x_input = np.array([70,80,90])
x_input = x_input.reshape(-1,3,1)
y_hat = model.predict(x_input)
y_hat
model2 = keras.models.Sequential()
model2.add(keras.layers.LSTM(50, activation='relu', return_sequences=True, input_shape=(3,1)))
model2.add(keras.layers.LSTM(50, activation='relu'))
model2.add(keras.layers.Dense(1))
model2.compile(optimizer='adam', loss='mse')
model2.fit(X,y, epochs=200)
y_hat = model2.predict(x_input)
print(y_hat)
model3 = keras.models.Sequential()
model3.add(keras.layers.Bidirectional(keras.layers.LSTM(50, activation='relu'), input_shape=(3,1)))
model3.add(keras.layers.Dense(1))
model3.compile(optimizer='adam', loss='mse')
model3.fit(X, y, epochs=200)
y_hat = model3.predict(x_input)
print(y_hat)
n_steps = 4
X, y = split_sequence(raw_seq, n_steps)
n_features = 1
n_seq = 2
n_steps = 2
X = X.reshape((X.shape[0], n_seq, n_steps, n_features))
model4 = keras.models.Sequential()
model4.add(keras.layers.TimeDistributed(
keras.layers.Conv1D(filters=64, kernel_size=1, activation='relu'), input_shape=(None, 2, 1)))
model4.add(keras.layers.TimeDistributed(
keras.layers.MaxPooling1D(pool_size=2)))
model4.add(keras.layers.TimeDistributed(
keras.layers.Flatten()))
model4.add(keras.layers.LSTM(50, activation='relu'))
model4.add(keras.layers.Dense(1))
model4.compile(optimizer='adam', loss='mse')
model4.fit(X, y, epochs=500)
x_input = np.array([60, 70, 80, 90])
x_input = x_input.reshape(1,2,2,1)
y_hat = model4.predict(x_input)
print(y_hat)
X,y = split_sequence(raw_seq, 4)
X = X.reshape(-1, 2, 1, 2, 1)
model5 = keras.models.Sequential()
model5.add(keras.layers.ConvLSTM2D(filters=64, kernel_size=(1,2), activation='relu', input_shape=(2,1,2,1)))
model5.add(keras.layers.Flatten())
model5.add(keras.layers.Dense(1))
model5.compile(optimizer='adam', loss='mse')
model5.fit(X, y, epochs=500)
x_input = np.array([60, 70, 80, 90])
x_input = x_input.reshape(-1,2,1,2,1)
y_hat = model5.predict(x_input)
print(y_hat)
in_seq1 = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90])
in_seq2 = np.array([15, 25, 35, 45, 55, 65, 75, 85, 95])
out_seq = np.array([ in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])
in_seq1 = in_seq1.reshape((len(in_seq1),1))
in_seq2 = in_seq2.reshape((len(in_seq2),1))
out_seq = out_seq.reshape((len(out_seq),1))
dataset = np.hstack((in_seq1, in_seq2, out_seq))
print(dataset)
def split_sequences(sequences, n_steps):
    X, y = list(), list()
    for i in range(len(sequences)):
        end_ix = i + n_steps
        if end_ix > len(sequences):
            break
        seq_x, seq_y = sequences[i:end_ix, :-1], sequences[end_ix-1,-1]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)
X, y = split_sequences(dataset, 3)
print(X.shape, y.shape)
for i in range(len(X)):
    print(X[i], y[i])
model6 = keras.models.Sequential()
model6.add(keras.layers.LSTM(50, activation='relu', input_shape=(3,2)))
model6.add(keras.layers.Dense(1))
model6.compile(optimizer='adam', loss='mse')
model6.fit(X,y,epochs=200)
x_input = np.array([[80,85],[90,95],[100,105]])
x_input = x_input.reshape(1,3,2)
yhat = model6.predict(x_input)
print(yhat)
def split_sequences(sequences, n_steps):
    X, y = list(), list()
    for i in range(len(sequences)):
        end_ix = i + n_steps
        if end_ix > len(sequences)-1:
            break
        seq_x, seq_y = sequences[i:end_ix,:], sequences[end_ix,:]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)
in_seq1 = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90])
in_seq2 = np.array([15, 25, 35, 45, 55, 65, 75, 85, 95])
out_seq = np.array([in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])
in_seq1 = in_seq1.reshape(len(in_seq1),1)
in_seq2 = in_seq2.reshape(len(in_seq2),1)
out_seq = out_seq.reshape(len(out_seq),1)
dataset = np.hstack((in_seq1,in_seq2, out_seq))
X,y = split_sequences(dataset, 3)
print(X.shape, y.shape)
for i in range(len(X)):
    print(X[i], y[i])
model7 = keras.models.Sequential()
model7.add(keras.layers.LSTM(100, activation='relu', return_sequences=True, input_shape=(3,3)))
model7.add(keras.layers.LSTM(100, activation='relu'))
model7.add(keras.layers.Dense(3))
model7.compile(optimizer='adam', loss='mse')
model7.fit(X, y, epochs=400)
x_input = np.array([[70, 75, 145], [80, 85, 165], [90, 95, 185]])
x_input = x_input.reshape(-1, 3, 3)
yhat = model7.predict(x_input)
print(yhat)
raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]
def split_sequence(sequence, n_steps_in, n_steps_out):
    X, y = list(), list()
    for i in range(len(sequence)):
        end_ix = i + n_steps_in
        out_end_ix = end_ix + n_steps_out
        if out_end_ix > len(sequence):
            break
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix:out_end_ix]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)
X, y = split_sequence(raw_seq, 3, 2)
for i in range(len(X)):
    print(X[i], y[i])
X = X.reshape(*X.shape,1)
model8 = keras.models.Sequential()
model8.add(keras.layers.LSTM(100,
activation='relu',
return_sequences=True,
input_shape=(3,1)))
model8.add(keras.layers.LSTM(100,
activation='relu',
))
model8.add(keras.layers.Dense(2))
model8.compile(optimizer='adam',loss='mse')
model8.fit(X,y,epochs=500)
x_input = np.array([70, 80, 90])
x_input = x_input.reshape(-1, 3, 1)
yhat = model8.predict(x_input)
print(yhat)
def split_sequence(sequence, n_steps_in, n_steps_out):
    X, y = list(), list()
    for i in range(len(sequence)):
        end_ix = i + n_steps_in
        out_end_ix = end_ix + n_steps_out
        if out_end_ix > len(sequence):
            break
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix:out_end_ix]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)
X, y = split_sequence(raw_seq, 3, 2)
X = X.reshape(*X.shape ,1)
y = y.reshape(*y.shape ,1)
model10 = keras.models.Sequential()
model10.add(keras.layers.LSTM(100, activation='relu', input_shape=(3, 1)))
model10.add(keras.layers.RepeatVector(2))
model10.add(keras.layers.LSTM(100, activation='relu', return_sequences=True))
model10.add(keras.layers.TimeDistributed(
keras.layers.Dense(1)))
model10.compile(optimizer='adam', loss='mse')
model10.fit(X, y, epochs=100)
x_input = np.array([70, 80, 90])
x_input = x_input.reshape(1, 3, 1)
yhat = model10.predict(x_input)
print(yhat)
def split_sequences(sequences, n_steps_in, n_steps_out):
    X, y = list(), list()
    for i in range(len(sequences)):
        end_ix = i + n_steps_in
        out_end_ix = end_ix + n_steps_out-1
        if out_end_ix > len(sequences):
            break
        seq_x, seq_y = sequences[i:end_ix, :-1], sequences[end_ix-1:out_end_ix, -1]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)
in_seq1 = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90])
in_seq2 = np.array([15, 25, 35, 45, 55, 65, 75, 85, 95])
out_seq = np.array([in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])
in_seq1 = in_seq1.reshape(-1,1)
in_seq2 = in_seq2.reshape(-1,1)
out_seq = out_seq.reshape(-1,1)
dataset = np.hstack((in_seq1, in_seq2, out_seq))
X,y = split_sequences(dataset, 3, 2)
print(X.shape, y.shape)
for i in range(len(X)):
    print(X[i], y[i])
model11 = keras.models.Sequential()
model11.add(keras.layers.LSTM(100, activation='relu',return_sequences=True, input_shape=(3,2)))
model11.add(keras.layers.LSTM(100, activation='relu'))
model11.add(keras.layers.Dense(2))
model11.compile(optimizer='adam', loss='mse')
model11.fit(X, y, epochs=200)
x_input = np.array([[70,75], [80,85], [90,95]])
x_input = x_input.reshape(1,3,2)
yhat = model11.predict(x_input)
print(yhat)
def split_sequences(sequences, n_steps_in, n_steps_out):
    X, y = list(), list()
    for i in range(len(sequences)):
        end_ix = i + n_steps_in
        out_end_ix = end_ix + n_steps_out
        if out_end_ix > len(sequences):
            break
        seq_x, seq_y = sequences[i:end_ix,:], sequences[end_ix:out_end_ix,:]
        X.append(seq_x)
        y.append(seq_y)
    return np.array(X), np.array(y)
in_seq1 = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90])
in_seq2 = np.array([15, 25, 35, 45, 55, 65, 75, 85, 95])
out_seq = np.array([in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])
in_seq1 = in_seq1.reshape(-1,1)
in_seq2 = in_seq2.reshape(-1,1)
out_seq = out_seq.reshape(-1,1)
dataset = np.hstack((in_seq1, in_seq2, out_seq))
X,y = split_sequences(dataset, 3, 2)
print(X.shape, y.shape)
for i in range(len(X)):
    print(X[i], y[i])
model12 = keras.models.Sequential()
model12.add(keras.layers.LSTM(200, activation='relu', input_shape=(3,3)))
model12.add(keras.layers.RepeatVector(2))
model12.add(keras.layers.LSTM(200, activation='relu', return_sequences=True))
model12.add(keras.layers.TimeDistributed(
keras.layers.Dense(3)))
model12.compile(optimizer='adam', loss='mse')
model12.fit(X, y, epochs=300)
x_input = np.array([[60, 65, 125], [70, 75, 145], [80, 85, 165]])
x_input = x_input.reshape(1, 3, 3)
yhat = model12.predict(x_input)
print(yhat)
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
# Configurable parameters for pure pursuit
+ How fast do you want the robot to move? It is fixed at $v_{max}$ in this exercise
+ When can we declare the goal has been reached?
+ What is the lookahead distance? Determines the next position on the reference path that we want the vehicle to catch up to
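The control law the tracker applies below follows from the standard pure pursuit geometry (sketched here for reference; the sign convention assumes the robot has been rotated to face the $+y$ axis, as in the `update` method, with $X$ the cross-track offset of the anchor point and $L$ the distance to it):

```latex
% anchor point (X, Y) expressed in the robot-centric frame, at distance L:
L^{2} = X^{2} + Y^{2}
% the circular arc through the robot and the anchor has curvature
\kappa = \frac{2X}{L^{2}}
% so at forward speed v the commanded turn rate is
\omega = -\,v\,\kappa = -\frac{2\,v\,X}{L^{2}}
```

The negative sign reflects that an anchor to the right (positive $X$) demands a clockwise turn.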
```
vmax = 0.75
goal_threshold = 0.05
lookahead = 3.0
#You know what to do!
def simulate_unicycle(pose, v, w, dt=0.1):
    x, y, t = pose
    return x + v*np.cos(t)*dt, y + v*np.sin(t)*dt, t + w*dt
x_ref=[]
y_ref=[]
class PurePursuitTracker(object):
    global x_ref, y_ref
    def __init__(self, x, y, v, lookahead=3.0):
        """
        Tracks the path defined by x, y at velocity v
        x and y must be numpy arrays
        v and lookahead are floats
        """
        self.length = len(x)
        self.ref_idx = 0  # index on the path that the tracker is to track
        self.lookahead = lookahead
        self.x, self.y = x, y
        self.v, self.w = v, 0

    def update(self, xc, yc, theta):
        """
        Input: xc, yc, theta - current pose of the robot
        Update v, w based on current pose
        Returns True if trajectory is over.
        """
        # End-of-path check. Two conditions must be satisfied:
        # 1. ref_idx exceeds the length of the trajectory
        # 2. the reference point is within goal_threshold of the goal
        if self.ref_idx >= self.length:
            return True
        ref_x, ref_y = self.x[self.ref_idx], self.y[self.ref_idx]
        x_ref.append(ref_x)  # Used for animation
        y_ref.append(ref_y)  # Used for animation
        goal_x, goal_y = self.x[-1], self.y[-1]
        if np.linalg.norm([ref_x-goal_x, ref_y-goal_y]) < goal_threshold:
            return True
        # End of path has not been reached:
        # advance ref_idx once the reference point falls within the lookahead distance
        if np.hypot(ref_x-xc, ref_y-yc) < self.lookahead:
            self.ref_idx = self.ref_idx + 1
        # Find the anchor point
        # this is the line we drew between (0, 0) and (x, y)
        anchor = np.asarray([ref_x - xc, ref_y - yc])
        # Right now this vector is drawn from the current robot pose;
        # we have to rotate the anchor into the frame (0, 0, pi/2)
        theta = np.pi/2 - theta
        rot = np.asarray([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])  # rotation matrix
        anchor = np.dot(rot, anchor)
        L = np.sqrt(anchor[0]**2 + anchor[1]**2)  # distance to the reference point
        X = anchor[0]  # cross-track error
        # from the derivation in the notes, plug in the formula for omega
        self.w = -1*(2 * self.v * X)/(L**2)
        return False
```
## Visualize given trajectory
```
x = np.arange(0, 50, 0.5)
y = [np.sin(idx / 5.0) * idx / 2.0 for idx in x]
#write code here
plt.plot(x, y)
plt.grid()
plt.figure()
```
## Run the tracker simulation
1. Instantiate the tracker class
2. Initialize some starting pose
3. Simulate robot motion 1 step at a time - get $v$, $\omega$ from tracker, predict new pose using $v$, $\omega$, current pose in simulate_unicycle()
4. Stop simulation if tracker declares that end-of-path is reached
5. Record all parameters
```
#write code to instantiate the tracker class
tracker = PurePursuitTracker(x, y, vmax)
pose = -1, 0, np.pi/2 #arbitrary initial pose
x0,y0,t0 = pose # record it for plotting
traj =[]
while True:
#write the usual code to obtain successive poses
pose = simulate_unicycle(pose, tracker.v, tracker.w)
xa, ya, ta = pose
if tracker.update(xa, ya, ta):
print("ARRIVED!!")
break
traj.append([xa, ya, ta, tracker.w, tracker.ref_idx])
xs,ys,ts,ws,ids = zip(*traj)
plt.figure()
plt.plot(x,y,label='Reference')
plt.quiver(x0,y0, np.cos(t0), np.sin(t0),scale=12)
plt.plot(xs,ys,label='Tracked')
x0,y0,t0 = pose
plt.quiver(x0,y0, np.cos(t0), np.sin(t0),scale=12)
plt.title('Pure Pursuit trajectory')
plt.legend()
plt.grid()
```
## Visualize curvature
```
plt.figure()
plt.title('Curvature')
plt.plot(np.abs(ws))
plt.grid()
```
## Animate
Make a video that plots the current pose of the robot and the reference pose it is trying to track. You can use `FuncAnimation` in matplotlib.
The first line sets up the figure and its axis, and the second line fixes the axis limits. Setting the limits in advance stops any rescaling of the limits that may make the animation jumpy and unusable.
```
from matplotlib.animation import FuncAnimation
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import rc
from IPython.display import HTML
plt.rcParams["animation.html"] = "jshtml"
rc('animation', embed_limit=4096)
fig, ax = plt.subplots()
ax.set_xlim(-10, 50)
ax.set_ylim(-22,22)
line = ax.plot(0,0)[0]
tracked = ax.plot(0,0)[0]
ball = plt.Circle((x0, y0), 0.5)
ball_tracked = plt.Circle((x0, y0), 0.5)
ax.add_patch(ball)
ax.add_patch(ball_tracked)
x_data=[]
y_data=[]
x_track=[]
y_track=[]
def init():
ax.set_xlim(-10, 50)
ax.set_ylim(-22,22)
    line.set_data(x_data, y_data)
    tracked.set_data(x_track, y_track)
    ball.set_center((xs[0], ys[0]))
    ball_tracked.set_center((x_ref[0], y_ref[0]))
    return line, tracked, ball, ball_tracked
def animate(i):
# Plot bot trajectory
x_data.append(xs[i])
y_data.append(ys[i])
ball.set_center((xs[i], ys[i]))
line.set_xdata(x_data)
line.set_ydata(y_data)
# Plot reference trajectory the bot is tracking
x_track.append(x_ref[i])
y_track.append(y_ref[i])
ball_tracked.set_center((x_ref[i], y_ref[i]))
tracked.set_xdata(x_track)
tracked.set_ydata(y_track)
animation = FuncAnimation(fig, animate, frames=min(len(xs), len(x_ref)), interval=15, init_func=init)
# animation
HTML(animation.to_html5_video())
```
## Effect of noise in simulations
What happens if you add a bit of Gaussian noise to the simulate_unicycle() output? Is the tracker still robust?
The noise signifies that $v$, $\omega$ commands did not get realized exactly
```
```
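One way to explore this is to wrap the unicycle step with Gaussian perturbations on the commanded $v$ and $\omega$. A minimal sketch is below; the step size `dt` and the noise levels `sigma_v`, `sigma_w` are illustrative assumptions, not values from the notebook:

```python
import numpy as np

def simulate_unicycle_noisy(pose, v, w, dt=0.1, sigma_v=0.05, sigma_w=0.02):
    """Unicycle forward step with Gaussian noise on the commanded v, w.

    dt, sigma_v and sigma_w are illustrative values.
    """
    x, y, theta = pose
    v_actual = v + np.random.normal(0, sigma_v)  # realized linear velocity
    w_actual = w + np.random.normal(0, sigma_w)  # realized angular velocity
    return (x + v_actual * np.cos(theta) * dt,
            y + v_actual * np.sin(theta) * dt,
            theta + w_actual * dt)
```

With moderate noise the pure-pursuit tracker typically remains robust, since the cross-track error is re-measured at every step; large `sigma_w`, however, can make the robot repeatedly overshoot the lookahead circle.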
---
```
import json
json.loads('{"coef0":0}')
#%%writefile ../../src/data/data_utils.py
# %load ../../src/data/data_utils.py
# %%writefile ../../src/data/data_utils.py
"""
Author: Jim Clauwaert
Created in the scope of my PhD
"""
import pandas as pd
import numpy as np
import itertools
def CreatePairwiseRankData(dfDataset, inv=False):
"""Create pairwise ranked data from dataset. All possible combinations are included.
(see README for default dataset layout)
Parameters
-----------
dfDataset : DataFrame
Dataframe containing at least 'ID', 'sequence', 'mean_score', '35boxstart' and '10boxstart'.
'mean_score_sd' is an optional column
Returns
--------
DF : Dataframe
Dataframe containing paired data with arguments found in original dataframe (subscripted with '_1' and '_2')
and rank. Rank is defined as 1 for samples in which 'mean_score_1' > 'mean_score_2' and -1 in other cases
"""
sampleCount = dfDataset.shape[0]
DF = pd.DataFrame(index=range(int(sampleCount*(sampleCount-1)/2)),
columns=[])
ZIP = list(itertools.combinations(dfDataset['ID'],2))
DF['ID_1'] = [item[0] for item in ZIP]
DF['ID_2'] = [item[1] for item in ZIP]
DF['sequence_1'] = [dfDataset[dfDataset['ID']==x]['sequence'].values[0] for x in DF['ID_1']]
DF['sequence_2'] = [dfDataset[dfDataset['ID']==x]['sequence'].values[0] for x in DF['ID_2']]
DF['mean_score_1'] = [dfDataset[dfDataset['ID']==x]['mean_score'].values[0] for x in DF['ID_1']]
DF['mean_score_2'] = [dfDataset[dfDataset['ID']==x]['mean_score'].values[0] for x in DF['ID_2']]
if 'mean_score_sd' in dfDataset.columns:
DF['mean_score_sd_1'] = [dfDataset[dfDataset['ID']==x]['mean_score_sd'].values[0] for x in DF['ID_1']]
DF['mean_score_sd_2'] = [dfDataset[dfDataset['ID']==x]['mean_score_sd'].values[0] for x in DF['ID_2']]
DF['35boxstart_1'] = [dfDataset[dfDataset['ID']==x]['35boxstart'].values[0] for x in DF['ID_1']]
DF['35boxstart_2'] = [dfDataset[dfDataset['ID']==x]['35boxstart'].values[0] for x in DF['ID_2']]
DF['10boxstart_1'] = [dfDataset[dfDataset['ID']==x]['10boxstart'].values[0] for x in DF['ID_1']]
DF['10boxstart_2'] = [dfDataset[dfDataset['ID']==x]['10boxstart'].values[0] for x in DF['ID_2']]
    if not inv:
DF['rank'] = [1 if x>(DF.iloc[i]['mean_score_2']) else -1 for i, x in enumerate(DF['mean_score_1']) ]
else:
DF['rank'] = [1 if x<(DF.iloc[i]['mean_score_2']) else -1 for i, x in enumerate(DF['mean_score_1']) ]
return DF
dataset = pd.read_csv("../../data/external/brewster_lib.csv")
test = CreatePairwiseRankData(dataset, inv=True)
np.unique(test[['ID_1']].values).size
test.to_csv('pw_brewster_prom_lib.csv', index=False)
'2' in test.columns
import matplotlib
import numpy as np
import math
import itertools
import sklearn
import warnings
import pandas as pd
import sklearn.linear_model
import matplotlib.pyplot as plt
warnings.filterwarnings('ignore')
%matplotlib inline
```
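Stripped of the DataFrame bookkeeping, the pairwise construction above amounts to ranking every 2-combination of IDs by score. A minimal pure-Python sketch with toy scores (illustrative values only):

```python
import itertools

# Toy scores standing in for the 'mean_score' column (illustrative values).
scores = {'a': 3.0, 'b': 1.0, 'c': 2.0}

# One row per unordered pair, ranked 1 if the first ID scores higher, else -1.
pairs = [(i, j, 1 if scores[i] > scores[j] else -1)
         for i, j in itertools.combinations(scores, 2)]
```

This yields `C(n, 2)` rows for `n` samples, matching the `sampleCount*(sampleCount-1)/2` index length used above.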
---
```
from sklearn import datasets
import pandas as pd
# Load the boston house-prices dataset (regression).
boston = datasets.load_boston()
boston_target_name = 'MEDV'
boston_features_names = boston.feature_names
boston_df = pd.DataFrame(boston.data, columns=boston_features_names)
boston_df[boston_target_name] = boston.target
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
seed = 123456
X, y = boston_df[boston_features_names], boston_df[boston_target_name]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=seed)
pipeline = Pipeline([
('Xgbr', XGBRegressor(objective='reg:linear',
colsample_bytree=0.3,
learning_rate=0.1,
max_depth=3,
alpha=10,
n_estimators=10,
seed=seed))
])
pipeline.fit(X_train, y_train)
# Predict the first row of test data, the result should be identical to one produced by PMML
print(X_test.head(1))
pipeline.predict(X_test.head(1))
from nyoka import xgboost_to_pmml
# Export the pipeline model into PMML
model = 'xgbr_pmml.pmml'
xgboost_to_pmml(pipeline, boston_features_names, boston_target_name, model)
from daas_client import DaasClient
# Change the URL and credentials below to match your DaaS server
url = 'https://192.168.64.3:31753'
username = 'admin'
password = 'password'
project = 'Examples'
# Initiate a client of DaaS server, and set the created "Examples" project
client = DaasClient(url, username, password)
if not client.project_exists(project):
client.create_project(project, 'examples', 'This is an example project')
client.set_project(project)
from pprint import pprint
model_name = 'pmml-reg'
# Publish the built model into DaaS
publish_resp = client.publish(model,
name=model_name,
mining_function='regression',
x_test=X_test,
y_test=y_test,
description='A PMML regression model')
pprint(publish_resp)
# Try to test the published model
test_resp = client.test(model_name, model_version=publish_resp['model_version'])
pprint(test_resp)
# Call the test REST API above, 'model_name' is required in payload because the test runtime serves multiple models
# in a project.
import requests
bearer_token = 'Bearer {token}'.format(token=test_resp['access_token'])
payload = {
'args': {'X': [{'AGE': 89.5,
'B': 396.9,
'CHAS': 0.0,
'CRIM': 22.5971,
'DIS': 1.5184,
'INDUS': 18.1,
'LSTAT': 31.99,
'NOX': 0.7,
'PTRATIO': 20.2,
'RAD': 24.0,
'RM': 5.0,
'TAX': 666.0,
'ZN': 0.0}],
'model_name': model_name,
'model_version': publish_resp['model_version']}}
response = requests.post(test_resp['endpoint_url'],
headers={'Authorization': bearer_token},
json=payload,
verify=False)
pprint(response.json())
# Deploy the published model into product
deploy_resp = client.deploy(model_name,
deployment_name=model_name + '-svc',
model_version=publish_resp['model_version'])
pprint(deploy_resp)
# Call the product REST API above, the deployment runtime(s) serve the deployed model dedicatedly.
deploy_bearer_token = 'Bearer {token}'.format(token=deploy_resp['access_token'])
deploy_payload = {'args': {'X': [{'AGE': 89.5,
'B': 396.9,
'CHAS': 0.0,
'CRIM': 22.5971,
'DIS': 1.5184,
'INDUS': 18.1,
'LSTAT': 31.99,
'NOX': 0.7,
'PTRATIO': 20.2,
'RAD': 24.0,
'RM': 5.0,
'TAX': 666.0,
'ZN': 0.0}]}}
response = requests.post(deploy_resp['endpoint_url'],
headers={'Authorization': deploy_bearer_token},
json=deploy_payload,
verify=False)
pprint(response.json())
```
---
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime, timedelta, date
from mpl_toolkits.axes_grid1 import make_axes_locatable
# from matplotlib.ticker import FuncFormatter
import geopandas as gpd
ox_data_url = r'https://ocgptweb.azurewebsites.net/CSVDownload'
df = pd.read_csv(ox_data_url, parse_dates=['Date'], dayfirst=False)
df = df.loc[df['Date'] <= datetime(2020, 4, 13), :]
df.head()
def nuss_style_fun(fig, ax, title, author_line=True):
#remove top and right frame parts
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
# set left and bottom axis to grey
ax.spines['left'].set_color('grey')
ax.spines['bottom'].set_color('grey')
# set ticks to grey
ax.tick_params(axis='x', colors='grey')
ax.tick_params(axis='y', colors='grey')
#set labels to grey
ax.yaxis.label.set_color('grey')
ax.xaxis.label.set_color('grey')
# align axis labels with axis ends
ax.set_xlabel(xlabel=None,
position=[0, 0],
horizontalalignment='left',
color='grey',
size=14)
ax.set_ylabel(ylabel=None,
position=[0, 1],
horizontalalignment='right',
color='grey',
size=14)
#align title
ax.set_title(label=title,
loc='left',
color=(0.41, 0.41, 0.41),
size=20)
#author line
if author_line:
fig.text(0.99, 0.01, '@rikunert', color='lightgrey', style='italic',
horizontalalignment='right')
return fig, ax
def draw_cases(x, y, color, title, label=None, xlabel='Date', ylabel='Cases', text1_y=12, text2_y=12, drawstyle='default'):
fig, ax = plt.subplots(figsize=[10.67, 5.33])
sns.lineplot(x=x,
y=y,
marker='o',
linewidth=1,
color=color,
drawstyle=drawstyle,
label=label,
ax=ax)
fig, ax = nuss_style_fun(fig, ax, title=title)
ax.set(xlabel='Date',
ylabel=ylabel)
ax.xaxis.set_ticks(np.arange(start=x.min(),
stop=max(x.max(), datetime.strptime('2020-04-19', '%Y-%m-%d')),
step=timedelta(days=14)))
if True:
ax.text(x='2020-03-16',
y=text1_y,
s='Soft lockdown',
horizontalalignment='right',
verticalalignment='top',
rotation='vertical',
color='orange',
size=14)
ax.axvline('2020-03-16', linestyle='--', color='orange', linewidth=1)
ax.text(x='2020-04-19',
y=text2_y,
s='Lockdown eased?',
horizontalalignment='right',
verticalalignment='top',
rotation='vertical',
color='orange',
size=14)
ax.axvline('2020-04-19', linestyle='--', color='orange', linewidth=1)
return fig, ax
df.columns
df['CountryName'].unique()
country_of_interest = 'Germany'
# country_of_interest = 'United_Kingdom'
# country_of_interest = 'United_States_of_America'
# country_of_interest = 'Sweden'
minimal_number_of_cases = 10
mask_country = df['CountryName'] == country_of_interest
mask_dates = df.index >= df[mask_country].index[df.loc[mask_country, 'ConfirmedCases'] >= minimal_number_of_cases].min()
mask = mask_country & mask_dates
df.loc[mask, :].head(50)
df.loc[3544, :]
fig, ax = draw_cases(x=df.loc[mask, 'Date'],
y=df.iloc[::-1].loc[mask, 'ConfirmedCases'],
color='navy',
title='Growth of German Covid-19 cases no longer exponential',
ylabel='Cumulative cases',
text1_y=df.iloc[::-1].loc[mask, 'ConfirmedCases'].max(),
text2_y=df.iloc[::-1].loc[mask, 'ConfirmedCases'].max())
fig.savefig('D_cum_cases_ox.png', dpi=250, bbox_inches='tight')
fig, ax = draw_cases(x=df.loc[mask, 'Date'],
y=df.iloc[::-1].loc[mask, 'StringencyIndexForDisplay'],
color='navy',
title='Germany\'s restriction severity in response to Covid19',
ylabel='Stringency Index of government response',
text1_y=df.iloc[::-1].loc[mask, 'StringencyIndex'].max(),
text2_y=df.iloc[::-1].loc[mask, 'StringencyIndex'].max(),
drawstyle='steps-post')
ax.set(ylim=[-0.5, 100])
fig.savefig('D_restriction_severity.png', dpi=250, bbox_inches='tight')
world = gpd.read_file('C:\\Users\\Richard\\Desktop\\python\\corona\\ne_10m_admin_0_countries\\ne_10m_admin_0_countries.shp')
world.columns
world[world['SOVEREIGNT']=='Czechia']
world_updated = world.merge(right=df.loc[df['Date']==df['Date'].max(), ['CountryName', 'CountryCode', 'StringencyIndexForDisplay']],
how='left',
left_on='ADM0_A3',
right_on='CountryCode',
suffixes=('', '')).copy()
params = {"text.color" : (0.41, 0.41, 0.41),
'axes.labelcolor' : (0.41, 0.41, 0.41),
"xtick.color" : (0.41, 0.41, 0.41),
"ytick.color" : (0.41, 0.41, 0.41),
'axes.edgecolor': (0.41, 0.41, 0.41),
'axes.linewidth': 0.25}
plt.rcParams.update(params)
def visualise_europe_choropleth(world,
date_text,
author_line=True):
fig, ax = plt.subplots(figsize=[10.67, 5.33])
divider = make_axes_locatable(ax)
cax = divider.append_axes("top", size="5%", pad=0.5)
ax.axis('off')
world.plot(column="StringencyIndexForDisplay",
edgecolor='lightgrey',
cmap='jet',
linewidth=0.5,
vmin=0,
vmax=100,
missing_kwds={'color': 'lightgrey'},
legend=True,
legend_kwds={'label': "Response stringency index",
'orientation': "horizontal",
'format': '%.0f%%',
'shrink': 0.5,
},
ax=ax,
cax=cax)
ax.set_title('Government reaction to Covid-19\n\n',
color=(0.41, 0.41, 0.41),
size=26)
ax.set(xlim=[-24, 50],
ylim=[33, 70])
ax.text(x=-26,
y=43.5,
s=date_text,
horizontalalignment='left',
verticalalignment='bottom',
color=(0.41, 0.41, 0.41),
size=24)
#author line
if author_line:
fig.text(0.83, 0.05, '@rikunert', color='lightgrey', style='italic',
horizontalalignment='right')
fig.text(0.205, 0.05, 'Data: Oxford COVID-19 Government Response Tracker', color='lightgrey',
horizontalalignment='left')
return fig, ax
fig, ax = visualise_europe_choropleth(world_updated,
date_text=df['Date'].max().strftime("%d %b %Y"),
author_line=True)
fig.savefig('EU_restriction_severity.png', dpi=250, bbox_inches='tight')
dpi_count = 100
start_date = df['Date'].min()
end_date = df['Date'].max() - timedelta(days=1)
delta = timedelta(days=1)
counter = 0
expl_i = 0
while start_date <= end_date:
date_text = start_date.strftime("%d %b %Y")
print(date_text)
world_updated = world.merge(right=df.loc[df['Date']==pd.Timestamp(start_date), ['CountryName', 'CountryCode', 'StringencyIndexForDisplay']],
how='left',
left_on='ADM0_A3',
right_on='CountryCode',
suffixes=('', '')).copy()
fig, ax = visualise_europe_choropleth(world_updated,
date_text=date_text,
author_line=True)
if start_date == df['Date'].min():
frames_per_date = 10
else:
frames_per_date = 3
for i in range(frames_per_date): # number of frames
fig.savefig('severity_gif//EU_severity_{:03d}.png'.format(counter+i), dpi=dpi_count, bbox_inches='tight')
plt.close(fig)
start_date += delta
counter += frames_per_date
for i in range(20): # number of end frames
fig.savefig('severity_gif//EU_severity_{:03d}.png'.format(counter+i), dpi=dpi_count, bbox_inches='tight')
```
---
# Kernel-based Time-varying Regression - Part III
The tutorials **I** and **II** described the **KTR** model, its fitting procedure, and diagnostic/validation methods (visualizations of the **KTR** regression). This tutorial covers more **KTR** configurations for advanced users. In particular, it describes how to use knots to model change points in the seasonality and regression coefficients.
For more detail on this, see Ng, Wang and Dai (2021), which describes how **KTR** knots can be thought of as change points. This highlights a similarity between **KTR** and **Facebook's Prophet** package, which introduced change point detection on levels.
**Part III** covers different **KTR** arguments to specify knots position:
- `level_segments`
- `level_knot_distance`
- `level_knot_dates`
```
import pandas as pd
import numpy as np
from math import pi
import matplotlib.pyplot as plt
import orbit
from orbit.models import KTR
from orbit.diagnostics.plot import plot_predicted_data
from orbit.utils.plot import get_orbit_style
from orbit.utils.dataset import load_iclaims
%matplotlib inline
pd.set_option('display.float_format', lambda x: '%.5f' % x)
print(orbit.__version__)
```
## Fitting with iClaims Data
The iClaims data set gives the weekly log number of claims and several regressors.
```
# specify end_date to keep the tutorial consistent with the older version of the data
df = load_iclaims(end_date='2020-11-29')
DATE_COL = 'week'
RESPONSE_COL = 'claims'
print(df.shape)
df.head()
```
### Specifying Levels Segments
The first way to specify the number and location of the knots is the `level_segments` argument. It gives the number of between-knot segments; since there is a knot at each end of every segment, the total number of knots is the number of segments plus one. To illustrate, try `level_segments=10` in the cell below.
```
response_col = 'claims'
date_col='week'
ktr = KTR(
response_col=response_col,
date_col=date_col,
level_segments=10,
prediction_percentiles=[2.5, 97.5],
seed=2020,
estimator='pyro-svi'
)
ktr.fit(df=df)
_ = ktr.plot_lev_knots()
```
Note that there are precisely $11$ knots (triangles) evenly spaced in the above chart.
### Specifying Knots Distance
An alternative way of specifying the knots is the `level_knot_distance` argument, which gives the distance between knots. It can be useful since the number of knots then grows automatically with the length of the time series. Note that if the total length of the time series is not a multiple of `level_knot_distance`, the first segment will have a different length. For example, with weekly data, `level_knot_distance=104` roughly means placing a knot once every two years.
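The "first segment absorbs the remainder" behavior can be sketched with a small helper that walks backwards from the end of the series in fixed steps. This is purely an illustration of the placement described above, not orbit's actual implementation:

```python
def knot_indices(n_obs, distance):
    # Walk backwards from the last observation in steps of `distance`,
    # so any leftover (shorter) segment ends up at the start of the series.
    idx = list(range(n_obs - 1, -1, -distance))
    return sorted(idx)
```

For instance, `knot_indices(10, 4)` gives indices `[1, 5, 9]`: the segment before the first knot is the short one.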
```
ktr = KTR(
response_col=response_col,
date_col=date_col,
level_knot_distance=104,
# fit a weekly seasonality
seasonality=52,
# high order for sharp turns on each week
seasonality_fs_order=12,
prediction_percentiles=[2.5, 97.5],
seed=2020,
estimator='pyro-svi'
)
ktr.fit(df=df)
_ = ktr.plot_lev_knots()
```
In the above chart, the knots are located about every two years.
To highlight the value of the next method of configuring knot positions, consider the prediction from this model shown below.
```
predicted_df = ktr.predict(df=df)
_ = plot_predicted_data(training_actual_df=df, predicted_df=predicted_df, prediction_percentiles=[2.5, 97.5],
date_col=date_col, actual_col=response_col)
```
As the knots are placed evenly the model can not adequately describe the change point in early 2020. The model fit can potentially be improved by inserting knots around the sharp change points (e.g., `2020-03-15`). This insertion can be done with the `level_knot_dates` argument described below.
### Specifying Knots Dates
The `level_knot_dates` argument allows for the explicit placement of knots. It takes a list of date strings; see the `level_knot_dates` argument in the cell below.
```
ktr = KTR(
response_col=response_col,
date_col=date_col,
level_knot_dates = ['2010-01-03', '2020-03-15', '2020-03-22', '2020-11-29'],
# fit a weekly seasonality
seasonality=52,
# high order for sharp turns on each week
seasonality_fs_order=12,
prediction_percentiles=[2.5, 97.5],
seed=2020,
estimator='pyro-svi'
)
ktr.fit(df=df)
_ = ktr.plot_lev_knots()
predicted_df = ktr.predict(df=df)
_ = plot_predicted_data(training_actual_df=df, predicted_df=predicted_df, prediction_percentiles=[2.5, 97.5],
date_col=date_col, actual_col=response_col)
```
Note this fit is even better than the previous one while using fewer knots. Of course, the case here is trivial because the pandemic onset is treated as known. In other cases, there may not be an obvious way to find the optimal knot dates.
## Conclusion
This tutorial demonstrates multiple ways to customize the knot locations for levels. In **KTR**, there are similar arguments for seasonality and regression, such as `seasonality_segments`, `regression_knot_dates`, and `regression_segments`. Because they mirror the level arguments shown here, they are not demonstrated separately; **KTR** users are nevertheless encouraged to explore them.
## References
1. Ng, Wang and Dai (2021). Bayesian Time Varying Coefficient Model with Applications to Marketing Mix Modeling, arXiv preprint arXiv:2106.03322
2. Sean J Taylor and Benjamin Letham. 2018. Forecasting at scale. The American Statistician 72, 1 (2018), 37–45. Package version 0.7.1.
---
## Markov switching autoregression models
This notebook provides an example of the use of Markov switching models in statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter and the Kim (1994) smoother.
This is tested against the Markov-switching models from E-views 8, which can be found at http://www.eviews.com/EViews8/ev8ecswitch_n.html#MarkovAR or the Markov-switching models of Stata 14 which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.
```
%matplotlib inline
from datetime import datetime
from io import BytesIO
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import statsmodels.api as sm
# NBER recessions
from pandas_datareader.data import DataReader
usrec = DataReader(
"USREC", "fred", start=datetime(1947, 1, 1), end=datetime(2013, 4, 1)
)
```
### Hamilton (1989) switching model of GNP
This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written:
$$
y_t = \mu_{S_t} + \phi_1 (y_{t-1} - \mu_{S_{t-1}}) + \phi_2 (y_{t-2} - \mu_{S_{t-2}}) + \phi_3 (y_{t-3} - \mu_{S_{t-3}}) + \phi_4 (y_{t-4} - \mu_{S_{t-4}}) + \varepsilon_t
$$
Each period, the regime transitions according to the following matrix of transition probabilities:
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \\
p_{01} & p_{11}
\end{bmatrix}
$$
where $p_{ij}$ is the probability of transitioning *from* regime $i$, *to* regime $j$.
The model class is `MarkovAutoregression` in the time-series part of `statsmodels`. In order to create the model, we must specify the number of regimes with `k_regimes=2`, and the order of the autoregression with `order=4`. The default model also includes switching autoregressive coefficients, so here we also need to specify `switching_ar=False` to avoid that.
After creation, the model is `fit` via maximum likelihood estimation. Under the hood, good starting parameters are found using a number of steps of the expectation maximization (EM) algorithm, and a quasi-Newton (BFGS) algorithm is applied to quickly find the maximum.
```
# Get the RGNP data to replicate Hamilton
dta = pd.read_stata("https://www.stata-press.com/data/r14/rgnp.dta").iloc[1:]
dta.index = pd.DatetimeIndex(dta.date, freq="QS")
dta_hamilton = dta.rgnp
# Plot the data
dta_hamilton.plot(title="Growth rate of Real GNP", figsize=(12, 3))
# Fit the model
mod_hamilton = sm.tsa.MarkovAutoregression(
dta_hamilton, k_regimes=2, order=4, switching_ar=False
)
res_hamilton = mod_hamilton.fit()
res_hamilton.summary()
```
We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.
For reference, the shaded periods represent the NBER recessions.
```
fig, axes = plt.subplots(2, figsize=(7, 7))
ax = axes[0]
ax.plot(res_hamilton.filtered_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec["USREC"].values, color="k", alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title="Filtered probability of recession")
ax = axes[1]
ax.plot(res_hamilton.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec["USREC"].values, color="k", alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title="Smoothed probability of recession")
fig.tight_layout()
```
From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
```
print(res_hamilton.expected_durations)
```
In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.
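For a constant transition matrix, the expected duration of regime $i$ follows from the geometric distribution: $E[D_i] = 1/(1 - p_{ii})$. A quick sanity check with illustrative stay probabilities (not the fitted values):

```python
def expected_duration(p_stay):
    # Expected number of periods a regime lasts, given the per-period
    # probability p_stay of remaining in it (geometric distribution).
    return 1.0 / (1.0 - p_stay)

# e.g. p_stay = 0.75 -> about 4 periods; p_stay = 0.90 -> about 10 periods
```

This is the computation `res_hamilton.expected_durations` performs from the estimated $p_{00}$ and $p_{11}$.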
### Kim, Nelson, and Startz (1998) Three-state Variance Switching
This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn.
The model in question is:
$$
\begin{align}
y_t & = \varepsilon_t \\
\varepsilon_t & \sim N(0, \sigma_{S_t}^2)
\end{align}
$$
Since there is no autoregressive component, this model can be fit using the `MarkovRegression` class. Since there is no mean effect, we specify `trend='n'`. There are hypothesized to be three regimes for the switching variances, so we specify `k_regimes=3` and `switching_variance=True` (by default, the variance is assumed to be the same across regimes).
```
# Get the dataset
ew_excs = requests.get("http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn").content
raw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine="python")
raw.index = pd.date_range("1926-01-01", "1995-12-01", freq="MS")
dta_kns = raw.loc[:"1986"] - raw.loc[:"1986"].mean()
# Plot the dataset
dta_kns[0].plot(title="Excess returns", figsize=(12, 3))
# Fit the model
mod_kns = sm.tsa.MarkovRegression(
dta_kns, k_regimes=3, trend="n", switching_variance=True
)
res_kns = mod_kns.fit()
res_kns.summary()
```
Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
```
fig, axes = plt.subplots(3, figsize=(10, 7))
ax = axes[0]
ax.plot(res_kns.smoothed_marginal_probabilities[0])
ax.set(title="Smoothed probability of a low-variance regime for stock returns")
ax = axes[1]
ax.plot(res_kns.smoothed_marginal_probabilities[1])
ax.set(title="Smoothed probability of a medium-variance regime for stock returns")
ax = axes[2]
ax.plot(res_kns.smoothed_marginal_probabilities[2])
ax.set(title="Smoothed probability of a high-variance regime for stock returns")
fig.tight_layout()
```
### Filardo (1994) Time-Varying Transition Probabilities
This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn.
In the above models we have assumed that the transition probabilities are constant across time. Here we allow the probabilities to change with the state of the economy. Otherwise, the model is the same as the Markov autoregression of Hamilton (1989).
Each period, the regime now transitions according to the following matrix of time-varying transition probabilities:
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00,t} & p_{10,t} \\
p_{01,t} & p_{11,t}
\end{bmatrix}
$$
where $p_{ij,t}$ is the probability of transitioning *from* regime $i$, *to* regime $j$ in period $t$, and is defined to be:
$$
p_{ij,t} = \frac{\exp\{ x_{t-1}' \beta_{ij} \}}{1 + \exp\{ x_{t-1}' \beta_{ij} \}}
$$
Instead of estimating the transition probabilities as part of maximum likelihood, the regression coefficients $\beta_{ij}$ are estimated. These coefficients relate the transition probabilities to a vector of pre-determined or exogenous regressors $x_{t-1}$.
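The logistic link above can be evaluated directly. In the sketch below, `beta` and the lagged regressor vector are illustrative stand-ins, not the fitted coefficients:

```python
import numpy as np

def transition_prob(x_lag, beta):
    # p_ij,t = exp(x'b) / (1 + exp(x'b)) -- a logistic function of x_{t-1},
    # so the probability always lies strictly between 0 and 1.
    z = float(np.dot(x_lag, beta))
    return np.exp(z) / (1.0 + np.exp(z))
```

With a constant-plus-indicator regressor `x_lag = [1.0, dmdlleading]`, a zero inner product gives a probability of exactly 0.5, and large positive values push it toward 1.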
```
# Get the dataset
filardo = requests.get("http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn").content
dta_filardo = pd.read_table(
BytesIO(filardo), sep=" +", header=None, skipfooter=1, engine="python"
)
dta_filardo.columns = ["month", "ip", "leading"]
dta_filardo.index = pd.date_range("1948-01-01", "1991-04-01", freq="MS")
dta_filardo["dlip"] = np.log(dta_filardo["ip"]).diff() * 100
# Deflated pre-1960 observations by ratio of std. devs.
# See hmt_tvp.opt or Filardo (1994) p. 302
std_ratio = (
dta_filardo["dlip"]["1960-01-01":].std() / dta_filardo["dlip"][:"1959-12-01"].std()
)
dta_filardo["dlip"][:"1959-12-01"] = dta_filardo["dlip"][:"1959-12-01"] * std_ratio
dta_filardo["dlleading"] = np.log(dta_filardo["leading"]).diff() * 100
dta_filardo["dmdlleading"] = dta_filardo["dlleading"] - dta_filardo["dlleading"].mean()
# Plot the data
dta_filardo["dlip"].plot(
title="Standardized growth rate of industrial production", figsize=(13, 3)
)
plt.figure()
dta_filardo["dmdlleading"].plot(title="Leading indicator", figsize=(13, 3))
```
The time-varying transition probabilities are specified by the `exog_tvtp` parameter.
Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters.
Below, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.
```
mod_filardo = sm.tsa.MarkovAutoregression(
dta_filardo.iloc[2:]["dlip"],
k_regimes=2,
order=4,
switching_ar=False,
exog_tvtp=sm.add_constant(dta_filardo.iloc[1:-1]["dmdlleading"]),
)
np.random.seed(12345)
res_filardo = mod_filardo.fit(search_reps=20)
res_filardo.summary()
```
Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
```
fig, ax = plt.subplots(figsize=(12, 3))
ax.plot(res_filardo.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec["USREC"].values, color="gray", alpha=0.2)
ax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1])
ax.set(title="Smoothed probability of a low-production state")
```
Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time:
```
res_filardo.expected_durations[0].plot(
title="Expected duration of a low-production state", figsize=(12, 3)
)
```
During recessions, the expected duration of a low-production state is much higher than in an expansion.
---
```
# License: BSD
# Author: Sasank Chilamkurthy
from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.autograd import Variable
import numpy as np
import torchvision
from torchvision import models, transforms
import matplotlib.pyplot as plt
import time
import os
%load_ext autoreload
%autoreload 2
plt.ion() # interactive mode
import h5py
means, std = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]
means = np.array(means)[np.newaxis,:,np.newaxis,np.newaxis]
std = np.array(std)[np.newaxis,:,np.newaxis,np.newaxis]
with h5py.File('mitos-p64-preliminary.hdf5', 'r') as ds:
X = (ds['X'][...].transpose((0,3,1,2)) - means) / std
y = ds['y'][...]
domain = ds['domain'][...]
train_data = torch.utils.data.TensorDataset(torch.from_numpy(X[domain!=3].astype('float32')),
torch.from_numpy(y[domain!=3].astype('float32')))
val_data = torch.utils.data.TensorDataset(torch.from_numpy(X[domain==3].astype('float32')),
torch.from_numpy(y[domain==3].astype('float32')))
dataloader = { 'train' : torch.utils.data.DataLoader(train_data, batch_size=16, shuffle=True, num_workers=8),
'val' :torch.utils.data.DataLoader(val_data, batch_size=64, shuffle=True, num_workers=8)}
use_gpu = torch.cuda.is_available()
print("done")
'''ResNet in PyTorch.
For Pre-activation ResNet, see 'preact_resnet.py'.
Reference:
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Deep Residual Learning for Image Recognition. arXiv:1512.03385
'''
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
class GaussianNoise(nn.Module):
def __init__(self, stddev=0.05):
super().__init__()
self.stddev = stddev
def forward(self, din):
if self.training:
return din + torch.autograd.Variable(torch.randn(din.size()).cuda() * self.stddev)
return din
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1):
super(BasicBlock, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, in_planes, planes, stride=1):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, self.expansion*planes, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(self.expansion*planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
out = self.bn3(self.conv3(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, num_blocks, num_classes=10):
super(ResNet, self).__init__()
self.in_planes = 64
self.input_noise = GaussianNoise(stddev=0.2)
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
self.linear = nn.Linear(512*block.expansion, num_classes)
self.inplace_drop = nn.Dropout(p=.1)
self.drop = nn.Dropout(p=.5)
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
x = self.input_noise(x)
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.inplace_drop(out)
out = self.layer3(out)
out = self.inplace_drop(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4*2, ceil_mode=True)
out = out.view(out.size(0), -1)
out = self.drop(out)
out = self.linear(out).view(out.size(0))
return out
def ResNet18(*args,**kwargs):
return ResNet(BasicBlock, [2,2,2,2],*args,**kwargs)
def ResNet34(*args, **kwargs):
    return ResNet(BasicBlock, [3,4,6,3], *args, **kwargs)
def ResNet50(*args, **kwargs):
    return ResNet(Bottleneck, [3,4,6,3], *args, **kwargs)
def ResNet101(*args, **kwargs):
    return ResNet(Bottleneck, [3,4,23,3], *args, **kwargs)
def ResNet152(*args, **kwargs):
    return ResNet(Bottleneck, [3,8,36,3], *args, **kwargs)
def train_model(model, criterion, optimizer, scheduler, dataloaders, num_epochs=25):
use_gpu = True
since = time.time()
    best_model_wts = {k: v.clone() for k, v in model.state_dict().items()}  # snapshot, not a live reference
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
scheduler.step()
model.train(True) # Set model to training mode
else:
model.train(False) # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for data in dataloaders[phase]:
# get the inputs
inputs, labels = data
criterion.weight = ((1 + labels * 2) / 3.).float().cuda()
# wrap them in Variable
if use_gpu:
inputs = Variable(inputs.cuda().float())
labels = Variable(labels.cuda().float())
else:
inputs, labels = Variable(inputs).float(), Variable(labels).float()
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = model(inputs).view(-1)
                preds = outputs.data.cpu().numpy() > 0  # outputs are logits: 0 corresponds to probability 0.5
loss = criterion(outputs,labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.cpu().data[0]
running_corrects += np.equal(preds, labels.data.cpu().numpy()).mean()
            epoch_loss = running_loss / len(dataloaders[phase])
            epoch_acc = running_corrects / len(dataloaders[phase])
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
                best_model_wts = {k: v.clone() for k, v in model.state_dict().items()}
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
resnet = ResNet18(num_classes=1)
resnet = resnet.cuda()
criterion = nn.BCEWithLogitsLoss()
optimizer_ft = optim.SGD(resnet.parameters(), lr=0.001, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
criterion.weight = torch.from_numpy(np.array([10]).reshape(1,-1)).float().cuda()
train_model(resnet, criterion, optimizer_ft, exp_lr_scheduler, dataloader, num_epochs=10)
resnet.train(False)
p = []
t = []
use_gpu = True
for data in dataloader['val']:
# get the inputs
inputs, labels = data
# wrap them in Variable
if use_gpu:
inputs = Variable(inputs.cuda().float())
labels = Variable(labels.cuda().float())
else:
inputs, labels = Variable(inputs), Variable(labels)
    # forward
outputs = resnet(inputs).view(-1)
p.append(outputs.data.cpu().numpy())
t.append(labels.data.cpu().numpy())
p = np.concatenate(p, axis=0)
t = np.concatenate(t, axis=0)
from sklearn.metrics import confusion_matrix, classification_report
print(classification_report(t, p > 0))  # outputs are logits, so threshold at 0
confusion_matrix(t, p > 0)
```
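The per-channel normalization at the top of this notebook relies on NumPy broadcasting: the means and standard deviations are reshaped to `(1, C, 1, 1)` so they broadcast against an `(N, C, H, W)` batch. A self-contained sketch of the same idea, with random data standing in for the HDF5 file:

```python
import numpy as np

means = np.array([0.485, 0.456, 0.406])[np.newaxis, :, np.newaxis, np.newaxis]
std = np.array([0.229, 0.224, 0.225])[np.newaxis, :, np.newaxis, np.newaxis]

batch = np.random.rand(4, 3, 64, 64)   # (N, C, H, W) stand-in for ds['X']
normed = (batch - means) / std         # broadcasts per channel

print(normed.shape)                    # (4, 3, 64, 64)
```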
# Step-by-step example
Let's work through an exercise with real data
### 1. Import the Pandas and NumPy libraries
```
import pandas as pd
import numpy as np
```
### 2. Read the file
If it is an online document, we can read it directly; if it is a local file, we will need to save it in this same folder or, if we save it elsewhere, specify its path when we read it.
We will cover this in the next class, where we will learn to read data from several sources.
```
url = 'https://opendata.ecdc.europa.eu/covid19/casedistribution/csv'
covid = pd.read_csv(url, sep=',')
```
### 3. Take a sample of the file to inspect its data
Once we have a dataset, the first thing we will do is look at what the data looks like, printing its first rows. We have several options for this, such as the ones shown below:
- We can use ``.iloc[n]`` to access the first ``n`` positions (whatever the index is):
```
covid.iloc[:5]
```
Another option is the ``.head(n)`` method, which shows the first ``n`` rows (by default n=5):
```
covid.head()
```
### 4. Get the number of rows and columns
One option is to print the DataFrame as-is, since it reports this at the end:
```
covid
```
But we can use other tools, such as the ``.shape`` attribute, shared with arrays, which returns a tuple whose first value is the number of rows and whose second value is the number of columns:
```
covid.shape
```
Or we could do it with ``len``, counting the size of its indices (index and columns):
```
n_filas = len(covid.index)
print("Number of rows: " + str(n_filas))
n_cols = len(covid.columns)
print("Number of columns: " + str(n_cols))
```
### 5. Get the column names
```
cols = list(covid.columns)
cols
```
Since there are only a few, we can display them all on screen. As the number grows, though, probably not all of them will be shown. This does not mean they are not stored correctly: the Index object we access through the ``.columns`` attribute always holds every column; if not all of them are displayed, it simply means there are too many and the notebook does not want to drive us crazy.
However, if we wanted to print every single column, we could always resort to a few tricks, for example printing each column by looping over them with a for:
```
for col in covid.columns:
print(col)
```
### 6. Creating new variables
To create a new column we can use others from the ``DataFrame``, but we will need to work at the column level:
```
covid['deathsByCase'] = covid['deaths']/covid['cases']
```
If we wanted to select one specific column:
- To work with it at the column level (as a ``Series``):
```
covid['deathsByCase']
```
- To work with it at the DataFrame level:
```
covid[['deathsByCase']]
```
### 7. Selecting columns (filtering by column):
Note the double brackets: one to select something from the ``DataFrame``, and the other to pass it a list of columns:
```
covid[['cases', 'deaths', 'deathsByCase']]
```
### 8. Filtering by row
Usually, we do not want to use the whole ``DataFrame``, only a part of it. To that end, we can filter by columns (previous step) as well as by rows, applying conditions.
If we want to keep the records for Spain, we have to filter on the ``countriesAndTerritories`` column being equal to ``Spain``:
```
covid[covid['countriesAndTerritories'] == 'Spain']
```
Sometimes we want to apply more than one filter, for which we rely on boolean masking: we combine each of the conditions we want to hold using element-wise logical operators (``|`` for OR, ``&`` for AND and ``~`` for NOT).
For example, if we want to know what happened in Spain on March 8:
```
covid[(covid['countriesAndTerritories'] == 'Spain') & (covid['dateRep'] == '08/03/2020')]
```
If we wanted the specific value of the cases, we would access the previous record by column:
```
covid[(covid['countriesAndTerritories'] == 'Spain') & (covid['dateRep'] == '08/03/2020')]['cases']
```
As we can see, we get a ``Series``. If we wanted the value itself, there are several ways to obtain it. The most general one, also valid when we have more than one record, is to access them through the ``.values`` attribute:
```
valores = covid[(covid['countriesAndTerritories'] == 'Spain') & (covid['dateRep'] == '08/03/2020')]['cases'].values
valores
# covid[(covid['countriesAndTerritories'] == 'Spain') & (covid['dateRep'] == '08/03/2020')]['cases'].values[0]
```
### 9. Applying functions
We have also seen that we can apply functions to a ``DataFrame``.
Imagine we are told that the data for September 21 and 22 are wrong: on those days, the value of the cases from September 10 was added on top:
```
casos_10 = covid[(covid['countriesAndTerritories'] == 'Spain') & (covid['dateRep'] == '10/09/2020')]['cases'].iloc[0]
casos_10
```
We use ``.iloc`` to access by position, since we do not know which index it has
```
covid_change = covid[(covid['countriesAndTerritories'] == 'Spain') & ((covid['dateRep']=='21/09/2020') | (covid['dateRep']=='22/09/2020'))]
covid_change
# covid_change['cases'] = covid_change['cases'] - casos_10
covid_change.loc[:, 'cases'] = covid_change.loc[:, 'cases'] - casos_10
covid_change
covid.loc[covid_change.index]
covid.loc[covid_change.index] = covid_change
# Check:
covid.loc[[43390, 43391]]
```
## + Pandas doesn't stop here: we can do many more things, such as grouping, replacing values... As we move forward, we will see some very interesting cases
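As a small preview of grouping and value replacement, here is a minimal sketch with a toy DataFrame (not the COVID data):

```python
import pandas as pd

df = pd.DataFrame({
    'country': ['Spain', 'Spain', 'France', 'France'],
    'cases':   [100, 150, 80, 120],
})

# Aggregate cases per country with groupby
totals = df.groupby('country')['cases'].sum()
print(totals['Spain'])          # 250

# Replace values in a column
df['country'] = df['country'].replace({'France': 'FR'})
print(df['country'].unique())   # ['Spain' 'FR']
```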
# Reading SETI Hackathon Data
This tutorial will show you how to programmatically download the SETI code challenge data to your local file space and
start to analyze it.
Please see the [Step_1_Get_Data.ipynb](https://github.com/setiQuest/ML4SETI/blob/master/tutorials/Step_1_Get_Data.ipynb) notebook on information about all of the data available for this code challenge.
This tutorial will use the `basic` data set, but will work, of course, with any of the data sets.
```
#The ibmseti package contains some useful tools to facilitate reading the data.
#The `ibmseti` package version 1.0.5 works on Python 2.7.
# !pip install --user ibmseti
#A development version runs on Python 3.5.
# !pip install --user ibmseti==2.0.0.dev5
# If running on DSX, YOU WILL NEED TO RESTART YOUR SPARK KERNEL to use a newly installed Python Package.
# Click Kernel -> Restart above!
```
### No Spark Here
You'll notice that this tutorial doesn't use parallelization with Spark. This is to keep this simple and make this code generalizable to folks that are running this analysis on their local machines.
```
import ibmseti
import os
import zipfile
```
### Assume you have the data in a local folder
```
!ls my_data_folder/basic4
mydatafolder = 'my_data_folder/basic4'  # adjust to wherever you saved the zip file
zz = zipfile.ZipFile(mydatafolder + '/' + 'basic4.zip')
basic4list = zz.namelist()
firstfile = basic4list[0]
print(firstfile)
```
# Use `ibmseti` for convenience
While it's somewhat trivial to read these data, the `ibmseti.compamp.SimCompamp` class will extract the JSON header and the complex-value time-series data for you.
```
import ibmseti
aca = ibmseti.compamp.SimCompamp(zz.open(firstfile).read())
# This data file is classified as a 'squiggle'
aca.header()
```
## The Goal
The goal is to take each simulation data file and
1. convert the time-series data into a 2D spectrogram
2. Use the 2D spectrogram as an image to train an image classification model
There are multiple ways to improve your model's ability to classify signals. You can
* Modify the time-series data with some signals processing to make a better 2D spectrogram
* Build a really good image classification system
* Try something entirely different, such as
  * transforming the time-series data in different ways (KLT transform)?
* use the time-series data directly in model
Here we just show how to view the data as a spectrogram
### 1. Converting the time-series data into a spectrogram with `ibmseti`
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
## ibmseti.compamp.SimCompamp has a method to calculate the spectrogram for you (without any signal processing applied to the time-series data)
spectrogram = aca.get_spectrogram()
fig, ax = plt.subplots(figsize=(10, 5))
ax.imshow(np.log(spectrogram), aspect = 0.5*float(spectrogram.shape[1]) / spectrogram.shape[0])
```
### 2. Build the spectrogram yourself
You don't need to use the `ibmseti` python package to calculate the spectrogram for you.
This is especially important if you want to apply some signal processing to the time-series data before you create your spectrogram.
```
complex_data = aca.complex_data()
#complex valued time-series
complex_data
complex_data = complex_data.reshape(32, 6144)
complex_data
#Apply a Hanning Window
complex_data = complex_data * np.hanning(complex_data.shape[1])
complex_data
# Build Spectrogram & Plot
cpfft = np.fft.fftshift( np.fft.fft(complex_data), 1)
spectrogram = np.abs(cpfft)**2
fig, ax = plt.subplots(figsize=(10, 5))
ax.imshow(np.log(spectrogram), aspect = 0.5*float(spectrogram.shape[1]) / spectrogram.shape[0])
```
### Hmm, Is this ^ better or worse?
Maybe try a different windowing? Or a different method for calculating the spectrogram (see Welch's periodogram? Ask a SETI researcher?)
### Consider a different "shape" to change the resolution of the frequency bins along the horizontal axis
### Consider different signal processing techniques to improve the signals
### Consider subtracting the noise in Fourier space by modeling the noise with the 'noise' classes
### Ultimately, you'll want to take the full data set, and create a number of PNG files or other image files to feed into an image classifier
<a href="https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# HuggingFace `nlp` library - Quick overview
Models come and go (linear models, LSTM, Transformers, ...) but two core elements have consistently been the beating heart of Natural Language Processing: Datasets & Metrics
`nlp` is a lightweight and extensible library to easily share and load datasets and evaluation metrics, already providing access to ~100 datasets and ~10 evaluation metrics.
The library has several interesting features (beside easy access to datasets/metrics):
- Built-in interoperability with PyTorch, Tensorflow 2, Pandas and Numpy
- Small and fast library with a transparent and pythonic API
- Thrives on large datasets: nlp naturally frees you from RAM memory limits; all datasets are memory-mapped on drive by default.
- Smart caching with an intelligent `tf.data`-like cache: never wait for your data to be processed several times
`nlp` originated from a fork of the awesome Tensorflow-Datasets, and the HuggingFace team want to deeply thank the team behind this amazing library and user API. We have tried to keep a layer of compatibility with `tfds`, and a conversion tool can convert a dataset from one format to the other.
# Main datasets API
This notebook is a quick dive in the main user API for loading datasets in `nlp`
```
# install nlp
!pip install nlp
# Make sure that we have a recent version of pyarrow in the session before we continue - otherwise reboot Colab to activate it
import pyarrow
if int(pyarrow.__version__.split('.')[1]) < 16 and int(pyarrow.__version__.split('.')[0]) == 0:
import os
os.kill(os.getpid(), 9)
# Let's import the library
import nlp
```
## Listing the currently available datasets and metrics
```
# Currently available datasets and metrics
datasets = nlp.list_datasets()
metrics = nlp.list_metrics()
print(f"🤩 Currently {len(datasets)} datasets are available on HuggingFace AWS bucket: \n"
+ '\n'.join(dataset.id for dataset in datasets) + '\n')
print(f"🤩 Currently {len(metrics)} metrics are available on HuggingFace AWS bucket: \n"
+ '\n'.join(metric.id for metric in metrics))
# You can read a few attributes of the datasets before loading them (they are python dataclasses)
from dataclasses import asdict
for key, value in asdict(datasets[6]).items():
print('👉 ' + key + ': ' + str(value))
```
## An example with SQuAD
```
# Downloading and loading a dataset
dataset = nlp.load_dataset('squad', split='validation[:10%]')
```
This call to `nlp.load_dataset()` does the following steps under the hood:
1. Download and import in the library the **SQuAD python processing script** from HuggingFace AWS bucket if it's not already stored in the library. You can find the SQuAD processing script [here](https://github.com/huggingface/nlp/tree/master/datasets/squad/squad.py) for instance.
Processing scripts are small python scripts which define the info (citation, description) and format of the dataset and contain the URL to the original SQuAD JSON files and the code to load examples from the original SQuAD JSON files.
2. Run the SQuAD python processing script which will:
- **Download the SQuAD dataset** from the original URL (see the script) if it's not already downloaded and cached.
 - **Process and cache** the whole SQuAD dataset in a structured Arrow table for each standard split, stored on the drive.
 Arrow tables are arbitrarily long tables, typed with types that can be mapped to numpy/pandas/python standard types, and they can store nested objects. They can be directly accessed from drive, loaded in RAM or even streamed over the web.
3. Return a **dataset build from the splits** asked by the user (default: all), in the above example we create a dataset with the first 10% of the validation split.
```
# Information on the dataset (description, citation, size, splits, format...)
# is provided in `dataset.info` (as a python dataclass)
for key, value in asdict(dataset.info).items():
print('👉 ' + key + ': ' + str(value))
```
## Inspecting and using the dataset: elements, slices and columns
The returned `Dataset` object is a memory-mapped dataset that behaves similarly to a normal map-style dataset. It is backed by an Apache Arrow table, which enables many interesting features.
```
print(dataset)
```
You can query its length and get items or slices like you would do normally with a python mapping.
```
from pprint import pprint
print(f"👉Dataset len(dataset): {len(dataset)}")
print("\n👉First item 'dataset[0]':")
pprint(dataset[0])
# Or get slices with several examples:
print("\n👉Slice of the two items 'dataset[10:12]':")
pprint(dataset[10:12])
# You can get a full column of the dataset by indexing with its name as a string:
print(dataset['question'][:10])
```
The `__getitem__` method will return different formats depending on the type of query:
- Items like `dataset[0]` are returned as dict of elements.
- Slices like `dataset[10:20]` are returned as dict of lists of elements.
- Columns like `dataset['question']` are returned as a list of elements.
This may seem surprising at first, but in our experiments it's actually a lot easier to use for data processing than returning the same format for each of these views on the dataset.
In particular, you can easily iterate along columns in slices, and also naturally permute consecutive indexings with identical results, as shown here by permuting column indexing with elements and slices:
```
print(dataset[0]['question'] == dataset['question'][0])
print(dataset[10:20]['context'] == dataset['context'][10:20])
```
### Dataset are internally typed and structured
The dataset is backed by one (or several) Apache Arrow tables which are typed and allow for fast retrieval and access as well as arbitrary-size memory mapping.
This means respectively that the format of the dataset is clearly defined and that you can load datasets of arbitrary size without worrying about RAM limitations (the dataset takes basically no space in RAM; it's directly read from drive when needed, with fast IO access).
```
# You can inspect the dataset column names and type
print(dataset.column_names)
print(dataset.schema)
```
### Additional misc properties
```
# Datasets also have a bunch of properties you can access
print("The number of bytes allocated on the drive is ", dataset.nbytes)
print("For comparison, here is the number of bytes allocated in memory which can be")
print("accessed with `nlp.total_allocated_bytes()`: ", nlp.total_allocated_bytes())
print("The number of rows", dataset.num_rows)
print("The number of columns", dataset.num_columns)
print("The shape (rows, columns)", dataset.shape)
```
### Additional misc methods
```
# We can list the unique elements in a column. This is done by the backend (so fast!)
print(f"dataset.unique('title'): {dataset.unique('title')}")
# This will drop the column 'id'
dataset.remove_columns_('id') # Remove column 'id'
print(f"After dataset.remove_columns_('id'), remaining columns are {dataset.column_names}")
# This will flatten nested columns (in 'answers' in our case)
dataset.flatten_()
print(f"After dataset.flatten_(), column names are {dataset.column_names}")
# We can also "dictionary encode" a column if many of its elements are similar
# This will reduce its size by only storing the distinct elements (e.g. string)
# It only has effect on the internal storage (no difference from a user point of view)
dataset.dictionary_encode_column('title')
```
## Cache
`nlp` datasets are backed by Apache Arrow cache files which allows:
- to load arbitrary large datasets by using [memory mapping](https://en.wikipedia.org/wiki/Memory-mapped_file) (as long as the datasets can fit on the drive)
- to use a fast backend to process the dataset efficiently
- to do smart caching by storing and reusing the results of operations performed on the drive
Let's dive a bit in these parts now
You can check the current cache files backing the dataset with the `.cache_files` property
```
dataset.cache_files
```
You can clean up the cache files in the current dataset directory (only keeping the currently used one) with `.cleanup_cache_files()`.
Be careful that no other process is using some other cache files when running this command.
```
dataset.cleanup_cache_files() # Returns the number of removed cache files
```
## Modifying the dataset with `dataset.map`
There is a powerful method `.map()` which is inspired by `tf.data` map method and that you can use to apply a function to each examples, independently or in batch.
```
# `.map()` takes a callable accepting a dict as argument
# (same dict as returned by dataset[i])
# and iterate over the dataset by calling the function with each example.
# Let's print the length of each `context` string in our subset of the dataset
# (10% of the validation i.e. 1057 examples)
dataset.map(lambda example: print(len(example['context']), end=','))
```
This is basically the same as doing
```python
for example in dataset:
function(example)
```
The above example had no effect on the dataset because the method we supplied to `.map()` didn't return a `dict` or a `abc.Mapping` that could be used to update the examples in the dataset.
In such a case, `.map()` will return the same dataset (`self`).
Now let's see how we can use a method that actually modify the dataset.
### Modifying the dataset example by example
The main interest of `.map()` is to update and modify the content of the table and leverage smart caching and fast backend.
To use `.map()` to update elements in the table you need to provide a function with the following signature: `function(example: dict) -> dict`.
```
# Let's add a prefix 'My cute title: ' to each of our titles
def add_prefix_to_title(example):
example['title'] = 'My cute title: ' + example['title']
return example
dataset = dataset.map(add_prefix_to_title)
print(dataset.unique('title'))
```
This call to `.map()` compute and return the updated table. It will also store the updated table in a cache file indexed by the current state and the mapped function.
A subsequent call to `.map()` (even in another python session) will reuse the cached file instead of recomputing the operation.
You can test this by running again the previous cell, you will see that the result are directly loaded from the cache and not re-computed again.
The updated dataset returned by `.map()` is (again) directly memory mapped from drive and not allocated in RAM.
The function you provide to `.map()` should accept an input with the format of an item of the dataset: `function(dataset[0])` and return a python dict.
The columns and type of the outputs can be different than the input dict. In this case the new keys will be added as additional columns in the dataset.
Basically, each dataset example dict is updated with the dictionary returned by the function, like this: `example.update(function(example))`.
```
# Since the input example dict is updated with our function output dict,
# we can actually just return the updated 'title' field
dataset = dataset.map(lambda example: {'title': 'My cutest title: ' + example['title']})
print(dataset.unique('title'))
```
#### Removing columns
You can also remove columns when running map with the `remove_columns=List[str]` argument.
```
# This will remove the 'title' column while doing the update (after having sent it to the mapped function, so you can use it in your function!)
dataset = dataset.map(lambda example: {'new_title': 'Wouhahh: ' + example['title']},
remove_columns=['title'])
print(dataset.column_names)
print(dataset.unique('new_title'))
```
#### Using examples indices
With `with_indices=True`, dataset indices (from `0` to `len(dataset)`) will be supplied to the function which must thus have the following signature: `function(example: dict, indice: int) -> dict`
```
# This will add the index in the dataset to the 'question' field
dataset = dataset.map(lambda example, idx: {'question': f'{idx}: ' + example['question']},
with_indices=True)
print('\n'.join(dataset['question'][:5]))
```
### Modifying the dataset with batched updates
`.map()` can also work with batch of examples (slices of the dataset).
This is particularly interesting if you have a function that can handle batch of inputs like the tokenizers of HuggingFace `tokenizers`.
To work on batched inputs set `batched=True` when calling `.map()` and supply a function with the following signature: `function(examples: Dict[List]) -> Dict[List]` or, if you use indices, `function(examples: Dict[List], indices: List[int]) -> Dict[List]`).
Basically, your function should accept an input with the format of a slice of the dataset: `function(dataset[:10])`.
```
!pip install transformers
# Let's import a fast tokenizer that can work on batched inputs
# (the 'Fast' tokenizers in HuggingFace)
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
# Now let's batch tokenize our dataset 'context'
dataset = dataset.map(lambda example: tokenizer.batch_encode_plus(example['context']),
batched=True)
print("dataset[0]", dataset[0])
# we have added additional columns
print(dataset.column_names)
# Let show a more complex processing with the full preparation of the SQuAD dataset
# for training a model from Transformers
def convert_to_features(batch):
# Tokenize contexts and questions (as pairs of inputs)
# keep offset mappings for evaluation
input_pairs = list(zip(batch['context'], batch['question']))
encodings = tokenizer.batch_encode_plus(input_pairs,
pad_to_max_length=True,
return_offsets_mapping=True)
# Compute start and end tokens for labels
start_positions, end_positions = [], []
for i, (text, start) in enumerate(zip(batch['answers.text'], batch['answers.answer_start'])):
first_char = start[0]
last_char = first_char + len(text[0]) - 1
start_positions.append(encodings.char_to_token(i, first_char))
end_positions.append(encodings.char_to_token(i, last_char))
encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
return encodings
dataset = dataset.map(convert_to_features, batched=True)
# Now our dataset comprises the labels for the start and end position
# as well as the offsets for converting back tokens
# in span of the original string for evaluation
print("column_names", dataset.column_names)
print("start_positions", dataset[:5]['start_positions'])
```
## formatting outputs for numpy/torch/tensorflow
Now that we have tokenized our inputs, we probably want to use this dataset in a `torch.Dataloader` or a `tf.data.Dataset`.
To be able to do this we need to tweak two things:
- format the indexing (`__getitem__`) to return numpy/pytorch/tensorflow tensors, instead of python objects, and probably
- format the indexing (`__getitem__`) to return only the subset of the columns that we need for our model inputs.
We don't want the columns `id` or `title` as inputs to train our model, but we could still want to keep them in the dataset, for instance for the evaluation of the model.
This is handled by the `.set_format(type: Union[None, str], columns: Union[None, str, List[str]])` where:
- `type` defines the return type for our dataset `__getitem__` method and is one of `[None, 'numpy', 'pandas', 'torch', 'tensorflow']` (`None` means return python objects), and
- `columns` defines the columns returned by `__getitem__` and takes the name of a column in the dataset or a list of columns to return (`None` means return all columns).
```
columns_to_return = ['input_ids', 'token_type_ids', 'attention_mask',
'start_positions', 'end_positions']
dataset.set_format(type='torch',
columns=columns_to_return)
# Our dataset indexing output is now ready for being used in a pytorch dataloader
print('\n'.join([' '.join((n, str(type(t)), str(t.shape))) for n, t in dataset[:10].items()]))
# Note that the columns are not removed from the dataset, just not returned when calling __getitem__
# Similarly the inner type of the dataset is not changed to torch.Tensor, the conversion and filtering is done on-the-fly when querying the dataset
print(dataset.column_names)
# We can remove the formatting with `.reset_format()`
# or, identically, a call to `.set_format()` with no arguments
dataset.reset_format()
print('\n'.join([' '.join((n, str(type(t)))) for n, t in dataset[:10].items()]))
# The current format can be checked with `.format`,
# which is a dict of the type and formatting
dataset.format
```
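The on-the-fly behaviour described in the comments above can be sketched with a toy wrapper (a hypothetical `FormattedView` class, not the actual `nlp` internals): the stored data is never modified; column filtering and type conversion happen inside `__getitem__`:

```python
import numpy as np

class FormattedView:
    # Minimal sketch of format-on-access: data stays as python objects,
    # conversion and column filtering happen lazily in __getitem__.
    def __init__(self, data):
        self._data = data            # underlying columns, never modified
        self._type, self._columns = None, None

    def set_format(self, type=None, columns=None):
        # Calling with no arguments resets to python objects / all columns.
        self._type, self._columns = type, columns

    def __getitem__(self, key):
        cols = self._columns or list(self._data)
        row = {c: self._data[c][key] for c in cols}
        if self._type == 'numpy':
            row = {c: np.array(v) for c, v in row.items()}
        return row

ds = FormattedView({'input_ids': [[1, 2], [3, 4]], 'title': ['a', 'b']})
ds.set_format(type='numpy', columns=['input_ids'])
print(ds[0])  # only 'input_ids', as an array; 'title' is kept but not returned
```

This mirrors why `reset_format()` is cheap: nothing was converted in place to begin with.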
# Wrapping this all up (PyTorch)
Let's wrap this all up with the full code to load and prepare SQuAD for training a PyTorch model from the HuggingFace `transformers` library.
```
!pip install transformers
import nlp
import torch
from transformers import BertTokenizerFast
# Load our training dataset and tokenizer
dataset = nlp.load_dataset('squad')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
def get_correct_alignement(context, answer):
""" Some original examples in SQuAD have indices wrong by 1 or 2 character. We test and fix this here. """
gold_text = answer['text'][0]
start_idx = answer['answer_start'][0]
end_idx = start_idx + len(gold_text)
if context[start_idx:end_idx] == gold_text:
return start_idx, end_idx # When the gold label position is good
elif context[start_idx-1:end_idx-1] == gold_text:
return start_idx-1, end_idx-1 # When the gold label is off by one character
elif context[start_idx-2:end_idx-2] == gold_text:
        return start_idx-2, end_idx-2  # When the gold label is off by two characters
else:
raise ValueError()
# Tokenize our training dataset
def convert_to_features(example_batch):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = list(zip(example_batch['context'], example_batch['question']))
encodings = tokenizer.batch_encode_plus(input_pairs, pad_to_max_length=True)
    # Compute start and end tokens for labels using Transformers' fast tokenizers alignment methods.
start_positions, end_positions = [], []
for i, (context, answer) in enumerate(zip(example_batch['context'], example_batch['answers'])):
start_idx, end_idx = get_correct_alignement(context, answer)
start_positions.append(encodings.char_to_token(i, start_idx))
end_positions.append(encodings.char_to_token(i, end_idx-1))
encodings.update({'start_positions': start_positions,
'end_positions': end_positions})
return encodings
dataset['train'] = dataset['train'].map(convert_to_features, batched=True)
# Format our dataset to outputs torch.Tensor to train a pytorch model
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
dataset['train'].set_format(type='torch', columns=columns)
# Instantiate a PyTorch Dataloader around our dataset
dataloader = torch.utils.data.DataLoader(dataset['train'], batch_size=8)
# Let's load a pretrained Bert model and a simple optimizer
from transformers import BertForQuestionAnswering
model = BertForQuestionAnswering.from_pretrained('bert-base-cased')
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
# Now let's train our model
model.train()
for i, batch in enumerate(dataloader):
outputs = model(**batch)
loss = outputs[0]
loss.backward()
optimizer.step()
model.zero_grad()
    print(f'Step {i} - loss: {loss.item():.3f}')
if i > 3:
break
```
# Wrapping this all up (Tensorflow)
Let's wrap this all up with the full code to load and prepare SQuAD for training a TensorFlow model (requires TensorFlow 2.2.0 or later).
```
import tensorflow as tf
import nlp
from transformers import BertTokenizerFast
# Load our training dataset and tokenizer
train_tf_dataset = nlp.load_dataset('squad', split="train")
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
# Tokenize our training dataset
# The only difference here is that start_positions and end_positions
# must be single-dim lists => [[23], [45], ...]
# instead of => [23, 45, ...]
def convert_to_tf_features(example_batch):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = list(zip(example_batch['context'], example_batch['question']))
encodings = tokenizer.batch_encode_plus(input_pairs, pad_to_max_length=True, max_length=tokenizer.max_len)
    # Compute start and end tokens for labels using Transformers' fast tokenizers alignment methods.
start_positions, end_positions = [], []
for i, (context, answer) in enumerate(zip(example_batch['context'], example_batch['answers'])):
start_idx, end_idx = get_correct_alignement(context, answer)
start_positions.append([encodings.char_to_token(i, start_idx)])
end_positions.append([encodings.char_to_token(i, end_idx-1)])
if start_positions and end_positions:
encodings.update({'start_positions': start_positions,
'end_positions': end_positions})
return encodings
train_tf_dataset = train_tf_dataset.map(convert_to_tf_features, batched=True)
def remove_none_values(example):
    return None not in example["start_positions"] and None not in example["end_positions"]
train_tf_dataset = train_tf_dataset.filter(remove_none_values, load_from_cache_file=False)
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
# Let's load a pretrained TF2 Bert model and a simple optimizer
from transformers import TFBertForQuestionAnswering
model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'output_1': loss_fn, 'output_2': loss_fn},
loss_weights={'output_1': 1., 'output_2': 1.},
metrics=['accuracy'])
# Now let's train our model
model.fit(tfdataset, epochs=1, steps_per_epoch=3)
```
# Metrics API
`nlp` also provides easy access and sharing of metrics.
This aspect of the library is still experimental and the API may still evolve more than the datasets API.
Like datasets, metrics are added as small scripts wrapping common metrics in a common API.
There are several reasons you may want to use metrics with `nlp`, in particular:
- metrics for specific datasets like GLUE or SQuAD are provided out-of-the-box in a simple, convenient and consistent way integrated with the dataset,
- metrics in `nlp` leverage the powerful backend to provide smart features out-of-the-box like support for distributed evaluation in PyTorch
## Using metrics
Using metrics is pretty simple, they have two main methods: `.compute(predictions, references)` to directly compute the metric and `.add(prediction, reference)` or `.add_batch(predictions, references)` to only store some results if you want to do the evaluation in one go at the end.
Here is a quick gist of a standard use of metrics (the simplest usage):
```python
import nlp
bleu_metric = nlp.load_metric('bleu')
# If you only have a single iteration, you can easily compute the score like this
predictions = model(inputs)
score = bleu_metric.compute(predictions, references)
# If you have a loop, you can "add" your predictions and references at each iteration instead of having to save them yourself (the metric object store them efficiently for you)
for batch in dataloader:
    model_inputs, targets = batch
    predictions = model(model_inputs)
bleu_metric.add_batch(predictions, targets)
score = bleu_metric.compute() # Compute the score from all the stored predictions/references
```
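The accumulate-then-compute pattern in the loop above can be sketched with a toy accuracy metric (a hypothetical class, not the `nlp` API):

```python
class ToyAccuracy:
    # Minimal sketch of the add_batch/compute pattern: store batches,
    # score everything once at the end.
    def __init__(self):
        self.predictions, self.references = [], []

    def add_batch(self, predictions, references):
        # Only store; no scoring happens here.
        self.predictions.extend(predictions)
        self.references.extend(references)

    def compute(self):
        # Score all stored pairs in one go, then reset the buffers.
        correct = sum(p == r for p, r in zip(self.predictions, self.references))
        score = correct / len(self.predictions)
        self.predictions, self.references = [], []
        return score

metric = ToyAccuracy()
metric.add_batch([1, 0], [1, 1])
metric.add_batch([1, 1], [1, 0])
print(metric.compute())  # 2 correct out of 4 -> 0.5
```

Real metrics additionally write the buffers to disk with the Arrow backend, which is what makes the distributed setup below possible.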
Here is a quick gist of a use in a distributed torch setup (should work for any python multi-process setup actually). It's pretty much identical to the second example above:
```python
import nlp
# You need to give the total number of parallel python processes (num_process) and the id of each process (process_id)
bleu_metric = nlp.load_metric('bleu', process_id=torch.distributed.get_rank(), num_process=torch.distributed.get_world_size())
for batch in dataloader:
    model_inputs, targets = batch
    predictions = model(model_inputs)
bleu_metric.add_batch(predictions, targets)
score = bleu_metric.compute() # Compute the score on the first node by default (can be set to compute on each node as well)
```
Example with a NER metric: `seqeval`
```
ner_metric = nlp.load_metric('seqeval')
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
ner_metric.compute(predictions, references)
```
# Adding a new dataset or a new metric
There are two ways to add new datasets and metrics in `nlp`:
- datasets can be added with a Pull-Request adding a script in the `datasets` folder of the [`nlp` repository](https://github.com/huggingface/nlp)
=> once the PR is merged, the dataset can be instantiated by its folder name e.g. `nlp.load_dataset('squad')`. If you want HuggingFace to host the data as well you will need to ask the HuggingFace team to upload the data.
- datasets can also be added with a direct upload using the `nlp` CLI as a user or organization (like for models in `transformers`). In this case the dataset will be accessible under the given user/organization name, e.g. `nlp.load_dataset('thomwolf/squad')`. In this case you can upload the data yourself at the same time and in the same folder.
We will add a full tutorial on how to add and upload datasets soon.
# Kwargs optimization wrapper
TAGS: Optimization and fitting
## This is a method to implement optimization for functions taking keywords instead of a vector (Python 3 only, since Python 2 doesn't support unpacking multiple dictionaries simultaneously)
This was mostly implemented out of the need to optimize an ML algorithm's hyper-parameters; by using a factory function together with dictionary packing and unpacking, it is possible to achieve this.
### This is a prototype; some minimal tweaking may be necessary, though the functions should be modular enough and have enough safety nets to work as is.
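For reference, the Python 3 feature the wrapper relies on is unpacking several dictionaries at once, both in a dict literal and in a single call:

```python
# Multiple dictionary unpacking (Python 3.5+): the basis of this wrapper.
def foo(a=0, b=0, c=0):
    return a + 10 * b + 100 * c

fixed = {'a': 1}             # e.g. previously optimized values
varying = {'b': 2, 'c': 3}   # e.g. values proposed by the optimizer

merged = {**fixed, **varying}     # dict merge; right-hand side wins on clashes
result = foo(**fixed, **varying)  # multiple unpackings in a single call
print(merged, result)  # {'a': 1, 'b': 2, 'c': 3} 321
```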
## First function, a random distribution generator
This first function is a random generator that creates an array of X rows (the `Size` parameter); iterating over those rows and passing the values to the function can be useful.
Using a `Size` of 1 is the recommended way to create a single vector to initiate the optimization process.
```
from pandas import DataFrame
from random import randint, uniform
def adpt_distr(boundict, Method: bool=True, Size=1, out='df', hardfloat=True):
"""
Takes input with bounds, check bounds, if bounds are (float) sample from uniform 0,1
otherwise if bounds are (bool) sample from randint 0,1 and otherwise sample from randint bound to bound
return matrix of desired size, first var of size is the number of time and the second is the lenght(num of dims)
args:
boundict:
dictionnary containing the keyword of the function as the key and the associated values are tuple
the tuple can contain int (minimum_int,Max_int) those values are inclusive
if you want to return a float set the value as (foat) and if you wan to return a boolean simply set the value as (bool)
Method:
if true will create x item for Size per path, otherwise if not will iterate to create each value one by one
Size:
number of random values/rows to be created
out:
return a dataframe unless first letter of input is 'a' then return array
a dataframe make it possible to have distinct types of values but may not be compatible with minimize
hardfloat:
Force output to be a float, if false the keyword float will return int from 0 to 100
otherwise if True it will only return a float ranging from 0. to 1.
"""
vals = dict()
if not (Method):
from random import randint, uniform
if not (isinstance(Size, int)):
Size = Size[0]
        for sample in range(Size):
            # row creator
            vals = dict()
            for key, vari in boundict.items():
                try:
                    if len(
                            vari
                    ) > 1:  # this means that vari is not bool or float and is the proper size
                        if isinstance(vari[0], float) and isinstance(
                                vari[1], float) and hardfloat:
                            DAT = uniform(vari[0], vari[1])
                        else:
                            DAT = randint(vari[0], vari[1])
                except TypeError:
                    if vari == bool:
                        DAT = randint(0, 1)
                    elif vari == float:
                        if hardfloat:
                            DAT = uniform(0, 1)
                        else:
                            DAT = randint(0, 100)
                    else:
                        DAT = vari
                vals[key] = DAT
            try:
                # DataFrame.append returns a new frame; keep the result
                datafram = datafram.append(vals, ignore_index=True)
            except NameError:
                datafram = DataFrame(vals, index=[0])
else:
from numpy.random import randint, uniform
if not (isinstance(Size, int)):
Size = Size[0]
for key, vari in boundict.items():
# take dict of value as input
try:
if len(
vari
) > 1: # this means that vari is not bool or float and is the proper size
if isinstance(vari[0], float) and isinstance(
vari[1], float) and hardfloat:
DAT = uniform(low=vari[0], high=vari[1], size=Size)
else:
DAT = randint(low=vari[0], high=vari[1], size=Size)
except:
if vari == bool:
DAT = randint(low=0, high=1, size=Size)
elif vari == float:
if hardfloat:
DAT = uniform(low=0, high=1, size=Size)
else:
DAT = randint(low=0, high=100, size=Size)
else:
DAT = vari
vals[key] = DAT
datafram = DataFrame.from_dict(vals, orient='columns')
    if out[0].lower() == 'a':
        if not (hardfloat):
            out = datafram.to_numpy().astype(int)
        else:
            out = datafram.to_numpy()  # may not be compatible with minimize
        return (out)
return (datafram)
```
## The core of the method: the dicwrap function
This is where everything happens: this function is used as a function factory to create a simple function with most things preset, so that minimize can handle the function properly.
There should be enough modularity and safety nets to make this work out of the box. It is still possible that something may go wrong (it has not been tested extensively).
Check the function doc for more details.
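Stripped of all the safety nets, the core idea can be sketched in a few lines (hypothetical names; assumes `scipy` is available): a factory closes over the keyword names and returns the vector function that `minimize` expects:

```python
import numpy as np
from scipy.optimize import minimize

def kwargs_to_vector(funck, keys, static=None):
    # Factory: return a function of a plain vector, as required by minimize.
    static = static or {}
    def wrapped(vec):
        # Zip the optimizer's vector back onto the keyword names.
        return funck(**static, **dict(zip(keys, vec)))
    return wrapped

def objective(a=0.0, b=0.0):
    # A keyword-only objective with a known minimum at a=1, b=-2.
    return (a - 1.0) ** 2 + (b + 2.0) ** 2

keys = ['a', 'b']
result = minimize(kwargs_to_vector(objective, keys), x0=np.zeros(2))
best_kwargs = dict(zip(keys, result.x))
print(best_kwargs)  # close to {'a': 1.0, 'b': -2.0}
```

`dicwrap` below adds the extras on top of this skeleton: bounds handling, class instantiation, static kwargs, and the random initial guess.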
```
from collections import OrderedDict as OD
from pandas import DataFrame
import numpy as np
def dicwrap(funck,
boundings,
lenit: int=1,
inpt=None,
i_as_meth_arg: bool=False,
factory: bool=True,
Cmeth: str="RUN",
staticD=dict(),
hardfloat=False,
inner_bounding=False):
"""take in function and dict and return:
if factory is True :
the primed function is returned, this function is the one given to minimize
if lenit > 0:
the initiation vector is returned ( if a set of random value is needed to start minimize)
then:
the bounds are returned
and the keywords are also returned, this is useful if you want to combine the vector and
the names of the values as a dict if you wanted to optimize for than one batch of parameter
args:
funck:
function to optimize
boundings:
list or ordered dictionnary, if a list is passed is should be composed of tuples,
the first level of tuples contains the key and another tuple with a type or the bounds
i.e.:[('a',(1,100)),('b',(float)),('c',(1,100)),('d',(bool)),('e',(1,100)),('f',(1,100))]
lenit:
lenght(row not cols) of the first innit distribution
inpt:
main target to process with function ( the main arg of the function)
i_as_meth_arg:
if the value of inpt should be only give when the class method is called, then set it to true,
if inpt should be given to the function or the class __init__ then leave as False
Cmeth:
class method to run
factory:
act as a factory function to automatically set station and Cmeth
that way the function will only need the init and args as input, not the station and Cmeth too
staticD:
a dictionnary of key word arguments, useful if you want to use previously optimized value and optimize other param
hardfloat:
if hardfloat is true, floats will be returned in the initial guess and bounds,
this is not recomended to use with minimize,
if floats are needed in the function it is recommended to do a type check and to convert from int to float and divide
inner_bounding:
if True, bounds will be enforced inside the generated function and not with scipy,
otherwise bounds are assumed to be useless or enforced by the optimizer
"""
if isinstance(boundings, list):
        boundings = OD(boundings)
elif isinstance(boundings, OD):
print('good type of input')
elif isinstance(boundings, dict):
        print(
            "kwargs will be in a random order, use an ordered dictionary instead"
        )
else:
print("invalid input for boundings, quitting")
exit(1)
dicf = OD()
args = []
bounds = []
initg = []
    if factory and (
            inpt is None
    ):  # set inpt as '' when creating the function to ignore it
inpt = input(
'please input the arg that will be executed by the function')
for ke, va in boundings.items():
if va == bool:
dicf[ke] = (0, 1)
elif va == float:
if hardfloat:
dicf[ke] = (0, 1)
else:
dicf[ke] = (0, 100)
elif isinstance(va, tuple):
dicf[ke] = va
else:
try:
if len(va) > 1:
dicf[ke] = tuple(va)
else:
dicf[ke] = va
except:
dicf[ke] = va
if lenit > 0:
initguess = adpt_distr(
dicf, out='array', Size=lenit, hardfloat=hardfloat)
for kk, vv in dicf.items():
bounds.append(vv)
args.append(kk)
if factory:
def kwargsf(initvs): # inner funct
if not (len(initvs) == len(args)):
                if isinstance(
                        initvs,
                        np.ndarray) and len(initvs[0]) == len(args):
initvs = initvs[0]
else:
print(initvs)
print(len(initvs), len(args))
                    print(type(initvs))
                    print(
                        """initial values provided are not the same length as keywords provided,
                        something went wrong, aborting""")
exit(1)
if inner_bounding:
for i in range(len(bounds)):
maxx = max(bounds[i])
minn = min(bounds[i])
if initvs[i] > maxx:
initvs[i] = maxx
elif initvs[i] < minn:
initvs[i] = minn
dictos = dict(zip(args, initvs))
if len(inpt) == 0 or len(
inpt) == 1: # no static input, only values to optimize
instt = funck(**staticD, **dictos)
elif i_as_meth_arg:
# an alternative may be instt=funck(inpt,**staticD,**dictos)
instt = funck(**staticD, **dictos)
else:
instt = funck(inpt, **staticD, **dictos)
# if an element is present in both dictos and staticD, dictos will overwrite it
# if you want the element in staticD to never change, place it after dictos
# check if executing the function return an output
            if not (isinstance(instt, (tuple, int, float, list))
                    or isinstance(instt, (np.ndarray, DataFrame))):
# if no value output is returned then it is assumed that the function is a class instance
if i_as_meth_arg:
outa = getattr(instt, Cmeth)(inpt)
else:
outa = getattr(instt, Cmeth)()
return (outa)
else:
return (instt)
if lenit > 0:
if inner_bounding:
return (kwargsf, initguess, args)
return (kwargsf, initguess, bounds, args)
else:
if inner_bounding:
return (kwargsf, args)
return (kwargsf, bounds, args)
else:
if inner_bounding:
return (initguess, args)
return (initguess, bounds, args)
```
## Example 1: single stage function optimization
```
# example use
from scipy.optimize import minimize
from pandas import DataFrame # to make sure adpt_dstr works
# foo is our function to optimize
def foo(data, first_V=2, second_V=True, third_V=0.23):
    if isinstance(third_V, int):  # force float conversion
        third_V = (float(third_V) / 100)
    pass
# our dictionary with our bounds and variables to optimize
kwarg = [('first_V', (0, 23)), ('second_V', (bool)), ('third_V', (float))]
Function, Vector_init, Bounds, Args = dicwrap(
    foo, boundings=kwarg, lenit=1, inpt='')
optimized = minimize(fun=Function, x0=Vector_init, bounds=Bounds)
optimized_kwargs = dict(zip(Args, optimized.x))
```
## Example 2: Multi-stage class optimization
If you want to implement optimization in many stages and for a class, this would be the way to do so:
```
# example use
from scipy.optimize import minimize
from pandas import DataFrame # to make sure adpt_dstr works
# foo is our function to optimize
class Cfoo(object):
def __init__(self, first_V=2, second_V=0.25, third_V=25, fourth_V=True):
        # self.data = data if data is needed at init and not for the method; see the alternate instt call suggested in dicwrap
self.first = first_V
self.second = second_V
        # to showcase conversion for a class; this can be done in the function too
if isinstance(third_V, int):
self.third = (float(third_V) / 100)
else:
self.third = third_V
self.fourth = fourth_V
def EXEC(self, data):
# do something using the instance variables set by init and some data
pass
# our dictionary with our bounds and variables to optimize
kwarg1 = [('first_V', (0, 23)), ('second_V', (float))]
kwarg2 = [('third_V', (13, 38)), ('fourth_V', (bool))]
optimized_kwargs = OD() # create empty dict to ensure everything goes well
for dicto in [kwarg1, kwarg2]:
    Function, Vector_init, Bounds, Args = dicwrap(
        Cfoo,
        Cmeth='EXEC',
        boundings=dicto,
        lenit=1,
        inpt=data,
        staticD=optimized_kwargs,
        i_as_meth_arg=True)
    # return the vector of optimized values
    optimized = minimize(fun=Function, x0=Vector_init, bounds=Bounds)
    # combine the values with the corresponding args
    optim_kwargs = dict(zip(Args, optimized.x))
    optimized_kwargs = {**optimized_kwargs, **optim_kwargs}  # merge the two dicts
```
# Querying online data with astroquery and PyVO
There are two main general packages for accessing online data from Python in the Astropy ecosystem:
* The [astroquery](https://astroquery.readthedocs.io/en/latest/) coordinated package, which offers access to many services, including a number that are not VO compatible.
* The [PyVO](https://pyvo.readthedocs.io/en/latest/) affiliated package which implements a Pythonic interface to VO-compliant services.
In this tutorial, we will take a look at both of these.
<section class="objectives panel panel-warning">
<div class="panel-heading">
<h2><span class="fa fa-certificate"></span> Objectives</h2>
</div>
<div class="panel-body">
<ul>
<li>Querying services such as Simbad and ESASky</li>
<li>Using PyVO to access data on VO-compliant servers</li>
</ul>
</div>
</section>
## Documentation
This notebook only shows a subset of the functionality in astroquery and PyVO. For more information about the features presented below as well as other available features, you can read the
[astroquery](https://astroquery.readthedocs.io/en/latest/) and [PyVO](https://pyvo.readthedocs.io/en/latest/) documentation.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.rc('image', origin='lower')
plt.rc('figure', figsize=(10, 6))
```
## Using astroquery
Astroquery provides a common interface to the following services:
* ALMA Queries (astroquery.alma)
* Atomic Line List (astroquery.atomic)
* Besancon Queries (astroquery.besancon)
* Cadc (astroquery.cadc)
* CASDA Queries (astroquery.casda)
* CDS MOC Service (astroquery.cds)
* esa.hubble (astroquery.esa.hubble)
* ESASky Queries (astroquery.esasky)
* ESO Queries (astroquery.eso)
* Gaia TAP+ (astroquery.gaia)
* GAMA Queries (astroquery.gama)
* HEASARC Queries (astroquery.heasarc)
* HITRAN Queries (astroquery.hitran)
* IRSA Image Server program interface (IBE) Queries (astroquery.ibe)
* IRSA Queries (astroquery.irsa)
* IRSA Dust Extinction Service Queries (astroquery.irsa_dust)
* JPL Spectroscopy Queries (astroquery.jplspec)
* MAGPIS Queries (astroquery.magpis)
* MAST Queries (astroquery.mast)
* Minor Planet Center Queries (astroquery.mpc/astroquery.solarsystem.MPC)
* NASA ADS Queries (astroquery.nasa_ads)
* NED Queries (astroquery.ned)
* NIST Queries (astroquery.nist)
* NRAO Queries (astroquery.nrao)
* NVAS Queries (astroquery.nvas)
* SIMBAD Queries (astroquery.simbad)
* Skyview Queries (astroquery.skyview)
* Splatalogue Queries (astroquery.splatalogue)
* UKIDSS Queries (astroquery.ukidss)
* Vamdc Queries (astroquery.vamdc)
* VizieR Queries (astroquery.vizier)
* VO Simple Cone Search (astroquery.vo_conesearch)
* VSA Queries (astroquery.vsa)
* xMatch Queries (astroquery.xmatch)
and also provides access to other services.
### Simbad
To start off, we can take a look at the sub-package to query [SIMBAD](http://simbad.u-strasbg.fr/simbad/):
```
from astroquery.simbad import Simbad
```
We can query by identifier:
```
result = Simbad.query_object("m1")
result
```
or by coordinates:
```
import astropy.units as u
from astropy.coordinates import SkyCoord
c = SkyCoord(5 * u.hourangle, 30 * u.deg, frame='icrs')
r = 15 * u.arcminute
result = Simbad.query_region(c, radius=r)
result
```
### ESASky
Another example is querying images from [ESASky](https://astroquery.readthedocs.io/en/latest/esasky/esasky.html)
```
from astroquery.esasky import ESASky
```
We can list the available catalogs and maps:
```
ESASky.list_catalogs()
ESASky.list_maps()
```
and we can query and download catalogs:
```
from astroquery.esasky import ESASky
result = ESASky.query_object_catalogs("M51", "2MASS")
result[0]
result = ESASky.query_region_catalogs("M51", 10 * u.arcmin, "2MASS")
result[0]
```
We can also query and download images:
```
images = ESASky.get_images("m51", radius=5 * u.arcmin,
missions=['Herschel'])
images['HERSCHEL'][0]['160'].info()
plt.imshow(images['HERSCHEL'][0]['160']['image'].data)
```
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2><span class="fa fa-pencil"></span> Challenge</h2>
</div>
<div class="panel-body">
<p>Take a look at the documentation for <a href="https://astroquery.readthedocs.io/en/latest/irsa/irsa.html">IRSA queries</a> and try to download a table from the ALLWISE Source Catalog by searching for sources within 5 arcminutes of M31.</p>
</div>
</section>
```
from astroquery.irsa import Irsa
Irsa.list_catalogs() # shows that catalog name is allwise_p3as_psd
table = Irsa.query_region("M31", catalog="allwise_p3as_psd", radius=5 * u.arcmin)
table
```
## Using PyVO
The PyVO package differs a bit from astroquery in that it does not have specialized sub-packages for different services. Instead, it implements and exposes the VO query standards. It is a lower level package that may require more knowledge about the VO, so it is not necessarily as user friendly as astroquery, but on the other hand it can work with any VO-compliant service. Astroquery is starting to use PyVO in places behind the scenes.
We can take a look at an example that consists of downloading the 2MASS images for the M17 region:
```
from pyvo.dal import imagesearch
M17 = SkyCoord.from_name('M17')
table = imagesearch('https://irsa.ipac.caltech.edu/cgi-bin/2MASS/IM/nph-im_sia?type=at&ds=asky&',
M17, size=0.25).to_table()
table
subtable = table[(table['band'] == b'K') & (table['format'] == b'image/fits')]
subtable
from astropy.io import fits
hdus = [fits.open(row['download'].decode('ascii')) for row in subtable]
plt.imshow(hdus[0][0].data, vmin=500, vmax=1000)
```
<center><i>This notebook was written by <a href="https://aperiosoftware.com/">Aperio Software Ltd.</a> © 2019, and is licensed under a <a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License (CC BY 4.0)</a></i></center>

```
%pylab inline
%load_ext autoreload
%autoreload 2
import jax
import jax.numpy as jnp
import numpy as onp
import haiku as hk
from jax.experimental import optix
from nsec.datasets.two_moons import get_two_moons
from nsec.utils import display_score_two_moons
from nsec.models.dae.ardae import ARDAE
from nsec.normalization import SNParamsTree as CustomSNParamsTree
from functools import partial
```
## Defining the analytic target distribution
```
two_moons = get_two_moons(0.05)
rng_key = jax.random.PRNGKey(seed=0)
samps = two_moons.sample(10000, seed=rng_key)
# Plotting samples
hist2d(samps[:,0], samps[:,1],100);
# But now we can also compute the gradients of log p :-)
true_score = jax.vmap(jax.grad(two_moons.log_prob))
# Close up
figure(dpi=100)
X = np.arange(-1.5, 2.5, 0.1)
Y = np.arange(-1, 1.5, 0.1)
points = stack(meshgrid(X, Y), axis=-1).reshape((-1, 2))
g = true_score(points).reshape([len(Y), len(X),2])
quiver(X, Y, g[:,:,0], g[:,:,1])
display_score_two_moons(true_score, two_moons, is_amortized=False, is_reg=False)
# Large scale
figure(dpi=100)
X = np.arange(-3, 4, 0.2)
Y = np.arange(-2, 3, 0.2)
points = stack(meshgrid(X, Y), axis=-1).reshape((-1, 2))
g = true_score(points).reshape([len(Y), len(X),2])
quiver(X, Y, g[:,:,0], g[:,:,1])
```
## Implementing AR-DAE
```
class ARDAE(hk.Module):
def __init__(self, is_training=False):
super(ARDAE, self).__init__()
self.is_training=is_training
def __call__(self, x, sigma):
sigma = sigma.reshape((-1,1))
# Encoder
net = hk.Linear(128)(jnp.concatenate([x, sigma],axis=1))
net = hk.BatchNorm(True, True, 0.9)(net, self.is_training)
net = jax.nn.leaky_relu(net)
net = hk.Linear(128)(net)
net = hk.BatchNorm(True, True, 0.9)(net, self.is_training)
net = jax.nn.leaky_relu(net)
net = hk.Linear(2)(net)
# Decoder
net = hk.Linear(128)(jnp.concatenate([net, sigma],axis=1))
net = hk.BatchNorm(True, True, 0.9)(net, self.is_training)
net = jax.nn.leaky_relu(net)
net = hk.Linear(128)(net)
net = hk.BatchNorm(True, True, 0.9)(net, self.is_training)
net = jax.nn.leaky_relu(net)
net = hk.Linear(2)(net)
return net
```
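The training loss used in the next cell, `jnp.mean((batch['u'] + batch['s'] * res)**2)`, is the AR-DAE residual objective: for a noisy sample `x = y + s*u`, it is minimized when the network output equals `-u/s`, i.e. a (scaled) score estimate. A quick numpy sanity check of that claim (standalone sketch, not the training code):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=(512, 2))          # clean samples
u = rng.normal(size=(512, 2))          # noise direction
s = 0.05 * rng.normal(size=(512, 1))   # per-sample noise scale
x = y + s * u                          # noisy samples (unused in the check itself)

def ardae_loss(res):
    # Same objective as the jax loss_fn below, in numpy.
    return np.mean((u + s * res) ** 2)

perfect = -u / s                       # idealized network output (hypothetical)
print(ardae_loss(perfect))             # ~0: the residual cancels the noise
print(ardae_loss(np.zeros_like(u)))    # ~mean(u**2): a trivial output pays the full noise
```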
```
def forward(x, sigma, is_training=False):
denoiser = ARDAE(is_training=is_training)
return denoiser(x, sigma)
model_train = hk.transform_with_state(partial(forward, is_training=True))
batch_size = 512
delta = 0.05
def get_batch(rng_key):
y = two_moons.sample(batch_size, seed=rng_key)
u = onp.random.randn(batch_size, 2)
s = delta * onp.random.randn(batch_size, 1)
x = y + s * u
# x is a noisy sample, y is a sample from the distribution
# u is the random normal noise realisation
return {'x':x, 'y':y, 'u':u, 's':s}
optimizer = optix.adam(1e-3)
rng_seq = hk.PRNGSequence(42)
@jax.jit
def loss_fn(params, state, rng_key, batch):
res, state = model_train.apply(params, state, rng_key,
batch['x'], batch['s'])
loss = jnp.mean((batch['u'] + batch['s'] * res)**2)
return loss, state
@jax.jit
def update(params, state, rng_key, opt_state, batch):
(loss, state), grads = jax.value_and_grad(loss_fn, has_aux=True)(params, state, rng_key, batch)
updates, new_opt_state = optimizer.update(grads, opt_state)
new_params = optix.apply_updates(params, updates)
return loss, new_params, state, new_opt_state
params, state = model_train.init(next(rng_seq),
jnp.zeros((1, 2)),
jnp.ones((1, 1)))
opt_state = optimizer.init(params)
losses = []
for step in range(2000):
batch = get_batch(next(rng_seq))
loss, params, state, opt_state = update(params, state, next(rng_seq), opt_state, batch)
losses.append(loss)
if step%100==0:
print(step, loss)
semilogy(np.array(losses[:]), label='loss')
legend()
print(type(params))
print(params)
import pickle
a = params
with open('../params/filename.pickle', 'wb') as handle:
pickle.dump(a, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('../params/filename.pickle', 'rb') as handle:
b = pickle.load(handle)
loaded_params = b
X = np.arange(-1.5, 2.5, 0.1)
Y = np.arange(-1, 1.5, 0.1)
points = stack(meshgrid(X, Y), axis=-1).reshape((-1, 2))
model = hk.transform_with_state(partial(forward, is_training=False))
dae_score = partial(model.apply, loaded_params, state, next(rng_seq))
res, state = dae_score(points, 0.0*jnp.ones((len(points),1)))
g = res.reshape([len(Y), len(X),2])
figure(figsize=(14,7))
quiver(X, Y, g[:,:,0], g[:,:,1])
X = np.arange(-1.5, 2.5, 0.01)
Y = np.arange(-1, 1.5, 0.01)
points = stack(meshgrid(X, Y), axis=-1).reshape((-1, 2))
res, state = dae_score(points, jnp.zeros((len(points),1)))
g = res.reshape([len(Y), len(X),2])
imshow(np.sqrt(g[:,:,0]**2 + g[:,:,1]**2))
display_score_two_moons(dae_score, two_moons, is_amortized=True, is_reg=True)
X = np.arange(-3, 4, 0.2)
Y = np.arange(-2, 3, 0.2)
points = stack(meshgrid(X, Y), axis=-1).reshape((-1, 2))
res, state = dae_score(points, 0.0*jnp.ones((len(points),1)))
g = res.reshape([len(Y), len(X),2])
figure(figsize=(14,7))
quiver(X, Y, g[:,:,0], g[:,:,1])
```
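A sketch of why the loss above yields a score estimate (this reading assumes the residual parameterisation in `loss_fn`, with the network output playing the role of $r_\theta$): writing $x = y + s\,u$, the objective is

$$
\mathcal{L}(\theta)=\mathbb{E}_{y\sim p,\;u\sim\mathcal{N}(0,I)}\left[\left\lVert u + s\,r_\theta(y+s u,\, s)\right\rVert^2\right],
$$

whose pointwise minimiser is $r^\star(x,s) = \big(\mathbb{E}[y\mid x]-x\big)/s^2$. By Tweedie's formula this equals $\nabla_x \log p_s(x)$, the score of the noise-smoothed density, so the trained network output can be read directly as a score field, as in the quiver plots that follow.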
## Adding Lipschitz regularisation
```
def forward(x, sigma, is_training=False):
denoiser = ARDAE(is_training=is_training)
return denoiser(x, sigma)
model_train = hk.transform_with_state(partial(forward, is_training=True))
sn_fn = hk.transform_with_state(lambda x: CustomSNParamsTree(ignore_regex='[^?!.]*b$', val=2)(x))
batch_size = 512
delta = 0.05
def get_batch(rng_key):
y = two_moons.sample(batch_size, seed=rng_key)
u = onp.random.randn(batch_size, 2)
s = delta * onp.random.randn(batch_size, 1)
x = y + s * u
# x is a noisy sample, y is a sample from the distribution
# u is the random normal noise realisation
return {'x':x, 'y':y, 'u':u, 's':s}
optimizer = optix.adam(1e-3)
rng_seq = hk.PRNGSequence(42)
@jax.jit
def loss_fn(params, state, rng_key, batch):
res, state = model_train.apply(params, state, rng_key,
batch['x'], batch['s'])
loss = jnp.mean((batch['u'] + batch['s'] * res)**2)
return loss, state
@jax.jit
def update(params, state, sn_state, rng_key, opt_state, batch):
(loss, state), grads = jax.value_and_grad(loss_fn, has_aux=True)(params, state, rng_key, batch)
updates, new_opt_state = optimizer.update(grads, opt_state)
new_params = optix.apply_updates(params, updates)
new_params, new_sn_state = sn_fn.apply(None, sn_state, None, new_params)
return loss, new_params, state, new_sn_state, new_opt_state
params, state = model_train.init(next(rng_seq),
jnp.zeros((1, 2)),
jnp.ones((1, 1)))
opt_state = optimizer.init(params)
_, sn_state = sn_fn.init(jax.random.PRNGKey(1), params)
losses = []
for step in range(2000):
batch = get_batch(next(rng_seq))
loss, params, state, sn_state, opt_state = update(params, state, sn_state, next(rng_seq), opt_state, batch)
losses.append(loss)
if step%100==0:
print(step, loss)
semilogy(np.array(losses[:]), label='loss')
legend()
X = np.arange(-1.5, 2.5, 0.1)
Y = np.arange(-1, 1.5, 0.1)
points = stack(meshgrid(X, Y), axis=-1).reshape((-1, 2))
model_sn = hk.transform_with_state(partial(forward, is_training=False))
score_sn = partial(model_sn.apply, params, state, next(rng_seq))
res, state = score_sn(points, 0.0*jnp.ones((len(points),1)))
g = res.reshape([len(Y), len(X),2])
figure(figsize=(14,7))
quiver(X, Y, g[:,:,0], g[:,:,1])
display_score_two_moons(score_sn, two_moons, is_amortized=True, is_reg=True)
X = np.arange(-1.5, 2.5, 0.01)
Y = np.arange(-1, 1.5, 0.01)
points = stack(meshgrid(X, Y), axis=-1).reshape((-1, 2))
res, state = score_sn(points, jnp.zeros((len(points),1)))
g = res.reshape([len(Y), len(X),2])
imshow(np.sqrt(g[:,:,0]**2 + g[:,:,1]**2))
X = np.arange(-3, 4, 0.2)
Y = np.arange(-2, 3, 0.2)
points = stack(meshgrid(X, Y), axis=-1).reshape((-1, 2))
res, state = score_sn(points, 0.0*jnp.ones((len(points),1)))
g = res.reshape([len(Y), len(X),2])
figure(figsize=(14,7))
quiver(X, Y, g[:,:,0], g[:,:,1])
def display_score_error_two_moons(true_score, estimated_scores, labels, distribution=None, dpi=100, n=28, is_amortized=True, is_reg=True, scale=1, offset=[0, 0]):
    offset = jnp.array(offset)
    d_offset = jnp.array([.5, .25])
    c1 = scale * jnp.array([-.7, -0.5]) + d_offset + offset
    c2 = scale * jnp.array([.7, 0.5]) + d_offset + offset
    n = 100  # overrides the n argument
    X = np.linspace(c1[0], c2[0], int(n*7/5))
    Y = np.linspace(c1[1], c2[1], n)
    if distribution:
        _x, _y = jnp.meshgrid(jnp.arange(0, len(X), 1), jnp.arange(0, len(Y), 1))
        Z = jnp.stack(jnp.meshgrid(X, Y), axis=-1).reshape((-1, 2))
        S = distribution.log_prob(Z)
        dist = jnp.exp(S.reshape((len(Y), len(X))))
    points = stack(meshgrid(X, Y), axis=-1).reshape((-1, 2))
    true_s = true_score(points)
    true_s = true_s.reshape([len(Y), len(X), 2]) / jnp.linalg.norm(true_s)
    n_s = len(estimated_scores)  # was `len(scores)`, which relied on a global defined elsewhere
    estimated_vector_fields = []
    errors = []
    for score in estimated_scores:
        if is_amortized:
            if is_reg:
                estimated_s, state = score(points, 0.0*jnp.ones((len(points), 1)))
            else:
                estimated_s = score(points, 0.0*jnp.ones((len(points), 1)))
        else:
            if is_reg:
                estimated_s, state = score(points)
            else:
                estimated_s = score(points)
        estimated_s = estimated_s.reshape([len(Y), len(X), 2]) / jnp.linalg.norm(estimated_s)
        estimated_vector_fields.append(estimated_s)
        errors.append(jnp.linalg.norm(estimated_s - true_s, axis=2))
    v_min = jnp.min(jnp.array([jnp.min(e) for e in errors]))
    v_max = jnp.max(jnp.array([jnp.max(e) for e in errors]))
    for i in range(n_s):
        plt.subplot(n_s, 2, 2*i+1)
        plt.imshow(errors[i], origin='lower', vmin=v_min, vmax=v_max)
        plt.axis('off')
        plt.colorbar()
        if distribution:
            plt.contour(_x, _y, dist, levels=[1], colors='white')
        plt.title('Score error ({}), avg={:.2e}'.format(labels[i], jnp.mean(errors[i])), fontsize=9)
        plt.subplot(n_s, 2, 2*i+2)
        g = estimated_vector_fields[i]
        plt.quiver(X[::4], Y[::4], g[::4, ::4, 0], g[::4, ::4, 1])
    plt.tight_layout()
    plt.savefig('error_comparison.png')
scores = [dae_score, score_sn]
labels = ['DAE', 'SN DAE']
from nsec.utils import display_score_error_two_moons
plt.figure(dpi=100)
scale = 4
offset = [0, 0]
display_score_error_two_moons(true_score, scores, labels, distribution=two_moons, is_amortized=True, is_reg=True, scale=scale, offset=offset)
plt.figure(dpi=150)
scale = 1
offset = [0, 0]
display_score_error_two_moons(true_score, scores, labels, distribution=two_moons, is_amortized=True, is_reg=True, scale=scale, offset=offset)
plt.savefig('error_comparison.png')
plt.savefig('error_two_moons_zoomed.png')
from nsec.models.nflow.nsf import NeuralSplineCoupling, NeuralSplineFlow
def forwardNF(x):
flow = NeuralSplineFlow()
return flow(x)
model_NF = hk.transform(forwardNF, apply_rng=True)
optimizer = optix.adam(1e-4)
rng_seq = hk.PRNGSequence(42)
batch_size = 512
def make_samples(rng_seq, n_samples, gm):
return gm.sample(n_samples, seed = next(rng_seq))
distribution = two_moons
def get_batch():
x = make_samples(rng_seq, batch_size,distribution)
return {'x': x}
@jax.jit
def loss_fn(params, rng_key, batch):
log_prob = model_NF.apply(params, rng_key, batch['x'])
return -jnp.mean(log_prob)
@jax.jit
def update(params, rng_key, opt_state, batch):
loss, grads = jax.value_and_grad(loss_fn)(params, rng_key, batch)
updates, new_opt_state = optimizer.update(grads, opt_state)
new_params = optix.apply_updates(params, updates)
return loss, new_params, new_opt_state
params = model_NF.init(next(rng_seq), jnp.zeros((1, 2)))
opt_state = optimizer.init(params)
losses = []
for step in range(2000):
batch = get_batch()
loss, params, opt_state = update(params, next(rng_seq), opt_state, batch)
losses.append(loss)
if step%100==0:
print(loss)
plot(losses)
log_prob = partial(model_NF.apply, params, next(rng_seq))
log_prob(jnp.zeros(2).reshape(1,2)).shape
def log_prob_reshaped(x):
x = x.reshape([1,-1])
return jnp.reshape(log_prob(x), ())
score_NF = jax.vmap(jax.grad(log_prob_reshaped))
```
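For the normalizing flow, the score is obtained by differentiating the learned log-density; this is what the `jax.vmap(jax.grad(log_prob_reshaped))` composition above computes, pointwise and batched over samples:

$$
s_\theta(x) = \nabla_x \log p_\theta(x).
$$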
```
from nsec.utils import display_score_error_two_moons
scores = [score_NF, dae_score, score_sn]
labels = ['NF', 'DAE', 'SN DAE']
plt.figure(dpi=150)
scale = 4
offset = [0, 0]
display_score_error_two_moons(true_score, scores, labels, distribution=two_moons,
                              is_amortized=True, is_reg=True, scale=scale, offset=offset, is_NF=True)
plt.savefig('method_comparison.png')
```
```
scale = 3
offset = jnp.array([0., 0.])
d_offset = jnp.array([.5, .25])
c1 = scale * (jnp.array([-.7, -0.5])) + d_offset + offset
c2 = scale * (jnp.array([.7, 0.5])) + d_offset + offset
#X = np.arange(c1[0], c2[0], 0.1)
#Y = np.arange(c1[1], c2[1], 0.1)
n = 100
X = np.linspace(c1[0], c2[0], int(n*7/5))
Y = np.linspace(c1[1], c2[1], n)
points = stack(meshgrid(X, Y), axis=-1).reshape((-1, 2))
estimated_sn, state = score_sn(points, 0.0*jnp.ones((len(points),1)))
estimated_sn = estimated_sn.reshape([len(Y), len(X),2])/jnp.linalg.norm(estimated_sn)
estimated_dae, state = dae_score(points, 0.0*jnp.ones((len(points),1)))
estimated_dae = estimated_dae.reshape([len(Y), len(X),2])/jnp.linalg.norm(estimated_dae)
estimated_nf = score_NF(points)
estimated_nf = estimated_nf.reshape([len(Y), len(X),2])/jnp.linalg.norm(estimated_nf)
true_score = jax.vmap(jax.grad(two_moons.log_prob))
true_s = true_score(points)
true_s = true_s.reshape([len(Y), len(X),2])/jnp.linalg.norm(true_s)
errors_sn = jnp.linalg.norm(estimated_sn - true_s, axis=2)
errors_dae = jnp.linalg.norm(estimated_dae - true_s, axis=2)
errors_nf = jnp.linalg.norm(estimated_nf - true_s, axis=2)
errors = [errors_sn, errors_dae, errors_nf]
v_min = jnp.min([jnp.min(e) for e in errors[:-1]])
v_max = jnp.max([jnp.max(e) for e in errors[:-1]])
plt.figure(dpi=100)
plt.subplot(311)
imshow(errors_dae, origin='lower', vmin=v_min, vmax=v_max)
colorbar()
plt.subplot(312)
imshow(errors_sn, origin='lower', vmin=v_min, vmax=v_max)
colorbar()
plt.subplot(313)
imshow(errors_nf, origin='lower', vmin=v_min, vmax=v_max)
colorbar()
#plt.imshow(errors[i], origin='lower', vmin=v_min, vmax=v_max)
#plt.subplot(122)
#quiver(X[::4], Y[::4], estimated_s[::4,::4,0], estimated_s[::4,::4,1]);
plt.imshow(distribution.log_prob(points).reshape([len(Y), len(X)]))
plt.colorbar()
curve_error = []
estimated_sn, state = score_sn(points, 0.0*jnp.ones((len(points),1)))
estimated_sn /= np.linalg.norm(estimated_sn)
estimated_dae, state = dae_score(points, 0.0*jnp.ones((len(points),1)))
estimated_dae /= np.linalg.norm(estimated_dae)
estimated_nf = score_NF(points)
estimated_nf /= np.linalg.norm(estimated_nf)
true_s = true_score(points)
true_s /= np.linalg.norm(true_s)
error_sn = np.linalg.norm(estimated_sn - true_s, axis=1)
error_dae = np.linalg.norm(estimated_dae - true_s, axis=1)
error_nf = np.linalg.norm(estimated_nf - true_s, axis=1)
distance = distribution.log_prob(points)
argsort_distance = distance.argsort()
distance = distance[argsort_distance]
error_sn = error_sn[argsort_distance]
error_dae = error_dae[argsort_distance]
error_nf = error_nf[argsort_distance]
"""
table_sn = np.stack([distance, error_sn])
table_sn = np.sort(table_sn, 1)
table_dae = np.stack([distance, error_dae])
table_dae = np.sort(table_dae, 1)
table_nf = np.stack([distance, error_nf])
table_nf = np.sort(table_nf, 1)
"""
n_p = distance.shape[0]  # the table_* arrays above are commented out, so count points directly
r = 100
table_dae_bined = np.zeros((2, n_p//r))
d_dae = np.zeros(n_p//r)
table_sn_bined = np.zeros((2, n_p//r))
d_sn = np.zeros(n_p//r)
table_nf_bined = np.zeros((2, n_p//r))
d_nf = np.zeros(n_p//r)
for i in range(n_p//r):
a = int(i*r)
b = int((i+1)*r)
table_dae_bined[0, i] = np.mean(distance[a:b])
table_dae_bined[1, i] = np.mean(error_dae[a:b])
d_dae[i] = np.std(error_dae[a:b])/2
table_sn_bined[0, i] = np.mean(distance[a:b])
table_sn_bined[1, i] = np.mean(error_sn[a:b])
d_sn[i] = np.std(error_sn[a:b])/2
table_nf_bined[0, i] = np.mean(distance[a:b])
table_nf_bined[1, i] = np.mean(error_nf[a:b])
d_nf[i] = np.std(error_nf[a:b])/2
plt.figure(dpi=100)
plt.plot(table_nf_bined[0,:], table_nf_bined[1,:], alpha=1, label='NSF', color='green')
plt.fill_between(table_nf_bined[0,:], table_nf_bined[1,:] - d_nf, table_nf_bined[1,:] + d_nf,
color='green', alpha=0.2)
plt.plot(table_dae_bined[0,:], table_dae_bined[1,:], alpha=1, label='DAE', color='red')
plt.fill_between(table_dae_bined[0,:], table_dae_bined[1,:] - d_dae, table_dae_bined[1,:] + d_dae,
color='red', alpha=0.2)
plt.plot(table_sn_bined[0,:], table_sn_bined[1,:], alpha=1, label='DAE w/ SN', color='blue')
plt.fill_between(table_sn_bined[0,:], table_sn_bined[1,:] - d_sn, table_sn_bined[1,:] + d_sn,
color='blue', alpha=0.2)
plt.ylabel('average error')
plt.xlabel(r'$\log p(x)$')
#plt.xscale('symlog')
plt.ylim((-.0005, .04))
#plt.xscale('log')
plt.legend()
plt.figure(dpi=100)
plt.scatter(distance, error_dae, s=1, alpha=.25, label='DAE', color='red')
plt.scatter(distance, error_sn, s=1, alpha=.25, label='DAE w/ SN', color='blue')
plt.scatter(distance, error_nf, s=1, alpha=.25, label='NSF', color='green')
# print(table_dae.shape)  # table_dae only exists in the commented-out block above
plt.ylabel('average error')
plt.xlabel(r'$-\log p(x)$')
plt.xscale('symlog')
plt.ylim((-.005, .04))
#plt.yscale('symlog')
plt.legend()
```
```
# ls -l| tail -10
# #G4
# from google.colab import drive
# drive.mount('/content/gdrive')
# cp fingerspelling5.tar.bz2 /media/datastorage/Phong/fingerspelling5.tar.bz2
# rm fingerspelling5.tar.bz2
cd /media/datastorage/Phong/
!tar xjf fingerspelling5.tar.bz2
cd dataset5
ls -l
mkdir surrey/B
mv dataset5/* surrey/B/
cd ..
#remove depth files
import glob
import os
import shutil
# get parts of image's path
def get_image_parts(image_path):
"""Given a full path to an image, return its parts."""
parts = image_path.split(os.path.sep)
#print(parts)
filename = parts[2]
filename_no_ext = filename.split('.')[0]
classname = parts[1]
train_or_test = parts[0]
return train_or_test, classname, filename_no_ext, filename
#del_folders = ['A','B','C','D','E']
move_folders_1 = ['A','C','D','E']
move_folders_2 = ['B']
# look for all images in sub-folders
for folder in move_folders_1:
class_folders = glob.glob(os.path.join(folder, '*'))
for iid_class in class_folders:
#move depth files
class_files = glob.glob(os.path.join(iid_class, 'depth*.png'))
print('copying %d files' %(len(class_files)))
for idx in range(len(class_files)):
src = class_files[idx]
if "0001" not in src:
train_or_test, classname, _, filename = get_image_parts(src)
dst = os.path.join('train_depth', classname, train_or_test+'_'+ filename)
# image directory
img_directory = os.path.join('train_depth', classname)
# create folder if not existed
if not os.path.exists(img_directory):
os.makedirs(img_directory)
#copying
shutil.copy(src, dst)
else:
print('ignore: %s' % src)
#move color files
for iid_class in class_folders:
#move depth files
class_files = glob.glob(os.path.join(iid_class, 'color*.png'))
print('copying %d files' %(len(class_files)))
for idx in range(len(class_files)):
src = class_files[idx]
train_or_test, classname, _, filename = get_image_parts(src)
dst = os.path.join('train_color', classname, train_or_test+'_'+ filename)
# image directory
img_directory = os.path.join('train_color', classname)
# create folder if not existed
if not os.path.exists(img_directory):
os.makedirs(img_directory)
#copying
shutil.copy(src, dst)
# look for all images in sub-folders
for folder in move_folders_2:
class_folders = glob.glob(os.path.join(folder, '*'))
for iid_class in class_folders:
#move depth files
class_files = glob.glob(os.path.join(iid_class, 'depth*.png'))
print('copying %d files' %(len(class_files)))
for idx in range(len(class_files)):
src = class_files[idx]
if "0001" not in src:
train_or_test, classname, _, filename = get_image_parts(src)
dst = os.path.join('test_depth', classname, train_or_test+'_'+ filename)
# image directory
img_directory = os.path.join('test_depth', classname)
# create folder if not existed
if not os.path.exists(img_directory):
os.makedirs(img_directory)
#copying
shutil.copy(src, dst)
else:
print('ignore: %s' % src)
#move color files
for iid_class in class_folders:
#move depth files
class_files = glob.glob(os.path.join(iid_class, 'color*.png'))
print('copying %d files' %(len(class_files)))
for idx in range(len(class_files)):
src = class_files[idx]
train_or_test, classname, _, filename = get_image_parts(src)
dst = os.path.join('test_color', classname, train_or_test+'_'+ filename)
# image directory
img_directory = os.path.join('test_color', classname)
# create folder if not existed
if not os.path.exists(img_directory):
os.makedirs(img_directory)
#copying
shutil.copy(src, dst)
# #/content
%cd ..
ls -l
mkdir surrey/B/checkpoints
cd surrey/
#MUL 1 - Xception - ST
# from keras.applications import MobileNet
from keras.applications import InceptionV3
# from keras.applications import Xception
# from keras.applications.inception_resnet_v2 import InceptionResNetV2
from tensorflow.keras.applications import EfficientNetB0
from keras.models import Model
from keras.layers import concatenate
from keras.layers import Dense, GlobalAveragePooling2D, Input, Embedding, SimpleRNN, LSTM, Flatten, GRU, Reshape
from keras.applications.inception_v3 import preprocess_input
# from tensorflow.keras.applications.efficientnet import preprocess_input
# from keras.applications.mobilenet import preprocess_input
# from keras.applications.xception import preprocess_input
from keras.layers import GaussianNoise
def get_adv_model():
# f1_base = EfficientNetB0(include_top=False, weights='imagenet',
# input_shape=(299, 299, 3),
# pooling='avg')
# f1_x = f1_base.output
f1_base = InceptionV3(weights='imagenet', include_top=False, input_shape=(299,299,3))
f1_x = f1_base.output
f1_x = GlobalAveragePooling2D()(f1_x)
# f1_x = f1_base.layers[-151].output #layer 5
# f1_x = GlobalAveragePooling2D()(f1_x)
# f1_x = Flatten()(f1_x)
# f1_x = Reshape([1,1280])(f1_x)
# f1_x = SimpleRNN(2048,
# return_sequences=False,
# # dropout=0.8
# input_shape=[1,1280])(f1_x)
#Regularization with noise
f1_x = GaussianNoise(0.1)(f1_x)
f1_x = Dense(1024, activation='relu')(f1_x)
f1_x = Dense(24, activation='softmax')(f1_x)
model_1 = Model(inputs=[f1_base.input],outputs=[f1_x])
model_1.summary()
return model_1
from keras.callbacks import Callback
import pickle
import sys
import warnings  # used by EarlyStoppingByAccVal below
#Stop training on val_acc
class EarlyStoppingByAccVal(Callback):
def __init__(self, monitor='val_acc', value=0.00001, verbose=0):
super(Callback, self).__init__()
self.monitor = monitor
self.value = value
self.verbose = verbose
def on_epoch_end(self, epoch, logs={}):
current = logs.get(self.monitor)
if current is None:
warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
if current >= self.value:
if self.verbose > 0:
print("Epoch %05d: early stopping" % epoch)
self.model.stop_training = True
#Save large model using pickle formate instead of h5
class SaveCheckPoint(Callback):
def __init__(self, model, dest_folder):
super(Callback, self).__init__()
self.model = model
self.dest_folder = dest_folder
#initiate
self.best_val_acc = 0
self.best_val_loss = sys.maxsize #get max value
def on_epoch_end(self, epoch, logs={}):
val_acc = logs['val_acc']
val_loss = logs['val_loss']
if val_acc > self.best_val_acc:
self.best_val_acc = val_acc
# Save weights in pickle format instead of h5
print('\nSaving val_acc %f at %s' %(self.best_val_acc, self.dest_folder))
weigh= self.model.get_weights()
#now, use pickle to save your model weights, instead of .h5
#for heavy model architectures, .h5 file is unsupported.
fpkl= open(self.dest_folder, 'wb') #Python 3
pickle.dump(weigh, fpkl, protocol= pickle.HIGHEST_PROTOCOL)
fpkl.close()
# model.save('tmp.h5')
elif val_acc == self.best_val_acc:
if val_loss < self.best_val_loss:
self.best_val_loss=val_loss
# Save weights in pickle format instead of h5
print('\nSaving val_acc %f at %s' %(self.best_val_acc, self.dest_folder))
weigh= self.model.get_weights()
#now, use pickle to save your model weights, instead of .h5
#for heavy model architectures, .h5 file is unsupported.
fpkl= open(self.dest_folder, 'wb') #Python 3
pickle.dump(weigh, fpkl, protocol= pickle.HIGHEST_PROTOCOL)
fpkl.close()
# Training
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard, ModelCheckpoint, EarlyStopping, CSVLogger, ReduceLROnPlateau
from keras.optimizers import Adam
import time, os
from math import ceil
train_datagen = ImageDataGenerator(
# rescale = 1./255,
rotation_range=30,
width_shift_range=0.3,
height_shift_range=0.3,
shear_range=0.3,
zoom_range=0.3,
# horizontal_flip=True,
# vertical_flip=True,##
# brightness_range=[0.5, 1.5],##
channel_shift_range=10,##
fill_mode='nearest',
# preprocessing_function=get_cutout_v2(),
preprocessing_function=preprocess_input,
)
test_datagen = ImageDataGenerator(
# rescale = 1./255
preprocessing_function=preprocess_input
)
NUM_GPU = 1
batch_size = 32
train_set = train_datagen.flow_from_directory('surrey/B/train_color/',
target_size = (299, 299),
batch_size = batch_size,
class_mode = 'categorical',
shuffle=True,
seed=7,
# subset="training"
)
valid_set = test_datagen.flow_from_directory('surrey/B/test_color/',
target_size = (299, 299),
batch_size = batch_size,
class_mode = 'categorical',
shuffle=False,
seed=7,
# subset="validation"
)
model_txt = 'st'
# Helper: Save the model.
savedfilename = os.path.join('surrey', 'B', 'checkpoints', 'Surrey_InceptionV3_B.hdf5')
checkpointer = ModelCheckpoint(savedfilename,
monitor='val_accuracy', verbose=1,
save_best_only=True, mode='max',save_weights_only=True)########
# Helper: TensorBoard
tb = TensorBoard(log_dir=os.path.join('svhn_output', 'logs', model_txt))
# Helper: Save results.
timestamp = time.time()
csv_logger = CSVLogger(os.path.join('svhn_output', 'logs', model_txt + '-' + 'training-' + \
str(timestamp) + '.log'))
earlystopping = EarlyStoppingByAccVal(monitor='val_accuracy', value=0.9900, verbose=1)
epochs = 40##!!!
lr = 1e-3
decay = lr/epochs
optimizer = Adam(lr=lr, decay=decay)
# train on multiple-gpus
# Create a MirroredStrategy.
strategy = tf.distribute.MirroredStrategy()
print("Number of GPUs: {}".format(strategy.num_replicas_in_sync))
# Open a strategy scope.
with strategy.scope():
# Everything that creates variables should be under the strategy scope.
# In general this is only model construction & `compile()`.
model_mul = get_adv_model()
model_mul.compile(optimizer=optimizer,loss='categorical_crossentropy',metrics=['accuracy'])
step_size_train=ceil(train_set.n/train_set.batch_size)
step_size_valid=ceil(valid_set.n/valid_set.batch_size)
# step_size_test=ceil(testing_set.n//testing_set.batch_size)
result = model_mul.fit_generator(
generator = train_set,
steps_per_epoch = step_size_train,
validation_data = valid_set,
validation_steps = step_size_valid,
shuffle=True,
epochs=epochs,
callbacks=[checkpointer],
# callbacks=[csv_logger, checkpointer, earlystopping],
# callbacks=[tb, csv_logger, checkpointer, earlystopping],
verbose=1)
# Training
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard, ModelCheckpoint, EarlyStopping, CSVLogger, ReduceLROnPlateau
from keras.optimizers import Adam
import time, os
from math import ceil
train_datagen = ImageDataGenerator(
# rescale = 1./255,
rotation_range=30,
width_shift_range=0.3,
height_shift_range=0.3,
shear_range=0.3,
zoom_range=0.3,
# horizontal_flip=True,
# vertical_flip=True,##
# brightness_range=[0.5, 1.5],##
channel_shift_range=10,##
fill_mode='nearest',
# preprocessing_function=get_cutout_v2(),
preprocessing_function=preprocess_input,
)
test_datagen = ImageDataGenerator(
# rescale = 1./255
preprocessing_function=preprocess_input
)
NUM_GPU = 1
batch_size = 32
train_set = train_datagen.flow_from_directory('surrey/B/train_color/',
target_size = (299, 299),
batch_size = batch_size,
class_mode = 'categorical',
shuffle=True,
seed=7,
# subset="training"
)
valid_set = test_datagen.flow_from_directory('surrey/B/test_color/',
target_size = (299, 299),
batch_size = batch_size,
class_mode = 'categorical',
shuffle=False,
seed=7,
# subset="validation"
)
model_txt = 'st'
# Helper: Save the model.
savedfilename = os.path.join('surrey', 'B', 'checkpoints', 'Surrey_InceptionV3_B_tmp.hdf5')
checkpointer = ModelCheckpoint(savedfilename,
monitor='val_accuracy', verbose=1,
save_best_only=True, mode='max',save_weights_only=True)########
# Helper: TensorBoard
tb = TensorBoard(log_dir=os.path.join('svhn_output', 'logs', model_txt))
# Helper: Save results.
timestamp = time.time()
csv_logger = CSVLogger(os.path.join('svhn_output', 'logs', model_txt + '-' + 'training-' + \
str(timestamp) + '.log'))
earlystopping = EarlyStoppingByAccVal(monitor='val_accuracy', value=0.9900, verbose=1)
epochs = 40##!!!
lr = 1e-3
decay = lr/epochs
optimizer = Adam(lr=lr, decay=decay)
# train on multiple-gpus
# Create a MirroredStrategy.
strategy = tf.distribute.MirroredStrategy()
print("Number of GPUs: {}".format(strategy.num_replicas_in_sync))
# Open a strategy scope.
with strategy.scope():
# Everything that creates variables should be under the strategy scope.
# In general this is only model construction & `compile()`.
model_mul = get_adv_model()
model_mul.compile(optimizer=optimizer,loss='categorical_crossentropy',metrics=['accuracy'])
step_size_train=ceil(train_set.n/train_set.batch_size)
step_size_valid=ceil(valid_set.n/valid_set.batch_size)
# step_size_test=ceil(testing_set.n//testing_set.batch_size)
# result = model_mul.fit_generator(
# generator = train_set,
# steps_per_epoch = step_size_train,
# validation_data = valid_set,
# validation_steps = step_size_valid,
# shuffle=True,
# epochs=epochs,
# callbacks=[checkpointer],
# # callbacks=[csv_logger, checkpointer, earlystopping],
# # callbacks=[tb, csv_logger, checkpointer, earlystopping],
# verbose=1)
ls -l
# Open a strategy scope.
with strategy.scope():
model_mul.load_weights(os.path.join('surrey', 'B', 'checkpoints', 'Surrey_InceptionV3_B.hdf5'))
model_mul.evaluate(valid_set)
# Helper: Save the model.
savedfilename = os.path.join('surrey', 'B', 'checkpoints', 'Surrey_InceptionV3_B_L2.hdf5')
checkpointer = ModelCheckpoint(savedfilename,
monitor='val_accuracy', verbose=1,
save_best_only=True, mode='max',save_weights_only=True)########
epochs = 15##!!!
lr = 1e-4
decay = lr/epochs
optimizer = Adam(lr=lr, decay=decay)
# Open a strategy scope.
with strategy.scope():
model_mul.compile(optimizer=optimizer,loss='categorical_crossentropy',metrics=['accuracy'])
result = model_mul.fit_generator(
generator = train_set,
steps_per_epoch = step_size_train,
validation_data = valid_set,
validation_steps = step_size_valid,
shuffle=True,
epochs=epochs,
callbacks=[checkpointer],
# callbacks=[csv_logger, checkpointer, earlystopping],
# callbacks=[tb, csv_logger, checkpointer, earlystopping],
verbose=1)
# Open a strategy scope.
with strategy.scope():
model_mul.load_weights(os.path.join('surrey', 'B', 'checkpoints', 'Surrey_InceptionV3_B_L2.hdf5'))
model_mul.evaluate(valid_set)
# Helper: Save the model.
savedfilename = os.path.join('surrey', 'B', 'checkpoints', 'Surrey_InceptionV3_B_L3.hdf5')
checkpointer = ModelCheckpoint(savedfilename,
monitor='val_accuracy', verbose=1,
save_best_only=True, mode='max',save_weights_only=True)########
epochs = 15##!!!
lr = 1e-5
decay = lr/epochs
optimizer = Adam(lr=lr, decay=decay)
# Open a strategy scope.
with strategy.scope():
model_mul.compile(optimizer=optimizer,loss='categorical_crossentropy',metrics=['accuracy'])
result = model_mul.fit_generator(
generator = train_set,
steps_per_epoch = step_size_train,
validation_data = valid_set,
validation_steps = step_size_valid,
shuffle=True,
epochs=epochs,
callbacks=[checkpointer],
# callbacks=[csv_logger, checkpointer, earlystopping],
# callbacks=[tb, csv_logger, checkpointer, earlystopping],
verbose=1)
# Open a strategy scope.
with strategy.scope():
model_mul.load_weights(os.path.join('surrey', 'B', 'checkpoints', 'Surrey_InceptionV3_B_L3.hdf5'))
model_mul.evaluate(valid_set)
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
import time, os
from math import ceil
# PREDICT ON OFFICIAL TEST
train_datagen = ImageDataGenerator(
# rescale = 1./255,
rotation_range=30,
width_shift_range=0.3,
height_shift_range=0.3,
shear_range=0.3,
zoom_range=0.3,
# horizontal_flip=True,
# vertical_flip=True,##
# brightness_range=[0.5, 1.5],##
channel_shift_range=10,##
fill_mode='nearest',
preprocessing_function=preprocess_input,
)
test_datagen1 = ImageDataGenerator(
# rescale = 1./255,
preprocessing_function=preprocess_input
)
batch_size = 32
train_set = train_datagen.flow_from_directory('surrey/B/train_color/',
target_size = (299, 299),
batch_size = batch_size,
class_mode = 'categorical',
shuffle=True,
seed=7,
# subset="training"
)
test_set1 = test_datagen1.flow_from_directory('surrey/B/test_color/',
target_size = (299, 299),
batch_size = batch_size,
class_mode = 'categorical',
shuffle=False,
seed=7,
# subset="validation"
)
# if NUM_GPU != 1:
predict1=model_mul.predict_generator(test_set1, steps = ceil(test_set1.n/test_set1.batch_size),verbose=1)
# else:
# predict1=model.predict_generator(test_set1, steps = ceil(test_set1.n/test_set1.batch_size),verbose=1)
predicted_class_indices=np.argmax(predict1,axis=1)
labels = (train_set.class_indices)
labels = dict((v,k) for k,v in labels.items())
predictions1 = [labels[k] for k in predicted_class_indices]
import pandas as pd
filenames=test_set1.filenames
results=pd.DataFrame({"file_name":filenames,
"predicted1":predictions1,
})
results.to_csv('Surrey_InceptionV3_B_L3_0902.csv')
results.head()
cp Surrey_InceptionV3_B_L1_0902.csv /home/bribeiro/Phong/Nat19/Surrey_InceptionV3_B_L1_0902.csv
cp Surrey_InceptionV3_B_L2_0902.csv /home/bribeiro/Phong/Nat19/Surrey_InceptionV3_B_L2_0902.csv
cp Surrey_InceptionV3_B_L3_0902.csv /home/bribeiro/Phong/Nat19/Surrey_InceptionV3_B_L3_0902.csv
np.save(os.path.join('surrey', 'B', 'npy', 'Surrey_InceptionV3_B_L1_0902.npy'), predict1)
np.save(os.path.join('surrey', 'B', 'npy', 'Surrey_InceptionV3_B_L2_0902.npy'), predict1)
np.save(os.path.join('surrey', 'B', 'npy', 'Surrey_InceptionV3_B_L3_0902.npy'), predict1)
from sklearn.metrics import classification_report, confusion_matrix
import numpy as np
test_datagen = ImageDataGenerator(
rescale = 1./255)
testing_set = test_datagen.flow_from_directory('dataset5/test_color/',
target_size = (224, 224),
batch_size = 32,
class_mode = 'categorical',
seed=7,
shuffle=False
# subset="validation"
)
y_pred = model.predict_generator(testing_set,steps = testing_set.n//testing_set.batch_size)
y_pred = np.argmax(y_pred, axis=1)
y_true = testing_set.classes
print(confusion_matrix(y_true, y_pred))
# print(model.evaluate_generator(testing_set,
# steps = testing_set.n//testing_set.batch_size))
```
# Git and GitHub
## Lesson Goals
- Understand the purpose of version control systems
- Create a GitHub account
- Upload your first code to GitHub
## Prerequisites
- None
## Code Management
Some new considerations come up as we build larger projects that eventually go into production...
- What happens if our computer breaks or is lost? Will we lose all of our code?
- What if we make changes to our code that we later want to roll back? For example, trying a different method of cleaning our data and eventually discovering that it yields worse results.
- How do multiple people work on the same project without sending files back and forth, and stepping on each others' toes?
<div class="admonition note alert alert-info">
<b><p class="first admonition-title" style="font-weight: bold;">Discussion</p></b>
Has anyone faced problems like these before?
</div>
## Version Control Systems (VCS)
The most common industry solution is a **Version Control System**, which:
- Provides a backup of your code on a separate computer
- Tracks every change made to the code, allowing you to see when certain code was updated and to roll back to an earlier state if necessary
- Helps with collaboration by letting contributors work on different things in parallel and then "merge" their changes together later
Typically, an organization will have one central VCS where all projects are managed.
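A minimal sketch of how these ideas translate into Git commands (the project name, file, and identity below are placeholders):

```
# Start tracking a project locally
git init my-project
cd my-project

# One-time identity setup for this repository
git config user.name "Ada Lovelace"
git config user.email ada@example.com

# Record a change: stage it, then commit it with a message
echo "print('hello')" > analysis.py
git add analysis.py
git commit -m "Add first analysis script"

# Every commit is tracked and can be inspected...
git log --oneline

# ...or rolled back if an experiment doesn't pan out
git revert --no-edit HEAD
```

We'll run real commands shortly; for now the point is that each commit is a named snapshot you can return to.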
## Common VCS Options
Far and away the most popular VCS tool is Git, which is notable for its scalability and performance.
Unfortunately, it's often challenging for beginners; its interface can be overwhelming.
Other VCSes exist, and were more common until Git became dominant around 2010.
You may have heard of these or even used them:
- Mercurial
- Subversion
## Git and GitHub
Git is generally used in tandem with a website where your code can be kept, viewed, and managed.
There are several, but the most commonly used site is **GitHub**.
Not only does GitHub provide a good interface for viewing code, it also features:
- Project management tools
- Collaboration tools
Both of which are tightly integrated with your code -- very convenient for developer and data science teams.
GitHub offers most of its tools for free and has become the home of most popular open source projects (such as Python itself and the pandas library).
<div class="admonition note alert alert-info">
<b><p class="first admonition-title" style="font-weight: bold;">Note</p></b>
There are competing services to GitHub, such as <em>GitLab</em> and <em>Bitbucket</em>, but GitHub is by far the most popular tool -- to the point that employers sometimes ask for your GitHub profile to see your portfolio.
</div>
## Creating a GitHub Account
*(If you already have a GitHub account, you may skip these steps. Just log into your account so we can push code to it later.)*
1. Go to `github.com` and find the **Sign Up** button.
- When prompted, enter your email address, a new password, and a username
- This username will be your identifier on GitHub, so make sure you'd be comfortable sharing it with an employer or colleague
2. You may then need to solve a Captcha-like puzzle and verify your email address. Do so.
3. Once the account is created, you may be asked whether to create a new project, or "repository". We'll do that, but not yet!
## GitHub Tour
*Demo of Profile and Repos*
## Repositories
- As we saw, repositories are just projects.
- For short, we usually call them **repos**.
- Generally, it's good to have a unique repository in GitHub for every project you work on.
- Let's create a repo for the code we write in this workshop!
## Creating a Repo
<table><tr>
<td><ol style="font-size: 1.3em;">
<li>Go back to GitHub.</li><br>
<li>If today is the first time you've used GitHub, the site may immediately prompt you to create a repo. If so, click that.
<br><br>- If not, look in the left sidebar for a "New" button and click that -- it should take you to a repo creation page.</li><br>
<li>In the <em>Name</em> field enter "advanced-python-datasci", and in the <em>Description</em> enter "Working through the Advanced Python for Data Science workshop".</li><br>
<li>There are three boxes below; check the first two. Those should be "Add a readme" and "Add a gitignore".
<br><br>- The gitignore checkbox should show a dropdown below it, "gitignore template". Look through that list and select <em>Python</em>.</li><br>
<li>Then press the "Create Repository" button!</li>
</ol></td>
<td style="width: 60%">
<img src="images/create-repo.png" alt="Create a Repo"/>
</td>
</tr></table>
## GitHub Desktop
<table><tr>
<td>
<ol style="font-size: 1.3em;">
<li>Next, we'll download a piece of software from GitHub that handles syncing our code with our repository: <strong>GitHub Desktop</strong> <br><br> <em>Note: if you're already comfortable using Git from the command line, you can skip this part; just clone the repo we've created as we'll be using it for the rest of the workshop.</em>
</li>
<br><br>
<li>Go to <a href="https://desktop.github.com">https://desktop.github.com</a> and download the application.</li>
<br>
<li>Once it's downloaded and installed, open it.</li>
</ol>
</td>
<td>
<img src="images/gh-desktop.png" alt="Download GitHub Desktop"/>
</td>
</tr></table>
## Connecting Our Repo to GH Desktop
The last bit of direction-following!
<table><tr>
<td>
<ol style="font-size: 1.3em;">
<li>In GitHub Desktop, you should see an option like <em>Clone a Repository from the Internet...</em>. Click this.</li>
<br>
<li>At this point, the application should prompt you to sign into GitHub. Follow its instructions to do so, which may involve it redirecting you to the browser.</li>
<br>
<li>Once done signing in, you may have to press <em>Clone a Repository from the Internet...</em> again.</li>
<br>
<li>Choose the <em>advanced-python-datasci</em> project we just created.</li>
<br>
<li>Optionally, you may change the <em>Local Path</em> -- this is where the repository will be saved on your computer. You'll need to be able to open the code here in JupyterLab, so if you're more comfortable keeping your code somewhere else, change this to a different location on your computer.</li>
<br>
<li>Then press <em>Clone</em> to pull down the repository we created.</li>
</ol>
</td>
<td style="width: 55%;">
<img src="images/clone-repo.png" alt="Clone a Repository"/>
</td>
</tr></table>
Congrats! You've set up your first GitHub repository, and now you're ready to work in it.
# Curve Fitting and Interpolation
Teng-Jui Lin
Content adapted from UW AMATH 301, Beginning Scientific Computing, in Spring 2020.
- Curve fitting
- Curve fitting using error functions and [`scipy.optimize.fmin()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.html)
- Sum of squared error
- Sum of absolute error
- Maximum absolute error
- Choice of error functions
- General curve fitting by [`scipy.optimize.curve_fit()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html)
- Curve fitting of polynomials by [`numpy.polyfit()`](https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html) and [`numpy.polyval()`](https://numpy.org/doc/stable/reference/generated/numpy.polyval.html)
- Interpolation
- Interpolation by [`scipy.interpolate.interp1d()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html)
## Curve fitting and error functions
Goal: find the parameters $\mathbf{k}$ to a fitting function $f(x, \mathbf{k})$ given data points $(\mathbf{x, y})$ to fit to achieve the lowest error.
Given data points $(\mathbf{x, y})$, where $\mathbf{x} = [x_1, \dots, x_n]$ and $\mathbf{y} = [y_1, \dots, y_n]$.
The sum of squared error (SSE) is defined as
$$
\mathrm{SSE} \equiv \sum\limits_{i=1}^n (f(x_i, \mathbf{k}) - y_i)^2 = \Vert f(\mathbf{x, k}) - \mathbf{y} \Vert_2^2
$$
The sum of absolute error (SAE) is defined as
$$
\mathrm{SAE} \equiv \sum\limits_{i=1}^n |f(x_i, \mathbf{k}) - y_i|
$$
The maximum absolute error (MAE) is defined as
$$
\mathrm{MAE} \equiv \max_i |f(x_i, \mathbf{k}) - y_i| = \Vert f(\mathbf{x, k}) - \mathbf{y} \Vert_\infty
$$
The sum of absolute error is the most resistant to outliers, while the maximum absolute error is the most sensitive; the sum of squared error falls in between, penalizing large errors more heavily than small ones.
Minimizing an error function with [`scipy.optimize.fmin()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.html) gives the desired parameters.
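To see the difference numerically, here is a small hand-made comparison of the three error measures on a set of residuals containing one outlier (the residual values are invented for illustration):

```python
import numpy as np

# Hypothetical residuals f(x_i, k) - y_i; the last one is an outlier
residuals = np.array([0.1, -0.2, 0.1, 3.0])

sse = np.sum(residuals**2)        # ~9.06 -- dominated by the outlier's square
sae = np.sum(np.abs(residuals))   # 3.4   -- the outlier contributes only linearly
mae = np.max(np.abs(residuals))   # 3.0   -- determined entirely by the outlier
print(sse, sae, mae)
```

Minimizing SSE or MAE pulls the fit toward the outlier far more than minimizing SAE does.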
### Implementation
**Problem Statement.** The Gaussian function has the form
$$
f(x) = \dfrac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right)
$$
(a) Generate Gaussian data of this form in the domain $[0, 5]$ with $\sigma = 1, \mu = 2.5$ and add random noise in the range $[-0.1, 0.1)$. Change the 4th data point to 1 to create an outlier.
(b) Fit the data with the Gaussian function using SSE, SAE, and MAE defined above.
(c) Fit the data using `scipy.optimize.curve_fit()`
(d) Plot the resulting fit and discuss the resistance to outliers of each error.
#### Generate data
```
import numpy as np
import scipy
from scipy import optimize
import matplotlib.pyplot as plt
# generate data
np.random.seed(1)
gaussian = lambda x, sigma, mu : 1/np.sqrt(2*np.pi*sigma**2) * np.exp(-(x - mu)**2 / (2*sigma**2))
gaussian_data_x = np.arange(0, 5.25, 0.25)
gaussian_data_y = np.array([gaussian(i, 1, 2.5) + 0.2*(np.random.random() - 0.5) for i in gaussian_data_x])
gaussian_data_y[3] = 1 # create outlier
```
#### Curve fitting using error functions and `scipy.optimize.fmin()`
```
# define fitting function
gaussian = lambda x, sigma, mu : 1/np.sqrt(2*np.pi*sigma**2) * np.exp(-(x - mu)**2 / (2*sigma**2))
def sse(k):
    '''
    Finds the sum of squared error between the data points and the fitting function with given params.
    Note that the data point coordinates and the fitting function are inherited from global scope.
    This design is needed for the scipy.optimize.fmin input signature.
    :params k: parameters of fitting function
    :returns: sum of squared error
    '''
    X = gaussian_data_x
    Y = gaussian_data_y
    f = gaussian
    error = sum((Y - f(X, *k))**2)
    return error
def sae(k):
    '''
    Finds the sum of absolute error between the data points and the fitting function with given params.
    Note that the data point coordinates and the fitting function are inherited from global scope.
    This design is needed for the scipy.optimize.fmin input signature.
    :params k: parameters of fitting function
    :returns: sum of absolute error
    '''
    X = gaussian_data_x
    Y = gaussian_data_y
    f = gaussian
    error = sum(abs(Y - f(X, *k)))
    return error
def mae(k):
    '''
    Finds the maximum absolute error between the data points and the fitting function with given params.
    Note that the data point coordinates and the fitting function are inherited from global scope.
    This design is needed for the scipy.optimize.fmin input signature.
    :params k: parameters of fitting function
    :returns: maximum absolute error
    '''
    X = gaussian_data_x
    Y = gaussian_data_y
    f = gaussian
    error = max(abs(Y - f(X, *k)))
    return error
initial_guess = [1, 2]
sse_fit_params = scipy.optimize.fmin(sse, initial_guess)
sse_fit_x = np.linspace(0, 5, 100)
sse_fit_y = gaussian(sse_fit_x, *sse_fit_params)
sse_fit_params
initial_guess = [1, 2]
sae_fit_params = scipy.optimize.fmin(sae, initial_guess)
sae_fit_x = np.linspace(0, 5, 100)
sae_fit_y = gaussian(sae_fit_x, *sae_fit_params)
sae_fit_params
initial_guess = [1, 2]
mae_fit_params = scipy.optimize.fmin(mae, initial_guess)
mae_fit_x = np.linspace(0, 5, 100)
mae_fit_y = gaussian(mae_fit_x, *mae_fit_params)
mae_fit_params
```
#### Curve fitting using `scipy.optimize.curve_fit()`
[`scipy.optimize.curve_fit()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) takes in functional form and data points and returns the coefficients of best fit.
```
# use scipy.optimize.curve_fit()
scipy_fit_params, pcov = scipy.optimize.curve_fit(gaussian, gaussian_data_x, gaussian_data_y)
scipy_fit_x = np.linspace(0, 5, 100)
scipy_fit_y = gaussian(scipy_fit_x, *scipy_fit_params)
scipy_fit_params
```
#### Plotting
```
# plot settings
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
plt.rcParams.update({
'font.family': 'Arial', # Times New Roman, Calibri
'font.weight': 'normal',
'mathtext.fontset': 'cm',
'font.size': 18,
'lines.linewidth': 2,
'axes.linewidth': 2,
'axes.spines.top': False,
'axes.spines.right': False,
'axes.titleweight': 'bold',
'axes.titlesize': 18,
'axes.labelweight': 'bold',
'xtick.major.size': 8,
'xtick.major.width': 2,
'ytick.major.size': 8,
'ytick.major.width': 2,
'figure.dpi': 80,
'legend.framealpha': 1,
'legend.edgecolor': 'black',
'legend.fancybox': False,
'legend.fontsize': 14
})
fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(gaussian_data_x, gaussian_data_y, 'o')
ax.plot(sse_fit_x, sse_fit_y, label='SSE')
ax.plot(sae_fit_x, sae_fit_y, label='SAE')
ax.plot(mae_fit_x, mae_fit_y, label='MAE')
ax.plot(scipy_fit_x, scipy_fit_y, '-.', label='scipy')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_title('$y(x)$')
ax.set_xlim(0, 5)
ax.set_ylim(0)
ax.legend()
```
As shown in the graph, the maximum absolute error is very sensitive to outliers, while the sum of absolute error is resistant to them. `scipy.optimize.curve_fit()` minimizes the sum of squared error.
## Curve fitting of polynomial functions using `numpy`
[`numpy.polyfit()`](https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html) takes in data points and polynomial order and returns the coefficients of best fit polynomial.
[`numpy.polyval()`](https://numpy.org/doc/stable/reference/generated/numpy.polyval.html) takes in polynomial coefficients and x point(s) to be evaluated.
### Implementation
**Problem Statement.** Fitting linear data.
(a) Generate linear data of the form $f(x) = x$ in the domain $[0, 5]$ and add random noise in the range $[0, 1)$.
(b) Fit the data with the linear fit $y=kx+b$ using `numpy.polyfit()`.
(c) Plot the resulting fit using `numpy.polyval()`.
```
# generate data
np.random.seed(1)
linear_data_x = np.arange(0, 5.25, 0.25)
linear_data_y = np.array([i + np.random.random() for i in linear_data_x])
# polynomial fit (linear in this case)
linear_fit_x = np.linspace(0, 5, 100)
linear_fit_coeff = np.polyfit(linear_data_x, linear_data_y, 1)
linear_fit_y = np.polyval(linear_fit_coeff, linear_fit_x)
fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(linear_data_x, linear_data_y, 'o')
ax.plot(linear_fit_x, linear_fit_y)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_title('$y(x)$')
ax.set_xlim(0, 5)
ax.set_ylim(0)
```
## Interpolation using `scipy`
[`scipy.interpolate.interp1d()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html) provides interpolation of zeroth, first, second, and third order (constant, linear, quadratic, cubic) of scalar functions.
### Implementation
**Problem Statement.** Interpolating linear data.
(a) Generate linear data of the form $f(x) = x$ in the domain $[0, 5]$ and add random noise in the range $[0, 1)$.
(b) Interpolate the data using `scipy.interpolate.interp1d()` for zeroth order, linear, quadratic, and cubic spline.
(c) Plot the resulting interpolation.
```
import scipy
from scipy import interpolate
# generate data
np.random.seed(1)
linear_data_x = np.arange(0, 5.25, 0.25)
linear_data_y = np.array([i + np.random.random() for i in linear_data_x])
# linear spline
spline_x = np.linspace(0, 5, 100)
f_zero = scipy.interpolate.interp1d(linear_data_x, linear_data_y, kind='zero')
f_linear = scipy.interpolate.interp1d(linear_data_x, linear_data_y, kind='linear')
f_quadratic = scipy.interpolate.interp1d(linear_data_x, linear_data_y, kind='quadratic')
f_cubic = scipy.interpolate.interp1d(linear_data_x, linear_data_y, kind='cubic')
fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(linear_data_x, linear_data_y, 'o')
ax.plot(spline_x, f_zero(spline_x), label='0th')
ax.plot(spline_x, f_linear(spline_x), label='1st')
ax.plot(spline_x, f_quadratic(spline_x), label='2nd')
ax.plot(spline_x, f_cubic(spline_x), label='3rd')
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_title('$y(x)$')
ax.set_xlim(0, 5)
ax.set_ylim(0)
ax.legend()
```
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/02_Dataset_playground.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Imports
```
import os, math
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
print("Tensorflow version " + tf.__version__)
#tf.enable_eager_execution()
#@title "display utilities [RUN ME]"
def display_9_images_from_dataset(dataset):
    plt.figure(figsize=(13,13))
    subplot=331
    for i, (image, label, one_hot_label) in enumerate(dataset):
        plt.subplot(subplot)
        plt.axis('off')
        plt.imshow(image.numpy().astype(np.uint8))
        plt.title(label.numpy().decode("utf-8") + ' ' + str(one_hot_label.numpy()), fontsize=16)
        subplot += 1
        if i==8:
            break
    plt.tight_layout()
    plt.subplots_adjust(wspace=0.1, hspace=0.1)
    plt.show()
```
## Colab-only auth
```
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
    from google.colab import auth
    auth.authenticate_user()  # not necessary to access a public bucket, but you will probably want to access your private buckets too
```
## Configuration
```
GCS_PATTERN = 'gs://flowers-public/*/*.jpg'
CLASSES = [b'daisy', b'dandelion', b'roses', b'sunflowers', b'tulips'] # flower labels (folder names in the data)
```
## Read images and labels [WORK REQUIRED]
1. Use `fileset=`[`tf.data.Dataset.list_files`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#list_files) to scan the data folder
1. Iterate through the dataset of filenames: `for filename in fileset:...` .
* Does it work?
* No! But Python iteration through a Dataset works in eager mode. Enable eager mode in the first cell, restart the runtime, and try again.
* Tip: to limit the size of the dataset for display, you can use [`Dataset.take()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#take). Like this: `for data in dataset.take(10): ....`
* It works, but why are Tensors returned? Get proper values by applying `.numpy()` to the tensors.
1. Use [`tf.data.Dataset.map`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map) to decode the JPEG files. You will find useful TF code snippets below.
* Iterate on the image dataset. You can use .numpy().shape to only see the data sizes.
* Are all images of the same size?
1. Now create a training dataset: you have images but you also need labels:
* the labels (flower names) are the directory names. You will find useful TF code snippets below for parsing them.
* If you do "`return image, label`" in the decoding function, you will have a Dataset of pairs (image, label).
* The function `decode_jpeg_and_label` in the snippets below adds a third value: the one-hot encoded label. It will be useful for training.
1. Look at the flowers with the `display_9_images_from_dataset` function. It expects the Dataset to have `(image, label, one_hot_label)` elements.
1. Code for iterating on a dataset in non-eager mode is also provided in the snippets below. Have a look, it is a bit more complex...
```
nb_images = len(tf.gfile.Glob(GCS_PATTERN))
print("Pattern matches {} images.".format(nb_images))
#
# YOUR CODE GOES HERE
#
#display_9_images_from_dataset(dataset)
```
## Useful code snippets
### Decode a JPEG in Tensorflow
```
def decode_jpeg(filename):
    bits = tf.read_file(filename)
    image = tf.image.decode_jpeg(bits)
    return image
```
### Decode a JPEG and extract folder name in TF
```
def decode_jpeg_and_label(filename):
    bits = tf.read_file(filename)
    image = tf.image.decode_jpeg(bits)
    label = tf.strings.split(tf.expand_dims(filename, axis=-1), sep='/')
    label = label.values[-2]
    one_hot_label = tf.tile(tf.expand_dims(label, axis=-1), [len(CLASSES)])
    one_hot_label = tf.cast(tf.math.equal(one_hot_label, CLASSES), tf.uint8)
    return image, label, one_hot_label
```
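The one-hot step above is just a broadcast equality test of the label against the class list. The same logic in plain NumPy (a standalone sketch, independent of TensorFlow) looks like this:

```python
import numpy as np

CLASSES = [b'daisy', b'dandelion', b'roses', b'sunflowers', b'tulips']

def one_hot(label):
    # Compare the label against every class name, then cast bool -> uint8
    return (np.array(CLASSES) == label).astype(np.uint8)

print(one_hot(b'roses'))  # -> [0 0 1 0 0]
```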
### Read from dataset in non-eager mode
```
assert not tf.executing_eagerly(), "This cell will only work in non-eager mode"
next_data_item = dataset.make_one_shot_iterator().get_next()
with tf.Session() as ses:
    while True:
        try:
            image, label, one_hot_label = ses.run(next_data_item)
            # ses.run returns numpy data
            print(image.shape, label, one_hot_label)
        except tf.errors.OutOfRangeError:
            print("the end")
            break
```
# Sparse Approximations
The `gp.MarginalSparse` class implements sparse, or inducing point, GP approximations. It works identically to `gp.Marginal`, except it additionally requires the locations of the inducing points (denoted `Xu`), and it accepts the argument `sigma` instead of `noise` because these sparse approximations assume white IID noise.
Three approximations are currently implemented: FITC, DTC, and VFE. For most problems, they produce fairly similar results. These GP approximations don't form the full covariance matrix over all $n$ training inputs. Instead they rely on $m < n$ *inducing points*, which are "strategically" placed throughout the domain. These approximations reduce the $\mathcal{O}(n^3)$ complexity of GPs down to $\mathcal{O}(nm^2)$, a significant speed-up. The memory requirements scale down somewhat too, but not as much. They are commonly referred to as *sparse* approximations, in the sense of being data sparse. The downside of sparse approximations is that they reduce the expressiveness of the GP: reducing the dimension of the covariance matrix effectively reduces the number of covariance matrix eigenvectors available to fit the data.
A choice that needs to be made is where to place the inducing points. One option is to use a subset of the inputs. Another possibility is to use K-means. The locations of the inducing points can also be treated as unknowns and optimized as part of the model. These sparse approximations are useful for speeding up calculations when the density of data points is high and the lengthscale is larger than the separation between inducing points.
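As a sketch of the K-means option (PyMC3 ships this as `pm.gp.util.kmeans_inducing_points`, used below; this standalone NumPy version of Lloyd's algorithm is only for illustration):

```python
import numpy as np

def kmeans_1d(x, m, iters=20, seed=0):
    """Minimal Lloyd's algorithm for choosing m inducing-point locations in 1D."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=m, replace=False).astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center, then move centers to cluster means
        assign = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(m):
            pts = x[assign == j]
            if pts.size:
                centers[j] = pts.mean()
    return np.sort(centers)

x = np.sort(np.random.default_rng(1).uniform(0, 10, 200))
Xu = kmeans_1d(x, 5)  # 5 inducing points spread over the data
```

Because the centers settle on cluster means, the inducing points end up spread over regions where the data is dense, which is exactly what the sparse approximations need.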
For more information on these approximations, see [Quinonero-Candela+Rasmussen, 2006](http://www.jmlr.org/papers/v6/quinonero-candela05a.html) and [Titsias 2009](https://pdfs.semanticscholar.org/9c13/b87b5efb4bb011acc89d90b15f637fa48593.pdf).
## Examples
For the following examples, we use the same data set as was used in the `gp.Marginal` example, but with more data points.
```
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import theano
import theano.tensor as tt
%matplotlib inline
# set the seed
np.random.seed(1)
n = 2000 # The number of data points
X = 10*np.sort(np.random.rand(n))[:,None]
# Define the true covariance function and its parameters
ℓ_true = 1.0
η_true = 3.0
cov_func = η_true**2 * pm.gp.cov.Matern52(1, ℓ_true)
# A mean function that is zero everywhere
mean_func = pm.gp.mean.Zero()
# The latent function values are one sample from a multivariate normal
# Note that we have to call `eval()` because PyMC3 built on top of Theano
f_true = np.random.multivariate_normal(mean_func(X).eval(),
cov_func(X).eval() + 1e-8*np.eye(n), 1).flatten()
# The observed data is the latent function plus a small amount of IID Gaussian noise
# The standard deviation of the noise is `sigma`
σ_true = 2.0
y = f_true + σ_true * np.random.randn(n)
## Plot the data and the unobserved latent function
fig = plt.figure(figsize=(12,5)); ax = fig.gca()
ax.plot(X, f_true, "dodgerblue", lw=3, label="True f");
ax.plot(X, y, 'ok', ms=3, alpha=0.5, label="Data");
ax.set_xlabel("X"); ax.set_ylabel("The true f(x)"); plt.legend();
```
### Initializing the inducing points with K-means
We use the NUTS sampler and the `FITC` approximation.
```
with pm.Model() as model:
    ℓ = pm.Gamma("ℓ", alpha=2, beta=1)
    η = pm.HalfCauchy("η", beta=5)
    cov = η**2 * pm.gp.cov.Matern52(1, ℓ)
    gp = pm.gp.MarginalSparse(cov_func=cov, approx="FITC")
    # initialize 20 inducing points with K-means
    Xu = pm.gp.util.kmeans_inducing_points(20, X)
    σ = pm.HalfCauchy("σ", beta=5)
    y_ = gp.marginal_likelihood("y", X=X, Xu=Xu, y=y, noise=σ)
    trace = pm.sample(1000)
X_new = np.linspace(-1, 11, 200)[:,None]
# add the GP conditional to the model, given the new X values
with model:
    f_pred = gp.conditional("f_pred", X_new)
# To use the MAP values, you can just replace the trace with a length-1 list with `mp`
with model:
    pred_samples = pm.sample_posterior_predictive(trace, vars=[f_pred], samples=1000)
# plot the results
fig = plt.figure(figsize=(12,5)); ax = fig.gca()
# plot the samples from the gp posterior with samples and shading
from pymc3.gp.util import plot_gp_dist
plot_gp_dist(ax, pred_samples["f_pred"], X_new);
# plot the data and the true latent function
plt.plot(X, y, 'ok', ms=3, alpha=0.5, label="Observed data");
plt.plot(X, f_true, "dodgerblue", lw=3, label="True f");
plt.plot(Xu, 10*np.ones(Xu.shape[0]), "cx", ms=10, label="Inducing point locations")
# axis labels and title
plt.xlabel("X"); plt.ylim([-13,13]);
plt.title("Posterior distribution over $f(x)$ at the observed values"); plt.legend();
```
### Optimizing inducing point locations as part of the model
For demonstration purposes, we set `approx="VFE"`. Any inducing point initialization can be done with any approximation.
```
Xu_init = 10*np.random.rand(20)
with pm.Model() as model:
    ℓ = pm.Gamma("ℓ", alpha=2, beta=1)
    η = pm.HalfCauchy("η", beta=5)
    cov = η**2 * pm.gp.cov.Matern52(1, ℓ)
    gp = pm.gp.MarginalSparse(cov_func=cov, approx="VFE")
    # set flat prior for Xu
    Xu = pm.Flat("Xu", shape=20, testval=Xu_init)
    σ = pm.HalfCauchy("σ", beta=5)
    y_ = gp.marginal_likelihood("y", X=X, Xu=Xu[:, None], y=y, noise=σ)
    mp = pm.find_MAP()
mu, var = gp.predict(X_new, point=mp, diag=True)
sd = np.sqrt(var)
# draw plot
fig = plt.figure(figsize=(12,5)); ax = fig.gca()
# plot mean and 2σ intervals
plt.plot(X_new, mu, 'r', lw=2, label="mean and 2σ region");
plt.plot(X_new, mu + 2*sd, 'r', lw=1); plt.plot(X_new, mu - 2*sd, 'r', lw=1);
plt.fill_between(X_new.flatten(), mu - 2*sd, mu + 2*sd, color="r", alpha=0.5)
# plot original data and true function
plt.plot(X, y, 'ok', ms=3, alpha=1.0, label="observed data");
plt.plot(X, f_true, "dodgerblue", lw=3, label="true f");
Xu = mp["Xu"]
plt.plot(Xu, 10*np.ones(Xu.shape[0]), "cx", ms=10, label="Inducing point locations")
plt.xlabel("x"); plt.ylim([-13,13]);
plt.title("predictive mean and 2σ interval"); plt.legend();
%load_ext watermark
%watermark -n -u -v -iv -w
```
# Simple Linear Model - OCR
By Gaetano Bonofiglio, Veronica Iovinella
## Introduction
We'll start by developing a simple linear model for classification of handwritten digits (OCR) using MNIST data-set and then plot the results. This will later be compared with a Convolutional Neural Network for the same task.
## Imports
Helper functions are in [util.py](util.py).
```
%matplotlib inline
import tensorflow as tf
import numpy as np
from util import Util
u = Util()
tf.__version__
```
## Data Load
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
```
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets("data/MNIST/", one_hot=True)
```
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels. The data-set is split into 3 mutually exclusive sub-sets.
```
print("Size of")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
```
The data-set has been loaded as so-called One-Hot encoding. This means the labels have been converted from a single number to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which is one and means the class is $i$. For example:
```
data.test.labels[0:1, :]
```
We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
```
data.test.cls = np.array([label.argmax() for label in data.test.labels])
# example
data.test.cls[0:1]
```
### Data dimensions
The data dimensions are used in several places in the source-code below. In computer programming it is generally best to use variables and constants rather than having to hard-code specific numbers every time that number is used. This means the numbers only have to be changed in one single place. Ideally these would be inferred from the data that has been read, but here we just write the numbers.
```
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of classes, one class for each of 10 digits.
num_classes = 10
```
### Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
```
def plot_images(images, cls_true, cls_pred=None):
    u.plot_images(images=images, cls_true=cls_true, cls_pred=cls_pred, img_size=img_size, img_shape=img_shape)
```
### Some plotted images from the data set
```
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
```
## TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
A TensorFlow graph consists of the following parts which will be detailed below:
- Placeholder variables used to change the input to the graph.
- Model variables that are going to be optimized so as to make the model perform better.
- The model which is essentially just a mathematical function that calculates some output given the input in the placeholder variables and the model variables.
- A cost measure that can be used to guide the optimization of the variables.
- An optimization method which updates the variables of the model.
### Placeholder variables
Placeholder variables serve as the input to the graph that we may change each time we execute the graph. We call this *feeding the placeholder variables* and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called $tensor$, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where *None* means that the tensor may hold an arbitrary number of images with each image being a vector of length *img_size_flat*.
```
x = tf.placeholder(tf.float32, [None, img_size_flat])
```
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
```
y_true = tf.placeholder(tf.float32, [None, num_classes])
```
Finally we have the placeholder variable for the true class of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None] which means the placeholder variable is a one-dimensional vector of arbitrary length.
```
y_true_cls = tf.placeholder(tf.int64, [None])
```
### Variables to be optimized
Apart from the placeholder variables that were defined above and which serve as feeding input data into the model, there are also some model variables that must be changed by TensorFlow so as to make the model perform better on the training data.
The first variable that must be optimized is called *weights* and is defined here as a $TensorFlow$ variable that must be initialized with zeros and whose shape is [img_size_flat, num_classes], so it is a 2-dimensional $tensor$ (or *matrix*) with *img_size_flat* rows and *num_classes* columns.
```
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
```
The second variable that must be optimized is called *biases* and is defined as a 1-dimensional $tensor$ (or *vector*) of length *num_classes*.
```
biases = tf.Variable(tf.zeros([num_classes]))
```
### Model
This simple model multiplies the images in the placeholder variable *x* with the *weights* and then adds the *biases*.
The result is a matrix of shape [num_images, num_classes] because *x* has shape [num_images, img_size_flat] and *weights* has shape [img_size_flat, num_classes], so the multiplication of those two matrices is a matrix with shape [num_images, num_classes] and then the *biases* vector is added to each row of that matrix.
The name *logits* is typical $TensorFlow$ terminology.
```
logits = tf.matmul(x, weights) + biases
```
Now *logits* is a matrix with *num_images* rows and *num_classes* columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each row of the *logits* matrix sums to 1, and each element is limited between 0 and 1. This is calculated using the so-called *softmax* function and the result is stored in *y_pred*.
```
y_pred = tf.nn.softmax(logits)
```
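To make the normalization concrete, here is a small NumPy sketch (illustrative only, not part of the model): softmax exponentiates each logit and divides by the row sum, so every row becomes a probability distribution.

```python
import numpy as np

def softmax(logits):
    # Subtract the row-wise max before exponentiating, for numerical stability.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1],
                   [0.0, 0.0, 0.0]])
probs = softmax(logits)
# Each row now sums to 1 and every entry lies between 0 and 1.
print(probs.sum(axis=1))  # → [1. 1.]
```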
The predicted class can be calculated from the *y_pred* matrix by taking the index of the largest element in each row.
```
y_pred_cls = tf.argmax(y_pred, axis=1)
```
### Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for *weights* and *biases*. To do this we first need to know how well the model currently performs by comparing the predicted output of the model *y_pred* to the desired output *y_true*.
The **cross-entropy** is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the weights and biases of the model.
$TensorFlow$ has a built-in function for calculating the cross-entropy. Note that it uses the values of the *logits* because it also calculates the *softmax* internally.
```
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_true)
```
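The key property, that the cross-entropy is always positive and reaches zero only when the predicted distribution matches the one-hot label, can be checked with a small NumPy sketch (illustrative only, not the TensorFlow implementation):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    # y_true is a one-hot label row, y_pred a row of predicted probabilities;
    # clipping avoids taking log(0).
    return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)))

y_true = np.array([0.0, 1.0, 0.0])    # the image is of class 1
perfect = np.array([0.0, 1.0, 0.0])   # model output matches exactly
rough = np.array([0.3, 0.4, 0.3])     # model is unsure

perfect_loss = cross_entropy(y_true, perfect)  # ≈ 0
rough_loss = cross_entropy(y_true, rough)      # positive; worse predictions give larger values
```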
We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually, but in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
```
cost = tf.reduce_mean(cross_entropy)
```
### Optimization method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the basic form of **Gradient Descent** where the *step-size* is set to 0.5.
Note that optimization is not performed at this point. In fact, __nothing is calculated at all__, we just add the optimizer-object to the $TensorFlow$ graph for later execution.
```
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)
```
### Performance measures
$TensorFlow$ also gives access to performance measures to display the progress to the user.
We define *correct_prediction* as a vector of booleans indicating whether the predicted class equals the true class of each image, and *accuracy* as the classification accuracy, obtained by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
```
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
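The same two-step computation can be verified with plain NumPy (toy class labels, not the model's actual output):

```python
import numpy as np

y_pred_cls = np.array([3, 1, 4, 1, 5])  # predicted classes
y_true_cls = np.array([3, 1, 4, 4, 4])  # true classes

correct = (y_pred_cls == y_true_cls)          # vector of booleans
accuracy = correct.astype(np.float32).mean()  # False → 0.0, True → 1.0, then average
print(accuracy)  # → 0.6
```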
## TensorFlow Run
### Create TensorFlow session
Once the $TensorFlow$ *graph* has been created, we have to create a $TensorFlow$ *session* which is used to execute the graph.
```
session = tf.Session()
```
### Initialize variables
The variables for *weights* and *biases* must be initialized before we start optimizing them.
```
session.run(tf.global_variables_initializer())
```
### Helper-function to perform optimization iterations
There are 50,000 images in the training-set. It would take a long time to calculate the gradient of the model using all these images, so we use **Stochastic Gradient Descent**, which only uses a small batch of images in each iteration of the optimizer.
This is a function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then $TensorFlow$ executes the optimizer using those training samples.
```
train_batch_size = 100
def optimize(num_iterations):
    for i in range(num_iterations):
        # Get a batch of training examples.
        # x_batch now holds a batch of images and
        # y_true_batch are the true labels for those images.
        x_batch, y_true_batch = data.train.next_batch(train_batch_size)
        # Put the batch into a dict with the proper names
        # for placeholder variables in the TensorFlow graph.
        # Note that the placeholder for y_true_cls is not set
        # because it is not used during training.
        feed_dict_train = {x: x_batch,
                           y_true: y_true_batch}
        # Run the optimizer using this batch of training data.
        # TensorFlow assigns the variables in feed_dict_train
        # to the placeholder variables and then runs the optimizer.
        session.run(optimizer, feed_dict=feed_dict_train)
```
### Helper-functions to show performance
Dict with the test-set data to be used as input to the $TensorFlow$ graph. Note that we must use the correct names for the placeholder variables in the $TensorFlow$ graph.
```
feed_dict_test = {x: data.test.images,
                  y_true: data.test.labels,
                  y_true_cls: data.test.cls}
```
Helper function for printing the classification accuracy on the test-set. It will also print example errors and confusion matrix if asked.
```
def print_test_accuracy(show_example_errors=False, show_confusion_matrix=False):
    u.print_test_accuracy(session=session, data=data, x=x, y_true=y_true,
                          y_pred_cls=y_pred_cls, num_classes=num_classes,
                          show_example_errors=show_example_errors,
                          show_confusion_matrix=show_confusion_matrix)
```
### Helper-function to plot the model weights
Function for plotting the weights of the model. 10 images are plotted, one for each digit that the model is trained to recognize.
```
def plot_weights():
    u.plot_weights(session=session, weights=weights, img_shape=img_shape)
```
## Performances before and after learning
We will now test everything by measuring the performance right now, before any learning. This is also helpful to understand how the learning goes, and we'll also output the performance at several stages between optimization iterations.
```
print_test_accuracy(show_example_errors=True)
```
The accuracy on the test-set is 9.8%. This is because the model has only been initialized and not optimized at all, so it always predicts that the image shows a zero digit, as demonstrated in the plot below.
### Performance after 1 optimization iteration
Already after a single optimization iteration, the model has increased its accuracy on the test-set to 40.7% up from 9.8%. This means that it mis-classifies the images about 6 out of 10 times, as demonstrated on a few examples below.
```
optimize(num_iterations=1)
print_test_accuracy(show_example_errors=True)
```
We can also already do something pretty interesting, plotting the weights after only 1 iteration.
```
plot_weights()
```
Note that the weights mostly look like the digits they're supposed to recognize. This is because only one optimization iteration has been performed so the weights are only trained on 100 images. After training on several thousand images, the weights become more difficult to interpret because they have to recognize many variations of how digits can be written.
### Performance after 10 optimization iterations
```
# We have already performed 1 iteration.
optimize(num_iterations=9)
print_test_accuracy(show_example_errors=True)
plot_weights()
```
### Performance after 1000 optimization iterations
After 1000 optimization iterations, the model only mis-classifies about one in ten images. As demonstrated below, some of the mis-classifications are justified because the images are very hard to determine with certainty even for humans, while others are quite obvious and should have been classified correctly by a good model. But this simple model cannot reach much better performance and more complex models are therefore needed (like **Convolutional Neural Networks**, which we'll implement in the next notebook).
```
# We have already performed 10 iterations.
optimize(num_iterations=990)
print_test_accuracy(show_example_errors=True)
plot_weights()
```
### Performance after 10000 optimization iterations
```
# We have already performed 1000 iterations.
optimize(num_iterations=9000)
print_test_accuracy(show_example_errors=True)
```
Note that the accuracy has increased only to **92.5%** from 91.6% even after 9000 iterations.
```
plot_weights()
```
The model has now been trained for 10000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
## Confusion Matrix
We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
```
print_test_accuracy(show_confusion_matrix=True)
```
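Conceptually, the confusion matrix is just a count of (true class, predicted class) pairs. A minimal NumPy sketch (illustrative; not the helper function used above):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1  # rows: true class, columns: predicted class
    return cm

cm = confusion_matrix([0, 1, 1, 2], [0, 2, 1, 2], num_classes=3)
print(cm)
# Row 1 shows one image of class 1 classified correctly and one
# mis-classified as class 2.
```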
We are now done using $TensorFlow$, so we close the session to release its resources.
```
session.close()
```
## Conclusion
We have seen how easy and fast it is to implement an OCR with a Neural Network using $TensorFlow$. In the next notebook we'll start exploring the same concept but implemented with a **Convolutional Neural Network** to compare the results.
## Working with RDDs (ch 3)
You control a Spark process by means of a Spark Context. In Python notebooks, the Spark context is available as the built-in variable `sc`, already bound to the context. When using Java (or Scala), you need to create the binding yourself.
```
sc
```
you will use `sc` to ask Spark to load the content of a file into memory, for instance:
## RDD: Resilient Distributed Dataset ##
The basic data abstraction for all Spark programs is the RDD. Think of an RDD as an immutable, distributed collection of objects (data elements) of a certain type.
Users create RDDs in two ways: by loading an external dataset, as we have just seen, or programmatically, by specifying that a collection of objects, for instance a list or set, is to be processed by a set of N *workers*:
note that Spark 2.x now supports a higher-level abstraction for relational data, called **DataFrames**. if you are familiar with Python Pandas or the R language, you will recognise Spark DataFrames as a very similar abstraction for tabular data. We will look at DataFrames in detail later. For now, you may refer to this tutorial: [Spark SQL, DataFrames and Datasets Guide](https://spark.apache.org/docs/latest/sql-programming-guide.html)
```
# the second argument (3) specifies the number of partitions:
myData = sc.parallelize([1,2,3,5],3)
myData
# distributes the content of the RDD to all available workers:
sc.parallelize(["pandas", "i like pandas"])
#Example (3.1): create a new RDD by loading a text file using the SparkContext sc
lines = sc.textFile('/FileStore/tables/Dante_Inferno.txt')
```
## Transformations and Actions ##
Spark operates on RDDs in two ways:
- through **transformations**. A transformation takes an input RDD and produces a new RDD. Example: filtering an RDD
- through **actions**. An Action takes an RDD and produces data of some other type, which typically encodes the result of some data analysis. Example: count the number of lines in a file
example of transformation:
the `myRDD.filter()` function filters `myRDD` according to a condition, specified as a function passed as an argument to `filter()`. Here is the general Python syntax for passing a `lambda function` to another function:
```
infernoLines = lines.filter(lambda x: "inferno" in x)
```
Action: a result computed from an RDD, which can be either returned to the driver program or saved to an external storage system (e.g., HDFS). Examples:
```
# operates on the lines RDD
lines.count()
# shows top k elements in the RDD
lines.take(20)
```
Actions are typically performed at the end of a pipeline consisting of one or more transformations.
Here is a new transformation that makes use of the `map()` higher order function:
```
uppercaseLines = lines.map(lambda s: s.upper())
```
like filter, `map()` is a transformation. It takes an input function (a named or a lambda function). It applies the function to each element of the input RDD to produce an output RDD
```
uppercaseLines
uppercaseLines.take(5)
infernoLines.first()
```
note that the result of the `take(n)` action is no longer an RDD! in this instance, it is a list of strings
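For readers without a Spark session at hand, the filter → map → take pipeline can be mimicked with plain Python built-ins (an analogy only: a list is neither distributed nor lazy, and the sample lines here are made up):

```python
# Hypothetical stand-in for the `lines` RDD.
lines = ["nel mezzo del cammin",
         "per me si va ne la citta dolente",
         "inferno canto primo",
         "lasciate ogne speranza"]

# filter → map → take, as a list comprehension plus a slice.
result = [s.upper() for s in lines if "inferno" in s][:2]
print(result)  # → ['INFERNO CANTO PRIMO']
```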
note that you may concatenate the two transformations and the action into a single command using 'dot notation':
```
lines.filter(lambda x: "inferno" in x).map(lambda s: s.upper()).take(5)
```
or you can use the functions you have defined to perform these operations:
```
def upperCase(doc):
    return doc.map(lambda s: s.upper())

def filterDocForTerm(doc, term):
    return doc.filter(lambda x: term in x)

upperCase(filterDocForTerm(lines, "inferno")).take(10)
```
we can also explicitly define functions that we can then use in `map()`:
```
def removex92(s):
    return s.replace('\x92', '\'')

lines.filter(lambda x: "inferno" in x).map(removex92).take(5)
```
or almost equivalently (note that here the character is removed rather than replaced with an apostrophe):
```
lines.filter(lambda x: "inferno" in x).map(lambda s: s.replace('\x92','')).take(5)
# other obvious action: counting the number of elements
infernoLines.count(), uppercaseLines.count()
```
## Lazy evaluation:##
Spark computes a pipeline in a **lazy** fashion—that is, the resulting RDD remains virtual and no computation actually occurs until the RDD is used as input to an action.
Thus, actions have the potential to trigger an entire complex pipeline to be executed.
If Spark were to load and store all the lines in the file as soon as we wrote lines = sc.textFile(...), it would waste a lot of storage space, given that we then immediately filter out many lines. Instead, once Spark sees the whole chain of transformations, it can compute just the data needed for its result. In fact, for the first() action, Spark scans the file only until it finds the first matching line; it doesn’t even read the whole file.
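Python generators give a feel for this laziness (an analogy only, not how Spark is implemented): nothing is computed until a terminal operation consumes the pipeline, and `next()` reads only as far as the first match, just like `first()`:

```python
def read_lines():
    # Stand-in for reading a large file line by line, on demand.
    for i in range(10**6):
        yield f"line {i}"

# Building the pipeline does no work at all — it is purely virtual.
pipeline = (s.upper() for s in read_lines() if "5" in s)

# Only now is anything computed, and only up to the first matching line.
first = next(pipeline)
print(first)  # → 'LINE 5'
```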
Note that a virtual computation may be described by a graph, for instance:
try changing the file path below to a non-existent name like `/FileStore/tables/Dante_Inferno-XXX.txt`
```
lines = sc.textFile('/FileStore/tables/Dante_Inferno.txt')
cleanLines = lines.map(lambda s: s.replace('\x92',''))
upperInferno = cleanLines.filter(lambda x: "inferno" in x).map(lambda s: s.upper())
lowerAmore = cleanLines.filter(lambda x: "amore" in x).map(lambda s: s.lower())
```
at this point, no computation has actually occurred. In fact the file hasn't even been opened (try and change the path to the file)
however when an action is performed on *at least one* of the output RDDs: `upperInferno`, `lowerAmore`, Spark schedules the entire execution graph in a distributed fashion using the available workers, in order to produce a result:
```
upperInferno.collect()
lowerAmore.collect()
```
have you tried using the wrong filename? at which point do you get a runtime error?
## Persistence: ##
An RDD that is used in multiple actions is recomputed as part of each pipeline that leads to an action. There is therefore a potential for re-computing the same RDD multiple times.
However we may tell Spark that it should reuse a partial result when we know it is safe to do so. This is done using `RDD.persist()`. For example:
```
# without persistence, this action causes the entire pipeline to be executed again:
upperInferno.count()
# this ensures that the file is only loaded once and that the cleanLines RDD is kept in memory:
cleanLines.persist()
```
obviously there is a trade-off between the cost of reloading / recomputing and the process space needed for storing a persisted RDD
```
upperInferno.persist()
lowerAmore.persist()
# these repeated action invocations are now inexpensive:
upperInferno.count(), lowerAmore.count()
```
# practice: loading and operating on a movie set #
We now practice these notions on a different dataset, and extend our overview of the set of transformations and actions available through Spark
```
inputRDD = sc.textFile('/FileStore/tables/sample_movielens_movies.txt')
inputRDD.take(5)
```
Examples of **transformations**:
```
thrillerRDD = inputRDD.filter(lambda x: "Thriller" in x)
comedyRDD = inputRDD.filter(lambda x: "Comedy" in x)
```
you can take the **union** of two RDDs:
```
whatILikeRDD = thrillerRDD.union(comedyRDD)
```
*Actions* - printing out the first 10 lines of each movie category
```
thrillerRDD.take(5)
comedyRDD.take(10)
whatILikeRDD.take(5)
```
note: the `collect()` function retrieves the entire RDD.
*Note that this forces Spark to execute the entire pipeline*, so it should not be used on large datasets unless you know that processing requires the entire dataset to be consumed outside of the RDD framework.
```
whatILikeRDD.count()
```
**Passing functions to Spark** (ch 3 pg. 30). we have seen this before...
```
def isThriller(s):
    return "Thriller" in s

thrillers = whatILikeRDD.filter(isThriller)
thrillers.take(5)
```
# t-SNE for community composition
```
# Housekeeping
library(caret)
library(gridExtra)
library(reshape2)
library(Rtsne)
library(tidyr)
# Read in data
species_composition = read.table("../../../data/amplicon/species_composition_relative_abundance.txt",
sep = "\t",
header = T,
row.names = 1)
metadata = read.table("../../../data/amplicon/metadata.txt",
sep = "\t",
header = T,
row.names = 1)
head(species_composition)
head(metadata)
# Extract regime shift data without predation
x = metadata$Experiment == "RegimeShift" & # keep only regime shift data
metadata$Predation == 0 & # exclude predation
metadata$Immigration != "stock" # exclude initial strain mix data
# Subset
species_composition = species_composition[x,]
metadata = metadata[x,]
head(species_composition)
head(metadata)
# Remove species lacking data
species_composition = species_composition[, colSums(species_composition) > 0]
dim(species_composition)
# Rtsne function may take some minutes to complete...
set.seed(9)
tsne_model_1 = Rtsne(as.matrix(species_composition), check_duplicates=FALSE, pca=TRUE, perplexity=10, theta=0.5, dims=2)
# Getting the two dimension matrix
d_tsne_1 = as.data.frame(tsne_model_1$Y)
# Plotting the results without clustering
ggplot(d_tsne_1, aes(x=V1, y=V2)) +
  geom_point(size=0.25) +
  guides(colour=guide_legend(override.aes=list(size=6))) +
  xlab("") + ylab("") +
  ggtitle("t-SNE") +
  theme_light(base_size=20) +
  theme(axis.text.x=element_blank(),
        axis.text.y=element_blank()) +
  scale_colour_brewer(palette = "Set2")
# Setting the cluster model
# Next piece of code will create the k-means and hierarchical cluster models.
# To then assign the cluster number (1-6) to which each input case belongs.
## keeping original data
d_tsne_1_original=d_tsne_1
## Creating k-means clustering model, and assigning the result to the data used to create the tsne
fit_cluster_kmeans=kmeans(scale(d_tsne_1), 6)
d_tsne_1_original$cl_kmeans = factor(fit_cluster_kmeans$cluster)
## Creating hierarchical cluster model, and assigning the result to the data used to create the tsne
fit_cluster_hierarchical=hclust(dist(scale(d_tsne_1)))
## setting 6 clusters as output
d_tsne_1_original$cl_hierarchical = factor(cutree(fit_cluster_hierarchical, k=6))
# Plotting the cluster models onto t-SNE output
# Now time to plot the result of each cluster model, based on the t-SNE map.
plot_cluster = function(data, var_cluster, palette)
{
  ggplot(data, aes_string(x="V1", y="V2", color=var_cluster)) +
    geom_point(size=0.25) +
    guides(colour=guide_legend(override.aes=list(size=6))) +
    xlab("") + ylab("") +
    ggtitle("") +
    theme_light(base_size=20) +
    theme(axis.text.x=element_blank(),
          axis.text.y=element_blank(),
          legend.direction = "horizontal",
          legend.position = "bottom",
          legend.box = "horizontal") +
    scale_colour_brewer(palette = palette)
}
plot_k=plot_cluster(d_tsne_1_original, "cl_kmeans", "Accent")
plot_h=plot_cluster(d_tsne_1_original, "cl_hierarchical", "Set1")
## and finally: putting the plots side by side with gridExtra lib...
grid.arrange(plot_k, plot_h, ncol=2)
dim(d_tsne_1)
d_tsne_1$labels = rownames(species_composition)
metadata$labels = rownames(metadata)
d_tsne_1 = merge(d_tsne_1, metadata, by = "labels")
head(d_tsne_1)
dim(d_tsne_1)
# Edits for plotting
d_tsne_1$Streptomycin = factor(d_tsne_1$Streptomycin, levels = c("0", "4", "16", "128"))
d_tsne_1$Time_point = ifelse(d_tsne_1$Time_point == 16, "Before pulse", ifelse(d_tsne_1$Time_point == 32, "During pulse", "After pulse"))
d_tsne_1$Time_point = factor(d_tsne_1$Time_point, levels = c("Before pulse", "During pulse", "After pulse"))
d_tsne_1$Immigration = ifelse(d_tsne_1$Immigration == 0, "Immigration absent", "Immigration present")
# Plotting the results by adding treatment information
ggplot(d_tsne_1, aes(x=V1, y=V2, colour = Streptomycin)) +
  geom_point(size=1) +
  guides(colour=guide_legend(override.aes=list(size=6))) +
  xlab("") + ylab("") +
  ggtitle("t-SNE") +
  facet_grid(Immigration~Time_point) +
  scale_color_manual(values = c("#D3D3D3", "#cd6090", "#8f4364", "#522639")) +
  theme_bw() +
  labs(colour = "Streptomycin level") +
  theme(axis.text.x=element_blank(),
        axis.text.y=element_blank(),
        legend.direction = "horizontal",
        legend.position = "bottom",
        legend.box = "horizontal",
        strip.background = element_rect(colour = "white", fill = "white"),
        strip.text = element_text(face = "italic", colour = "black"),
        panel.border = element_rect(colour = "grey"),
        panel.grid.major = element_blank(),
        panel.grid.minor = element_blank())
ggsave("../../../manuscript/figures/t-SNE_community_composition.pdf")
```
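For readers who prefer Python, a rough scikit-learn analogue of the same t-SNE + k-means workflow can be sketched as follows. This is not the analysis above: the data here is a synthetic stand-in, and the parameter values simply mirror the R calls (`perplexity=10`, 6 clusters).

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.preprocessing import scale

rng = np.random.default_rng(9)
species_composition = rng.random((60, 20))  # samples x species, stand-in data

# Embed into two dimensions, as Rtsne(..., dims=2, perplexity=10) does.
embedding = TSNE(n_components=2, perplexity=10, init="random",
                 random_state=9).fit_transform(species_composition)

# Cluster the scaled embedding, as kmeans(scale(d_tsne_1), 6) does.
clusters = KMeans(n_clusters=6, n_init=10, random_state=9).fit_predict(scale(embedding))
print(embedding.shape)  # → (60, 2)
```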
# Using your own object detector to detect objects in images
<table align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/TannerGilbert/Tutorials/blob/master/Tensorflow%20Object%20Detection/object_detection_with_own_model.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab
</a>
</td><td>
<a target="_blank" href="https://github.com/TannerGilbert/Tutorials/blob/master/Tensorflow%20Object%20Detection/object_detection_with_own_model.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td></table>
This notebook is based on the [official Tensorflow Object Detection demo](https://github.com/tensorflow/models/blob/r1.13.0/research/object_detection/object_detection_tutorial.ipynb) and only contains some slight changes. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1.md#installation) before you start.
# Imports
```
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tensorflow as tf
from distutils.version import StrictVersion
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
if StrictVersion(tf.__version__) < StrictVersion('1.9.0'):
    raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!')
```
## Env setup
```
# This is needed to display the images.
%matplotlib inline
```
## Object detection imports
Here are the imports from the object detection module.
```
from utils import label_map_util
from utils import visualization_utils as vis_util
```
# Model preparation
## Variables
Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file.
```
MODEL_NAME = 'inference_graph'
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'
PATH_TO_LABELS = 'training/labelmap.pbtxt'
```
## Load a (frozen) Tensorflow model into memory.
```
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
```
## Loading label map
Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
```
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
```
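Conceptually, the category index is just a dictionary from integer class ids to display names. A hand-rolled stand-in (with hypothetical labels, since the real function parses the `.pbtxt` label map file) would look like:

```python
# Minimal stand-in for label_map_util.create_category_index_from_labelmap;
# the labels here are made up for illustration.
category_index = {
    1: {'id': 1, 'name': 'cat'},
    2: {'id': 2, 'name': 'dog'},
}

predicted_class = 2
print(category_index[predicted_class]['name'])  # → dog
```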
# Detection
```
def run_inference_for_single_image(image, graph):
    # Note: this function relies on `sess` and `tensor_dict`, which are
    # created in the capture loop below and visible here as globals at call time.
    if 'detection_masks' in tensor_dict:
        # The following processing is only for single image
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
        # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
        detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes, image.shape[0], image.shape[1])
        detection_masks_reframed = tf.cast(
            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
        # Follow the convention by adding back the batch dimension
        tensor_dict['detection_masks'] = tf.expand_dims(
            detection_masks_reframed, 0)
    image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
    # Run inference
    output_dict = sess.run(tensor_dict,
                           feed_dict={image_tensor: np.expand_dims(image, 0)})
    # all outputs are float32 numpy arrays, so convert types as appropriate
    output_dict['num_detections'] = int(output_dict['num_detections'][0])
    output_dict['detection_classes'] = output_dict[
        'detection_classes'][0].astype(np.uint8)
    output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
    output_dict['detection_scores'] = output_dict['detection_scores'][0]
    if 'detection_masks' in output_dict:
        output_dict['detection_masks'] = output_dict['detection_masks'][0]
    return output_dict
import cv2
cap = cv2.VideoCapture(0)
try:
    with detection_graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in [
                    'num_detections', 'detection_boxes', 'detection_scores',
                    'detection_classes', 'detection_masks'
            ]:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
                        tensor_name)
            while True:
                ret, image_np = cap.read()
                # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
                image_np_expanded = np.expand_dims(image_np, axis=0)
                # Actual detection.
                output_dict = run_inference_for_single_image(image_np, detection_graph)
                # Visualization of the results of a detection.
                vis_util.visualize_boxes_and_labels_on_image_array(
                    image_np,
                    output_dict['detection_boxes'],
                    output_dict['detection_classes'],
                    output_dict['detection_scores'],
                    category_index,
                    instance_masks=output_dict.get('detection_masks'),
                    use_normalized_coordinates=True,
                    line_thickness=8)
                cv2.imshow('object_detection', cv2.resize(image_np, (800, 600)))
                if cv2.waitKey(25) & 0xFF == ord('q'):
                    cap.release()
                    cv2.destroyAllWindows()
                    break
except Exception as e:
    print(e)
    cap.release()
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Custom training with tf.distribute.Strategy
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/custom_training"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/custom_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/custom_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/distribute/custom_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to use [`tf.distribute.Strategy`](https://www.tensorflow.org/guide/distributed_training) with custom training loops. We will train a simple CNN model on the Fashion MNIST dataset, which contains 60,000 training images of size 28 x 28 and 10,000 test images of size 28 x 28.
We are using custom training loops to train our model because they give us flexibility and greater control over training. Moreover, it is easier to debug the model and the training loop.
```
# Import TensorFlow
import tensorflow as tf
# Helper libraries
import numpy as np
import os
print(tf.__version__)
```
## Download the fashion MNIST dataset
```
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# Adding a dimension to the array -> new shape == (28, 28, 1)
# We are doing this because the first layer in our model is a convolutional
# layer and it requires a 4D input (batch_size, height, width, channels).
# batch_size dimension will be added later on.
train_images = train_images[..., None]
test_images = test_images[..., None]
# Getting the images in [0, 1] range.
train_images = train_images / np.float32(255)
test_images = test_images / np.float32(255)
```
## Create a strategy to distribute the variables and the graph
How does `tf.distribute.MirroredStrategy` strategy work?
* All the variables and the model graph are replicated on the replicas.
* Input is evenly distributed across the replicas.
* Each replica calculates the loss and gradients for the input it received.
* The gradients are synced across all the replicas by summing them.
* After the sync, the same update is made to the copies of the variables on each replica.
Note: You can put all the code below inside a single scope. We are dividing it into several code cells for illustration purposes.
```
# If the list of devices is not specified in the
# `tf.distribute.MirroredStrategy` constructor, it will be auto-detected.
strategy = tf.distribute.MirroredStrategy()
print ('Number of devices: {}'.format(strategy.num_replicas_in_sync))
```
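The replication-and-sync behaviour described above can be illustrated with a NumPy sketch (a toy simulation of two replicas, not the actual implementation): each replica computes gradients on its own shard of the batch, the per-replica gradients are summed, and the identical update is applied to every copy of the variables.

```python
import numpy as np

w = np.zeros(3)  # model variable; conceptually one copy per device

# Each replica receives its own shard of the global batch.
replica_batches = [np.array([[1., 0., 0.]]), np.array([[0., 2., 0.]])]
targets = [np.array([1.]), np.array([2.])]

grads = []
for xb, yb in zip(replica_batches, targets):
    pred = xb @ w
    # Gradient of the squared error w.r.t. w, computed locally on one replica.
    grads.append(((pred - yb) @ xb) * 2)

total_grad = np.sum(grads, axis=0)  # all-reduce: sum across replicas
w = w - 0.1 * total_grad            # same update applied to every copy of w
print(w)
```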
## Setup input pipeline
Export the graph and the variables to the platform-agnostic SavedModel format. After your model is saved, you can load it with or without the scope.
```
BUFFER_SIZE = len(train_images)
BATCH_SIZE_PER_REPLICA = 64
GLOBAL_BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
EPOCHS = 10
```
Create the datasets and distribute them:
```
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)
train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)
test_dist_dataset = strategy.experimental_distribute_dataset(test_dataset)
```
## Create the model
Create a model using `tf.keras.Sequential`. You can also use the Model Subclassing API to do this.
```
def create_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(64, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
return model
# Create a checkpoint directory to store the checkpoints.
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
```
## Define the loss function
Normally, on a single machine with 1 GPU/CPU, loss is divided by the number of examples in the batch of input.
*So, how should the loss be calculated when using a `tf.distribute.Strategy`?*
* For example, say you have 4 GPUs and a global batch size of 64. One batch of input is distributed
across the replicas (4 GPUs), with each replica getting an input of size 16.
* The model on each replica does a forward pass with its respective input and calculates the loss. Now, instead of dividing the loss by the number of examples in its respective input (BATCH_SIZE_PER_REPLICA = 16), the loss should be divided by the GLOBAL_BATCH_SIZE (64).
*Why do this?*
* This needs to be done because after the gradients are calculated on each replica, they are synced across the replicas by **summing** them.
*How to do this in TensorFlow?*
* If you're writing a custom training loop, as in this tutorial, you should sum the per-example losses and divide the sum by the GLOBAL_BATCH_SIZE:
`scaled_loss = tf.reduce_sum(loss) * (1. / GLOBAL_BATCH_SIZE)`,
or you can use `tf.nn.compute_average_loss`, which takes the per-example loss,
optional sample weights, and GLOBAL_BATCH_SIZE as arguments and returns the scaled loss.
* If you are using regularization losses in your model then you need to scale
the loss value by the number of replicas. You can do this by using the `tf.nn.scale_regularization_loss` function.
* Using `tf.reduce_mean` is not recommended. Doing so divides the loss by the actual per-replica batch size, which may vary from step to step.
* This reduction and scaling is done automatically in Keras `model.compile` and `model.fit`.
* If using `tf.keras.losses` classes (as in the example below), the loss reduction needs to be explicitly specified to be one of `NONE` or `SUM`. `AUTO` and `SUM_OVER_BATCH_SIZE` are disallowed when used with `tf.distribute.Strategy`. `AUTO` is disallowed because the user should explicitly think about what reduction they want to make sure it is correct in the distributed case. `SUM_OVER_BATCH_SIZE` is disallowed because currently it would only divide by the per-replica batch size and leave the dividing by the number of replicas to the user, which might be easy to miss. So instead we ask the user to do the reduction themselves explicitly.
* If `labels` is multi-dimensional, then average the `per_example_loss` across the number of elements in each sample. For example, if the shape of `predictions` is `(batch_size, H, W, n_classes)` and `labels` is `(batch_size, H, W)`, you will need to update `per_example_loss` like: `per_example_loss /= tf.cast(tf.reduce_prod(tf.shape(labels)[1:]), tf.float32)`
```
with strategy.scope():
# Set reduction to `none` so we can do the reduction afterwards and divide by
# global batch size.
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True,
reduction=tf.keras.losses.Reduction.NONE)
def compute_loss(labels, predictions):
per_example_loss = loss_object(labels, predictions)
return tf.nn.compute_average_loss(per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
```
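The arithmetic behind this scaling can be checked without TensorFlow. The sketch below uses made-up per-example losses on two hypothetical replicas and verifies that dividing each replica's summed loss by the *global* batch size, then summing across replicas (which is what gradient sync does), reproduces the mean over the full global batch:

```python
# Hypothetical per-example losses on two replicas (illustrative numbers only).
replica_losses = [[2.0, 3.0, 1.0, 2.0], [4.0, 5.0, 3.0, 4.0]]
GLOBAL_BATCH_SIZE = sum(len(r) for r in replica_losses)  # 8

# Each replica divides by the global batch size, not its local batch size...
per_replica_scaled = [sum(r) / GLOBAL_BATCH_SIZE for r in replica_losses]

# ...so summing across replicas (the sync step) gives the global mean loss.
synced = sum(per_replica_scaled)
global_mean = sum(sum(r) for r in replica_losses) / GLOBAL_BATCH_SIZE
assert synced == global_mean == 3.0
```

Had each replica divided by its local batch size (4) instead, the synced value would be 6.0, twice the true mean.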
## Define the metrics to track loss and accuracy
These metrics track the test loss and training and test accuracy. You can use `.result()` to get the accumulated statistics at any time.
```
with strategy.scope():
test_loss = tf.keras.metrics.Mean(name='test_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='train_accuracy')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='test_accuracy')
```
## Training loop
```
# model, optimizer, and checkpoint must be created under `strategy.scope`.
with strategy.scope():
model = create_model()
optimizer = tf.keras.optimizers.Adam()
checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)
def train_step(inputs):
images, labels = inputs
with tf.GradientTape() as tape:
predictions = model(images, training=True)
loss = compute_loss(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_accuracy.update_state(labels, predictions)
return loss
def test_step(inputs):
images, labels = inputs
predictions = model(images, training=False)
t_loss = loss_object(labels, predictions)
test_loss.update_state(t_loss)
test_accuracy.update_state(labels, predictions)
# `run` replicates the provided computation and runs it
# with the distributed input.
@tf.function
def distributed_train_step(dataset_inputs):
per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))
return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
axis=None)
@tf.function
def distributed_test_step(dataset_inputs):
return strategy.run(test_step, args=(dataset_inputs,))
for epoch in range(EPOCHS):
# TRAIN LOOP
total_loss = 0.0
num_batches = 0
for x in train_dist_dataset:
total_loss += distributed_train_step(x)
num_batches += 1
train_loss = total_loss / num_batches
# TEST LOOP
for x in test_dist_dataset:
distributed_test_step(x)
if epoch % 2 == 0:
checkpoint.save(checkpoint_prefix)
template = ("Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, "
"Test Accuracy: {}")
print (template.format(epoch+1, train_loss,
train_accuracy.result()*100, test_loss.result(),
test_accuracy.result()*100))
test_loss.reset_states()
train_accuracy.reset_states()
test_accuracy.reset_states()
```
Things to note in the example above:
* We are iterating over the `train_dist_dataset` and `test_dist_dataset` using a `for x in ...` construct.
* The scaled loss is the return value of the `distributed_train_step`. This value is aggregated across replicas using the `tf.distribute.Strategy.reduce` call and then across batches by summing the return value of the `tf.distribute.Strategy.reduce` calls.
* `tf.keras.Metrics` should be updated inside `train_step` and `test_step` that gets executed by `tf.distribute.Strategy.run`.
* `tf.distribute.Strategy.run` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can do `tf.distribute.Strategy.reduce` to get an aggregated value. You can also do `tf.distribute.Strategy.experimental_local_results` to get the list of values contained in the result, one per local replica.
## Restore the latest checkpoint and test
A model checkpointed with a `tf.distribute.Strategy` can be restored with or without a strategy.
```
eval_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='eval_accuracy')
new_model = create_model()
new_optimizer = tf.keras.optimizers.Adam()
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)
@tf.function
def eval_step(images, labels):
predictions = new_model(images, training=False)
eval_accuracy(labels, predictions)
checkpoint = tf.train.Checkpoint(optimizer=new_optimizer, model=new_model)
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
for images, labels in test_dataset:
eval_step(images, labels)
print ('Accuracy after restoring the saved model without strategy: {}'.format(
eval_accuracy.result()*100))
```
## Alternate ways of iterating over a dataset
### Using iterators
If you want to iterate over a given number of steps and not through the entire dataset, you can create an iterator using the `iter` call and explicitly call `next` on the iterator. You can choose to iterate over the dataset both inside and outside the `tf.function`. Here is a small snippet demonstrating iteration of the dataset outside the `tf.function` using an iterator.
```
for epoch in range(EPOCHS):
total_loss = 0.0
num_batches = 0
train_iter = iter(train_dist_dataset)
for _ in range(10):
total_loss += distributed_train_step(next(train_iter))
num_batches += 1
average_train_loss = total_loss / num_batches
template = ("Epoch {}, Loss: {}, Accuracy: {}")
print (template.format(epoch+1, average_train_loss, train_accuracy.result()*100))
train_accuracy.reset_states()
```
### Iterating inside a tf.function
You can also iterate over the entire input `train_dist_dataset` inside a tf.function using the `for x in ...` construct or by creating iterators like we did above. The example below demonstrates wrapping one epoch of training in a tf.function and iterating over `train_dist_dataset` inside the function.
```
@tf.function
def distributed_train_epoch(dataset):
total_loss = 0.0
num_batches = 0
for x in dataset:
per_replica_losses = strategy.run(train_step, args=(x,))
total_loss += strategy.reduce(
tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
num_batches += 1
return total_loss / tf.cast(num_batches, dtype=tf.float32)
for epoch in range(EPOCHS):
train_loss = distributed_train_epoch(train_dist_dataset)
template = ("Epoch {}, Loss: {}, Accuracy: {}")
print (template.format(epoch+1, train_loss, train_accuracy.result()*100))
train_accuracy.reset_states()
```
### Tracking training loss across replicas
Note: As a general rule, you should use `tf.keras.Metrics` to track per-sample values and avoid values that have been aggregated within a replica.
We do *not* recommend using `tf.metrics.Mean` to track the training loss across different replicas, because of the loss scaling computation that is carried out.
For example, if you run a training job with the following characteristics:
* Two replicas
* Two samples are processed on each replica
* Resulting loss values: [2, 3] and [4, 5] on each replica
* Global batch size = 4
With loss scaling, you calculate the per-sample value of loss on each replica by adding the loss values, and then dividing by the global batch size. In this case: `(2 + 3) / 4 = 1.25` and `(4 + 5) / 4 = 2.25`.
If you use `tf.metrics.Mean` to track loss across the two replicas, the result is different. In this example, you end up with a `total` of 3.50 and `count` of 2, which results in `total`/`count` = 1.75 when `result()` is called on the metric. Loss calculated with `tf.keras.Metrics` is scaled by an additional factor that is equal to the number of replicas in sync.
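The worked numbers above can be reproduced with plain arithmetic (no TensorFlow involved):

```python
GLOBAL_BATCH_SIZE = 4
replica_losses = [[2.0, 3.0], [4.0, 5.0]]  # two samples on each of two replicas

# Loss scaling: each replica sums its losses and divides by the global batch
# size; the sync step then *sums* the two scaled values.
scaled = [sum(r) / GLOBAL_BATCH_SIZE for r in replica_losses]  # [1.25, 2.25]
synced_loss = sum(scaled)  # 3.5 -- the correctly scaled training loss

# tf.metrics.Mean would instead average the scaled values:
# total = 3.5, count = 2 -> result() = 1.75, smaller by the replica count.
mean_metric = sum(scaled) / len(scaled)
assert mean_metric * len(replica_losses) == synced_loss  # off by num replicas
```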
### Guide and examples
Here are some examples for using distribution strategy with custom training loops:
1. [Distributed training guide](../../guide/distributed_training)
2. [DenseNet](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/densenet/distributed_train.py) example using `MirroredStrategy`.
3. [BERT](https://github.com/tensorflow/models/blob/master/official/nlp/bert/run_classifier.py) example trained using `MirroredStrategy` and `TPUStrategy`.
This example is particularly helpful for understanding how to load from a checkpoint and generate periodic checkpoints during distributed training, etc.
4. [NCF](https://github.com/tensorflow/models/blob/master/official/recommendation/ncf_keras_main.py) example trained using `MirroredStrategy` that can be enabled using the `keras_use_ctl` flag.
5. [NMT](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/nmt_with_attention/distributed_train.py) example trained using `MirroredStrategy`.
More examples are listed in the [Distribution strategy guide](../../guide/distributed_training.ipynb#examples_and_tutorials).
## Next steps
* Try out the new `tf.distribute.Strategy` API on your models.
* Visit the [Performance section](../../guide/function.ipynb) in the guide to learn more about other strategies and [tools](../../guide/profiler.md) you can use to optimize the performance of your TensorFlow models.
```
#default_exp api
```
# API
> High level functions for easy interaction
This module defines the building blocks for the CLI. These functions can be leveraged to define other custom workflows more easily.
```
#export
import importlib
import inspect
from collections import defaultdict
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import pandas as pd
import yaml
from pandas.api.types import is_categorical_dtype, is_datetime64_dtype
from mlforecast.compat import Client, DistributedForecast, Frame, S3Path, dd, dd_Frame
from mlforecast.core import TimeSeries
from mlforecast.data_model import (
ClusterConfig,
DataConfig,
DataFormat,
DistributedModelConfig,
DistributedModelName,
FeaturesConfig,
FlowConfig,
ModelConfig,
_available_tfms,
)
from mlforecast.forecast import Forecast
import shutil
import tempfile
import warnings
import numpy as np
from dask.distributed import LocalCluster
from fastcore.test import test_eq, test_fail
from window_ops.rolling import *
from window_ops.expanding import *
from window_ops.ewm import *
from mlforecast.compat import Frame
from mlforecast.utils import generate_daily_series, generate_prices_for_series
warnings.filterwarnings('ignore')
#exporti
_available_tfms_kwargs = {
name: list(inspect.signature(tfm).parameters)[1:]
for name, tfm in _available_tfms.items()
}
#export
def validate_data_format(data: Frame) -> Frame:
"""Checks whether data is in the correct format and tries to fix it if possible."""
if not isinstance(data, (pd.DataFrame, dd_Frame)):
raise ValueError('data must be either pandas or dask dataframe.')
if not data.index.name == 'unique_id':
if 'unique_id' in data:
data = data.set_index('unique_id')
else:
raise ValueError('unique_id not found in data.')
if 'ds' not in data:
raise ValueError('ds column not found in data.')
if not is_datetime64_dtype(data['ds']):
if isinstance(data, pd.DataFrame):
data['ds'] = pd.to_datetime(data['ds'])
else:
data['ds'] = dd.to_datetime(data['ds'])
if 'y' not in data:
raise ValueError('y column not found in data.')
return data
not_pandas = np.array([1])
test_fail(lambda: validate_data_format(not_pandas), contains='data must be either pandas')
no_uid = pd.DataFrame({'x': [1]})
test_fail(lambda: validate_data_format(no_uid), contains='unique_id not found')
uid_in_col = pd.DataFrame({'unique_id': [1], 'ds': pd.to_datetime(['2020-01-01']), 'y': [1.]})
assert validate_data_format(uid_in_col).equals(uid_in_col.set_index('unique_id'))
no_ds = pd.DataFrame({'unique_id': [1]})
test_fail(lambda: validate_data_format(no_ds), contains='ds column not found')
ds_not_datetime = pd.DataFrame({'unique_id': [1], 'ds': ['2020-01-01'], 'y': [1.]})
assert is_datetime64_dtype(validate_data_format(ds_not_datetime)['ds'])
if hasattr(dd, 'to_datetime'):
ds_not_datetime_dask = dd.from_pandas(ds_not_datetime, npartitions=1)
assert is_datetime64_dtype(validate_data_format(ds_not_datetime_dask).compute()['ds'])
no_y = pd.DataFrame({'unique_id': [1], 'ds': pd.to_datetime(['2020-01-01'])})
test_fail(lambda: validate_data_format(no_y), contains='y column not found')
#exporti
def _is_s3_path(path: str) -> bool:
return path.startswith('s3://')
#hide
assert _is_s3_path('s3://bucket/file')
assert not _is_s3_path('bucket/file')
#exporti
def _path_as_str(path: Union[Path, S3Path]) -> str:
if isinstance(path, S3Path):
return path.as_uri()
return str(path)
def _prefix_as_path(prefix: str) -> Union[Path, S3Path]:
return S3Path.from_uri(prefix) if _is_s3_path(prefix) else Path(prefix)
#hide
test_eq(_path_as_str(Path('data')), 'data')
test_eq(_path_as_str(S3Path('/bucket/file')), 's3://bucket/file')
test_eq(_prefix_as_path('s3://bucket/'), S3Path('/bucket/'))
test_eq(_prefix_as_path('/home/'), Path('/home/'))
#export
def read_data(config: DataConfig, is_distributed: bool) -> Frame:
"""Read data from `config.prefix/config.input`.
If we're in distributed mode, dask is used for IO; otherwise pandas."""
path = _prefix_as_path(config.prefix)
input_path = path / config.input
io_module = dd if is_distributed else pd
reader = getattr(io_module, f'read_{config.format}')
read_path = _path_as_str(input_path)
if io_module is dd and config.format is DataFormat.csv:
read_path += '/*'
data = reader(read_path)
if io_module is dd and config.format is DataFormat.parquet:
if data.index.name == 'unique_id' and is_categorical_dtype(data.index):
data.index = data.index.cat.as_known().as_ordered()
cat_cols = data.select_dtypes(include='category').columns
if not cat_cols.empty:
data = data.categorize(columns=cat_cols)
return validate_data_format(data)
series = generate_daily_series(20, 100, 200)
series_ddf = dd.from_pandas(series, npartitions=2)
for data_format in ('csv', 'parquet'):
for df in (series, series_ddf):
is_distributed = df is series_ddf
with tempfile.TemporaryDirectory() as tmpdir:
tmpdir = Path(tmpdir)
writer = getattr(df, f'to_{data_format}')
writer(tmpdir/'train')
data_cfg = DataConfig(prefix=str(tmpdir), input='train',
output='output', format=data_format)
read_df = read_data(data_cfg, is_distributed)
if is_distributed:
read_df, df = read_df.compute(), df.compute()
assert read_df.drop(columns='y').equals(df.drop(columns='y'))
np.testing.assert_allclose(read_df.y, df.y)
#exporti
def _read_dynamic(config: DataConfig) -> Optional[List[pd.DataFrame]]:
if config.dynamic is None:
return None
reader = getattr(pd, f'read_{config.format}')
input_path = _prefix_as_path(config.prefix)
dynamic_dfs = []
for fname in config.dynamic:
path = _path_as_str(input_path / fname)
kwargs = {}
if config.format is DataFormat.csv:
kwargs['parse_dates'] = ['ds']
df = reader(path, **kwargs)
dynamic_dfs.append(df)
return dynamic_dfs
def _paste_dynamic(
data: Frame, dynamic_dfs: Optional[List[pd.DataFrame]], is_distributed: bool
) -> pd.DataFrame:
if dynamic_dfs is None:
return data
data = data.reset_index()
for df in dynamic_dfs:
data = data.merge(df, how='left')
kwargs = {}
if is_distributed:
kwargs['sorted'] = True
data = data.set_index('unique_id', **kwargs)
return data
#hide
for data_format in ('csv', 'parquet'):
with tempfile.TemporaryDirectory() as tmpdir:
tmp = Path(tmpdir)
series = generate_daily_series(20, n_static_features=2, equal_ends=True)
series = series.rename(columns={'static_1': 'product_id'})
prices = generate_prices_for_series(series)
series = series.reset_index().merge(prices, how='left')
getattr(series, f'to_{data_format}')(tmp / 'train', index=False)
getattr(prices, f'to_{data_format}')(tmp / 'prices', index=False)
data_cfg = DataConfig(
prefix=tmpdir,
input='train',
output='',
format=data_format,
dynamic=['prices'],
)
dynamic_dfs = _read_dynamic(data_cfg)
assert isinstance(dynamic_dfs, list)
test_eq(len(dynamic_dfs), 1)
pd.testing.assert_frame_equal(dynamic_dfs[0], prices)
data_cfg = DataConfig(prefix='', input='', output='', format='csv')
assert _read_dynamic(data_cfg) is None
#exporti
def _instantiate_transforms(config: FeaturesConfig) -> Dict:
"""Turn the function names into the actual functions and make sure their positional arguments are in order."""
if config.lag_transforms is None:
return {}
lag_tfms = defaultdict(list)
for lag, tfms in config.lag_transforms.items():
for tfm in tfms:
if isinstance(tfm, dict):
[(tfm_name, tfm_kwargs)] = tfm.items()
else:
tfm_name, tfm_kwargs = tfm, {}
tfm_func = _available_tfms[tfm_name]
tfm_args: Tuple[Any, ...] = ()
for kwarg in _available_tfms_kwargs[tfm_name]:
if kwarg in tfm_kwargs:
tfm_args += (tfm_kwargs[kwarg],)
lag_tfms[lag].append((tfm_func, *tfm_args))
return lag_tfms
#hide
features_cfg = FeaturesConfig(freq='D',
lags=[1, 2],
lag_transforms={
1: ['expanding_mean', {'rolling_mean': {'window_size': 7}}],
2: [{'rolling_mean': {'min_samples': 2, 'window_size': 3}}]
})
test_eq(_instantiate_transforms(features_cfg),
{
1: [(expanding_mean,), (rolling_mean, 7)],
2: [(rolling_mean, 3, 2)]
})
test_eq(_instantiate_transforms(FeaturesConfig(freq='D')), {})
test_fail(
lambda: _instantiate_transforms(
FeaturesConfig(freq='D', lag_transforms={1: [{'exp_mean': {}}]})
),
contains='unexpected value; permitted:'
)
#exporti
def _fcst_from_local(model_config: ModelConfig, flow_config: Dict) -> Forecast:
module_name, model_cls = model_config.name.rsplit('.', maxsplit=1)
module = importlib.import_module(module_name)
model = getattr(module, model_cls)(**(model_config.params or {}))
ts = TimeSeries(**flow_config)
return Forecast(model, ts)
def _fcst_from_distributed(
model_config: DistributedModelConfig, flow_config: Dict
) -> DistributedForecast:
model_params = model_config.params or {}
if model_config.name is DistributedModelName.LGBMForecast:
from mlforecast.distributed.models.lgb import LGBMForecast
model = LGBMForecast(**model_params)
else:
from mlforecast.distributed.models.xgb import XGBForecast
model = XGBForecast(**model_params)
ts = TimeSeries(**flow_config)
return DistributedForecast(model, ts)
#export
def fcst_from_config(config: FlowConfig) -> Union[Forecast, DistributedForecast]:
"""Instantiate Forecast class from config."""
flow_config = config.features.dict()
flow_config['lag_transforms'] = _instantiate_transforms(config.features)
remove_keys = {'static_features', 'keep_last_n'}
flow_config = {k: v for k, v in flow_config.items() if k not in remove_keys}
if config.local is not None:
return _fcst_from_local(config.local.model, flow_config)
# because of the config validation, either local or distributed will not be None
# however mypy can't see this, hence the next assert
assert config.distributed is not None
return _fcst_from_distributed(config.distributed.model, flow_config)
with open('../sample_configs/local.yaml', 'rt') as f:
cfg = FlowConfig(**yaml.safe_load(f))
fcst = fcst_from_config(cfg)
test_eq(fcst.model.__class__.__name__, cfg.local.model.name.split('.')[-1])
model_params = fcst.model.get_params()
for param_name, param_value in cfg.local.model.params.items():
test_eq(model_params[param_name], param_value)
with Client(n_workers=2) as client:
with open('../sample_configs/distributed.yaml', 'rt') as f:
cfg = FlowConfig(**yaml.safe_load(f))
fcst = fcst_from_config(cfg)
test_eq(fcst.model.__class__.__name__, cfg.distributed.model.name)
model_params = fcst.model.get_params()
for param_name, param_value in cfg.distributed.model.params.items():
test_eq(model_params[param_name], param_value)
#hide
with Client(n_workers=2) as client:
with open('../sample_configs/distributed.yaml', 'rt') as f:
cfg = FlowConfig(**yaml.safe_load(f))
cfg.distributed.model.name = DistributedModelName('LGBMForecast')
fcst = fcst_from_config(cfg)
test_eq(fcst.model.__class__.__name__, cfg.distributed.model.name)
model_params = fcst.model.get_params()
for param_name, param_value in cfg.distributed.model.params.items():
test_eq(model_params[param_name], param_value)
#export
def perform_backtest(
fcst: Union[Forecast, DistributedForecast],
data: Frame,
config: FlowConfig,
output_path: Union[Path, S3Path],
dynamic_dfs: Optional[List[pd.DataFrame]] = None,
client: Optional[Client] = None,
) -> None:
"""Performs backtesting of `fcst` using `data` and the strategy defined in `config`.
Writes the results to `output_path`."""
if config.backtest is None:
return
data_is_dask = isinstance(data, dd_Frame)
if data_is_dask and client is None:
raise ValueError('Must provide a client when data is a dask Dataframe.')
results = fcst.backtest(
data,
config.backtest.n_windows,
config.backtest.window_size,
static_features=config.features.static_features,
dynamic_dfs=dynamic_dfs,
)
for i, result in enumerate(results):
result = result.fillna(0)
split_path = _path_as_str(output_path / f'valid_{i}')
if not data_is_dask:
split_path += f'.{config.data.format}'
writer = getattr(result, f'to_{config.data.format}')
if data_is_dask:
write_futures = writer(split_path, compute=False)
assert client is not None # mypy
client.compute(write_futures)
else:
writer(split_path)
with open(f'../sample_configs/local.yaml', 'rt') as f:
cfg = FlowConfig(**yaml.safe_load(f))
fcst = fcst_from_config(cfg)
with tempfile.TemporaryDirectory() as tmpdir:
series = generate_daily_series(20, 100, 200)
perform_backtest(fcst, series, cfg, Path(tmpdir))
#hide
with open(f'../sample_configs/local.yaml', 'rt') as f:
cfg = FlowConfig(**yaml.safe_load(f))
cfg.backtest = None
fcst = fcst_from_config(cfg)
with tempfile.TemporaryDirectory() as tmpdir:
perform_backtest(fcst, series, cfg, Path(tmpdir))
#distributed
with Client(n_workers=2) as client:
with open(f'../sample_configs/distributed.yaml', 'rt') as f:
cfg = FlowConfig(**yaml.safe_load(f))
fcst = fcst_from_config(cfg)
with tempfile.TemporaryDirectory() as tmpdir:
perform_backtest(fcst, series_ddf, cfg, Path(tmpdir), client=client)
#distributed
#hide
test_fail(lambda: perform_backtest(fcst, series_ddf, cfg, Path('.')), contains='Must provide a client')
#export
def parse_config(config_file: str) -> FlowConfig:
"""Create a `FlowConfig` object using the contents of `config_file`"""
with open(config_file, 'r') as f:
config = FlowConfig(**yaml.safe_load(f))
return config
def setup_client(config: ClusterConfig) -> Client:
"""Spins up a cluster with the specifications defined in `config` and returns a client connected to it."""
module_name, cluster_cls = config.class_name.rsplit('.', maxsplit=1)
module = importlib.import_module(module_name)
cluster = getattr(module, cluster_cls)(**config.class_kwargs)
client = Client(cluster)
n_workers = config.class_kwargs.get('n_workers', 0)
client.wait_for_workers(n_workers)
return client
#distributed
client = setup_client(cfg.distributed.cluster)
assert isinstance(client.cluster, LocalCluster)
assert len(client.scheduler_info()['workers']) == cfg.distributed.cluster.class_kwargs['n_workers']
client.cluster.close()
client.close()
```
# Perform Bayesian optimization of CrabNet hyperparameters using Ax
###### Created January 8, 2022
# Description
We use [(my fork of) CrabNet](https://github.com/sgbaird/CrabNet) to adjust various hyperparameters for the experimental band gap matbench task (`matbench_expt_gap`). We chose this task because `CrabNet` is currently (2022-01-08) listed at the top of this leaderboard (with `MODNet` just marginally worse), which is likely related to it being a composition-only dataset (`CrabNet` is a composition-only model).
The question we're asking in this additional `matbench` submission is:
**For a model whose defaults already produce state-of-the-art property prediction performance, to what extent can it benefit from hyperparameter optimization (i.e., tuning parameters such as neural network dimensions, learning rates, etc.)?**
Eventually, I plan to incorporate this into (my fork of) `CrabNet`, but for now this can serve as an illustrative example of hyperparameter optimization using Bayesian adaptive design and could certainly be adapted to other models, especially expensive-to-train models (e.g. neural networks) that have not undergone much by way of parameter tuning.
[facebook/Ax](https://github.com/facebook/Ax) is used as the backend for performing Bayesian adaptive design.
For additional files related to this `matbench` submission, see the [crabnet-hyperparameter](https://github.com/sparks-baird/crabnet-hyperparameter) repository.
# Benchmark name
Matbench v0.1
# Package versions
- ax_platform==0.2.3
- crabnet==1.2.1
- scikit_learn==1.0.2
- matbench==0.5
- kaleido==0.2.1
# Algorithm description
Use Ax Bayesian adaptive design to simultaneously optimize 23 hyperparameters of CrabNet. `100` sequential design iterations were used, and parameters were chosen based on a combination of intuition and algorithm/data constraints (e.g., elemental featurizers that were missing elements contained in the dataset were removed). The first `46` iterations (`23*2`, i.e., two per parameter) used Sobol sampling to create a rough initial model, while the remaining `54` iterations were Bayesian adaptive design iterations. For the inner loops (where hyperparameter optimization is performed), the average MAE across the *five inner folds* was used as Ax's objective to minimize. The best parameter set was then trained on all the inner fold data and used to predict on the test set (unknown during hyperparameter optimization). This is nested cross-validation (CV), and is computationally expensive. See [automatminer: running a benchmark](https://hackingmaterials.lbl.gov/automatminer/advanced.html#running-a-benchmark) for more information on nested CV.
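The cost of this nested-CV scheme can be sketched with a plain-Python skeleton (counters only, no actual model training; the loop bounds are the ones stated in this description):

```python
N_OUTER_FOLDS = 5   # matbench outer folds
N_TRIALS = 100      # Ax design iterations per outer fold (Sobol + Bayesian)
N_INNER_FOLDS = 5   # inner CV folds evaluated per trial

fits = 0
for outer in range(N_OUTER_FOLDS):
    for trial in range(N_TRIALS):
        for inner in range(N_INNER_FOLDS):
            fits += 1  # one CrabNet instantiation + fit per inner fold

# 5 * 100 * 5 = 2500 fits, plus one final refit per outer fold
# on all of its inner-fold data using the best parameter set.
assert fits == 2500
```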
## Imports
```
import pprint
from os.path import join
from pathlib import Path
import numpy as np
import pandas as pd
import plotly.graph_objects as go
import gc
import torch
from ax.storage.json_store.save import save_experiment
from ax.plot.trace import optimization_trace_single_method
from ax.service.managed_loop import optimize
import crabnet
from crabnet.train_crabnet import get_model
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold
from matbench.bench import MatbenchBenchmark
```
## Setup
`dummy` lets you swap between a fast run and a more comprehensive run. The more comprehensive run was used for this matbench submission.
```
dummy = False
if dummy:
n_splits = 2
total_trials = 2
else:
n_splits = 5
total_trials = 100
```
Specify directories where you want to save things and make sure they exist.
```
# create dir https://stackoverflow.com/a/273227/13697228
experiment_dir = "experiments"
figure_dir = "figures"
Path(experiment_dir).mkdir(parents=True, exist_ok=True)
Path(figure_dir).mkdir(parents=True, exist_ok=True)
```
## Helper Functions
### matplotlibify
The following code makes a Plotly figure look more like a matplotlib figure to make it easier to include in a manuscript.
```
def matplotlibify(fig, size=24, width_inches=3.5, height_inches=3.5, dpi=142):
# make it look more like matplotlib
# modified from: https://medium.com/swlh/formatting-a-plotly-figure-with-matplotlib-style-fa56ddd97539
font_dict = dict(family="Arial", size=size, color="black")
fig.update_layout(
font=font_dict,
plot_bgcolor="white",
width=width_inches * dpi,
height=height_inches * dpi,
margin=dict(r=40, t=20, b=10),
)
fig.update_yaxes(
showline=True, # add line at x=0
linecolor="black", # line color
linewidth=2.4, # line size
ticks="inside", # ticks inside axis
tickfont=font_dict, # tick label font
mirror="allticks", # add ticks to top/right axes
tickwidth=2.4, # tick width
tickcolor="black", # tick color
)
fig.update_xaxes(
showline=True,
showticklabels=True,
linecolor="black",
linewidth=2.4,
ticks="inside",
tickfont=font_dict,
mirror="allticks",
tickwidth=2.4,
tickcolor="black",
)
fig.update(layout_coloraxis_showscale=False)
width_default_px = fig.layout.width
targ_dpi = 300
scale = width_inches / (width_default_px / dpi) * (targ_dpi / dpi)
return fig, scale
```
### correct_parameterization
The following function is very important for interfacing Ax with CrabNet. The input `parameterization` (a Python `dict`) and the output `parameterization` (please excuse the reuse of the name) carry essentially the same information, but Ax needs its own representation of the parameter space while CrabNet expects parameters in a particular form. `correct_parameterization` simply converts from the Ax representation of CrabNet parameters to the CrabNet API for parameters.
```
def correct_parameterization(parameterization):
pprint.pprint(parameterization)
parameterization["out_hidden"] = [
parameterization.get("out_hidden4") * 8,
parameterization.get("out_hidden4") * 4,
parameterization.get("out_hidden4") * 2,
parameterization.get("out_hidden4"),
]
parameterization.pop("out_hidden4")
parameterization["betas"] = (
parameterization.get("betas1"),
parameterization.get("betas2"),
)
parameterization.pop("betas1")
parameterization.pop("betas2")
d_model = parameterization["d_model"]
# make heads even if it is odd (because d_model must be even)
heads = parameterization["heads"]
if np.mod(heads, 2) != 0:
heads = heads + 1
parameterization["heads"] = heads
# NOTE: d_model must be divisible by heads
d_model = parameterization["heads"] * round(d_model / parameterization["heads"])
parameterization["d_model"] = d_model
parameterization["pos_scaler_log"] = (
1 - parameterization["emb_scaler"] - parameterization["pos_scaler"]
)
parameterization["epochs"] = parameterization["epochs_step"] * 4
return parameterization
```
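The `heads`/`d_model` adjustment above can be exercised in isolation. This standalone sketch copies just that arithmetic (the function name is mine, not part of CrabNet or Ax):

```python
def fix_heads_and_d_model(d_model: int, heads: int):
    """Mirror of the adjustment in `correct_parameterization` (illustrative)."""
    if heads % 2 != 0:  # make heads even, matching the logic above
        heads += 1
    # d_model must be divisible by heads: round to the nearest multiple
    d_model = heads * round(d_model / heads)
    return d_model, heads

assert fix_heads_and_d_model(512, 8) == (512, 8)  # already compatible
assert fix_heads_and_d_model(500, 3) == (500, 4)  # 3 -> 4 heads; 500 = 4 * 125
assert fix_heads_and_d_model(100, 7) == (96, 8)   # 7 -> 8 heads; 8 * round(12.5) = 96
```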
## Hyperparameter Optimization (and matbench recording)
Note that `crabnet_mae` is defined inside of the loop so that the `train_val_df` object is updated for each outer fold. This is due to a limitation of Ax where `crabnet_mae` (the objective function) can *only* take `parameterization` as an input (no additional parameters, kwargs, etc.). This was my workaround. Note that for the inner loop, where Ax optimizes the CrabNet parameters for each outer fold, five inner folds are used, and the objective that Ax optimizes is the average MAE across the five inner folds. Additionally, hyperparameter optimization plots are produced for each of the five outer folds (Ax objective vs. iteration) using `100` sequential iterations. In other words, for a single `matbench` task (this notebook), `CrabNet` undergoes model instantiation and fitting `5*5*100 --> 2500` times. This took a few days to run on a single machine.
```
mb = MatbenchBenchmark(autoload=False, subset=["matbench_expt_gap"])
kf = KFold(n_splits=n_splits, shuffle=True, random_state=18012019)

task = list(mb.tasks)[0]
task.load()
for i, fold in enumerate(task.folds):
    train_inputs, train_outputs = task.get_train_and_val_data(fold)

    # TODO: treat train_val_df as Ax fixed_parameter
    train_val_df = pd.DataFrame(
        {"formula": train_inputs.values, "target": train_outputs.values}
    )
    if dummy:
        train_val_df = train_val_df[:100]

    def crabnet_mae(parameterization):
        """Compute the mean absolute error of a CrabNet model.

        Assumes that `train_val_df` is predefined.

        Parameters
        ----------
        parameterization : dict
            Dictionary of the parameters passed to `get_model()` after some
            slight modification.

        Returns
        -------
        results : dict
            Dictionary of `{"mae": mae}`, where `mae` is the mean absolute
            error of the CrabNet model averaged over the inner folds.
        """
        parameterization = correct_parameterization(parameterization)
        mae = 0.0
        for train_index, val_index in kf.split(train_val_df):
            train_df, val_df = (
                train_val_df.loc[train_index],
                train_val_df.loc[val_index],
            )
            crabnet_model = get_model(
                mat_prop="expt_gap",
                train_df=train_df,
                learningcurve=False,
                force_cpu=False,
                **parameterization
            )
            val_true, val_pred, val_formulas, val_sigma = crabnet_model.predict(val_df)
            # rmse = mean_squared_error(val_true, val_pred, squared=False)
            mae = mae + mean_absolute_error(val_true, val_pred)
            # deallocate CUDA memory
            # https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530/28
            del crabnet_model
            gc.collect()
            torch.cuda.empty_cache()
        mae = mae / n_splits
        results = {"mae": mae}
        return results

    best_parameters, values, experiment, model = optimize(
        parameters=[
            {"name": "batch_size", "type": "range", "bounds": [32, 256]},
            {"name": "fudge", "type": "range", "bounds": [0.0, 0.1]},
            {"name": "d_model", "type": "range", "bounds": [100, 1024]},
            {"name": "N", "type": "range", "bounds": [1, 10]},
            {"name": "heads", "type": "range", "bounds": [1, 10]},
            {"name": "out_hidden4", "type": "range", "bounds": [32, 512]},
            {"name": "emb_scaler", "type": "range", "bounds": [0.0, 1.0]},
            {"name": "pos_scaler", "type": "range", "bounds": [0.0, 1.0]},
            {"name": "bias", "type": "choice", "values": [False, True]},
            {"name": "dim_feedforward", "type": "range", "bounds": [1024, 4096]},
            {"name": "dropout", "type": "range", "bounds": [0.0, 1.0]},
            # jarvis and oliynyk don't have enough elements
            # ptable contains str, which isn't a handled case
            {
                "name": "elem_prop",
                "type": "choice",
                "values": [
                    "mat2vec",
                    "magpie",
                    "onehot",
                ],  # "jarvis", "oliynyk", "ptable"
            },
            {"name": "epochs_step", "type": "range", "bounds": [5, 20]},
            {"name": "pe_resolution", "type": "range", "bounds": [2500, 10000]},
            {"name": "ple_resolution", "type": "range", "bounds": [2500, 10000]},
            {
                "name": "criterion",
                "type": "choice",
                "values": ["RobustL1", "RobustL2"],
            },
            {"name": "lr", "type": "range", "bounds": [0.0001, 0.006]},
            {"name": "betas1", "type": "range", "bounds": [0.5, 0.9999]},
            {"name": "betas2", "type": "range", "bounds": [0.5, 0.9999]},
            {"name": "eps", "type": "range", "bounds": [0.0000001, 0.0001]},
            {"name": "weight_decay", "type": "range", "bounds": [0.0, 1.0]},
            # {"name": "adam", "type": "choice", "values": [False, True]},  # issues with onehot
            # {"name": "min_trust", "type": "range", "bounds": [0.0, 1.0]},  # issues with onehot
            {"name": "alpha", "type": "range", "bounds": [0.0, 1.0]},
            {"name": "k", "type": "range", "bounds": [2, 10]},
        ],
        experiment_name="crabnet-hyperparameter",
        evaluation_function=crabnet_mae,
        objective_name="mae",
        minimize=True,
        parameter_constraints=["betas1 <= betas2", "emb_scaler + pos_scaler <= 1"],
        total_trials=total_trials,
    )

    print(best_parameters)
    print(values)

    experiment_fpath = join(experiment_dir, "experiment" + str(i) + ".json")
    save_experiment(experiment, experiment_fpath)
    # TODO: save plot, save experiment

    test_inputs, test_outputs = task.get_test_data(fold, include_target=True)
    test_df = pd.DataFrame({"formula": test_inputs, "target": test_outputs})

    default_model = get_model(
        mat_prop="expt_gap",
        train_df=train_val_df,
        learningcurve=False,
        force_cpu=False,
    )
    default_true, default_pred, default_formulas, default_sigma = default_model.predict(
        test_df
    )
    default_mae = mean_absolute_error(default_true, default_pred)
    # deallocate CUDA memory
    # https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530/28
    del default_model
    gc.collect()
    torch.cuda.empty_cache()

    best_parameterization = correct_parameterization(best_parameters)
    test_model = get_model(
        mat_prop="expt_gap",
        train_df=train_val_df,
        learningcurve=False,
        force_cpu=False,
        **best_parameterization
    )
    # TODO: update CrabNet predict function to allow for no target specified
    test_true, test_pred, test_formulas, test_sigma = test_model.predict(test_df)
    test_mae = mean_absolute_error(test_true, test_pred)
    # deallocate CUDA memory
    # https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530/28
    del test_model
    gc.collect()
    torch.cuda.empty_cache()

    trials = experiment.trials.values()
    best_objectives = np.array([[trial.objective_mean for trial in trials]])
    parameter_strs = [
        pprint.pformat(trial.arm.parameters).replace("\n", "<br>") for trial in trials
    ]
    best_objective_plot = optimization_trace_single_method(
        y=best_objectives,
        optimization_direction="minimize",
        ylabel="MAE (eV)",
        hover_labels=parameter_strs,
        plot_trial_points=True,
    )
    figure_fpath = join(figure_dir, "best_objective_plot_" + str(i))

    data = best_objective_plot[0]["data"]
    data.append(
        go.Scatter(
            x=(1, total_trials),
            y=(default_mae, default_mae),
            mode="lines",
            line={"dash": "dash"},
            name="default MAE",
            yaxis="y1",
        )
    )
    data.append(
        go.Scatter(
            x=(1, total_trials),
            y=(test_mae, test_mae),
            mode="lines",
            line={"dash": "dash"},
            name="best model test MAE",
            yaxis="y1",
        )
    )
    layout = best_objective_plot[0]["layout"]
    fig = go.Figure({"data": data, "layout": layout})
    fig.show()
    fig.write_html(figure_fpath + ".html")
    fig.to_json(figure_fpath + ".json")

    fig.update_layout(
        legend=dict(
            font=dict(size=16),
            yanchor="top",
            y=0.99,
            xanchor="right",
            x=0.99,
            bgcolor="rgba(0,0,0,0)",
        )
    )
    fig, scale = matplotlibify(fig)
    fig.write_image(figure_fpath + ".png")

    task.record(fold, test_pred, params=best_parameterization)
```
## Export matbench file
```
my_metadata = {"algorithm_version": crabnet.__version__}
mb.add_metadata(my_metadata)
mb.to_file("expt_gap_benchmark.json.gz")
```
## Static Plots
For convenience (and so you don't have to run this notebook over the course of a few days), objective vs. iteration static plots are given for each of the five outer folds. If you'd like a more interactive experience, open the corresponding HTML files in your browser of choice from [`crabnet-hyperparameter/figures`](https://github.com/sparks-baird/crabnet-hyperparameter/tree/main/figures). Note that the first 46 (`2*23 parameters`) iterations are [SOBOL iterations](https://en.wikipedia.org/wiki/Sobol_sequence) (quasi-random sequence), and the remaining 54 are actual Bayesian optimization iterations.
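For intuition about what "quasi-random" means here, the snippet below generates the 1-D van der Corput sequence, the simplest member of the low-discrepancy family that Sobol sequences belong to. This is illustrative only; Ax's actual Sobol generator is multi-dimensional and more elaborate.

```python
# Van der Corput sequence in base 2: each point is obtained by reversing the
# binary digits of the index about the radix point. Successive points fill
# [0, 1) evenly rather than clumping like pseudo-random draws.
def van_der_corput(n, base=2):
    """Return the n-th point (n >= 1) of the base-`base` van der Corput sequence."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

print([van_der_corput(i) for i in range(1, 5)])  # [0.5, 0.25, 0.75, 0.125]
```

Note how the first few points land at 1/2, then the quarters, then the eighths, progressively refining coverage of the unit interval.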
In the last four out of five folds, the hyperparameter optimization results in a slightly better test MAE (i.e. MAE for data which was unknown during the hyperparameter optimization).
### Objective vs. iteration for outer fold 0
The best model test MAE is somewhat worse than the default test MAE, and both are close to the best validation MAE.

### Objective vs. iteration for outer fold 1
The best model test MAE is slightly better than the default test MAE.

### Objective vs. iteration for outer fold 2
The best model test MAE is somewhat better than the default test MAE.

### Objective vs. iteration for outer fold 3
The best model test MAE is somewhat better than the default test MAE.

### Objective vs. iteration for outer fold 4

# Team 6a - Final Project - Phase 3
## Title:
US and Japan YouTube trending videos
## Problem:
Our goal is to analyze US and Japan YouTube trending videos and show how video categories and country culture correlate with a video's popularity in 2018 and 2020.
We will analyze the characteristics of trending videos, including category, view and like counts, and trending time (season/weekday/month), to identify changes in preference.
## Data source:
Initially, we had four different tables with the videos information and two tables that contain the video categories information.
`ba775-team-6a.youtube.US_youtube_trending_data`,
`ba775-team-6a.youtube.US_youtube_trending_data_past`,
`ba775-team-6a.youtube.JP_youtube_trending_data`,
`ba775-team-6a.youtube.JP_youtube_trending_data_past`,
`ba775-team-6a.youtube.US_video_categories`,
`ba775-team-6a.youtube.JP_video_categories`
We decided to put all the information in one table which is the one that we used in our queries and visualizations:
`ba775-team-6a.youtube.Youtube_trending_videos`
## Reference to datasource:
This data source was collected from YouTube and presents a list of the top trending videos on the platform. To determine the year's top-trending videos, YouTube uses a combination of factors such as number of views, shares, comments, and likes. We accessed the data sources via the Kaggle data notebooks "Trending YouTube Video Statistics" and "YouTube Trending Video Dataset".
Links:
https://www.kaggle.com/datasnaek/youtube-new
https://www.kaggle.com/rsrishav/youtube-trending-video-dataset
## Get to know the dataset
```
%%bigquery
##Total records in our dataset
SELECT
COUNT(*) as Total_records
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
%%bigquery
##Total records in each country
SELECT
Country,
EXTRACT(year from publishedat) AS YEAR,
COUNT(*) as Total_records
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
GROUP BY
Country,
YEAR
ORDER BY
Country, YEAR DESC
%%bigquery
##How many video categories in each country
SELECT
Country,
COUNT(DISTINCT categoryid) as Total_categories
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
GROUP BY
Country
ORDER BY
Country DESC
%%bigquery
##Info about the published date
SELECT
Country,
EXTRACT(year from publishedat) AS YEAR,
COUNT(DISTINCT EXTRACT(date FROM publishedat)) AS COUNT_DATE,
MIN(EXTRACT(date FROM publishedat)) AS MIN_DATE,
MAX(EXTRACT(date FROM publishedat)) AS MAX_DATE
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
GROUP BY
Country, YEAR
ORDER BY
YEAR DESC
%%bigquery
##Info about the trending_date
SELECT
Country,
EXTRACT(year from trending_date) AS YEAR,
COUNT(DISTINCT EXTRACT(date FROM trending_date)) AS COUNT_DATE,
MIN(EXTRACT(date FROM trending_date)) AS MIN_DATE,
MAX(EXTRACT(date FROM trending_date)) AS MAX_DATE
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
GROUP BY
Country, YEAR
ORDER BY
YEAR DESC
```
## General Questions:
#### 1. What are the top 5 most popular video categories in United States for 2018 and 2020?
```
%%bigquery
WITH us_top_videos AS
(
SELECT category_id, year_published,
(ROW_NUMBER() OVER(Partition by year_published
ORDER BY view_count DESC)) AS Count_row
FROM
(
SELECT DISTINCT US_videos_past.category_Id,
SUM(US_videos_past.views) AS view_count,
EXTRACT(year FROM US_videos_past.publish_time) AS year_published
FROM `ba775-team-6a.youtube.US_youtube_trending_data_past` AS US_videos_past
WHERE
EXTRACT(year FROM US_videos_past.publish_time) = 2018
GROUP BY
category_Id,
year_published
ORDER BY
view_count DESC
)
UNION ALL
SELECT categoryid, year_published,
(ROW_NUMBER() OVER(Partition by year_published
ORDER BY view_count DESC)) AS Count_row
FROM
(
SELECT DISTINCT US_videos.categoryId,
SUM(US_videos.view_count) AS view_count,
EXTRACT(year FROM US_videos.publishedat) AS year_published
FROM `ba775-team-6a.youtube.US_youtube_trending_data` AS US_videos
WHERE
EXTRACT(year FROM US_videos.publishedat) = 2020
GROUP BY
categoryId,
year_published
ORDER BY
view_count DESC
)
)
SELECT category_id, video_categories.title, year_published
FROM us_top_videos
INNER JOIN `ba775-team-6a.youtube.US_video_categories` AS video_categories
ON category_id = video_categories.id
WHERE
count_row >=1 and count_row <=5
ORDER BY
year_published DESC
```
#### 2. What are the top 5 most popular video categories in Japan for 2018 and 2020?
```
%%bigquery
WITH JP_top_videos AS
(
SELECT category_id, year_published,
(ROW_NUMBER() OVER(Partition by year_published
ORDER BY view_count DESC)) AS Count_row
FROM
(
SELECT DISTINCT JP_videos_past.category_Id,
SUM(JP_videos_past.views) AS view_count,
EXTRACT(year FROM JP_videos_past.publish_time) AS year_published
FROM `ba775-team-6a.youtube.JP_youtube_trending_data_past` AS JP_videos_past
WHERE
EXTRACT(year FROM JP_videos_past.publish_time) = 2018
GROUP BY
category_Id,
year_published
ORDER BY
view_count DESC)
UNION ALL
SELECT categoryid, year_published,
(ROW_NUMBER() OVER(Partition by year_published
ORDER BY view_count DESC)) AS Count_row
FROM
(
SELECT DISTINCT JP_videos.categoryId,
SUM(JP_videos.view_count) AS view_count,
EXTRACT(year FROM JP_videos.publishedat) AS year_published
FROM `ba775-team-6a.youtube.JP_youtube_trending_data` AS JP_videos
WHERE
EXTRACT(year FROM JP_videos.publishedat) = 2020
GROUP BY
categoryId,
year_published
ORDER BY
view_count DESC
)
)
SELECT category_id, video_categories.title, year_published
FROM JP_top_videos
INNER JOIN `ba775-team-6a.youtube.JP_video_categories` AS video_categories
ON category_id = video_categories.id
WHERE
count_row >=1 and count_row <=5
ORDER BY
year_published DESC
```
#### 3. What are the top 10 most popular music videos for 2020?
```
%%bigquery
SELECT video_id,
title,
country
FROM(
SELECT DISTINCT US_video_data.video_id,
US_video_data.title,
US_video_data.view_count AS view_count,
'United States' AS country
FROM `ba775-team-6a.youtube.US_youtube_trending_data` AS US_video_data
WHERE
US_video_data.categoryId = 10 --Music
UNION ALL
SELECT DISTINCT JP_video_data.video_id,
JP_video_data.title,
JP_video_data.view_count AS view_count,
'Japan' AS country
FROM `ba775-team-6a.youtube.JP_youtube_trending_data` AS JP_video_data
WHERE
JP_video_data.categoryId = 10 --Music
) AS A
GROUP BY
video_id,
title,
country
ORDER BY
SUM(view_count) DESC
LIMIT 10
```
### Top 5 Most Popular Music Videos in Each Country
```
%%bigquery
WITH top_videos AS
(
SELECT
title, country,
(ROW_NUMBER() OVER(Partition by country
ORDER BY view_count DESC)) AS Count_row
FROM(
SELECT DISTINCT
US_video_data.title,
SUM(US_video_data.view_count) AS view_count,
Country
FROM `ba775-team-6a.youtube.Youtube_trending_videos` AS US_video_data
WHERE
US_video_data.categoryId = 10 --Music
AND Country = 'United States'
GROUP BY
US_video_data.title,
Country
ORDER BY
view_count DESC
)
UNION ALL
SELECT
title, country,
(ROW_NUMBER() OVER(Partition by country
ORDER BY view_count DESC)) AS Count_row
FROM
(
SELECT DISTINCT
JP_video_data.title,
SUM(JP_video_data.view_count) AS view_count,
Country
FROM `ba775-team-6a.youtube.Youtube_trending_videos` AS JP_video_data
WHERE
JP_video_data.categoryId = 10 --Music
AND JP_video_data.Country = 'Japan'
GROUP BY
title,
Country
ORDER BY
view_count DESC
)
)
SELECT *
FROM top_videos
WHERE
count_row >=1 and count_row <=5
ORDER BY
Country DESC
```
#### 4. How many videos are trending on YouTube in both Japan and the US for 2018 and 2020?
```
%%bigquery
SELECT year,
COUNT(DISTINCT video_id) AS Amount_videos
FROM
(SELECT
EXTRACT(year from JP.publishedAt) AS year,
--count(DISTINCT JP.video_id)
JP.video_id
FROM `ba775-team-6a.youtube.JP_youtube_trending_data` AS JP
INNER JOIN `ba775-team-6a.youtube.US_youtube_trending_data` AS US
ON US.video_id = JP.video_id
AND EXTRACT(year from US.publishedat) = 2020
WHERE EXTRACT(year from JP.publishedAt) = 2020
UNION ALL
SELECT
EXTRACT(year from JP_past.publish_time) AS year,
JP_past.video_id
FROM `ba775-team-6a.youtube.JP_youtube_trending_data_past` AS JP_past
INNER JOIN `ba775-team-6a.youtube.US_youtube_trending_data_past` AS US_past
ON US_past.video_id = JP_past.video_id
AND EXTRACT(year from US_past.publish_time) = 2018
WHERE EXTRACT(year from JP_past.publish_time) = 2018
)
GROUP BY
year
ORDER BY
year DESC
```
### Total Amount of Trending Videos in Both Japan and the US (Ratio)
```
%%bigquery
with oldvideo as (select (amount_video/sum_video) as video_ratio, published_year
from
(with oldcount as (with usvideo as
(select *
from (select title, extract(year from publishedAt) as published_year, country
from `ba775-team-6a.youtube.Youtube_trending_videos`
where extract(year from publishedAt)=2018)
where country='United States')
,jpvideo as
(select *
from (select title, extract(year from publishedAt) as published_year, country
from `ba775-team-6a.youtube.Youtube_trending_videos`
where extract(year from publishedAt)=2018)
where country='Japan')
select count(u.title) as amount_video, u.published_year
from usvideo as u
inner join jpvideo as j
on u.title=j.title
group by u.published_year)
, oldsum as (select count(title) as sum_video, extract(year from publishedAt) as published_year from `ba775-team-6a.youtube.Youtube_trending_videos` where extract(year from publishedAt)=2018 group by extract(year from publishedAt))
select oc.published_year, oc.amount_video, os.sum_video
from oldsum as os
inner join oldcount as oc
on os.published_year = oc.published_year))
,newvideo as ((select (amount_video/sum_video) as video_ratio, published_year
from
(with oldcount as (with usvideo as
(select *
from (select title, extract(year from publishedAt) as published_year, country
from `ba775-team-6a.youtube.Youtube_trending_videos`
where extract(year from publishedAt)=2020)
where country='United States')
,jpvideo as
(select *
from (select title, extract(year from publishedAt) as published_year, country
from `ba775-team-6a.youtube.Youtube_trending_videos`
where extract(year from publishedAt)=2020)
where country='Japan')
select count(u.title) as amount_video, u.published_year
from usvideo as u
inner join jpvideo as j
on u.title=j.title
group by u.published_year)
, oldsum as (select count(title) as sum_video, extract(year from publishedAt) as published_year from `ba775-team-6a.youtube.Youtube_trending_videos` where extract(year from publishedAt)=2020 group by extract(year from publishedAt))
select oc.published_year, oc.amount_video, os.sum_video
from oldsum as os
inner join oldcount as oc
on os.published_year = oc.published_year)))
select *
from oldvideo
union all
select *
from newvideo
```
#### 5. For the videos trending in both Japan and the US, which categories do they belong to, and how many videos fall in each?
```
%%bigquery
SELECT JP.CategoryId, JP_categories.Title, COUNT(DISTINCT JP.video_id) AS Amount_videos
FROM `ba775-team-6a.youtube.Youtube_trending_videos` AS US
INNER JOIN `ba775-team-6a.youtube.Youtube_trending_videos` AS JP
ON JP.video_id = US.video_id
AND JP.Country = 'Japan'
AND EXTRACT(year from JP.publishedAt) = 2020
INNER JOIN `ba775-team-6a.youtube.JP_video_categories` AS JP_categories
ON JP_categories.id = JP.categoryid
WHERE
EXTRACT(year from US.publishedAt) = 2020
AND US.Country = 'United States'
GROUP BY
JP.categoryId, JP_categories.Title
ORDER BY
Amount_videos DESC
```
#### 6. Which top 5 channels have the highest view counts in the United States for 2018 and 2020?
```
%%bigquery
SELECT channelTitle,sum(view_count) as total_view FROM `ba775-team-6a.youtube.Youtube_trending_videos`
where Country = 'United States'
group by channelTitle
order by total_view DESC
limit 5
```
#### 7. Which top 5 channels have the highest view counts in Japan for 2018 and 2020?
```
%%bigquery
SELECT channelTitle,sum(view_count) as total_view FROM `ba775-team-6a.youtube.Youtube_trending_videos`
where Country = 'Japan'
group by channelTitle
order by total_view DESC
limit 5
```
#### 8. Average view, like, dislike, and comment counts per trending video in the U.S. for 2018 and 2020
```
%%bigquery
SELECT
EXTRACT(year from publishedAt) AS Year_data,
SUM(view_count) / COUNT(DISTINCT video_id) AS avg_view_count,
SUM(likes) / COUNT(DISTINCT video_id) AS avg_likes,
SUM(dislikes) / COUNT(DISTINCT video_id) AS avg_dislikes,
SUM(comment_count) / COUNT(DISTINCT video_id) AS avg_comment_count
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
WHERE
Country = 'United States'
GROUP BY
Year_data
```
#### 9. Average view, like, dislike, and comment counts per trending video in Japan for 2018 and 2020
```
%%bigquery
SELECT
EXTRACT(year from publishedAt) AS Year_data,
SUM(view_count) / COUNT(DISTINCT video_id) AS avg_view_count,
SUM(likes) / COUNT(DISTINCT video_id) AS avg_likes,
SUM(dislikes) / COUNT(DISTINCT video_id) AS avg_dislikes,
SUM(comment_count) / COUNT(DISTINCT video_id) AS avg_comment_count
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
WHERE
Country = 'Japan'
GROUP BY
Year_data
```
#### 10) Top 10 video categories for US trending videos for 2018 and 2020
```
%%bigquery
SELECT categoryId, video_categories.title AS video_category, SUM(view_count) AS Total_view_count
FROM (
SELECT DISTINCT US_videos.categoryId,
US_videos.view_count AS view_count,
EXTRACT(year FROM US_videos.publishedAt) AS year_published
FROM `ba775-team-6a.youtube.US_youtube_trending_data` AS US_videos
UNION DISTINCT
SELECT DISTINCT US_videos_past.category_Id,
US_videos_past.views AS view_count,
EXTRACT(year FROM US_videos_past.publish_time) AS year_published
FROM `ba775-team-6a.youtube.US_youtube_trending_data_past` AS US_videos_past
WHERE
EXTRACT(year FROM US_videos_past.publish_time) = 2018
) AS A
INNER JOIN `ba775-team-6a.youtube.US_video_categories` AS video_categories
ON A.categoryId = video_categories.id
GROUP BY
categoryId,
video_categories.title
ORDER BY
Total_view_count DESC
LIMIT 10
```
#### 11) Top 10 video categories for Japan trending videos for 2018 and 2020
```
%%bigquery
SELECT categoryId, video_categories.title AS video_category, SUM(view_count) AS Total_view_count
FROM (
SELECT DISTINCT JP_videos.categoryId,
JP_videos.view_count AS view_count,
EXTRACT(year FROM JP_videos.publishedAt) AS year_published
FROM `ba775-team-6a.youtube.JP_youtube_trending_data` AS JP_videos
UNION DISTINCT
SELECT DISTINCT JP_videos_past.category_Id,
JP_videos_past.views AS view_count,
EXTRACT(year FROM JP_videos_past.publish_time) AS year_published
FROM `ba775-team-6a.youtube.JP_youtube_trending_data_past` AS JP_videos_past
WHERE
EXTRACT(year FROM JP_videos_past.publish_time) = 2018
) AS A
INNER JOIN `ba775-team-6a.youtube.JP_video_categories` AS video_categories
ON A.categoryId = video_categories.id
GROUP BY
categoryId,
video_categories.title
ORDER BY
Total_view_count DESC
LIMIT 10
```
#### 12) When do US channel owners prefer to publish their videos?
#### 13) In 2018, how many days did each video category take to go viral?
```
%%bigquery
SELECT categoryId,
CASE WHEN Country = 'United States' THEN US_categories.title
WHEN Country = 'Japan' THEN JP_categories.title
END AS category_name,
SUM(DATE_DIFF(CAST(trending_date AS DATE), CAST(publishedAt AS DATE), day))/COUNT(categoryid) AS difference_days,
COUNT(categoryId) AS Total_in_category,
country
FROM `ba775-team-6a.youtube.Youtube_trending_videos` AS Youtube_tendring_videos
LEFT JOIN `ba775-team-6a.youtube.US_video_categories` AS US_categories
ON Youtube_tendring_videos.categoryid = US_categories.id
AND Youtube_tendring_videos.Country = 'United States'
LEFT JOIN `ba775-team-6a.youtube.JP_video_categories` AS JP_categories
ON Youtube_tendring_videos.categoryid = JP_categories.id
AND Youtube_tendring_videos.Country = 'Japan'
WHERE
EXTRACT(year from publishedat) = 2018
GROUP BY
categoryId,
category_name,
country
ORDER BY
difference_days ASC
```
#### 14) In 2020, how many days did each video category take to go viral?
```
%%bigquery
SELECT categoryId,
CASE WHEN Country = 'United States' THEN US_categories.title
WHEN Country = 'Japan' THEN JP_categories.title
END AS category_name,
SUM(DATE_DIFF(CAST(trending_date AS DATE), CAST(publishedAt AS DATE), day))/COUNT(categoryid) AS difference_days,
COUNT(categoryId) AS Total_in_category,
country
FROM `ba775-team-6a.youtube.Youtube_trending_videos` AS Youtube_tendring_videos
LEFT JOIN `ba775-team-6a.youtube.US_video_categories` AS US_categories
ON Youtube_tendring_videos.categoryid = US_categories.id
AND Youtube_tendring_videos.Country = 'United States'
LEFT JOIN `ba775-team-6a.youtube.JP_video_categories` AS JP_categories
ON Youtube_tendring_videos.categoryid = JP_categories.id
AND Youtube_tendring_videos.Country = 'Japan'
WHERE
EXTRACT(year from publishedat) = 2020
GROUP BY
categoryId,
category_name,
country
ORDER BY
difference_days ASC
```
#### 15. In the U.S., which is the most popular day of the week to upload YouTube videos, based on the US trending YouTube videos in 2018 and 2020?
```
%%bigquery
SELECT Weekday_publish, COUNT(*) AS count_videos_per_weekday, Country
FROM(
SELECT DISTINCT video_id, EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"America/Los_Angeles")) AS publish_weekday, Country,
CASE WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"America/Los_Angeles")) = 1 THEN 'Sunday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"America/Los_Angeles")) = 2 THEN 'Monday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"America/Los_Angeles")) = 3 THEN 'Tuesday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"America/Los_Angeles")) = 4 THEN 'Wednesday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"America/Los_Angeles")) = 5 THEN 'Thursday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"America/Los_Angeles")) = 6 THEN 'Friday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"America/Los_Angeles")) = 7 THEN 'Saturday'
END Weekday_publish
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
WHERE Country='United States')
GROUP BY Weekday_publish,Country
ORDER BY count_videos_per_weekday DESC
```
#### 16. In Japan, which is the most popular day of the week to upload YouTube videos, based on the Japan trending YouTube videos in 2018 and 2020?
```
%%bigquery
SELECT Weekday_publish, COUNT(*) AS count_videos_per_weekday,Country,
FROM(
SELECT DISTINCT video_id, EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"Asia/Tokyo")) AS publish_weekday,Country,
CASE WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"Asia/Tokyo")) = 1 THEN 'Sunday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"Asia/Tokyo")) = 2 THEN 'Monday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"Asia/Tokyo")) = 3 THEN 'Tuesday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"Asia/Tokyo")) = 4 THEN 'Wednesday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"Asia/Tokyo")) = 5 THEN 'Thursday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"Asia/Tokyo")) = 6 THEN 'Friday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"Asia/Tokyo")) = 7 THEN 'Saturday'
END Weekday_publish
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
WHERE Country='Japan')
GROUP BY Weekday_publish,Country
ORDER BY count_videos_per_weekday DESC
```
#### 17. In the U.S., which weekday has the most trending videos on YouTube?
```
%%bigquery
SELECT Weekday_Trending,COUNT(video_id) AS trending_vidoes_count, Country
FROM(
SELECT DISTINCT video_id, EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"America/Los_Angeles")) AS publish_weekday, EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"America/Los_Angeles"))AS trending_weekday, Country,
CASE WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"America/Los_Angeles")) = 1 THEN 'Sunday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"America/Los_Angeles")) = 2 THEN 'Monday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"America/Los_Angeles")) = 3 THEN 'Tuesday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"America/Los_Angeles")) = 4 THEN 'Wednesday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"America/Los_Angeles")) = 5 THEN 'Thursday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"America/Los_Angeles")) = 6 THEN 'Friday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"America/Los_Angeles")) = 7 THEN 'Saturday'
END Weekday_Trending
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
WHERE Country = 'United States')
GROUP BY 1,3
ORDER BY Weekday_Trending
```
#### 18. In Japan, which weekday has the most trending videos on YouTube?
```
%%bigquery
SELECT Weekday_Trending,COUNT(video_id) AS trending_vidoes_count, Country
FROM(
SELECT DISTINCT video_id, EXTRACT(DAYOFWEEK FROM DATETIME(publishedAt,"Asia/Tokyo")) AS publish_weekday, EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"Asia/Tokyo"))AS trending_weekday, Country,
CASE WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"Asia/Tokyo")) = 1 THEN 'Sunday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"Asia/Tokyo")) = 2 THEN 'Monday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"Asia/Tokyo")) = 3 THEN 'Tuesday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"Asia/Tokyo")) = 4 THEN 'Wednesday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"Asia/Tokyo")) = 5 THEN 'Thursday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"Asia/Tokyo")) = 6 THEN 'Friday'
WHEN EXTRACT(DAYOFWEEK FROM DATETIME(trending_date,"Asia/Tokyo")) = 7 THEN 'Saturday'
END Weekday_Trending
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
WHERE Country = 'Japan')
GROUP BY 1,3
```
#### 19. In the U.S., which videos had the longest trending duration, and how many trending days did they have?
```
%%bigquery
WITH TMP AS (SELECT DISTINCT video_id, title, categoryId, Category_name, Country,COUNT(video_id) AS trending_days,
CASE WHEN count(video_id) >1 THEN 'multiple trending days'
WHEN count(video_id) = 1 THEN 'one trending day'
ELSE 'others' END AS Trending_duration
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
WHERE Country = 'United States'
GROUP BY 1,2,3,4,5
ORDER BY trending_days ),
TMP2 AS( SELECT DISTINCT video_id, MAX(view_count) AS max_view_count FROM `ba775-team-6a.youtube.Youtube_trending_videos` GROUP BY 1)
SELECT TMP.*, TMP2. max_view_count
FROM TMP
LEFT JOIN TMP2
ON TMP.video_id = TMP2.video_id
ORDER BY TMP.trending_days DESC, TMP2.max_view_count DESC
```
#### 20. In the U.S., how many videos had multiple trending days, and what percentage of YouTube trending videos trended for more than one day?
```
%%bigquery
SELECT COUNTIF( Trending_duration = 'multiple trending days') AS count_of_multiple_trending_days_videos, COUNTIF(Trending_duration = 'one trending day') AS count_of_one_trending_day_video, COUNTIF(Trending_duration = 'others') AS other,
ROUND(COUNTIF( Trending_duration = 'multiple trending days') / (COUNTIF( Trending_duration = 'multiple trending days') + COUNTIF(Trending_duration = 'one trending day')), 2) AS ratio_of_multiple_trending_days_videos,'United States' AS Country
FROM (
SELECT video_id, title, categoryId, Category_name, Country,COUNT(video_id) AS trending_days,
CASE WHEN count(video_id) >1 THEN 'multiple trending days'
WHEN count(video_id) = 1 THEN 'one trending day'
ELSE 'others' END AS Trending_duration
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
WHERE Country = 'United States'
GROUP BY 1,2,3,4,5
ORDER BY trending_days DESC)
```
#### 21. In Japan, which videos had the longest trending durations, and how many trending days did they have?
```
%%bigquery
WITH TMP AS (SELECT DISTINCT video_id, title, categoryId, Category_name, Country,COUNT(video_id) AS trending_days,
CASE WHEN count(video_id) >1 THEN 'multiple trending days'
WHEN count(video_id) = 1 THEN 'one trending day'
ELSE 'others' END AS Trending_duration
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
WHERE Country = 'Japan'
GROUP BY 1,2,3,4,5
ORDER BY trending_days ),
TMP2 AS( SELECT DISTINCT video_id, MAX(view_count) AS max_view_count FROM `ba775-team-6a.youtube.Youtube_trending_videos` GROUP BY 1)
SELECT TMP.*, TMP2.max_view_count
FROM TMP
LEFT JOIN TMP2
ON TMP.video_id = TMP2.video_id
ORDER BY TMP.trending_days DESC, TMP2.max_view_count DESC
```
#### 22. In Japan, how many videos had multiple trending days, and what percentage of YouTube trending videos were trending for more than one day?
```
%%bigquery
SELECT COUNTIF( Trending_duration = 'multiple trending days') AS count_of_multiple_trending_days_videos, COUNTIF(Trending_duration = 'one trending day') AS count_of_one_trending_day_video, COUNTIF(Trending_duration = 'others') AS other,
ROUND(COUNTIF( Trending_duration = 'multiple trending days') / (COUNTIF( Trending_duration = 'multiple trending days') + COUNTIF(Trending_duration = 'one trending day')), 2) AS ratio_of_multiple_trending_days_videos,'Japan' AS Country
FROM (
SELECT video_id, title, categoryId, Category_name, Country,COUNT(video_id) AS trending_days,
CASE WHEN count(video_id) >1 THEN 'multiple trending days'
WHEN count(video_id) = 1 THEN 'one trending day'
ELSE 'others' END AS Trending_duration
FROM `ba775-team-6a.youtube.Youtube_trending_videos`
WHERE Country = 'Japan'
GROUP BY 1,2,3,4,5
ORDER BY trending_days DESC )
```
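Questions 20 and 22 compute the multiple-trending-day ratio in BigQuery. For readers who want to verify the logic locally, here is a hedged pandas sketch on a made-up toy frame (the data is invented; only the column names mirror the table above):

```python
import pandas as pd

# Toy stand-in for the trending-videos table: one row per (video, trending day).
df = pd.DataFrame({
    "video_id": ["a", "a", "b", "c", "c", "c"],
    "Country": ["Japan"] * 6,
})

# Count trending days per video, then bucket into one vs. multiple days.
days = df.groupby("video_id").size()
multiple = int((days > 1).sum())
single = int((days == 1).sum())
ratio = round(multiple / (multiple + single), 2)
print(multiple, single, ratio)  # 2 1 0.67
```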
## BQML - Predict how many days it will take a video to start trending
```
%%bigquery
## Feature-engineering query used to build the training set for the model
WITH params AS (
SELECT
1 AS TRAIN,
2 AS EVAL
),
daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek),
videos AS (
SELECT
country,
categoryId,
EXTRACT(day from publishedAt) AS Publish_day,
EXTRACT(hour from publishedAt) AS Publish_hour,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK from publishedAt))] AS Publish_Weekday,
EXTRACT(year from publishedAt) AS Publish_Year,
DATE_DIFF(EXTRACT(DATE from trending_date), EXTRACT(Date from publishedAt), day) AS Total_days_to_go_trending
FROM `ba775-team-6a.youtube.Youtube_trending_videos`, daynames, params
WHERE
ratings_disabled = false
AND EXTRACT(year from publishedAt) = 2020
AND MOD(ABS(FARM_FINGERPRINT(CAST(EXTRACT(DATETIME FROM publishedAt) AS STRING))),5) = params.TRAIN
)
SELECT *
FROM videos
```
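The `MOD(ABS(FARM_FINGERPRINT(...)), 5)` expression above is a deterministic hash-based split: each row hashes to the same bucket on every run, so the train and eval sets never overlap. A rough Python sketch of the same idea (using MD5 rather than FarmHash, so bucket assignments will differ from BigQuery's):

```python
import hashlib

def split_bucket(key: str, n_buckets: int = 5) -> int:
    """Deterministically map a key to a bucket in [0, n_buckets)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_buckets

TRAIN, EVAL = 1, 2
rows = ["2020-01-01T10:00:00", "2020-02-15T08:30:00", "2020-03-03T22:05:00"]
train_rows = [r for r in rows if split_bucket(r) == TRAIN]
eval_rows = [r for r in rows if split_bucket(r) == EVAL]
# Re-running always yields identical membership, so train/eval never overlap.
```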
### BQML - Predict the number of days until a video starts trending
```
%%bigquery
#Create the model
CREATE OR REPLACE MODEL youtube.Trending_videos_model
OPTIONS
(model_type='linear_reg',
input_label_cols=['Days_to_go_trending']) AS
WITH params AS (
SELECT
1 AS TRAIN,
2 AS EVAL
),
daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek),
videos AS (
SELECT
country,
categoryId,
EXTRACT(day from publishedAt) AS Publish_day,
EXTRACT(hour from publishedAt) AS Publish_hour,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK from publishedAt))] AS Publish_Weekday,
EXTRACT(year from publishedAt) AS Publish_Year,
DATE_DIFF(EXTRACT(DATE from trending_date), EXTRACT(Date from publishedAt), day) AS Days_to_go_trending
FROM `ba775-team-6a.youtube.Youtube_trending_videos`, daynames, params
WHERE
ratings_disabled = false
AND EXTRACT(year from publishedAt) = 2020
AND MOD(ABS(FARM_FINGERPRINT(CAST(EXTRACT(DATETIME FROM publishedAt) AS STRING))),5) = params.TRAIN
)
SELECT *
FROM videos
%%bigquery
## Evaluate the model performance with RMSE
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL youtube.Trending_videos_model,
(WITH params AS (
SELECT
1 AS TRAIN,
2 AS EVAL
),
daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek),
videos AS (
SELECT
country,
categoryId,
EXTRACT(day from publishedAt) AS Publish_day,
EXTRACT(hour from publishedAt) AS Publish_hour,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK from publishedAt))] AS Publish_Weekday,
EXTRACT(year from publishedAt) AS Publish_Year,
DATE_DIFF(EXTRACT(DATE from trending_date), EXTRACT(Date from publishedAt), day) AS Days_to_go_trending
FROM `ba775-team-6a.youtube.Youtube_trending_videos`, daynames, params
WHERE
ratings_disabled = false
AND EXTRACT(year from publishedAt) = 2020
AND MOD(ABS(FARM_FINGERPRINT(CAST(EXTRACT(DATETIME FROM publishedAt) AS STRING))),5) = params.EVAL
)
SELECT *
FROM videos
))
%%bigquery
##Predict the days
SELECT
*
FROM
ML.PREDICT(MODEL youtube.Trending_videos_model,
(WITH params AS (
SELECT
1 AS TRAIN,
2 AS EVAL
),
daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek),
videos AS (
SELECT
country,
categoryId,
EXTRACT(day from publishedAt) AS Publish_day,
EXTRACT(hour from publishedAt) AS Publish_hour,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK from publishedAt))] AS Publish_Weekday,
EXTRACT(year from publishedAt) AS Publish_Year,
DATE_DIFF(EXTRACT(DATE from trending_date), EXTRACT(Date from publishedAt), day) AS Days_to_go_trending
FROM `ba775-team-6a.youtube.Youtube_trending_videos`, daynames, params
WHERE
ratings_disabled = false
AND EXTRACT(year from publishedAt) = 2020
AND MOD(ABS(FARM_FINGERPRINT(CAST(EXTRACT(DATETIME FROM publishedAt) AS STRING))),5) = params.EVAL
)
SELECT *
FROM videos))
%%bigquery
## The prediction results were saved in the BigQuery table: Trending_model_prediction
SELECT
Country,
MAX(ROUND(predicted_Days_to_go_trending,0)) AS MAX_Predicted_Days_To_Trend,
MIN(ROUND(predicted_Days_to_go_trending,0)) AS MIN_Predicted_Days_To_Trend
FROM `ba775-team-6a.youtube.Trending_model_prediction`
GROUP BY
Country
```
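The `rmse` reported by `ML.EVALUATE` above is simply the square root of the mean squared error between predicted and actual trending delays. A minimal numpy sketch (the actual/predicted values below are made up for illustration):

```python
import numpy as np

# Made-up actual vs. predicted trending delays, for illustration only.
actual = np.array([1.0, 3.0, 2.0, 5.0])
predicted = np.array([1.5, 2.5, 2.0, 4.0])
rmse = np.sqrt(np.mean((predicted - actual) ** 2))
print(round(float(rmse), 3))  # 0.612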
## Tableau Story (3 Dashboards)
We created several dashboards and combined them into a single story. Use the following link to access our story:
https://public.tableau.com/profile/maraline.torres#!/vizhome/BA775-FinalProject/Story-FinalProject?publish=yes
<a href="https://colab.research.google.com/github/ProfessorPatrickSlatraigh/CST2312/blob/main/CST2312_Class08.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **CST2312 Class 08 - Dictionaries and Tuples**
updated 27-Feb-2022 by Professor Patrick
*examples from Python for Everybody by Charles Severance at py4e.com*
---
# **10: Dictionaries**
Lists index their entries based on the position in the list
Dictionaries are like bags - no order
So we index the things we put in the dictionary with a “lookup tag” called the "key". Each key is unique. Each key is associated with a value, which can be any data type, including lists, or other dictionaries. Values in a dictionary need not be unique.
```
# create and print an empty dictionary
choco = dict()
print(choco)
choco['Hershey'] = 2
choco['Godiva'] = 3
choco['Kinder'] = 1
print(choco)
# How many Godiva do we have?
print(choco['Godiva'])
# give one Godiva choco away
choco['Godiva'] = choco['Godiva'] - 1
# How many Godiva do we have?
print(choco['Godiva'])
# first create an empty dictionary
purse = dict()
# then add key:value pairs - akin to SQL INSERTs
purse['money'] = 12 # add a k:v pair of 12 moneys (coins)
purse['candy'] = 3 # add a k:v pair of 3 candy
purse['tissues'] = 75 # add a k:v pair of 75 tissues
# what type is the variable purse we just created?
type(purse)
```
After creating an empty dictionary we can add key|value pairs to that dictionary. Then we can process the dictionary with Python statements and expressions.
```
print(purse)
# print the value for the key 'candy'
print(purse['candy'])
# print the full dictionary
print("Here are the purse dictionary key|value pairs: ", purse)
# print(purse['candy'])
print("Here is the value of 'candy' in purse: ", purse['candy'])
```
Dictionary literals use curly braces and have a list of key : value pairs
You can make an empty dictionary using empty curly braces
```
# Dictionary jjj with strings as keys
jjj = { 'chuck' : 1 , 'fred' : 42, 'jan': 100}
print(jjj)
# Empty dictionary ooo
ooo = { }
print(ooo)
```
One common use of dictionaries is counting how often we “see” something. We use the category item as a key and keep an integer counter as the value.
```
# make a dictionary counting shirts by size
shirts = dict()
# add k:v pairs with an inventory of shirts on hand
shirts['XS'] = 4
shirts['S'] = 2
shirts['M'] = 8
shirts['L'] = 12
shirts['XL'] = 6
shirts['XXL'] = 3
# how many shirts do we have by size?
print(shirts)
# Dictionary values can be used in expressions
# sell a medium shirt
shirts['M'] = shirts['M'] - 1
# how many shirts do we have by size?
print(shirts)
ccc = dict()
ccc['csev'] = 1
ccc['cwen'] = 1
print(ccc)
# Dictionary values can be used in expressions
ccc['cwen'] = ccc['cwen'] + 1
print(ccc)
```
The pattern of checking to see if a key is already in a dictionary and assuming a default value if the key is not there is so common that there is a method called get() that does this for us. Let's setup a dictionary first, then check for a key with get.
```
# create an empty dictionary to use in counting a categorical
inventory = dict()
# create a list of categorical string values to process
shirt_list = ['XS', 'S', 'XXL', 'M', 'M', 'XS', 'M', 'L', 'S', 'XL', 'XL', 'L', 'M', 'M', 'XXL']
# iterate through the list and add new keys with a value of 1 or
# increase the value of an existing key by 1
for shirt in shirt_list :
if shirt not in inventory :
inventory[shirt] = 1
else :
inventory[shirt] = inventory[shirt] + 1
print(inventory)
# create an empty dictionary to use in counting a categorical
counts = dict()
# create a list of categorical string values to process
name_list = ['csev', 'cwen', 'csev', 'zqian', 'cwen']
# iterate through the list and add new keys with a value of 1 or
# increase the value of an existing key by 1
for name in name_list :
if name not in counts :
counts[name] = 1
else :
counts[name] = counts[name] + 1
print(counts)
```
Using the counts dictionary, let's compare an explicit if/else key check with the equivalent get() call, then print the result stored in the variable x.
```
# Explicit check: look up the key with if/else
if name in counts:
x = counts[name]
else :
x = 0
# Equivalent lookup using get() with a default of 0
x = counts.get(name, 0)
# Print the resulting value returned and stored in x
print(x)
print(counts)
```
Simplified Counting: We can use get() and provide a default value of zero when the key is not yet in the dictionary - and then just add one to keep a count of instances of each key in a list. Did we just create a word-count function?
```
counts = dict()
names = ['csev', 'cwen', 'csev', 'zqian', 'cwen']
for name in names :
counts[name] = counts.get(name, 0) + 1
print(counts)
inventory = dict()
shirt_list = ['XS', 'S', 'XXL', 'M', 'M', 'XS', 'M', 'L', 'S', 'XL', 'XL', 'L', 'M', 'M', 'XXL']
for shirt in shirt_list :
inventory[shirt] = inventory.get(shirt, 0) + 1
print(inventory)
```
Let's create a means of counting words entered by the user. The general pattern to count the words in a line of text is to split the line into words, then loop through the words and use a dictionary to track the count of each word independently. (Please note that the following example is case-sensitive.)
```
counts = dict()
print('Enter a line of text:')
line = input('')
words = line.split()
print('Words:', words)
print('Counting...')
for word in words:
counts[word] = counts.get(word,0) + 1
# The following line is scaffolding to peek inside the loop
# print(word,":", counts[word])
print('Counts', counts)
```
Definite Loops and Dictionaries: Even though dictionaries are not stored in order, we can write a for loop that goes through all the entries in a dictionary - actually it goes through all of the keys in the dictionary and looks up the values.
```
counts = { 'chuck' : 1 , 'fred' : 42, 'jan': 100}
for key in counts:
print(key, counts[key])
```
You can get a list of keys, values, or items (both) from a dictionary. When we have multiple values together as an object, we refer to that object as a tuple.
```
jjj = { 'chuck' : 1 , 'fred' : 42, 'jan': 100}
print(list(jjj))
print(list(jjj.keys()))
print(list(jjj.values()))
print(list(jjj.items()))
```
---
# **11: Tuples**
Tuples are like lists. Tuples are another kind of sequence that functions much like a list - they have elements which are indexed starting at 0.
```
# Create a list and print an element of the list
x = ('Glenn', 'Sally', 'Joseph')
print(x[2])
# Create a tuple
y = ( 1, 9, 2 )
#Print the tuple
print(y)
# Print the max of the tuple
print(max(y))
```
Tuples and Assignment: We can also put a tuple on the left-hand side of an assignment statement. We can even omit the parentheses.
```
(x, y) = (4, 'fred')
print(y)
(a, b) = (99, 98)
print(a)
```
Tuples and Dictionaries: The items() method in dictionaries returns a list of (key, value) tuples.
```
d = dict()
d['csev'] = 2
d['cwen'] = 4
for (k,v) in d.items():
print(k, v)
tups = d.items()
print(tups)
```
Tuples are Comparable: The comparison operators work with tuples and other sequences. If the first item is equal, Python goes on to the next element, and so on, until it finds elements that differ.
```
print("Result #1: ", (0, 1, 2) < (5, 1, 2))
print("Result #2: ", (0, 1, 2000000) < (0, 3, 4))
print("Result #3: ", ( 'Jones', 'Sally' ) < ('Jones', 'Sam'))
print("Result #4: ", ( 'Jones', 'Sally') > ('Adams', 'Sam'))
```
Sorting Lists of Tuples:
We can take advantage of the ability to sort a list of tuples to get a sorted version of a dictionary.
First we sort the dictionary by the key using the items() method and sorted() function.
```
d = {'a':10, 'b':1, 'c':22}
d.items()
sorted(d.items())
```
Using sorted():
We can do this even more directly using the built-in function sorted that takes a sequence as a parameter and returns a sorted sequence.
```
d = {'a':10, 'b':1, 'c':22}
t = sorted(d.items())
print(t)
for k, v in sorted(d.items()):
print(k, v)
```
Sort by Values Instead of Key:
If we could construct a list of tuples of the form (value, key) we could sort by value.
We do this with a for loop that creates a list of tuples.
```
c = {'a':10, 'b':1, 'c':22}
tmp = list()
for k, v in c.items() :
tmp.append( (v, k) )
print(tmp)
tmp = sorted(tmp, reverse=True)
print(tmp)
```
Finding the top 10 most common words.
First, be sure to have access to the file romeo.txt.
```
from google.colab import drive
drive.mount('/content/gdrive')
```
Then open romeo.txt with the handle fhand and follow the Python for Everybody example.
```
fhand = open("/content/gdrive/My Drive/romeo.txt")
# fhand = open('romeo.txt') -->> see the Google Drive open above
counts = {}
for line in fhand:
words = line.split()
for word in words:
counts[word] = counts.get(word, 0 ) + 1
lst = []
for key, val in counts.items():
newtup = (val, key)
lst.append(newtup)
lst = sorted(lst, reverse=True)
for val, key in lst[:10] :
print(key, val)
```
... an even shorter version ...
```
c = {'a':10, 'b':1, 'c':22}
print( sorted( [ (v,k) for k,v in c.items() ] ) )
```
List comprehension creates a dynamic list. In the example above, we make a list of reversed tuples and then sort it.
http://wiki.python.org/moin/HowTo/Sorting
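The reversed-tuple trick is a classic, but sorted() can also order dictionary items by value directly through its key parameter:

```python
c = {'a': 10, 'b': 1, 'c': 22}
# key=lambda kv: kv[1] sorts the (key, value) pairs by their value.
by_value = sorted(c.items(), key=lambda kv: kv[1], reverse=True)
print(by_value)  # [('c', 22), ('a', 10), ('b', 1)]
```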
---
*Thanks to Charles Severance and his work Python for Everybody (py4e.com)*
## Airline Passenger Volume
This model predicts the volume of airline passengers using historical data from January 1949 to December 1960, with a total of 144 observations. Even with this modest data set, surprisingly accurate predictions are possible.
This is time series data, which is well suited to a Long Short-Term Memory ([LSTM](https://en.wikipedia.org/wiki/Long_short-term_memory)) network.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
```
## Training Data
The data is collected monthly, in units of thousands of international passengers. As expected, passenger volume increases over time. The data also appears cyclical, perhaps due to holiday travel. LSTMs are capable of learning both features.
```
dataframe = pd.read_csv('data/airline-passengers.csv', usecols=[1], engine='python')
raw_data = dataframe.values.astype('float32')
plt.plot(raw_data)
plt.title('Passenger Volume')
plt.ylabel('Passengers (thousands)')
plt.xlabel('Month')
plt.show()
# Normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(raw_data)
train1D = dataset.reshape((dataset.shape[0]))
```
## Data Preparation
```
seq_length = 12
dataX = []
dataY = []
for i in range(0, len(train1D) - seq_length, 1):
seq_in = train1D[i:i + seq_length]
seq_out = train1D[i + seq_length]
dataX.append(seq_in)
dataY.append(seq_out)
n_patterns = len(dataX)
print("Number of patterns: ", n_patterns)
# Reshape X to be [samples, time steps, features]
X = np.reshape(dataX, (n_patterns, seq_length, 1))
Y = np.asarray(dataY)
```
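The loop above slides a 12-step window over the series, pairing each window with the value that follows it. The same idea on a toy sequence:

```python
# Build (window -> next value) training pairs from a toy sequence.
seq = [10, 20, 30, 40, 50]
seq_length = 2
pairs = [(seq[i:i + seq_length], seq[i + seq_length])
         for i in range(len(seq) - seq_length)]
print(pairs)  # [([10, 20], 30), ([20, 30], 40), ([30, 40], 50)]
```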
## Define Model
A simple model that illustrates the principle.
```
# Define and fit the LSTM network
model = Sequential()
model.add(LSTM(12, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
```
## Train Model
```
history = model.fit(X, Y, epochs=40, batch_size=1, verbose=2)
```
## Model Loss
```
plt.plot(history.history['loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train'], loc='upper left')
plt.show()
print("Final loss: %.3f" % history.history['loss'][-1])
```
## Prediction
```
predictions = scaler.inverse_transform(model.predict(X))
look_back = seq_length
# Shift train predictions for plotting
trainPredictPlot = np.empty_like(dataset)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(predictions)+look_back, :] = predictions
# Plot ground truth and predictions
plt.plot(scaler.inverse_transform(dataset), color='green', linewidth=1.0)
plt.plot(trainPredictPlot, color='red', linewidth=1.0)
plt.title('Passenger Volume')
plt.ylabel('1000s')
plt.xlabel('Month')
plt.legend(['Ground truth', 'Prediction'], loc='upper left')
plt.show()
```
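Note that the model above is fit and evaluated on the same data it was trained on. For an honest error estimate, time series are usually split chronologically rather than shuffled. A minimal sketch of such a split (all names here are illustrative):

```python
import numpy as np

# Chronological split: for time series, never shuffle before splitting.
series = np.arange(100, dtype=float)  # stand-in for the scaled dataset
split = int(len(series) * 0.67)
train, test = series[:split], series[split:]
print(len(train), len(test))  # 67 33
```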
## Generate New Values
Start from the last 12-month input window, then predict N new values one at a time, feeding each prediction back into the window.
```
nextX = X[-1:]
yValues = []
for i in range(24):
nextY = model.predict(nextX)
nextX = np.roll(nextX, -1)
nextX[-1][-1][-1] = nextY[-1][0]
yValues.append(scaler.inverse_transform(nextY)[-1][0])
plt.plot(yValues)
plt.title('Volume Predictions')
plt.ylabel('Volume')
plt.xlabel('Month')
plt.show()
```
```
# Word2vec basics using tensorflow
import numpy as np
import tensorflow as tf
corpus_raw = 'He is the king . The King is royal . She is the royal queen '
corpus_raw = corpus_raw.lower()
corpus_raw
words = []
for word in corpus_raw.split():
# we can't treat '.' as a word
if word != '.':
words.append(word)
words = set(words) # remove duplicate words
word2int = {}
int2word = {}
vocab_size = len(words) # total number of unique words
for i, word in enumerate(words):
word2int[word] = i
int2word[i] = word
# print(word2int['queen'])
# -> 42 (say)
# print(int2word[42])
# -> 'queen'
# raw_sentences is a list of sentences.
raw_sentences = corpus_raw.split('.')
sentences = []
for sentence in raw_sentences:
sentences.append(sentence.split())
sentences
# Now we will generate our training data
data = []
WINDOW_SIZE = 2
for sentence in sentences:
for word_index, word in enumerate(sentence):
for nb_word in sentence[max(word_index - WINDOW_SIZE, 0) : min(word_index + WINDOW_SIZE, len(sentence)) + 1]:
if nb_word != word:
data.append([word, nb_word])
data
# convert a word index to a one-hot vector
def one_hot_vector(data_point_index, vocab_size):
temp = np.zeros(vocab_size)
temp[data_point_index] = 1
return temp
x_train = [] # input word
y_train = [] # output word
for data_word in data:
x_train.append(one_hot_vector(word2int[data_word[0]], vocab_size))
y_train.append(one_hot_vector(word2int[data_word[1]], vocab_size))
#convert them to numpy arrays
x_train = np.asarray(x_train)
y_train = np.asarray(y_train)
for i in range(len(x_train)):
print(x_train[i], y_train[i])
print(x_train.shape, y_train.shape)
# making tensorflow model for x_train and y_train
x_label = tf.placeholder(tf.float32, shape=(None, vocab_size))
y_label = tf.placeholder(tf.float32, shape=(None, vocab_size))
# hidden layer
EMBEDDING_DIM = 5 # random
W1 = tf.Variable(tf.random_normal([vocab_size, EMBEDDING_DIM])) # weights
b1 = tf.Variable(tf.random_normal([EMBEDDING_DIM])) # bias
hidden_representation = tf.add(tf.matmul(x_label, W1), b1)
# output layer
W2 = tf.Variable(tf.random_normal([EMBEDDING_DIM, vocab_size]))
b2 = tf.Variable(tf.random_normal([vocab_size]))
prediction = tf.nn.softmax(tf.add( tf.matmul(hidden_representation, W2), b2))
# input_one_hot ---> embedded repr. ---> predicted_neighbour_prob
# predicted_prob will be compared against a one hot vector to correct it.
# training network
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init) #make sure you do this!
# define the loss function:
cross_entropy_loss = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(prediction), reduction_indices=[1]))
# define the training step:
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy_loss)
n_iters = 10000
# train for n_iter iterations
for _ in range(n_iters):
sess.run(train_step, feed_dict={x_label: x_train, y_label: y_train})
print('loss is : ', sess.run(cross_entropy_loss, feed_dict={x_label: x_train, y_label: y_train}))
print(sess.run(W1))
print('-' * 60)
print(sess.run(b1))
# the variable 'vectors' will work as a lookup table for finding vectors for words
vectors = sess.run(W1 + b1)
print(vectors)
print("\n", vectors[word2int['queen']])
# We now have word vectors from word2vec.
# Next, find the closest vector to a given vector.
def euclidean_dist(vec1, vec2):
return np.sqrt(np.sum((vec1-vec2)**2))
def find_closest(word_index, vectors):
min_dist = 10000 # to act like positive infinity
min_index = -1
query_vector = vectors[word_index]
for index, vector in enumerate(vectors):
if euclidean_dist(vector, query_vector) < min_dist and not np.array_equal(vector, query_vector):
min_dist = euclidean_dist(vector, query_vector)
min_index = index
return min_index
print(int2word[find_closest(word2int['king'], vectors)])
print(int2word[find_closest(word2int['queen'], vectors)])
print(int2word[find_closest(word2int['royal'], vectors)])
```
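Euclidean distance works for this tiny vocabulary, but cosine similarity is the more common metric for comparing word vectors, since it ignores vector magnitude. A small standalone numpy sketch:

```python
import numpy as np

def cosine_similarity(v1, v2):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

a = np.array([1.0, 0.0])
b = np.array([2.0, 0.0])
c = np.array([0.0, 1.0])
print(cosine_similarity(a, b))  # 1.0 (same direction, different magnitude)
print(cosine_similarity(a, c))  # 0.0 (orthogonal)
```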
```
import shap
import pandas as pd
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
import matplotlib.pyplot as plt
plt.style.use('ggplot')
from PIL import Image
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
train_df = pd.read_csv("train.csv")
test_df = pd.read_csv("test.csv")
X = (train_df.iloc[:,1:].values).astype('float32')
Y = train_df.iloc[:,0].values.astype('int32')
test = test_df.values.astype('float32')
X = X.reshape(X.shape[0], 28, 28, 1)
Y = to_categorical(Y)
test = test.reshape(test.shape[0], 28, 28, 1)
#Confirm the image.
plt.imshow(X[0].reshape(28,28))
plt.show()
X = X.astype("float32") / 255
test = test.astype("float32") / 255
X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size = 0.1, random_state=0)
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.1, # Randomly zoom image
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=False, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(X_train)
model = Sequential()
model.add(Conv2D(32, kernel_size = 4, activation="relu", input_shape=(28,28,1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size = 3, activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size = 2, activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.25))
model.add(Dense(10, activation='softmax'))
model.compile(loss="mean_squared_error", optimizer="rmsprop", metrics=['accuracy'])
model.summary()
learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc',
patience=3,
verbose=1,
factor=0.5,
min_lr=0.00001)
batch_size = 100
epochs = 30
hist = model.fit_generator(datagen.flow(X_train,Y_train, batch_size=batch_size),
epochs = epochs, validation_data = (X_val,Y_val),
verbose = 2, steps_per_epoch=X_train.shape[0] // batch_size
, callbacks=[learning_rate_reduction])
plt.figure(figsize=(8, 4))
plt.plot(hist.history["loss"])
plt.title("Loss")
plt.ylabel("Loss")
plt.xlabel("Epoch")
plt.legend(["Train", "Test"])
plt.show()
pred = model.predict(X_val)
test_loss, test_acc = model.evaluate(X_val, Y_val, verbose=0)
print(f'\nTest accuracy: {test_acc}')
Y_pred_classes = np.argmax(pred,axis = 1)
Y_true = np.argmax(Y_val,axis = 1)
errors = (Y_pred_classes - Y_true != 0)
Y_pred_classes_errors = Y_pred_classes[errors]
Y_pred_errors = pred[errors]
Y_true_errors = Y_true[errors]
X_val_errors = X_val[errors]
def display_errors(errors_index,img_errors,pred_errors, obs_errors):
""" This function shows 10 images with their predicted and real labels"""
n = 0
nrows = 2
ncols = 5
fig, ax = plt.subplots(nrows,ncols,sharex=True,sharey=True,figsize=(20, 8))
for row in range(nrows):
for col in range(ncols):
error = errors_index[n]
ax[row,col].imshow((img_errors[error]).reshape((28,28)))
ax[row,col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error],obs_errors[error]))
n += 1
plt.show()
Y_pred_errors_prob = np.max(Y_pred_errors,axis = 1)
true_prob_errors = np.diagonal(np.take(Y_pred_errors, Y_true_errors, axis=1))
delta_pred_true_errors = Y_pred_errors_prob - true_prob_errors
sorted_dela_errors = np.argsort(delta_pred_true_errors)
most_important_errors = sorted_dela_errors[-10:]
display_errors(most_important_errors, X_val_errors, Y_pred_classes_errors, Y_true_errors)
import shap
# For explaining deep learning models, we use DeepExplainer.
explainer = shap.DeepExplainer(model, (X[0:1000]))
# Examine the 10 examples the model predicted incorrectly.
for i in most_important_errors:
#Calculates the SHAP value.
shap_values = explainer.shap_values(X_val_errors[[i]])
# The next two lines are optional extras; the plot works without them.
index_names = np.array([str(x) + "\n" + '{:>7.3%}'.format(Y_pred_errors[i][x]) for x in range(10)]).reshape(1,10)
print("Predicted label :{}\nTrue label :{}".format(Y_pred_classes_errors[i],Y_true_errors[i]))
#Displays the results.
shap.image_plot(shap_values, X_val_errors[[i]] ,index_names ,show=False)
plt.show()
```
```
#In this tutorial, you will further explore
#the NASA Exoplanet Archive and practice making simple plots.
#To guide you, here is the tutorial from in class:
import matplotlib.pyplot as plot #import the matplotlib.pyplot module (library) as 'plot'
import pandas as pd #import the 'pandas' module (library) as pd
data = pd.read_csv('data_planets.csv') #import the data file downloaded from the NASA Exoplanet Archive
plot.plot(data['pl_radj'],data['pl_bmassj'],'k.') #plot the data.
#'k' - makes datapoints black.
#'.' - makes scatter plot
plot.show() #show the plot
#Note: You have not been provided with the 'planets.csv' file. In order
#to make the code in this cell to work, you will have to download the
#'planets.csv' file from the NASA Exoplanet Archive containing mass and
#radius parameters. *Remember to clean it up using 'not null' on the
#NASA Exoplanet Archive. *Remember also to delete the first few unnecessary
#rows of the downloaded .csv file. *Remember to put it in your working directory.
#1.) Create a mass-radius diagram (like above) with appropriate
# axes labels and title using NASA Exoplanet Archive data.
#Step 0: download a table with mass and radius parameters for exoplanets.
# use 'not null' in the NASA Exoplanet Archive to clean up your table.
# hint: remember to delete the first rows from your .csv file before
# importing to python, and remember to place this file in the correct
# directory (whatever directory you're working in).
#Step 1: import the necessary modules
import matplotlib.pyplot as plot #import the matplotlib.pyplot module (library) as 'plot'
import pandas as pd #import the 'pandas' module (library) as pd
#Step 2: import the data file downloaded from the NASA Exoplanet Archive
data = pd.read_csv('data_mass_radius.csv') #import the data file downloaded from the NASA Exoplanet Archive
#Step 3: plot the data containing axes labels and a title. print the figure.
# create the same plot online on the NASA Exoplanet Archive and compare
# your results to that plot.
plot.plot(data['pl_radj'],data['pl_bmassj'],'k.') #plot the data
plot.xlabel('Radius ($R_{♃}$)')
plot.ylabel('Mass ($M_{♃}$)')
plot.title('Exoplanet Mass-Radius Diagram')
plot.savefig('plot_mass_radius.png',dpi=100) #save the plot to current working directory
plot.show() #show the plot, NOTE IT IS NECESSARY TO CALL plot.show() AFTER plot.savefig()
#Step 4: discuss your results. notice a gap between the two main clusters of
# points? What do you think this is and what does it imply about the
# underlying physics?
#The bimodal distribution between super-earths and sub-neptunes sheds light on a physical
#process of planetary formation/dynamics that is currently under research. Recent work
#purports the valley to be composed of stripped cores from gaseous planets not born rocky.
#https://academic.oup.com/mnras/advance-article-abstract/doi/10.1093/mnras/sty1783/5050069?redirectedFrom=fulltext
#2.) Create a plot of semi-major axis (y) vs orbital period (x) with appropriate
# axes labels and title using NASA Exoplanet Archive data.
#Step 0: download a table with semi-major axis and orbital period parameters
# for exoplanets. use 'not null' in the NASA Exoplanet Archive to clean
# up your table.
# hint: remember to delete the first rows from your .csv file before
# importing to python, and remember to place this file in the correct
# directory (whatever directory you're working in).
#Step 1: import the necessary modules
import matplotlib.pyplot as plot #import the matplotlib.pyplot module (library) as 'plot'
import pandas as pd #import the 'pandas' module (library) as pd
#Step 2: import the data file downloaded from the NASA Exoplanet Archive
data = pd.read_csv('data_smaxis_period.csv') #import the data file downloaded from the NASA Exoplanet Archive
#Step 3: plot the data containing axes labels and a title. print the figure.
# create the same plot online on the NASA Exoplanet Archive and compare
# your results to that plot.
plot.plot(data['pl_orbper'],data['pl_orbsmax'],'k.') #plot the data
plot.xlabel('Period (d)')
plot.ylabel('Semi-major axis (AU)')
plot.title('Exoplanet Orbital Period vs Semi-Major Axis')
plot.savefig('plot_smaxis_period.png',dpi=100) #save the plot to current working directory
plot.show() #show the plot
#Step 4: discuss your results. notice a non-linear trend between the data? What
# do you think this is and what does it imply about the underlying physics?
# Hint: Look up Kepler's Third Law of Planetary Motion.
#https://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion
#Kepler's Third Law of Planetary Motion states that the square of the orbital period is
#proportional to the cube of the semi-major axis (p^2 = a^3, with p in years and a in AU).
#A greater star-planet separation corresponds to a longer orbital period, but the
#relationship is not linear.
#3.) Using the steps outlined above, create a third plot of two parameters that interest you.
# These could be exoplanet or stellar parameters. Start the process by exploring the
# relationships between parameters by plotting online, then pick the most interesting one
# to you to plot. Discuss interesting patterns you see and what they may mean about
#               the underlying physics.
import matplotlib.pyplot as plot #import the matplotlib.pyplot module (library) as 'plot'
import pandas as pd #import the 'pandas' module (library) as pd
data = pd.read_csv('data_ecc_smass.csv') #import the data file downloaded from the NASA Exoplanet Archive
plot.plot(data['st_mass'], data['pl_orbeccen'], 'k.')  # plot the data
plot.xlabel('Stellar Mass ($M_{☉}$)')
plot.ylabel('Eccentricity')
plot.title('Eccentricity vs. Stellar Mass')
plot.savefig('plot_ecc_smass.png',dpi=100) #save the plot to current working directory
plot.show() #show the plot
#Sufficiently massive star systems can induce planet-planet interactions that exchange
#angular momentum, resulting in eccentric orbits. It is likely that even larger, more
#massive stars have sufficient gravity to enforce stable orbits, leaving their planets
#on low-eccentricity orbits.
```
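As a quick numerical aside to the Kepler discussion above, the sketch below (illustrative only; it assumes p in years and a in AU around a one-solar-mass star) verifies p² = a³ for Earth and Mars:

```python
import math

def kepler_period_years(a_au, star_mass_msun=1.0):
    """Orbital period in years from the semi-major axis in AU (Kepler's Third Law)."""
    return math.sqrt(a_au**3 / star_mass_msun)

print(kepler_period_years(1.0))              # Earth: 1.0 year at 1 AU
print(round(kepler_period_years(1.524), 2))  # Mars: ~1.88 years at 1.524 AU
```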
# Train WaveGlow Model with custom training step
## Boilerplate Import
```
import tensorflow as tf
from tensorflow.python.eager import profiler
print("GPU Available: ", tf.test.is_gpu_available())
tf.keras.backend.clear_session()
import os, sys
root_dir, _ = os.path.split(os.getcwd())
script_dir = os.path.join(root_dir, 'scripts')
sys.path.append(script_dir)
from datetime import datetime
from hparams import hparams
from waveglow_model import WaveGlow
import training_utils as utils
```
## Tensorboard logs setup
```
log_dir = hparams['log_dir']
file_writer = tf.summary.create_file_writer(log_dir)
file_writer.set_as_default()
```
## Load Validation and Training Dataset
```
validation_dataset = utils.load_single_file_tfrecords(
record_file=hparams['tfrecords_dir'] + hparams['eval_file'])
validation_dataset = validation_dataset.batch(
hparams['train_batch_size'])
training_dataset = utils.load_training_files_tfrecords(
record_pattern=hparams['tfrecords_dir'] + hparams['train_files'] + '*')
```
## Instantiate model and optimizer
```
myWaveGlow = WaveGlow(hparams=hparams, name='myWaveGlow')
optimizer = utils.get_optimizer(hparams=hparams)
```
## Model Checkpoints : Initialise or Restore
```
checkpoint = tf.train.Checkpoint(step=tf.Variable(0),
optimizer=optimizer,
net=myWaveGlow)
manager_checkpoint = tf.train.CheckpointManager(
checkpoint,
directory=hparams['checkpoint_dir'],
max_to_keep=hparams['max_to_keep'])
checkpoint.restore(manager_checkpoint.latest_checkpoint)
if manager_checkpoint.latest_checkpoint:
tf.summary.experimental.set_step(tf.cast(checkpoint.step, tf.int64))
tf.summary.text(name="checkpoint_restore",
data="Restored from {}".format(manager_checkpoint.latest_checkpoint))
else:
tf.summary.experimental.set_step(0)
utils.eval_step(eval_dataset=validation_dataset,
waveGlow=myWaveGlow, hparams=hparams,
step=0)
```
## Training step autograph
```
@tf.function
def train_step(step, x_train, waveGlow, hparams, optimizer):
tf.summary.experimental.set_step(step=step)
with tf.GradientTape() as tape:
outputs = waveGlow(x_train, training=True)
total_loss = waveGlow.total_loss(outputs=outputs)
grads = tape.gradient(total_loss,
waveGlow.trainable_weights)
optimizer.apply_gradients(zip(grads,
waveGlow.trainable_weights))
def custom_training(waveGlow, hparams, optimizer,
checkpoint, manager_checkpoint):
step = tf.cast(checkpoint.step, tf.int64)
for epoch in tf.range(1):
tf.summary.text(name='epoch',
data='Start epoch {}'.format(epoch.numpy()) +\
                        ' at ' + datetime.now().strftime("%Y%m%d-%H%M%S"),
step=step)
for int_step, (step, x_train) in zip(range(50), training_dataset.enumerate(start=step)):
if int_step == 2:
profiler.start()
train_step(step=step,
x_train=x_train,
waveGlow=waveGlow,
hparams=hparams,
optimizer=optimizer)
if tf.equal(step % hparams['save_model_every'], 0):
save_path = manager_checkpoint.save()
tf.summary.text(name='save_checkpoint',
                            data="Saved checkpoint in " + save_path,
step=step)
if tf.equal(step % hparams['save_audio_every'], 0):
utils.eval_step(eval_dataset=validation_dataset,
waveGlow=waveGlow, hparams=hparams,
step=step)
if int_step == 50:
profiler_result = profiler.stop()
profiler.save(hparams['log_dir'], profiler_result)
break
checkpoint.step.assign_add(1)
custom_training(waveGlow=myWaveGlow,
hparams=hparams,
optimizer=optimizer,
checkpoint=checkpoint,
manager_checkpoint=manager_checkpoint)
```
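The forward-pass / loss / gradient / update structure of `train_step` above is the generic custom-training-loop pattern. A framework-free sketch of the same pattern on a one-parameter least-squares model (purely illustrative, unrelated to WaveGlow itself):

```python
def sgd_step(w, x_batch, y_batch, lr=0.1):
    """One gradient-descent step for the model y = w * x with mean squared loss."""
    preds = [w * x for x in x_batch]                                # forward pass
    loss = sum((p - y) ** 2 for p, y in zip(preds, y_batch)) / len(x_batch)
    grad = sum(2 * (p - y) * x                                      # d(loss)/dw
               for p, y, x in zip(preds, y_batch, x_batch)) / len(x_batch)
    return w - lr * grad, loss                                      # parameter update

w = 0.0
for _ in range(100):
    w, loss = sgd_step(w, [1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 3))   # converges to 2.0
```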
```
import pandas as pd
import matplotlib.pyplot as plt
from pandas_profiling import ProfileReport
# Set style and settings
plt.style.use('ggplot')
pd.set_option('display.max_columns', 50)
pd.set_option('display.max_rows', 15)
# Load data and set Datetime column
collisions = pd.read_csv('../data/external/Collisions.csv',
parse_dates={'Datetime': ['INCDTTM']},
infer_datetime_format=True)
# Clean up and set index to datetime
collisions = (
collisions.set_index('Datetime')
.sort_index()
.drop(columns=['EXCEPTRSNDESC', 'EXCEPTRSNCODE', 'REPORTNO', 'STATUS'])
)
# Basic info and % null categories
#collisions.info()
pct_null = (collisions.isna().sum()/collisions.shape[0]).plot(kind='bar',
title="Percent Missing Values")
plt.savefig('../reports/figures/missing_values.png')
collisions.describe()
# Pandas profiling, use minimal=True for large dataset
#profile = ProfileReport(collisions, title='Collisions Profile Report', minimal=True)
#profile.to_file('../reports/collisions_profiling.html')
```
### Notes after initial exploration
1. Use 'INTKEY' to identify intersections.
2. 'SEVERITYCODE' can be used to cross-reference the intersection with severity. How do we reconcile this with 'SERIOUSINJURIES' and 'FATALITIES'?
3. 'SPEEDING' is missing a lot, but this could be a helpful feature.
4. 'JUNCTIONTYPE' or 'ADDRTYPE', will need OHE for logistic regression.
5. Target variable? Create 'Dangerous' based on what criteria?
ML algorithms: we can use gradient-boosted classifiers (GBC) or random forests (RF) if we don't want to worry about multicollinearity or preprocessing (OHE/normalization).
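To make point 4 concrete: one-hot encoding turns each category value into its own 0/1 column. In practice `pd.get_dummies` would do this; the pure-Python sketch below (with made-up values) just illustrates the transformation:

```python
def one_hot(values):
    """Encode a list of categorical values as one-hot vectors."""
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    return categories, [[1 if index[v] == i else 0 for i in range(len(categories))]
                        for v in values]

cats, rows = one_hot(['Intersection', 'Block', 'Intersection'])
print(cats)   # ['Block', 'Intersection']
print(rows)   # [[0, 1], [1, 0], [0, 1]]
```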
```
collisions.head()
## Look at collision count at specific intersections
n_coll_intersect = collisions.groupby(by='INTKEY').count().sort_values(by='OBJECTID', ascending=False)
fig, axes = plt.subplots(1, 2, figsize=(12,4))
ax = axes[0]
ax.hist(n_coll_intersect['OBJECTID'], bins=50)
ax.set_xlim(0,50)
ax.set_xlabel('Number of Collisions')
ax.set_ylabel('Intersection Count')
ax.set_title("Number of Intersections with High Collisions")
ax = axes[1]
ax.hist(n_coll_intersect['OBJECTID'], bins=10, log=True)
ax.set_xlabel('Number of Collisions')
ax.set_ylabel('Intersection Count (Log Scale)')
ax.set_title("Number of Intersections with High Collisions")
plt.savefig('../reports/figures/intersection_count.png')
# Find 20 most dangerous intersections
_ = collisions.copy().set_index('INTKEY')
ax = _.loc[list(n_coll_intersect.index[:20])]['LOCATION']\
    .value_counts()\
    .plot(kind='barh', title='Most Dangerous Intersections: Number of Collisions')
ax.set_xlabel('Collision Count')
#plt.savefig('../reports/figures/dangerous_intersections.png')
type(list(n_coll_intersect.index[:20]))
# Break down by pedestrian or cyclist involvement
pedcycle_count = collisions.groupby(by='PEDCYLCOUNT').count()['OBJECTID']
ped_count = collisions.groupby(by='PEDCOUNT').count()['OBJECTID']
pedcycle_count, ped_count
# Get fraction of collisions that included cyclists/pedestrians/no cars
cycle_fraction = pedcycle_count.loc[1:].sum()/pedcycle_count.sum()
# Get fraction of collisions that included pedestrians
ped_fraction = ped_count.loc[1:].sum()/ped_count.sum()
# Are there accidents with no cars involved?
no_cars = collisions.loc[((collisions['PEDCOUNT'] != 0) | \
(collisions['PEDCYLCOUNT'] != 0)) & \
(collisions['VEHCOUNT'] == 0)]
#veh_count = collisions.groupby(by='VEHCOUNT').count()['OBJECTID']
no_car_fraction = no_cars.shape[0]/collisions.shape[0]
# # Put these into own dataframe
pd.DataFrame({'Fraction w/cyclist': cycle_fraction,
'Fraction w/ped': ped_fraction,
'Fraction w/no cars': no_car_fraction}, index=range(1))
(collisions['COLLISIONTYPE'] == 'Cycles').sum(), pedcycle_count
# How many of these involve ZERO people (ie, terrible book-keeping)
no_people = collisions.loc[(collisions['PEDCOUNT'] == 0) &
(collisions['PEDCYLCOUNT'] == 0) &
(collisions['PERSONCOUNT'] == 0) &
(collisions['VEHCOUNT'] == 0)]
print(no_people.shape[0]/collisions.shape[0])
no_people['SDOT_COLDESC'].value_counts()
no_people.shape
```
# Simple symbol plotting with text on Cartesian projection
Symbol plotting in Magics is the plotting of different types of symbols at selected locations. A symbol in this context is a number (the value at the location), a text string (given by the user) or a Magics marker.
A list of all **msymbol** parameters can be found [in the Magics documentation](https://confluence.ecmwf.int/display/MAGP/Symbol "Symbol parameters").
More symbol plotting examples can be found in the [Simple symbol plotting](../tutorials/Symbol_simple.ipynb "Symbol simple") and [Advanced symbol plotting](../tutorials/Advanced_simple.ipynb "Symbol advanced") notebooks.
This is a one-cell notebook with an example of how to plot symbols with text on a graph.
### Installing Magics
If you don't have Magics installed, run the next cell to install Magics using conda.
```
# Install Magics in the current Jupyter kernel
import sys
!conda install --yes --prefix {sys.prefix} Magics
import Magics.macro as magics
import numpy as np
# Setting the cartesian view
cartesian_projection = magics.mmap(
subpage_y_position= 2.,
subpage_map_projection = 'cartesian',
subpage_x_axis_type = 'regular',
subpage_y_axis_type = 'regular',
subpage_x_min = 10.,
subpage_x_max = 40.,
subpage_y_min = 20.,
subpage_y_max = 100.,
page_id_line = "off")
# Vertical axis
vertical = magics.maxis(axis_orientation = "vertical",
axis_type = "regular",
axis_tick_label_height = 0.4,
axis_tick_label_colour = 'navy',
axis_grid = "on",
axis_grid_colour = "grey",
axis_grid_thickness = 1,
axis_grid_line_style = "dot")
# Horizontal axis
horizontal = magics.maxis(axis_orientation = "horizontal",
axis_type = "regular",
axis_tick_label_height = 0.4,
axis_tick_label_colour = 'navy',
axis_grid = "on",
axis_grid_colour = "grey",
axis_grid_thickness = 1,
axis_grid_line_style = "dot")
# Some data
x = np.array([15.,25.,35.])
bottom = np.array([30.,30.,30.])
left = np.array([45.,45.,45.])
right = np.array([60.,60.,60.])
top = np.array([75.,75.,75.])
middle = np.array([90.,90.,90.])
# Input for symbol with text above it
topinput = magics.minput(
input_x_values = x,
input_y_values = top)
#Define the graph
toptext = magics.msymb(
symbol_type = "marker",
symbol_colour = "navy",
symbol_text_list = ["top", "top", "top"],
symbol_height = 1.,
symbol_text_font_size = 0.8,
symbol_text_font_colour = "black",
symbol_text_position = "top",
symbol_marker_index = 15
)
# Input for symbol with text on the left side
leftinput = magics.minput(
input_x_values = x,
input_y_values = left)
#Define the graph
lefttext = magics.msymb(
symbol_type = "marker",
symbol_colour = "ochre",
symbol_text_list = ["left", "left", "left"],
symbol_height = 1.,
symbol_text_font_size = 0.8,
symbol_text_font_colour = "black",
symbol_text_position = "left",
symbol_marker_index = 15
)
# Input for symbol with text on the right side
rightinput = magics.minput(
input_x_values = x,
input_y_values = right)
#Define the graph
righttext = magics.msymb(
symbol_type = "marker",
symbol_colour = "chestnut",
symbol_text_list = ["one","two","five"],
symbol_height = 1.,
symbol_text_font_size = 0.8,
symbol_text_font_colour = "black",
symbol_text_position = "right",
symbol_marker_index = 15
)
# Input for symbol with text below it
bottominput = magics.minput(
input_x_values = x,
input_y_values = bottom)
#Define the graph
bottomtext = magics.msymb(
symbol_type = "marker",
symbol_colour = "rose",
symbol_text_list = ["bottom"],
symbol_height = 1.,
symbol_text_font_size = 0.8,
symbol_text_font_colour = "black",
symbol_text_position = "bottom",
symbol_marker_index = 15
)
# Input for symbol with text over it
centreinput = magics.minput(
input_x_values = x,
input_y_values = middle)
#Define the graph
centretext = magics.msymb(
symbol_type = "marker",
symbol_colour = "sky",
symbol_text_list = ["a", "b", "centre"],
symbol_height = 1.2,
symbol_text_font_size = 0.8,
symbol_text_font_colour = "black",
symbol_text_position = "centre",
symbol_marker_index = 15
)
magics.plot(cartesian_projection, vertical, horizontal, topinput, toptext,
leftinput, lefttext, bottominput, bottomtext,
rightinput, righttext, centreinput, centretext)
```
**Parameters to reproduce the paper's results**:
* change the optimizer from SGD to Adam in `lib/models.py`,
* change the size of the vocabulary from 1000 to 10000 in `train.keep_top_words()` below.
```
%load_ext autoreload
%autoreload 2
import sys, os
sys.path.insert(0, '..')
from lib import models, graph, coarsening, utils
import tensorflow as tf
import matplotlib.pyplot as plt
import scipy.sparse
import numpy as np
import time
%matplotlib inline
flags = tf.app.flags
FLAGS = flags.FLAGS
# Graphs.
flags.DEFINE_integer('number_edges', 16, 'Graph: minimum number of edges per vertex.')
flags.DEFINE_string('metric', 'cosine', 'Graph: similarity measure (between features).')
# TODO: change cgcnn for combinatorial Laplacians.
flags.DEFINE_bool('normalized_laplacian', True, 'Graph Laplacian: normalized.')
flags.DEFINE_integer('coarsening_levels', 0, 'Number of coarsened graphs.')
flags.DEFINE_string('dir_data', os.path.join('..', 'data', '20news'), 'Directory to store data.')
flags.DEFINE_integer('val_size', 400, 'Size of the validation set.')
```
# Data
```
# Fetch dataset. Scikit-learn already performs some cleaning.
remove = ('headers','footers','quotes') # (), ('headers') or ('headers','footers','quotes')
train = utils.Text20News(data_home=FLAGS.dir_data, subset='train', remove=remove)
# Pre-processing: transform everything to a-z and whitespace.
print(train.show_document(1)[:400])
train.clean_text(num='substitute')
# Analyzing / tokenizing: transform documents to bags-of-words.
#stop_words = set(sklearn.feature_extraction.text.ENGLISH_STOP_WORDS)
# Or stop words from NLTK.
# Add e.g. don, ve.
train.vectorize(stop_words='english')
print(train.show_document(1)[:400])
# Remove short documents.
train.data_info(True)
wc = train.remove_short_documents(nwords=20, vocab='full')
train.data_info()
print('shortest: {}, longest: {} words'.format(wc.min(), wc.max()))
plt.figure(figsize=(17,5))
plt.semilogy(wc, '.');
# Remove encoded images.
def remove_encoded_images(dataset, freq=1e3):
    widx = dataset.vocab.index('ax')
    wc = dataset.data[:,widx].toarray().squeeze()
    idx = np.argwhere(wc < freq).squeeze()
    dataset.keep_documents(idx)
    return wc
wc = remove_encoded_images(train)
train.data_info()
plt.figure(figsize=(17,5))
plt.semilogy(wc, '.');
# Word embedding
if True:
train.embed()
else:
train.embed(os.path.join('..', 'data', 'word2vec', 'GoogleNews-vectors-negative300.bin'))
train.data_info()
# Further feature selection. (TODO)
# Feature selection.
# Other options include: mutual information or document count.
freq = train.keep_top_words(1000, 20)
train.data_info()
train.show_document(1)
plt.figure(figsize=(17,5))
plt.semilogy(freq);
# Remove documents whose signal would be the zero vector.
wc = train.remove_short_documents(nwords=5, vocab='selected')
train.data_info(True)
train.normalize(norm='l1')
train.show_document(1);
# Test dataset.
test = utils.Text20News(data_home=FLAGS.dir_data, subset='test', remove=remove)
test.clean_text(num='substitute')
test.vectorize(vocabulary=train.vocab)
test.data_info()
wc = test.remove_short_documents(nwords=5, vocab='selected')
print('shortest: {}, longest: {} words'.format(wc.min(), wc.max()))
test.data_info(True)
test.normalize(norm='l1')
if True:
train_data = train.data.astype(np.float32)
test_data = test.data.astype(np.float32)
train_labels = train.labels
test_labels = test.labels
else:
    perm = np.random.RandomState(seed=42).permutation(train.data.shape[0])
Ntest = 6695
perm_test = perm[:Ntest]
perm_train = perm[Ntest:]
train_data = train.data[perm_train,:].astype(np.float32)
test_data = train.data[perm_test,:].astype(np.float32)
train_labels = train.labels[perm_train]
test_labels = train.labels[perm_test]
if True:
graph_data = train.embeddings.astype(np.float32)
else:
graph_data = train.data.T.astype(np.float32).toarray()
#del train, test
```
# Feature graph
```
t_start = time.process_time()
dist, idx = graph.distance_sklearn_metrics(graph_data, k=FLAGS.number_edges, metric=FLAGS.metric)
A = graph.adjacency(dist, idx)
print("{} > {} edges".format(A.nnz//2, FLAGS.number_edges*graph_data.shape[0]//2))
A = graph.replace_random_edges(A, 0)
graphs, perm = coarsening.coarsen(A, levels=FLAGS.coarsening_levels, self_connections=False)
L = [graph.laplacian(A, normalized=True) for A in graphs]
print('Execution time: {:.2f}s'.format(time.process_time() - t_start))
#graph.plot_spectrum(L)
#del graph_data, A, dist, idx
t_start = time.process_time()
train_data = scipy.sparse.csr_matrix(coarsening.perm_data(train_data.toarray(), perm))
test_data = scipy.sparse.csr_matrix(coarsening.perm_data(test_data.toarray(), perm))
print('Execution time: {:.2f}s'.format(time.process_time() - t_start))
del perm
```
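Conceptually, the `graph.distance_sklearn_metrics` call above finds each feature vector's k nearest neighbours under the chosen metric. A tiny pure-Python sketch of k-nearest-neighbour search under cosine distance (illustrative only; the notebook relies on the library call, not this code):

```python
import math

def cosine_distance(u, v):
    """Cosine distance between two non-zero vectors: 1 - cos(angle)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def knn_cosine(vectors, k):
    """Return, for each vector, the indices of its k nearest neighbours (self excluded)."""
    out = []
    for i, u in enumerate(vectors):
        dists = sorted((cosine_distance(u, v), j) for j, v in enumerate(vectors) if j != i)
        out.append([j for _, j in dists[:k]])
    return out

vecs = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0)]
print(knn_cosine(vecs, 1))   # [[1], [0], [1]]
```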
# Classification
```
# Training set is shuffled already.
#perm = np.random.permutation(train_data.shape[0])
#train_data = train_data[perm,:]
#train_labels = train_labels[perm]
# Validation set.
if False:
val_data = train_data[:FLAGS.val_size,:]
val_labels = train_labels[:FLAGS.val_size]
train_data = train_data[FLAGS.val_size:,:]
train_labels = train_labels[FLAGS.val_size:]
else:
val_data = test_data
val_labels = test_labels
if True:
utils.baseline(train_data, train_labels, test_data, test_labels)
common = {}
common['dir_name'] = '20news/'
common['num_epochs'] = 80
common['batch_size'] = 100
common['decay_steps'] = len(train_labels) / common['batch_size']
common['eval_frequency'] = 5 * common['num_epochs']
common['filter'] = 'chebyshev5'
common['brelu'] = 'b1relu'
common['pool'] = 'mpool1'
C = max(train_labels) + 1 # number of classes
model_perf = utils.model_perf()
if True:
name = 'softmax'
params = common.copy()
params['dir_name'] += name
params['regularization'] = 0
params['dropout'] = 1
params['learning_rate'] = 1e3
params['decay_rate'] = 0.95
params['momentum'] = 0.9
params['F'] = []
params['K'] = []
params['p'] = []
params['M'] = [C]
model_perf.test(models.cgcnn(L, **params), name, params,
train_data, train_labels, val_data, val_labels, test_data, test_labels)
if True:
name = 'fc_softmax'
params = common.copy()
params['dir_name'] += name
params['regularization'] = 0
params['dropout'] = 1
params['learning_rate'] = 0.1
params['decay_rate'] = 0.95
params['momentum'] = 0.9
params['F'] = []
params['K'] = []
params['p'] = []
params['M'] = [2500, C]
model_perf.test(models.cgcnn(L, **params), name, params,
train_data, train_labels, val_data, val_labels, test_data, test_labels)
if True:
name = 'fc_fc_softmax'
params = common.copy()
params['dir_name'] += name
params['regularization'] = 0
params['dropout'] = 1
params['learning_rate'] = 0.1
params['decay_rate'] = 0.95
params['momentum'] = 0.9
params['F'] = []
params['K'] = []
params['p'] = []
params['M'] = [2500, 500, C]
model_perf.test(models.cgcnn(L, **params), name, params,
train_data, train_labels, val_data, val_labels, test_data, test_labels)
if True:
name = 'fgconv_softmax'
params = common.copy()
params['dir_name'] += name
params['filter'] = 'fourier'
params['regularization'] = 0
params['dropout'] = 1
params['learning_rate'] = 0.001
params['decay_rate'] = 1
params['momentum'] = 0
params['F'] = [32]
params['K'] = [L[0].shape[0]]
params['p'] = [1]
params['M'] = [C]
model_perf.test(models.cgcnn(L, **params), name, params,
train_data, train_labels, val_data, val_labels, test_data, test_labels)
if True:
name = 'sgconv_softmax'
params = common.copy()
params['dir_name'] += name
params['filter'] = 'spline'
params['regularization'] = 1e-3
params['dropout'] = 1
params['learning_rate'] = 0.1
params['decay_rate'] = 0.999
params['momentum'] = 0
params['F'] = [32]
params['K'] = [5]
params['p'] = [1]
params['M'] = [C]
model_perf.test(models.cgcnn(L, **params), name, params,
train_data, train_labels, val_data, val_labels, test_data, test_labels)
if True:
name = 'cgconv_softmax'
params = common.copy()
params['dir_name'] += name
params['regularization'] = 1e-3
params['dropout'] = 1
params['learning_rate'] = 0.1
params['decay_rate'] = 0.999
params['momentum'] = 0
params['F'] = [32]
params['K'] = [5]
params['p'] = [1]
params['M'] = [C]
model_perf.test(models.cgcnn(L, **params), name, params,
train_data, train_labels, val_data, val_labels, test_data, test_labels)
if True:
name = 'cgconv_fc_softmax'
params = common.copy()
params['dir_name'] += name
params['regularization'] = 0
params['dropout'] = 1
params['learning_rate'] = 0.1
params['decay_rate'] = 0.999
params['momentum'] = 0
params['F'] = [5]
params['K'] = [15]
params['p'] = [1]
params['M'] = [100, C]
model_perf.test(models.cgcnn(L, **params), name, params,
train_data, train_labels, val_data, val_labels, test_data, test_labels)
model_perf.show()
if False:
grid_params = {}
data = (train_data, train_labels, val_data, val_labels, test_data, test_labels)
utils.grid_search(params, grid_params, *data, model=lambda x: models.cgcnn(L,**x))
```
# The $N$-body problem. Maximum: 80 pts + 25 bonus pts
## Problem 0 (Problem statement) 5 pts
Consider the $N$-body problem
$$
V({\bf y}_j) = \sum_{i=1}^N G({\bf x}_i, {\bf y}_j) q_i, \quad j=1,\dots,N,
$$
where ${\bf x}_i$ is the location of source charges and ${\bf y}_j$ is the location of receivers where the potential $V$ is measured.
For simplicity in this pset sources and receivers are the same points: ${\bf x}_i = {\bf y}_i$, $i=1,\dots,N$.
The naive summation yields $\mathcal{O}(N^2)$ complexity, which is prohibitive if $N$ is large.
This problem set is devoted to algorithms that break the $\mathcal{O}(N^2)$.
* (5 pts) Name algorithms that break $\mathcal{O}(N^2)$ for the $N$-body problem and specify their complexities. Estimate how much memory and how much time are required to compute all $N$ potentials $V({\bf y}_j)$ for $N=300$ billion particles with the naive $\mathcal{O}(N^2)$ summation and with the $\mathcal{O}(N\log N)$ and $\mathcal{O}(N)$ algorithms on a supercomputer
(constants hidden in $\mathcal{O}$ can be found in lectures).
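For reference, the naive summation is a direct double loop. The sketch below assumes the Coulomb-type kernel G(x, y) = 1/|x - y| (an assumption, since the pset leaves G generic) and skips the singular i = j term, since sources and receivers coincide:

```python
import math

def direct_potentials(points, charges):
    """Naive O(N^2) evaluation of V(y_j) = sum_i G(x_i, y_j) q_i with G = 1/|x - y|.
    Sources and receivers coincide, so the singular i == j term is skipped."""
    n = len(points)
    V = [0.0] * n
    for j in range(n):
        for i in range(n):
            if i != j:
                V[j] += charges[i] / math.dist(points[i], points[j])
    return V

V = direct_potentials([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], [1.0, 1.0, 1.0])
print(V[0])   # 1/1 + 1/1 = 2.0
```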
## Problem 1 (The Barnes-Hut algorithm and beyond) 35 pts + 25 bonus pts
#### The Barnes-Hut
The Barnes-Hut (BH) idea is quite simple. First, we separate our particles into a quad-tree structure of particle groups. If a group on some tree level is sufficiently far away from a given particle, we can approximate its potential by using its center of mass. If it is not, we compute its influence recursively using lower tree levels. The accuracy of the Barnes-Hut algorithm depends on the choice of the parameter $\theta = s / d$, where $s$ is the width of the region represented by the internal node, and $d$ is the distance between the body and the node's center of mass.
* (10 pts) Propose an algorithm for the quadtree construction. Can you reach $\mathcal{O}(N)$ memory for the storage? Propose a way to store the tree and write the program that computes the tree, given the location of the particles. What do you need to store in each node of the tree?
* (20 pts) Implement Barnes-Hut algorithm. The program should consist of three parts:
1. Tree construction given the location of the particles and geometric constant $\theta$
2. Filling the information in the tree (computing the charges and geometric centers)
3. Computing the product
* (5 pts) Compare the results computed by direct evaluation and by the Barnes-Hut algorithm. Make sure that you get linear complexity. Study the dependence of accuracy and computational cost on the geometric parameter $\theta$.
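The pieces above can be sketched compactly. The didactic (unoptimized) sketch below assumes particles at distinct positions in the unit square and the kernel G = 1/r; each node stores its total charge and centre of charge, and with $\theta = 0$ the evaluation falls back to the exact direct sum:

```python
import math

class QuadNode:
    """One quadtree cell; stores the total charge and centre of charge of its subtree."""
    def __init__(self, cx, cy, size):
        self.cx, self.cy, self.size = cx, cy, size
        self.mass = 0.0                  # total charge in the cell
        self.comx = self.comy = 0.0      # centre of charge
        self.body = None                 # single particle held by a leaf
        self.children = None             # four sub-quadrants once split

    def _child(self, x, y):
        return self.children[(1 if x > self.cx else 0) + (2 if y > self.cy else 0)]

    def _split(self):
        h, q = self.size / 2.0, self.size / 4.0
        self.children = [QuadNode(self.cx - q, self.cy - q, h),
                         QuadNode(self.cx + q, self.cy - q, h),
                         QuadNode(self.cx - q, self.cy + q, h),
                         QuadNode(self.cx + q, self.cy + q, h)]

    def insert(self, x, y, q):
        new_mass = self.mass + q
        if self.children is None and self.body is None:
            self.body = (x, y, q)                 # empty leaf: store the particle
        else:
            if self.children is None:             # occupied leaf: split, push body down
                bx, by, bq = self.body
                self.body = None
                self._split()
                self._child(bx, by).insert(bx, by, bq)
            self._child(x, y).insert(x, y, q)
        self.comx = (self.comx * self.mass + x * q) / new_mass
        self.comy = (self.comy * self.mass + y * q) / new_mass
        self.mass = new_mass

    def potential(self, x, y, theta):
        if self.mass == 0.0:
            return 0.0
        if self.children is None:                 # leaf: exact contribution
            bx, by, bq = self.body
            d = math.hypot(x - bx, y - by)
            return 0.0 if d == 0.0 else bq / d    # skip the evaluation point itself
        d = math.hypot(x - self.comx, y - self.comy)
        if d > 0.0 and self.size / d < theta:     # far cell: one centre-of-charge term
            return self.mass / d
        return sum(c.potential(x, y, theta) for c in self.children)

pts = [(0.1, 0.1), (0.9, 0.2), (0.5, 0.8), (0.2, 0.6)]
root = QuadNode(0.5, 0.5, 1.0)
for (x, y) in pts:
    root.insert(x, y, 1.0)
exact = root.potential(0.1, 0.1, theta=0.0)       # theta = 0 never approximates
approx = root.potential(0.1, 0.1, theta=1.0)
print(round(exact, 4), round(approx, 4))          # the two agree to a few percent
```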
#### Simplified FMM (Bonus task)
In order to remove the $\log$ factor in the $\mathcal{O}(N \log N)$ complexity of the Barnes-Hut algorithm, a second tree can be used.
This almost gives us the FMM algorithm, with one exception: only one term of the multipole expansion is used.
* (20 pts) Now that you are a given a tree from the previous task, code the Barnes-Hut with two trees. The key differences are:
1. You need to create the interaction list
2. You also need to build M2L and L2L operators (in standard BH only M2M operator is used)
* (5 pts) Compare performance and accuracy of the standard and 2-tree BH. Which one is faster?
## Problem 2 (40 pts)
Consider an ideally conducting sphere $\Omega\subset\mathbb{R}^3$, which is attached to a $1$ V battery.
- (5 pts) Write first kind Fredholm integral equation on $\partial \Omega$
- (20 pts) Using BEM++, write a Python program that solves this integral equation
- (10 pts) Write a function that evaluates the potential from the obtained solution at a given point of $\mathbb{R}^3$
- (5 pts) Plot the dependence of the potential along a half-line from the center of the sphere. Compare with the analytical solution by plotting the obtained solution for different numbers of grid points together with the analytic solution on one plot
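For the final comparison, recall the analytic result: a conducting sphere of radius $R$ held at $V_0 = 1$ V produces $V(r) = V_0 R / r$ for $r \ge R$ (and $V_0$ inside). A minimal helper for the comparison plot:

```python
def sphere_potential(r, R=1.0, V0=1.0):
    """Analytic potential of an ideally conducting sphere of radius R held at V0."""
    return V0 if r <= R else V0 * R / r

print(sphere_potential(1.0))   # 1.0 on the surface
print(sphere_potential(2.0))   # 0.5 at twice the radius
```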
<img src="images/utfsm.png" alt="" width="100px" align="right"/>
# USM Numérica
# License and lab configuration
Run the following cell with *`Ctr-S`*.
```
"""
IPython Notebook v4.0 for Python 3.0
Additional libraries:
Content licensed under CC-BY 4.0. Code licensed under MIT.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
"""
# Configuration to reload modules and libraries dynamically
%reload_ext autoreload
%autoreload 2
# Configuration for inline plots
%matplotlib inline
# Style configuration
from IPython.core.display import HTML
HTML(open("./style/style.css", "r").read())
```
## Introduction to STRING
In Python we can define different types of variables; an object defined as a *string* has a number of properties that allow us to perform the operations introduced in this tutorial.
In particular, the name *string* refers to a sequence of characters. It can be used as a variable to represent another object of a different nature, such as a number, or to build an algorithm that grants access to a file or directory given a password, among other possibilities. Below we present a series of common operations that will give you a basic command of *strings*.
## Objectives
1. Basic formatting operations
2. Search operations
3. Logical format operations
4. Conditional statements
5. Practice exercise
### 1. Basic operations to assign a format
It is often useful to control the format of a *string* through commands: switching between uppercase and lowercase letters, centering the variable or moving it to one side of the screen, and/or giving our sequence a specific format.
For example, we can define a *string* variable representing a short text, and once our *string* is defined, give it a title format with the following command.
```
cadena_1 = "aprender haciendo"
print(cadena_1.title())
```
Another option is to print our assigned variable entirely in uppercase letters.
```
print(cadena_1.upper())
```
Centering a text, or aligning it to the left or right of the screen, are other formatting options available through Python string commands.
```
cadena_2 = "lo maravilloso de programar"
print(cadena_2.center(50,"="))
```
We can redefine a variable with a certain format and then combine it with another formatting option, as follows.
```
cadena_1 = "aprender haciendo".title()
print(cadena_1.rjust(40, " "))
```
Note that the previous command right-justifies the text, and that any symbol can be used as padding, even a space. Its left-justified counterpart is the following.
```
print(cadena_1.ljust(40, "*"))
```
### 2. Search operations on a string
Searching for a character or substring within a string, counting how many times that character repeats, or determining the position at which the substring occurs inside the string is possible with the following commands.
```
cadena_3 = "buscando a wally".title()
cadena_3.find("Wally")
```
The *`find`* command returns the index (counting from 0) at which the substring begins: here "Wally" starts at position 11 of cadena_3. Another option is to search for a substring within a defined range of the variable; when the substring is not found in that range, the command returns -1 by default.
```
cadena_3.find("Wally",0,10)
cadena_3.find("Wally",2,20)
```
We can also count how many times a character or substring occurs within a string with the *`count`* command. As before, the count can be restricted to a given range of the text.
```
cadena_4 = "que fácil de encontrar"
cadena_4.count("e")
cadena_4.count("e",0,13)
```
### 3. Logical format operations
We will see that we can build logical statements about the characteristics of a *string*; this can be very helpful when defining formats for variables that the programmer or user must manipulate.
We begin with statements about the format of a *string*, for example whether it starts or ends with certain characters, using the *`startswith()`* and *`endswith()`* commands respectively; Python answers with True or False.
```
cadena_5 = "construyendo un detector de mentiras"
print(cadena_5.startswith("construyendo un"))
print(cadena_5.endswith("detector"))
```
We can also check whether the *string* contains only numeric, alphabetic or alphanumeric characters, using the *`isdigit()`*, *`isalpha()`* and *`isalnum()`* commands respectively.
```
cadena_6 = "ano"
print(cadena_6.isdigit())
cadena_7 = "1987"
print(cadena_7.isalpha())
cadena_8 = cadena_6 + cadena_7
print(cadena_8)
print(cadena_8.isalnum())
```
We can then test other format characteristics, for example whether the string contains only uppercase or only lowercase characters, or even a specific format such as title case, using the *`isupper()`*, *`islower()`* and *`istitle()`* commands.
```
nombre = "isaac"
print(nombre.isupper())
apellido = "asimov".upper()
print(apellido.islower())
respuesta = apellido.islower()
print(respuesta)
nombre_completo = nombre.title() + apellido.title()
print(nombre_completo)
```
La última característica de formato aplicada desde una operación lógica será si la cadena contiene sólo espacios en blanco, para lo que utilizaremos el comando *`isspace()`* como sigue a continuación.
```
nombre_completo = "isaac asimov".title()
print(nombre_completo.isspace())
```
### 4. Conditional statements
Finally, this tutorial introduces a conditional statement that broadens the range of application of the logical operations built from the basic elements above.
```
contrasena_1 = "abcd1234"
if len(contrasena_1) < 10:
    print("the password contains fewer than 10 characters")
else:
    print("valid password")
contrasena_2 = "abcd1234ab"
respuesta = contrasena_2.islower()
if len(contrasena_2) < 10:
    print("the password contains fewer than 10 characters")
elif respuesta:
    print("the password must contain at least 1 uppercase character")
else:
    print("valid password")
```
### 5. Practice exercise
To apply what you have learned in this tutorial, write a small routine that checks whether the values of the variables *`nombre_usuario`* and *`contrasena`* meet the following conditions:
* The username must be in title case
* The username must contain only alphabetic characters
* A password with fewer than 5 characters is invalid
* A password with fewer than 8 characters is risky, and one with 10 or more characters is secure
* The password must be alphanumeric and must not contain whitespace
* Finally, the password must contain both uppercase and lowercase characters
```
# exercise solution goes here
```
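A possible solution could look like the following sketch; the function name, variable names, and messages are our own choices, and the rules follow the bullet list above:

```python
def validate_credentials(username, password):
    """Return a list of messages describing how the credentials validate."""
    messages = []
    # Username rules: title case, alphabetic only
    if not username.istitle():
        messages.append("username must be in title case")
    if not username.isalpha():
        messages.append("username must contain only alphabetic characters")
    # Password length rules
    if len(password) < 5:
        messages.append("password is invalid: fewer than 5 characters")
    elif len(password) < 8:
        messages.append("password is risky: fewer than 8 characters")
    elif len(password) >= 10:
        messages.append("password is strong")
    # Password content rules
    if not password.isalnum():
        messages.append("password must be alphanumeric with no spaces")
    if password.islower() or password.isupper():
        messages.append("password must mix upper- and lower-case letters")
    return messages

print(validate_credentials("Isaac", "Abcd1234ab"))  # ['password is strong']
```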
### This notebook first trains an N2V denoising network and then trains a 3-class U-Net for segmentation using the denoised images as input.
```
# We import all our dependencies.
import warnings
warnings.filterwarnings('ignore')
import sys
sys.path.append('../../')
from voidseg.models import Seg, SegConfig
from n2v.models import N2VConfig, N2V
import numpy as np
from csbdeep.utils import plot_history
from voidseg.utils.misc_utils import combine_train_test_data, shuffle_train_data, augment_data
from voidseg.utils.seg_utils import *
from n2v.utils.n2v_utils import manipulate_val_data
from voidseg.utils.compute_precision_threshold import compute_threshold, precision
from keras.optimizers import Adam
from matplotlib import pyplot as plt
from scipy import ndimage
import tensorflow as tf
import keras.backend as K
import urllib
import os
import zipfile
from tqdm import tqdm, tqdm_notebook
```
### Download DSB2018 data.
From the Kaggle 2018 Data Science Bowl challenge, we take the same subset of data as has been used [here](https://github.com/mpicbg-csbd/stardist), showing a diverse collection of cell nuclei imaged by various fluorescence microscopes. We extracted 4870 image patches of size 128×128 from the training set and added Gaussian noise with mean 0 and sigma = 10 (n10), 20 (n20) and 40 (n40). This notebook shows results for n40 images.
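The noisy variants (n10, n20, n40) can be illustrated with a small sketch; this is our own reconstruction of the described procedure, not the exact script used to build the dataset:

```python
import numpy as np

def add_gaussian_noise(image, sigma, seed=None):
    """Return a float32 copy of `image` with zero-mean Gaussian noise of std `sigma`."""
    rng = np.random.default_rng(seed)
    return image.astype(np.float32) + rng.normal(0.0, sigma, size=image.shape)

# A flat 128x128 patch makes the noise statistics easy to check
clean = np.full((128, 128), 100.0, dtype=np.float32)
n40 = add_gaussian_noise(clean, sigma=40, seed=0)  # the "n40" condition
print(n40.shape, round(float(n40.std()), 1))
```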
```
# create a folder for our data
if not os.path.isdir('./data'):
os.mkdir('data')
# check if data has been downloaded already
zipPath="data/DSB.zip"
if not os.path.exists(zipPath):
#download and unzip data
data = urllib.request.urlretrieve('https://owncloud.mpi-cbg.de/index.php/s/LIN4L4R9b2gebDX/download', zipPath)
with zipfile.ZipFile(zipPath, 'r') as zip_ref:
zip_ref.extractall("data")
```
The downloaded data is in `npz` format and the cell below extracts the training, validation and test data as numpy arrays
```
trainval_data = np.load('data/DSB/train_data/dsb2018_TrainVal40.npz')
test_data = np.load('data/DSB/test_data/dsb2018_Test40.npz', allow_pickle=True)
train_images = trainval_data['X_train']
val_images = trainval_data['X_val']
test_images = test_data['X_test']
train_masks = trainval_data['Y_train']
val_masks = trainval_data['Y_val']
test_masks = test_data['Y_test']
print("Shape of train_images: ", train_images.shape, ", Shape of train_masks: ", train_masks.shape)
print("Shape of val_images: ", val_images.shape, ", Shape of val_masks: ", val_masks.shape)
print("Shape of test_images: ", test_images.shape, ", Shape of test_masks: ", test_masks.shape)
```
### Data preparation for denoising step
Since we can use all of the noisy data for training the N2V network, we combine the noisy raw `train_images` and `test_images` and use them as input to the N2V network.
```
X, Y = combine_train_test_data(X_train=train_images,Y_train=train_masks,X_test=test_images,Y_test=test_masks)
print("Combined Dataset Shape", X.shape)
X_val = val_images
Y_val = val_masks
```
Next, we shuffle the training pairs and augment the training and validation data.
```
random_seed = 1 # Seed to shuffle training data (annotated GT and raw image pairs)
X, Y = shuffle_train_data(X, Y, random_seed = random_seed)
print("Training Data \n..................")
X, Y = augment_data(X, Y)
print("\n")
print("Validation Data \n..................")
X_val, Y_val = augment_data(X_val, Y_val)
# Adding channel dimension
X = X[..., np.newaxis]
print(X.shape)
X_val = X_val[..., np.newaxis]
print(X_val.shape)
```
Let's look at one of our training and validation patches.
```
sl=0
plt.figure(figsize=(14,7))
plt.subplot(1,2,1)
plt.imshow(X[sl,...,0], cmap='gray')
plt.title('Training Patch');
plt.subplot(1,2,2)
plt.imshow(X_val[sl,...,0], cmap='gray')
plt.title('Validation Patch');
```
### Configure N2V Network
The data preparation for denoising is now done. Next, we configure a denoising N2V network by specifying `N2VConfig` parameters.
```
config = N2VConfig(X, unet_kern_size=3, n_channel_out=1,train_steps_per_epoch=400, train_epochs=200,
train_loss='mse', batch_norm=True,
train_batch_size=128, n2v_perc_pix=0.784, n2v_patch_shape=(64, 64),
unet_n_first = 32,
unet_residual = False,
n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=5, unet_n_depth=4)
# Let's look at the parameters stored in the config-object.
vars(config)
# a name used to identify the model
model_name = 'n40_denoising'
# the base directory in which our model will live
basedir = 'models'
# We are now creating our network model.
model = N2V(config, model_name, basedir=basedir)
model.prepare_for_training(metrics=())
```
Now we train the denoising N2V model. If a previously trained model is available, it is loaded; otherwise a new model is trained.
```
# We are ready to start training now.
query_weightpath = os.getcwd()+"/models/"+model_name
weights_present = False
for file in os.listdir(query_weightpath):
if(file == "weights_best.h5"):
print("Found weights of a trained N2V network, loading it for prediction!")
weights_present = True
break
if(weights_present):
model.load_weights("weights_best.h5")
else:
history = model.train(X, X_val)
```
Here, we predict denoised images, which are subsequently used as input for the segmentation step.
```
pred_train = []
pred_val = []
pred_test = []
for i in tqdm_notebook(range(train_images.shape[0])):
p_ = model.predict(train_images[i].astype(np.float32), 'YX');
pred_train.append(p_)
train_images_denoised = np.array(pred_train)
for i in tqdm_notebook(range(val_images.shape[0])):
p_ = model.predict(val_images[i].astype(np.float32), 'YX');
pred_val.append(p_)
val_images_denoised = np.array(pred_val)
for i in tqdm_notebook(range(test_images.shape[0])):
p_ = model.predict(test_images[i].astype(np.float32), 'YX');
pred_test.append(p_)
test_images_denoised = np.array(pred_test)
print("Shape of denoised train_images: ", train_images_denoised.shape, ", Shape of train_masks: ", train_masks.shape)
print("Shape of denoised val_images: ", val_images_denoised.shape, ", Shape of val_masks: ", val_masks.shape)
print("Shape of denoised test_images: ", test_images_denoised.shape, ", Shape of test_masks: ", test_masks.shape)
```
### Data preparation for segmentation step
Next, we normalize all raw data with the mean and standard deviation of the raw `train_images`. Then, we shuffle the raw training images and the corresponding ground truth (GT). Lastly, we fractionate the training pairs of raw images and corresponding GT to simulate the case where not enough annotated training data is available. For this fractionation, please specify the `fraction` parameter below. It should be between 0 (exclusive) and 100 (inclusive).
```
fraction = 2 # Fraction of annotated GT and raw image pairs to use during training.
random_seed = 1 # Seed to shuffle training data (annotated GT and raw image pairs).
assert 0 < fraction <= 100, "Fraction should be between 0 and 100"
mean, std = np.mean(train_images_denoised), np.std(train_images_denoised)
X_normalized = normalize(train_images_denoised, mean, std)
X_val_normalized = normalize(val_images_denoised, mean, std)
X_test_normalized = normalize(test_images_denoised, mean, std)
X_shuffled, Y_shuffled = shuffle_train_data(X_normalized, train_masks, random_seed = random_seed)
X_frac, Y_frac = fractionate_train_data(X_shuffled, Y_shuffled, fraction = fraction)
print("Training Data \n..................")
X, Y_train_masks = augment_data(X_frac, Y_frac)
print("\n")
print("Validation Data \n..................")
X_val, Y_val_masks = augment_data(X_val_normalized, val_masks)
```
Next, we do a one-hot encoding of training and validation labels for training a 3-class U-Net. One-hot encoding will extract three channels from each labelled image, where the channels correspond to background, foreground and border.
```
X = X[...,np.newaxis]
Y = convert_to_oneHot(Y_train_masks)
X_val = X_val[...,np.newaxis]
Y_val = convert_to_oneHot(Y_val_masks)
print(X.shape, Y.shape)
print(X_val.shape, Y_val.shape)
```
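The channel extraction performed by `convert_to_oneHot` can be illustrated with a minimal sketch of our own (assuming an integer label map where 0 = background, 1 = foreground, 2 = border):

```python
import numpy as np

def to_one_hot(label_map, n_classes=3):
    """Expand an integer label map (H, W) into (H, W, n_classes) binary channels."""
    return np.stack([(label_map == c).astype(np.float32)
                     for c in range(n_classes)], axis=-1)

# 0 = background, 1 = foreground, 2 = border
labels = np.array([[0, 0, 2],
                   [0, 2, 1],
                   [2, 1, 1]])
one_hot = to_one_hot(labels)
print(one_hot.shape)    # (3, 3, 3)
print(one_hot[..., 1])  # foreground channel
```

Each pixel contributes a 1 to exactly one of the three channels, which is what the 3-class U-Net expects as its training target.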
Let's look at one of our validation patches.
```
sl=0
plt.figure(figsize=(20,5))
plt.subplot(1,4,1)
plt.imshow(X_val[sl,...,0])
plt.title('Raw validation image')
plt.subplot(1,4,2)
plt.imshow(Y_val[sl,...,0])
plt.title('1-hot encoded background')
plt.subplot(1,4,3)
plt.imshow(Y_val[sl,...,1])
plt.title('1-hot encoded foreground')
plt.subplot(1,4,4)
plt.imshow(Y_val[sl,...,2])
plt.title('1-hot encoded border')
```
### Configure Segmentation Network
The data preparation for segmentation is now done. Next, we configure a segmentation network by specifying `SegConfig` parameters. For example, one can increase `train_epochs` to get even better results at the expense of a longer computation. (This is usually true for a large `fraction`.)
```
relative_weights = [1.0,1.0,5.0] # Relative weight of background, foreground and border class for training
config = SegConfig(X, unet_kern_size=3, relative_weights = relative_weights,
train_steps_per_epoch=400, train_epochs=3, batch_norm=True,
train_batch_size=128, unet_n_first = 32, unet_n_depth=4)
# Let's look at the parameters stored in the config-object.
# a name used to identify the model
model_name = 'seg_sequential'
# the base directory in which our model will live
basedir = 'models'
# We are now creating our network model.
seg_model = Seg(config, model_name, basedir=basedir)
vars(config)
```
Now, we begin training the model for segmentation.
```
seg_model.train(X, Y, (X_val, Y_val))
```
### Compute the best threshold on validation images (maximizing the Average Precision score)
The threshold obtained here is used to turn predicted probability maps into hard masks on the test images.
```
threshold=seg_model.optimize_thresholds(X_val_normalized.astype(np.float32), val_masks)
```
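Conceptually, `optimize_thresholds` sweeps candidate probability thresholds and keeps the one that scores best on the validation set. A simplified sketch of our own (not the VoidSeg implementation, which scores instance-level average precision rather than plain IoU):

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two binary masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 1.0

def best_threshold(prob_maps, gt_masks, candidates, score_fn):
    """Return the candidate threshold maximizing the mean score over all images."""
    best_t, best_score = None, -np.inf
    for t in candidates:
        scores = [score_fn(p > t, g) for p, g in zip(prob_maps, gt_masks)]
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best_t, best_score = t, mean_score
    return best_t, best_score

# Toy example: confident foreground (0.9) on a square, weak background (0.3)
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True
prob = np.where(gt, 0.9, 0.3)
t, s = best_threshold([prob], [gt], candidates=[0.2, 0.5, 0.7], score_fn=iou)
print(t, s)  # 0.5 1.0
```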
### Prediction on test images to get segmentation result
```
predicted_images, precision_result=seg_model.predict_label_masks(X_test_normalized, test_masks, threshold)
print("Average precision over all test images at IOU = 0.5: ", precision_result)
plt.figure(figsize=(10,10))
plt.subplot(1,2,1)
plt.imshow(predicted_images[22])
plt.title('Prediction')
plt.subplot(1,2,2)
plt.imshow(test_masks[22])
plt.title('Ground Truth')
```
# Mask R-CNN - Train FCN using MRCNN in Predict Mode
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys,os, pprint
pp = pprint.PrettyPrinter(indent=2, width=100)
print('Current working dir: ', os.getcwd())
if '..' not in sys.path:
print("appending '..' to sys.path")
sys.path.append('..')
import numpy as np
import mrcnn.utils as utils
import mrcnn.visualize as visualize
from mrcnn.prep_notebook import build_fcn_training_pipeline, run_fcn_training_pipeline
from mrcnn.visualize import display_training_batch
# from mrcnn.prep_notebook import get_inference_batch, get_image_batch
input_parms = " --epochs 2 --steps_in_epoch 32 --last_epoch 0 "
input_parms +=" --batch_size 1 --lr 0.00001 --val_steps 8 "
input_parms +=" --mrcnn_logs_dir train_mrcnn_coco "
input_parms +=" --fcn_logs_dir train_fcn8_bce "
input_parms +=" --scale_factor 4"
input_parms +=" --mrcnn_model last "
input_parms +=" --fcn_model init "
input_parms +=" --opt adam "
input_parms +=" --fcn_arch fcn8 "
input_parms +=" --fcn_layers all "
input_parms +=" --sysout screen "
## input_parms +=" --coco_classes 62 63 67 78 79 80 81 82 72 73 74 75 76 77"
input_parms +=" --coco_classes 78 79 80 81 82 44 46 47 48 49 50 51 34 35 36 37 38 39 40 41 42 43 10 11 13 14 15 "
input_parms +=" --new_log_folder "
parser = utils.command_line_parser()
args = parser.parse_args(input_parms.split())
utils.display_input_parms(args)
# mrcnn_model, fcn_model = build_traininf_pipeline(fcn_weight_file = WEIGHT_FILE, batch_size = 2)
mrcnn_model, fcn_model = build_fcn_training_pipeline(args = args)
```
## Define training datasets
```
##------------------------------------------------------------------------------------
## Build & Load Training and Validation datasets
##------------------------------------------------------------------------------------
from mrcnn.coco import prep_coco_dataset
# chair/couch/dining tbl/ electronics/ appliances -- train: 34562 val: 1489
# furn_elect_appl = [62, 63, 67, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82]
## appliances / kitchen / sports -- train: 46891 val: 1954
# appl_ktch_sports = [78, 79, 80, 81, 82,44, 46, 47, 48, 49, 50, 51,34, 35, 36, 37, 38, 39, 40, 41, 42, 43]
print('ARG: load coco classes: ', args.coco_classes)
load_class_ids = args.coco_classes
# load_class_ids = args.coco_classes
print('load coco classes: ', load_class_ids)
dataset_train = prep_coco_dataset(["train", "val35k"], mrcnn_model.config, generator = False , shuffle = False, load_coco_classes=load_class_ids)
dataset_val = prep_coco_dataset(["minival"] , mrcnn_model.config, generator = False , shuffle = False, load_coco_classes=load_class_ids)
print(len(dataset_train.image_ids), len(dataset_train.image_info))
print(len(dataset_val.image_ids), len(dataset_val.image_info))
```
#### Display active classes of `dataset`
```
# dataset_train.display_active_classes()
# dataset_val.display_active_classes()
len(dataset_train.image_ids)
# len(dataset_val.image_ids)
a = dataset_train.image_ids.tolist()
print(len(a))
ext_ids = [img_inf['id'] for img_inf in dataset_train.image_info]
print(len(ext_ids))
ext_ids.index(1900)
```
### Print model layer and weight information
```
for layer in fcn_model.keras_model.layers:
print('layer: ', layer.name)
for weight in layer.weights:
print(' mapped_weight_name : ',weight.name)
if hasattr(layer, 'output'):
print(' layer output ', type(layer),' shape: ',layer.output.shape )
fcn_model.keras_model.metrics_names
fcn_model.keras_model.losses
print(fcn_model.keras_model.metrics_names)
# model.keras_model.summary(line_length=132, positions=[0.30,0.75, .83, 1. ])
```
### Display Images from `batch_x`
```
# train_batch_x, train_batch_y = next(train_generator)
# display_training_batch(dataset_train, train_batch_x)
# for i in train_batch_x:
# print(type(i), i.shape)
```
### Load a specific image using image_id
```
### Image ids that cause failure in trainfcn mode.
### 4598: 275557, 646: 1900, 26058:341061,25982: 471900, 18631:577876, 48015: 243914, 38621:124593, 21579:325236, 758:526514
### 28599: 481365, 30741:358189
# NOTE: data_gen_simulate is assumed to be importable from the project's data utilities
train_batch_x, _ = data_gen_simulate(dataset_train, mrcnn_model.config, [30741])
visualize.display_training_batch(dataset_train, train_batch_x)
```
## Call `train_in_batches()`
```
mrcnn_model.config.display()
fcn_model.config.display()
##----------------------------------------------------------------------------------------------
## Train the FCN only
## Passing layers="heads" freezes all layers except the head
## layers. You can also pass a regular expression to select
## which layers to train by name pattern.
##----------------------------------------------------------------------------------------------
train_layers = ['all'] # args.fcn_layers
# loss_names = ['fcn_MSE_loss']
loss_names = ['fcn_BCE_loss']
fcn_model.epoch = fcn_model.config.LAST_EPOCH_RAN
fcn_model.train_in_batches(
mrcnn_model,
dataset_train,
dataset_val,
layers = train_layers,
losses = loss_names,
# learning_rate = fcn_config.LEARNING_RATE,
# epochs = 25, # total number of epochs to run (accross multiple trainings)
# epochs_to_run = fcn_config.EPOCHS_TO_RUN,
# batch_size = fcn_config.BATCH_SIZE, # gets value from self.config.BATCH_SIZE
# steps_per_epoch = fcn_config.STEPS_PER_EPOCH , # gets value form self.config.STEPS_PER_EPOCH
# min_LR = fcn_config.MIN_LR
)
```
```
%load_ext autoreload
%autoreload 2
%aimport utils_1_1
import pandas as pd
import numpy as np
import altair as alt
from altair_saver import save
import datetime
import dateutil.parser
from os.path import join
from constants_1_1 import SITE_FILE_TYPES
from utils_1_1 import (
get_site_file_paths,
get_site_file_info,
get_site_ids,
get_visualization_subtitle,
get_country_color_map,
)
from theme import apply_theme
from web import for_website
alt.data_transformers.disable_max_rows(); # Allow datasets with more than 5000 rows
data_release='2021-04-29'
consistent_date = {
'2020-Mar-Apr': "'20 Mar - '20 Apr",
'2020-May-Jun': "'20 May - '20 Jun",
'2020-Jul-Aug': "'20 Jul - '20 Aug",
'2020-Sep-Oct': "'20 Sep - '20 Oct",
'2020-Nov-2021-Jan': "'20 Nov - '21 Jan"
}
date = ['Mar - Apr', 'May - Jun', 'Jul - Aug', 'Sep - Oct', 'Since Nov']
date = ["'20 Mar - '20 Apr", "'20 May - '20 Jun", "'20 Jul - '20 Aug", "'20 Sep - '20 Oct", "'20 Nov - '21 Jan"]
sites = ['META', 'APHP', 'FRBDX', 'ICSM', 'NWU', 'BIDMC', 'MGB', 'UCLA', 'UMICH', 'UPENN', 'UPITT', 'VA1', 'VA2', 'VA3', 'VA4', 'VA5']
site_colors = ['black', '#D45E00', '#0072B2', '#CB7AA7', '#E79F00', '#029F73', '#DBD03C', '#57B4E9', '#57B4E9', '#57B4E9', '#57B4E9', '#57B4E9']
df = pd.read_csv(join("..", "data", "Phase2.1SurvivalRSummariesPublic", "ToShare", "table.score.mice.toShare.csv"))
print(df.head())
# Rename columns
df = df.drop(columns=["Unnamed: 0"])
df = df.rename(columns={
'siteid': 'site',
'calendar_month': 'month'
})
# More readable values
df.site = df.site.apply(lambda x: x.upper())
print(df.site.unique().tolist())
print(df.month.unique().tolist())
# Drop "combine" sites
df = df[df.site != "COMBINE"]
# df = pd.melt(df, id_vars=['siteid'], value_vars=date, var_name='date', value_name='value')
df.month = df.month.apply(lambda x: consistent_date[x])
# Add a reference (META)
# df['reference'] = df.date.apply(lambda x: df[(df.date == x) & (df.siteid == 'META')].value.sum())
df.head()
def risk(_d, metric='pos'):
d = _d.copy()
"""
DATA PREPROCESSING...
"""
d.loc[d.site == 'combine', 'site'] = 'All Sites'
d.cat = d.cat.apply(lambda x: {
'L':'Low Risk',
'M': 'Medium Risk',
'H': 'High Risk',
'H/M': 'High/Medium',
'L/M': 'Low/Medium'
}[x])
"""
PLOT!
"""
y_title = '% of Patients in Each Category' if metric == 'pos' else '% of Event in Each Category'
colors = ['#7BADD1', '#427BB5', '#14366E'] if metric == 'pos' else ['#A8DED1', '#3FA86F', '#005A24'] if metric == 'ppv' else ['red', 'salmon']
colorDomain = ['High/Medium', 'Low/Medium'] if metric == 'rr' else ['Low Risk', 'Medium Risk', 'High Risk']
width = 300
size = 50
y_scale = alt.Scale(domain=[0, 1]) if metric == 'pos' or metric=='ppv' else alt.Scale()
bar = alt.Chart(
d
).transform_calculate(
order="{'Low Risk':0, 'Medium Risk': 1, 'High Risk': 2}[datum.variable]"
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).encode(
x=alt.X("month:N", title='Month', scale=alt.Scale(domain=['Mar-Apr', 'May-Jun', 'Jul-Aug', 'Sep-Oct', 'Since Nov'])),
y=alt.Y("value:Q", title=y_title, axis=alt.Axis(format='.0%'), scale=y_scale),
color=alt.Color("cat:N", title='Category', scale=alt.Scale(domain=colorDomain, range=colors)),
order="order:O"
).properties(
width=width
)
if metric == 'pos':
bar = bar.mark_bar(
size=size, stroke='black'
)
else:
bar = bar.mark_line(
size=3, point=True, opacity=0.8
)
d['visibility'] = d['value'] > 0.08
text = alt.Chart(
d
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).mark_text(size=16, dx=0, dy=5, color='white', baseline='top', fontWeight=500).encode(
x=alt.X('month:N'),
y=alt.Y('value:Q', stack='zero'),
detail='cat:N',
text=alt.Text('value:Q', format='.0%'),
order="order:O",
opacity=alt.Opacity('visibility:N', scale=alt.Scale(domain=[True, False], range=[1, 0]))
)
# .transform_filter(
# (f'datum.value > 0.10')
# )
if metric == 'pos':
bar = (bar + text)
bar = bar.facet(
column=alt.Column('site:N', header=alt.Header(title=None)),
)
"""
COMBINE
"""
res = bar.properties(
title={
"text": [
f"Distribution of Risk Scores" if metric == 'pos' else f"Event Rate of Risk Scores"
],
"dx": 80,
"subtitle": [
# lab, #.title(),
get_visualization_subtitle(data_release=data_release, with_num_sites=False)
],
"subtitleColor": "gray",
}
)
return res
d = df.copy()
width = 160
height = 180
height2 = 140
size = 28
point=alt.OverlayMarkDef(filled=False, fill='white', strokeWidth=2)
"""
DATA PREPROCESSING...
"""
d.loc[d.site == 'combine', 'site'] = 'All Sites'
d.cat = d.cat.apply(lambda x: {
'L':'Low',
'M': 'Medium',
'H': 'High',
'H/M': 'H/M',
'L/M': 'L/M'
}[x])
"""
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% TOP %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
"""
metric='pos'
y_title = '% of Patients in Each Category'
y_scale = alt.Scale(domain=[0, 1])
colors = ['#7BADD1', '#427BB5', '#14366E']
# colorDomain = ['Low Risk', 'Medium Risk', 'High Risk']
colorDomain = ['Low', 'Medium', 'High']
# colorDomain = ['L', 'M', 'H']
bar = alt.Chart(
d
).transform_calculate(
order="{'L':0, 'M': 1, 'H': 2}[datum.variable]"
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).encode(
x=alt.X("month:N", title=None, axis=alt.Axis(labelAngle=-55), sort=date),
y=alt.Y("value:Q", title=y_title, axis=alt.Axis(format='.0%'), scale=y_scale),
color=alt.Color("cat:N", title='Risk Level', scale=alt.Scale(domain=colorDomain, range=colors)),
order="order:O"
).properties(
width=width,
height=height
)
bar = bar.mark_bar(
size=size, stroke='black'
)
d['visibility'] = d['value'] > 0.08
text = alt.Chart(
d
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).mark_text(size=14, dx=0, dy=5, color='white', baseline='top', fontWeight=500).encode(
x=alt.X("month:N", title=None, axis=alt.Axis(labelAngle=-55, domain=False), sort=date),
y=alt.Y('value:Q', stack='zero'),
detail='cat:N',
text=alt.Text('value:Q', format='.0%'),
order="order:O",
opacity=alt.Opacity('visibility:N', scale=alt.Scale(domain=[True, False], range=[1, 0]), legend=None)
)
if metric == 'pos':
bar = (bar + text)
bar = bar.facet(
column=alt.Column('site:N', header=alt.Header(title=None), sort=sites),
spacing=20
)
"""
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%% Bottom %%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
"""
metric='ppv'
y_title = '% of Event'
colors = ['#A8DED1', '#3FA86F', '#005A24']
colors = ['#00A87E', '#00634B', 'black']
y_scale = alt.Scale(domain=[0, 1])
line = alt.Chart(
d
).transform_calculate(
# order="{'Low Risk':0, 'Medium Risk': 1, 'High Risk': 2}[datum.variable]"
order="{'L':0, 'M': 1, 'H': 2}[datum.variable]"
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).encode(
# x=alt.X("month:N", title=None, axis=alt.Axis(labelAngle=-55), sort=date),
x=alt.X("month:N", title='Month', scale=alt.Scale(domain=date), axis=alt.Axis(grid=False, ticks=False, labels=False, domain=False, title=None, labelAngle=-55)),
y=alt.Y("value:Q", title=y_title, scale=y_scale),
color=alt.Color("cat:N", title='Risk Level', scale=alt.Scale(domain=colorDomain, range=colors)),
order="order:O"
).properties(
width=width,
height=height2
)
line = line.mark_line(
size=3, point=point, opacity=0.8
).facet(
column=alt.Column('site:N', header=alt.Header(title=None), sort=sites),
)
"""
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%% Bottom 2 %%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
"""
metric='rr'
y_title = 'Ratio Between Risk Levels'
# colors = ['red', 'salmon']
colors = ['#D45E00', '#351800']
# colorDomain = ['High/Medium', 'Low/Medium']
colorDomain = ['L/M', 'H/M']
y_scale = alt.Scale(domain=[0, 10], clamp=True)
line2 = alt.Chart(
d
).transform_calculate(
# order="{'High/Medium':0, 'Low/Medium': 1}[datum.variable]"
order="{'H/M':0, 'L/M': 1}[datum.variable]"
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).encode(
# x=alt.X("month:N", title=None, axis=alt.Axis(labelAngle=-55), sort=date),
x=alt.X("month:N", title=None, scale=alt.Scale(domain=date), axis=alt.Axis(grid=False, domain=False, labelAngle=-55)),
# x=alt.X("month:N", title='Month', scale=alt.Scale(domain=date), axis=alt.Axis(grid=True, ticks=False, labels=False, domain=False, title=None)),
y=alt.Y("value:Q", title=y_title, scale=y_scale),
color=alt.Color("cat:N", title='Risk Ratio', scale=alt.Scale(domain=colorDomain, range=colors)),
order="order:O"
).properties(
width=width,
height=height2
)
line2 = line2.mark_line(
size=3, point=point, opacity=0.8
).facet(
column=alt.Column('site:N', header=alt.Header(title=None, labels=False), sort=sites),
)
# Just to show color legend and left axis
# line_not_first = line
# line_not_first = line.encode(
# x=alt.X("month:N", title=None, axis=alt.Axis(labelAngle=-55), sort=date),
# y=alt.Y("value:Q", title=None, axis=alt.Axis(format='.0%', ticks=False, labels=False, domain=False), scale=y_scale),
# color=alt.Color("cat:N", title='Risk Level', scale=alt.Scale(domain=colorDomain, range=colors), legend=None),
# order="order:O"
# )
# line2_first = line2.encode(
# x=alt.X("month:N", title=None, sort=date),
# y=alt.Y("value:Q", title=None, axis=alt.Axis(format='.0%', ticks=False, labels=False, domain=False), scale=y_scale),
# color=alt.Color("cat:N", title='Risk Ratio (Right Y Axis)', scale=alt.Scale(domain=colorDomain, range=colors)),
# order="order:O"
# )
# line2_last = line2.encode(
# x=alt.X("month:N", title=None, sort=date),
# y=alt.Y("value:Q", title=y_title, axis=alt.Axis(format='.0%'), scale=y_scale),
# color=alt.Color("cat:N", title='Risk Ratio', scale=alt.Scale(domain=colorDomain, range=colors), legend=None),
# order="order:O"
# )
# line = alt.concat(*(
# alt.layer(line, line2_first, title={
# "text": site,
# "fontSize": 16,
# "dx": 130}).transform_filter(alt.datum.site == site).resolve_scale(y='independent', color='independent') if site == 'META' else
# alt.layer(line_not_first, line2_last, title={
# "text": site,
# "fontSize": 16,
# "dx": 85}).transform_filter(alt.datum.site == site).resolve_scale(y='independent', color='independent') if site == sites[-1] else
# alt.layer(line_not_first, line2, title={
# "text": site,
# "fontSize": 16,
# "dx": 85}).transform_filter(alt.datum.site == site).resolve_scale(y='independent', color='independent')
# for site in sites
# ), spacing=3).resolve_scale(color='shared')
print(d.site.unique())
"""
COMBINE
"""
top = bar.properties(
title={
"text": [
f"Distribution Of Patients By Risk Level"
],
"fontWeight": "normal",
"dx": 170,
"subtitle": [
get_visualization_subtitle(data_release=data_release, with_num_sites=False)
],
"subtitleColor": "gray",
}
)
bot = line.properties(
title={
"text": [
f"Event Rate Of Risk Scores"
],
"dx": 150,
# "subtitle": [
# get_visualization_subtitle(data_release=data_release, with_num_sites=False)
# ],
"subtitleColor": "gray",
}
)
# top.display()
# bot.display()
# line2.display()
res = alt.vconcat(
top,
bot,
line2,
spacing=10
).resolve_scale(y='independent', color='independent')
"""
STYLE
"""
res = apply_theme(
res,
axis_y_title_font_size=16,
title_anchor='start',
legend_orient='left',
header_label_orient='top',
# legend_title_orient='left',
axis_label_font_size=14,
header_label_font_size=16,
point_size=90
)
res
d = df.copy()
d = d[d.site == 'META']
width = 200
height = 200
size = 30
point=alt.OverlayMarkDef(filled=False, fill='white', strokeWidth=2)
"""
DATA PREPROCESSING...
"""
d.loc[d.site == 'combine', 'site'] = 'All Sites'
d.cat = d.cat.apply(lambda x: {
'L':'Low',
'M': 'Medium',
'H': 'High',
'H/M': 'H/M',
'L/M': 'L/M'
}[x])
"""
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% TOP %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
"""
metric='pos'
y_title = '% of Patients in Each Category'
y_scale = alt.Scale(domain=[0, 1])
colors = ['#7BADD1', '#427BB5', '#14366E']
# colorDomain = ['Low Risk', 'Medium Risk', 'High Risk']
colorDomain = ['Low', 'Medium', 'High']
# colorDomain = ['L', 'M', 'H']
bar = alt.Chart(
d
).transform_calculate(
order="{'L':0, 'M': 1, 'H': 2}[datum.variable]"
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).encode(
x=alt.X("month:N", title=None, axis=alt.Axis(labelAngle=-55), sort=date),
y=alt.Y("value:Q", title=y_title, axis=alt.Axis(format='.0%'), scale=y_scale),
color=alt.Color("cat:N", title='Risk Level', scale=alt.Scale(domain=colorDomain, range=colors)),
order="order:O"
).properties(
width=width,
height=height
)
bar = bar.mark_bar(
size=size, stroke='black'
)
d['visibility'] = d['value'] > 0.08
text = alt.Chart(
d
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).mark_text(size=16, dx=0, dy=5, color='white', baseline='top', fontWeight=500).encode(
x=alt.X("month:N", title=None, axis=alt.Axis(labelAngle=-55), sort=date),
y=alt.Y('value:Q', stack='zero'),
detail='cat:N',
text=alt.Text('value:Q', format='.0%'),
order="order:O",
opacity=alt.Opacity('visibility:N', scale=alt.Scale(domain=[True, False], range=[1, 0]), legend=None)
)
if metric == 'pos':
bar = (bar + text)
# bar = bar.facet(
# column=alt.Column('site:N', header=alt.Header(title=None), sort=sites),
# spacing=20
# )
"""
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%% Bottom %%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
"""
metric='ppv'
y_title = '% of Event'
colors = ['#A8DED1', '#3FA86F', '#005A24']
colors = ['#00A87E', '#00634B', 'black']
y_scale = alt.Scale(domain=[0, 1])
line = alt.Chart(
d
).transform_calculate(
# order="{'Low Risk':0, 'Medium Risk': 1, 'High Risk': 2}[datum.variable]"
order="{'L':0, 'M': 1, 'H': 2}[datum.variable]"
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).encode(
# x=alt.X("month:N", title=None, axis=alt.Axis(labelAngle=-55), sort=date),
x=alt.X("month:N", title=None, scale=alt.Scale(domain=date), axis=alt.Axis(labelAngle=-55)),
y=alt.Y("value:Q", title=y_title, scale=y_scale),
color=alt.Color("cat:N", title='Risk Level', scale=alt.Scale(domain=colorDomain, range=colors)),
order="order:O"
).properties(
width=width,
height=height
)
line = line.mark_line(
size=3, point=point, opacity=0.8
)
"""
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%% Bottom 2 %%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
"""
metric='rr'
y_title = 'Ratio Between Risk Levels'
# colors = ['red', 'salmon']
colors = ['#D45E00', '#351800']
# colorDomain = ['High/Medium', 'Low/Medium']
colorDomain = ['L/M', 'H/M']
y_scale = alt.Scale(domain=[0, 10], clamp=True)
line2 = alt.Chart(
d
).transform_calculate(
# order="{'High/Medium':0, 'Low/Medium': 1}[datum.variable]"
order="{'H/M':0, 'L/M': 1}[datum.variable]"
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).encode(
# x=alt.X("month:N", title=None, axis=alt.Axis(labelAngle=-55), sort=date),
x=alt.X("month:N", title=None, scale=alt.Scale(domain=date), axis=alt.Axis(grid=False,labelAngle=-55)),
# x=alt.X("month:N", title='Month', scale=alt.Scale(domain=date), axis=alt.Axis(grid=True, ticks=False, labels=False, domain=False, title=None)),
y=alt.Y("value:Q", title=y_title, scale=y_scale),
color=alt.Color("cat:N", title='Risk Ratio', scale=alt.Scale(domain=colorDomain, range=colors)),
order="order:O"
).properties(
width=width,
height=height
)
line2 = line2.mark_line(
size=3, point=point, opacity=0.8
)
print(d.site.unique())
"""
COMBINE
"""
res = alt.hconcat(
bar,
line,
line2,
spacing=30
).resolve_scale(y='independent', color='independent')
res = res.properties(
title={
"text": [
f"Meta-Analysis Of Risk Scores"
],
"dx": 70,
# "subtitle": [
# get_visualization_subtitle(data_release=data_release, with_num_sites=False)
# ],
"subtitleColor": "gray",
}
)
"""
STYLE
"""
res = apply_theme(
res,
axis_y_title_font_size=16,
title_anchor='start',
legend_orient='top',
# legend_title_orient='left',
axis_label_font_size=14,
header_label_font_size=16,
point_size=90,
)
res
```
# Deprecated
```
pos = risk(df, metric='pos')
ppv = risk(df, metric='ppv')
res = alt.vconcat(pos, ppv, spacing=30).resolve_scale(color='independent', x='independent')
res = apply_theme(
res,
axis_y_title_font_size=16,
title_anchor='start',
legend_orient='right',
header_label_font_size=16
)
res.display()
d = df.copy()
width = 280
height = 200
height2 = 140
size = 50
"""
DATA PREPROCESSING...
"""
d.loc[d.site == 'combine', 'site'] = 'All Sites'
d.cat = d.cat.apply(lambda x: {
'L':'Low',
'M': 'Medium',
'H': 'High',
'H/M': 'H/M',
'L/M': 'L/M'
}[x])
# d.cat = d.cat.apply(lambda x: {
# 'L':'Low Risk',
# 'M': 'Medium Risk',
# 'H': 'High Risk',
# 'H/M': 'High/Medium',
# 'L/M': 'Low/Medium'
# }[x])
"""
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% TOP %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
"""
metric='pos'
y_title = '% of Patients in Each Category'
y_scale = alt.Scale(domain=[0, 1])
colors = ['#7BADD1', '#427BB5', '#14366E']
# colorDomain = ['Low Risk', 'Medium Risk', 'High Risk']
colorDomain = ['Low', 'Medium', 'High']
# colorDomain = ['L', 'M', 'H']
bar = alt.Chart(
d
).transform_calculate(
# order="{'Low Risk':0, 'Medium Risk': 1, 'High Risk': 2}[datum.variable]"
order="{'L':0, 'M': 1, 'H': 2}[datum.variable]"
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).encode(
x=alt.X("month:N", title='Month', scale=alt.Scale(domain=['Mar-Apr', 'May-Jun', 'Jul-Aug', 'Sep-Oct', 'Since Nov']), axis=alt.Axis(grid=True)),
y=alt.Y("value:Q", title=y_title, axis=alt.Axis(format='.0%'), scale=y_scale),
color=alt.Color("cat:N", title='Risk', scale=alt.Scale(domain=colorDomain, range=colors)),
order="order:O"
).properties(
width=width,
height=height
)
bar = bar.mark_bar(
size=size, stroke='black'
)
d['visibility'] = d['value'] > 0.08
text = alt.Chart(
d
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).mark_text(size=16, dx=0, dy=5, color='white', baseline='top', fontWeight=500).encode(
x=alt.X('month:N'),
y=alt.Y('value:Q', stack='zero'),
detail='cat:N',
text=alt.Text('value:Q', format='.0%'),
order="order:O",
opacity=alt.Opacity('visibility:N', scale=alt.Scale(domain=[True, False], range=[1, 0]), legend=None)
)
if metric == 'pos':
bar = (bar + text)
bar = bar.facet(
column=alt.Column('site:N', header=alt.Header(title=None)),
)
"""
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%% Bottom %%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
"""
metric='ppv'
y_title = '% of Risk Event'
# colors = ['#A8DED1', '#3FA86F', '#005A24']
colors = ['#00A87E', '#00634B', 'black']
y_scale = alt.Scale(domain=[0, 0.6])
line = alt.Chart(
d
).transform_calculate(
# order="{'Low Risk':0, 'Medium Risk': 1, 'High Risk': 2}[datum.variable]"
order="{'L':0, 'M': 1, 'H': 2}[datum.variable]"
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).encode(
x=alt.X("month:N", title='Month', scale=alt.Scale(domain=['Mar-Apr', 'May-Jun', 'Jul-Aug', 'Sep-Oct', 'Since Nov']), axis=alt.Axis(grid=True, ticks=False, labels=False, domain=False, title=None)),
y=alt.Y("value:Q", title=y_title, axis=alt.Axis(format='.0%'), scale=y_scale),
color=alt.Color("cat:N", title='Risk', scale=alt.Scale(domain=colorDomain, range=colors)),
order="order:O"
).properties(
width=width,
height=height2
)
line = line.mark_line(
size=3, point=True, opacity=0.8
).facet(
column=alt.Column('site:N', header=alt.Header(title=None)),
)
"""
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%% Bottom 2 %%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
"""
metric='rr'
y_title = 'Ratio Between Risks'
colors = ['#D45E00', '#351800']
# colorDomain = ['High/Medium', 'Low/Medium']
colorDomain = ['L/M', 'H/M']
y_scale = alt.Scale(domain=[0, 4.2])
line2 = alt.Chart(
d
).transform_calculate(
# order="{'High/Medium':0, 'Low/Medium': 1}[datum.variable]"
order="{'H/M':0, 'L/M': 1}[datum.variable]"
).transform_filter(
{'field': 'metric', 'oneOf': [metric]}
).encode(
x=alt.X("month:N", title='Month', scale=alt.Scale(domain=['Mar-Apr', 'May-Jun', 'Jul-Aug', 'Sep-Oct', 'Since Nov']), axis=alt.Axis(grid=True)),
y=alt.Y("value:Q", title=y_title, axis=alt.Axis(format='.0%'), scale=y_scale),
color=alt.Color("cat:N", title='Risk Ratio', scale=alt.Scale(domain=colorDomain, range=colors)),
order="order:O",
# shape="site:N"
).properties(
width=width,
height=height2
)
line2 = line2.mark_line(
size=3, opacity=0.8, point=True
).facet(
column=alt.Column('site:N', header=alt.Header(title=None, labels=False)),
)
"""
COMBINE
"""
top = bar.properties(
title={
"text": [
f"Distribution of Risk Scores"
],
"dx": 180,
"subtitle": [
get_visualization_subtitle(data_release=data_release, with_num_sites=False)
],
"subtitleColor": "gray",
}
)
line = line.properties(
title={
"text": [
f"Event Rate of Risk Scores"
],
"dx": 180,
"subtitle": [
get_visualization_subtitle(data_release=data_release, with_num_sites=False)
],
"subtitleColor": "gray",
}
)
# line2 = line2.properties(
# title={
# "text": [
# f"Risk Ratio"
# ],
# "dx": 180,
# "subtitle": [
# get_visualization_subtitle(data_release=data_release, with_num_sites=False)
# ],
# "subtitleColor": "gray",
# }
# )
res = alt.vconcat(top, line, line2, spacing=10).resolve_scale(color='independent')
"""
STYLE
"""
res = apply_theme(
res,
axis_y_title_font_size=14,
axis_title_font_size=14,
axis_label_font_size=12,
title_anchor='start',
legend_orient='left',
header_label_font_size=16
)
res
```
```
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## OLS for Linear Regression
$\mathbf{w}=(A^TA)^{-1}(A^T\mathbf{y})$
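Before applying the normal equation to the dataset, it can be sanity-checked on synthetic data by comparing it against numpy's built-in least-squares solver (all numbers below are made up for illustration):

```python
import numpy as np

# Illustrative synthetic data (not the food-truck data)
rng = np.random.default_rng(0)
n = 100
x1 = rng.uniform(0, 10, n)
A = np.column_stack([np.ones(n), x1])       # design matrix with a constant column
y = 3.0 + 2.0 * x1 + rng.normal(0, 0.1, n)  # true weights: intercept 3, slope 2

# Closed-form normal-equation solution
w_closed = np.linalg.inv(A.T @ A) @ (A.T @ y)

# numpy's built-in least-squares solver should agree
w_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(w_closed, w_lstsq)
```

Both solutions coincide up to floating-point error; `np.linalg.lstsq` is preferred in practice because it avoids explicitly inverting $A^TA$.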
Load the file food_trucks.txt. It contains two columns of values: the number of residents in a city and the revenue of a street-food truck in that city.
```
df = pd.read_csv("food_trucks.txt",header=None)
df.head()
plt.scatter(df[0], df[1])
y = np.array(df[1])
x1 = np.array(df[0])
x0 = np.ones(x1.shape)
print(len(y))
print(len(x1))
print(len(x0))
A = np.matrix([x0, x1]).transpose()
y = np.expand_dims(y, axis=1)
print(A.shape)
print(y.shape)
w = np.linalg.inv(A.transpose()*A)*(A.transpose()*y)
yhat = w[0]*x0 + w[1]*x1
plt.scatter(x1, np.array(yhat))
plt.scatter(x1, y, c='red')
w
```
${\displaystyle R^{2}}$ (the coefficient of determination) is the fraction of variance explained by the model.
```
from sklearn.metrics import r2_score
r2_score(np.squeeze(np.array(y)), np.squeeze(np.array(yhat)))
```
## OLS Assumptions
1. The true model of $y$ is indeed linear
2. The sample is random
3. $A$ has full rank
4. The errors are random
5. The errors are homoscedastic (constant variance)
6. The errors are normally distributed
OLS estimates are unbiased, consistent, and the most efficient estimators in the class of all linear unbiased estimators.
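Unbiasedness can be illustrated with a small Monte-Carlo sketch (the model and all numbers below are made up for illustration): averaging the OLS estimates over many resampled datasets recovers the true weights.

```python
import numpy as np

# Monte-Carlo illustration of OLS unbiasedness (illustrative numbers)
rng = np.random.default_rng(1)
w_true = np.array([1.5, -0.7])
n, trials = 200, 500
estimates = []
for _ in range(trials):
    A = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
    y = A @ w_true + rng.normal(0, 0.5, n)  # homoscedastic normal errors
    estimates.append(np.linalg.lstsq(A, y, rcond=None)[0])
print(np.mean(estimates, axis=0))  # close to w_true
```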
## Optimization Methods
As shown in the lecture, most machine learning methods reduce to finding the parameters that minimize the error on the training set:
$$
\min_{w} L(w; D)
$$
Here:
* $D$ is the labeled training set, $\{x_i, y_i\}_{i=1}^N$
* $L$ is the loss function
* $w$ is the algorithm's trainable weights
In more general form, the problem can be written as:
$$
\min_{x} f(x)
$$
Here:
* $x$ is a vector of values
* $f$ is a function that takes a vector as an argument and returns a scalar.
In this seminar we take a closer look at the function-minimization methods covered in the lecture.
## Gradient Descent
For optimization, let's take the simple function $f(x) = x^3 - 2x^2 + 2$
```
f = lambda x: x ** 3 - 2*x ** 2 + 2
df = lambda x: 3 * x ** 2 - 4 * x  # derivative
x = np.linspace(-1, 2.5, 1000)
plt.plot(x, f(x))
plt.xlim([-1, 2.5])
plt.ylim([0, 3])
plt.show()
```
And let's define a function that will optimize $f(x)$ by gradient descent with a given constant step size (also known as the learning rate).
```
def optimize_and_plot_steps(learning_rate, x_new=2, compute_learning_rate=None):
    x_old = 0
    # x_new is the starting point
    eps = 0.0001
    x_list, y_list = [x_new], [f(x_new)]  # track the coordinates and function values at each iteration
    # descend until the change in x falls below the required precision
    i = 0
    while abs(x_new - x_old) > eps:
        x_old = x_new
        # compute the descent direction
        direction = -df(x_old)
        # update the learning rate if a schedule function was supplied
        if compute_learning_rate is not None:
            learning_rate = compute_learning_rate(i, learning_rate)
        # take a step
        x_new = x_old + learning_rate * direction
        # record this minimization step
        x_list.append(x_new)
        y_list.append(f(x_new))
        i += 1
    print("Found local min:", x_new)
    print("Steps number:", len(x_list))
    plt.figure(figsize=[10, 3])
    plt.subplot(1, 2, 1)
    plt.scatter(x_list, y_list, c="r")
    plt.plot(x_list, y_list, c="r")
    plt.plot(x, f(x), c="b")
    plt.xlim([-1, 2.5])
    plt.ylim([0, 3])
    plt.title("Descent trajectory")
    plt.subplot(1, 2, 2)
    plt.scatter(x_list, y_list, c="r")
    plt.plot(x_list, y_list, c="r")
    plt.plot(x, f(x), c="b")
    plt.xlim([1.2, 2.1])
    plt.ylim([0, 3])
    plt.title("Descent trajectory (zoomed in)")
    plt.show()
```
Let's try the optimization with step size 0.1.
```
optimize_and_plot_steps(0.1, x_new=1)
```
Now let's take a bigger step.
```
optimize_and_plot_steps(0.4)
```
What if we take 0.5?
```
optimize_and_plot_steps(0.5)
```
We stalled at zero because we landed exactly on the local maximum: the derivative there is zero, so we cannot move anywhere. What if we take 0.49?
```
optimize_and_plot_steps(0.49)
```
What if we take 0.51?
```
optimize_and_plot_steps(0.51)
```
We flew off far to the left. You can see this by printing the values of x_new.
Now let's take a small step, for example 0.05.
```
optimize_and_plot_steps(0.05)
```
0.01?
```
optimize_and_plot_steps(0.01)
```
The smaller the step, the slower we move toward the minimum (and we can get stuck along the way). The larger the learning rate, the longer the distances we jump over (and the better our hypothetical chance of finding a better minimum). A good strategy is to start with a fairly large step (to explore the function well) and then gradually decrease it, so that training stabilizes in some local minimum.
Now let's change the step size dynamically:
$lr(i + 1) = lr(i) \cdot 0.9$.
```
def compute_learning_rate(i, prev_lr):
    return prev_lr * 0.9

optimize_and_plot_steps(0.45, compute_learning_rate=compute_learning_rate)
```
Compared with a constant learning rate, we found the minimum about twice as fast.
```
optimize_and_plot_steps(0.45)
```
This is, of course, an artificial example, but the same idea is used to train machine learning algorithms with millions of parameters, whose loss functions have a very complex structure and do not lend themselves to visualization.
## Fitting Linear Regression with Gradient Descent
Now let's look at real data and try to use gradient descent to solve the linear regression problem.
```
df = pd.read_csv("food_trucks.txt", header=None)
df.head()
plt.scatter(df[0], df[1])
```
Visualize the data: the city population on the X axis and the truck's revenue on the Y axis.
```
y = np.array(df[1])
x1 = np.array(df[0])
x0 = np.ones(x1.shape)
print(len(y))
print(len(x1))
print(len(x0))
```
Recall the loss function of linear regression:
$$
L(w) = \frac{1}{2m} \sum_{i=1}^m (h(x^i, w) - y^i)^2
$$
Here $h(x, w) = w^Tx = w_0 + w_1 x_1$ (we assume that $x_0=1$ is an extra feature added for convenience).
$(x^i, y^i)$ is the $i$-th sample.
Then the weight update rule looks as follows:
$$
w_j = w_j - lr \cdot \frac{1}{m}\sum_{i=1}^m(h(x^i, w) - y^i) x^i_j.
$$
Here $x^i_j$ is the $j$-th component of the $i$-th sample.
Define the loss function and its derivative. Both functions take a single argument: the weight vector $w$.
```
# w is the parameter vector
X = np.vstack([x0, x1]).T
Y = y

def lossfunc(w):
    X = np.vstack([x0, x1]).T
    Y = y
    return ((1 / (2 * len(X))) * (X.dot(w) - Y) ** 2).sum()

w_init = np.array([1, 1])
lossfunc(w_init)

def dlossfunc(w):
    X = np.vstack([x0, x1]).T
    Y = y
    return ((1 / len(X)) * np.tile((X.dot(w) - Y), (2, 1)).T * X).sum(axis=0)
```
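A finite-difference check is a common way to validate a gradient implementation like `dlossfunc` against its loss. The sketch below is self-contained with stand-in data (it is an illustration, not part of the assignment):

```python
import numpy as np

# Stand-in data matching the shapes used above (illustrative values)
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(20), rng.uniform(0, 10, 20)])
Y = rng.uniform(0, 10, 20)

loss = lambda w: ((X @ w - Y) ** 2).sum() / (2 * len(X))
grad = lambda w: X.T @ (X @ w - Y) / len(X)

# Central finite differences, one coordinate at a time
w = np.array([1.0, 1.0])
eps = 1e-6
num = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                for e in np.eye(2)])
print(np.abs(num - grad(w)).max())  # should be tiny (well below 1e-6)
```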
Write a function that minimizes $L(w)$ by gradient descent, analogous to optimize_and_plot_steps. It takes the training parameters as input (the learning rate and the initial weight values), iteratively optimizes the loss function, prints the iterations, and visualizes the decrease of the loss and the solution found. Run the function with a constant learning rate and comment on the results.
```
def graddesc(w_init, lr, n):
    # lr: the learning rate (step size)
    # n: the number of iterations
    w = w_init
    l_list = []
    i_list = []
    for i in range(n):
        l = lossfunc(w)
        dl = dlossfunc(w)
        l_list.append(l)
        i_list.append(i)
        w = w - lr * dl
    plt.plot(i_list, l_list)
    return w

w_init = np.array([1, 1])
w = graddesc(w_init, 0.01, 10000)
print(w)
# Expected result, matching the closed-form OLS solution above:
# [-3.89578088  1.19303364]
yhat = w[0] * x0 + w[1] * x1
plt.scatter(x1, yhat)
plt.scatter(x1, y, c='red')
```
Modify the minimization function so that the learning rate can change dynamically, analogously to the example above. Run the function and comment on the results.
## Softmax
A generalization of the logistic function to the multidimensional case. The function transforms a vector $z$ of dimension $K$ into a vector $\sigma$ of the same dimension, where each coordinate $\sigma_i$ of the resulting vector is a real number in the interval $[0,1]$ and the coordinates sum to 1.
The coordinates $\sigma_i$ are computed as follows:
${\displaystyle \sigma (z)_{i}={\frac {e^{z_{i}}}{\displaystyle \sum _{k\mathop {=} 1}^{K}e^{z_{k}}}}}$
1. Implement a function softmax that takes a vector $z$ as input and computes its softmax.
2. Add support for a matrix input, computing the softmax over each column (a batch).
```
# correct solution:
def softmax(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)

softmax(np.array([1, 2, 3]))
```
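For the batched case in task 2, a common refinement (an addition beyond the solution above, not part of the original notebook) is to subtract the per-column maximum. Softmax is invariant to shifting each column by a constant, and this prevents overflow even for very large scores:

```python
import numpy as np

def softmax_batch(x):
    # Subtract the per-column max: softmax is invariant to shifting each
    # column by a constant, and this prevents overflow for large inputs.
    e_x = np.exp(x - np.max(x, axis=0, keepdims=True))
    return e_x / e_x.sum(axis=0, keepdims=True)

batch = np.array([[1.0, 1000.0],
                  [2.0, 1001.0],
                  [3.0, 1002.0]])
out = softmax_batch(batch)
print(out)  # each column sums to 1; the huge second column does not overflow
```

Since both columns have the same pairwise differences, their softmax values are identical, which is a handy spot-check.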
[](https://www.repostatus.org/#active)
[](https://badge.fury.io/py/geoshapes)
[](https://mybinder.org/v2/gh/abiraihan/geoshapes/f3bd257614ae6a1a0badcb592f70ce36b6cb50b2?urlpath=lab%2Ftree%2Fexample%2FsplitShape.ipynb) <--| You can run this notebook at mybinder.org without installing any Python dependencies
## splitShape
```
import shapely, geopandas
from geoshapes import splitShape
%matplotlib inline
pointLocation = shapely.geometry.Point(0.0,0.0)
```
### splitCircle
See **help(splitShape.splitCircle)** for more details about the parameters
1. splitCircle requires a shapely Point feature (geoms) to create the split geometry at that location.
2. splitCircle requires a radius length (circleRadius) to create the circle feature around the given location.
```
#help(splitShape.splitCircle)
circle = splitShape.splitCircle(
geoms = pointLocation,
circleRadius = 50,
incrementDegree = 5,
clipInterior = True,
innerWidth = 30,
getGeom = 'Both')
gdf = geopandas.GeoDataFrame(geometry = circle)
gdf.plot(figsize = (5, 5), cmap = 'tab20')
gdf = geopandas.GeoDataFrame(geometry = circle[::4])
gdf.plot(figsize = (5, 5), cmap = 'tab20')
radius = 3000
innerClip = 100
stepWise = 500
mergedData = geopandas.GeoDataFrame()
for i in range (innerClip, radius, stepWise):
circle = splitShape.splitCircle(
geoms = pointLocation,
circleRadius = i+stepWise,
incrementDegree = 20,
clipInterior = True,
innerWidth = i,
getGeom = 'Both')
circleGeo = geopandas.GeoDataFrame(geometry = circle[::None])
mergedData = mergedData.append(circleGeo)
mergedData.reset_index(drop = True, inplace = True)
mergedData.plot(
figsize = (7,7), alpha = 0.6,
cmap = 'tab20', edgecolor = 'k',
linewidth = 2)
def getTreatment(radius, innerClip, stepWise, skip = 4):
mergedData = geopandas.GeoDataFrame()
for i in range (innerClip, radius, stepWise):
circle = splitShape.splitCircle(
geoms = pointLocation,
circleRadius = i+stepWise,
incrementDegree = 20,
clipInterior = True,
innerWidth = i,
getGeom = 'Both')
circleGeo = geopandas.GeoDataFrame(geometry = circle[::int(skip)])
mergedData = mergedData.append(circleGeo)
mergedData.reset_index(drop = True, inplace = True)
mergedData['ids'] = range(len(mergedData))
ax = mergedData.plot(
figsize = (7,7), alpha = 0.6,
cmap = 'tab20', edgecolor = 'k',
linewidth = 2)
mergedData.apply(
lambda x: ax.annotate(
text=f"{x.ids}",
xy=x.geometry.centroid.coords[0],
ha='center',
va='center',
size=10),axis=1
)
return mergedData
plots = getTreatment(3000, 200, 800, 4)
```
### splitLatin
See **help(splitShape.splitLatin)** for more details about the parameters
1. Splitting a treatment plot requires a point location
2. Splitting a treatment plot requires a side length for the Latin-design experimental treatment
```
#help(splitShape.splitLatin)
latin = splitShape.splitLatin(
geoms = pointLocation,
bufferLength = 50)
latinGeo = geopandas.GeoDataFrame(geometry = latin)
latinGeo.plot(cmap = 'tab20', figsize = (7,7))
def getPlot(sideLength, maxVal, step, skip = None):
multiPoint = [shapely.geometry.Point(float(i), float(i+50)) for i in range(0, maxVal, step)]
mergedLatin = geopandas.GeoDataFrame()
for i in multiPoint:
latin = splitShape.splitLatin(
geoms = i,
bufferLength = sideLength)
latinGeo = geopandas.GeoDataFrame(geometry = latin[::skip])
mergedLatin = mergedLatin.append(latinGeo)
mergedLatin.reset_index(drop = True, inplace = True)
mergedLatin['ids'] = range(len(mergedLatin))
ax = mergedLatin.plot(figsize = (7,7), alpha = 0.6, cmap = 'tab20', edgecolor = 'k', linewidth = 2)
mergedLatin.apply(
lambda x: ax.annotate(
text=f"{x.ids}",
xy=x.geometry.centroid.coords[0],
ha='center',
va='center',
size=10),axis=1
)
return mergedLatin
latinData = getPlot(100, 1000, 200)
latinData1 = getPlot(100, 1000, 200, 2)
```
### splitCircleSquare
See **help(splitShape.splitCircleSquare)** for more details about the parameters
1. splitCircleSquare requires a shapely Point location to place the treatment plot
2. splitCircleSquare requires the radius length in feet of the circle to create
3. splitCircleSquare requires a rotation angle for the square inside the circle for treatment placement; it is optional and defaults to 45
```
#help(splitShape.splitCircleSquare)
splitFeature = splitShape.splitCircleSquare(
geoms = pointLocation,
circleRadius = 100,
rotation = 45
)
gdfFeature = geopandas.GeoDataFrame(geometry = splitFeature)
gdfFeature.plot(cmap = 'tab20', figsize = (7,7))
def getCircleSquarePlot(sideLength, rotations, maxVal, step, skip = None):
multiPoint = [shapely.geometry.Point(float(i), float(i+50)) for i in range(0, maxVal, step)]
mergedLatin = geopandas.GeoDataFrame()
for i in multiPoint:
latin = splitShape.splitCircleSquare(
geoms = i,
circleRadius = sideLength,
rotation = rotations)
latinGeo = geopandas.GeoDataFrame(geometry = latin[::skip])
mergedLatin = mergedLatin.append(latinGeo)
mergedLatin.reset_index(drop = True, inplace = True)
mergedLatin['ids'] = range(len(mergedLatin))
ax = mergedLatin.plot(figsize = (7,7), alpha = 0.6, cmap = 'tab20', edgecolor = 'k', linewidth = 2)
mergedLatin.apply(
lambda x: ax.annotate(
text=f"{x.ids}",
xy=x.geometry.centroid.coords[0],
ha='center',
va='center',
size=10),axis=1
)
return mergedLatin
circleSquare = getCircleSquarePlot(200, 90, 500, 100)
```
### splitSquare
See **help(splitShape.splitSquare)** for more details about the parameters
1. splitSquare requires a shapely Point location to place the treatment plot
2. splitSquare requires the side length in feet of the square to create
3. splitSquare requires a rotation angle for the square for treatment placement; it is optional and defaults to 45
4. splitSquare takes a flag to include or exclude the interior square polygon; the default is True, which includes the interior geometry
```
#help(splitShape.splitSquare)
splitgeometry = splitShape.splitSquare(pointLocation, 50, 90)
squareGeoms = geopandas.GeoDataFrame(geometry = splitgeometry)
squareGeoms.plot(figsize = (7,7), cmap = 'tab20')
def getSquarePlot(sideLength, rotations, maxVal, step, include = True, skip = None):
multiPoint = [shapely.geometry.Point(float(i), float(i+50)) for i in range(0, maxVal, step)]
mergedLatin = geopandas.GeoDataFrame()
for i in multiPoint:
latin = splitShape.splitSquare(
geoms = i,
sideLength = sideLength,
rotation = rotations,
includeInterior = include)
latinGeo = geopandas.GeoDataFrame(geometry = latin[::skip])
mergedLatin = mergedLatin.append(latinGeo)
mergedLatin.reset_index(drop = True, inplace = True)
mergedLatin['ids'] = range(len(mergedLatin))
ax = mergedLatin.plot(figsize = (7,7), alpha = 0.6, cmap = 'tab20', edgecolor = 'k', linewidth = 2)
mergedLatin.apply(
lambda x: ax.annotate(
text=f"{x.ids}",
xy=x.geometry.centroid.coords[0],
ha='center',
va='center',
size=10),axis=1
)
return mergedLatin
squarePlot = getSquarePlot(320, 90, 500, 100)
squarePlot1 = getSquarePlot(320, 45, 500, 100, include = False, skip = None)
squarePlot1 = getSquarePlot(320, 135, 500, 100, include = False, skip = 3)
```
### splitGeom
See **help(splitShape.splitGeom)** for more details about the parameters
1. splitGeom requires a shapely Polygon geometry to split
2. splitGeom requires the number of splits for the polygon geometry
3. splitGeom requires a rotation angle to find the major axis along which to split the polygon geometry; it is optional and defaults to 30
```
#help(splitShape.splitGeom)
polys = [shapely.geometry.Polygon([(0, 0), (1, 1), (1, 0)]),
shapely.geometry.Polygon([(0, 0), (1, 1), (0, 1)]),
shapely.geometry.Polygon([(0, 0), (1, -1), (1, 0)])
]
polysGeo = geopandas.GeoDataFrame(geometry = polys)
polysGeo.plot(cmap = 'Spectral', figsize = (7,7))
geomUnion = shapely.ops.unary_union(polys)
polysGeos = geopandas.GeoDataFrame(geometry = [geomUnion])
polysGeos.plot(cmap = 'tab20', figsize = (7, 7))
def getSplitedGeoms(geoms, split, rotation):
splitGeometry = splitShape.splitGeom(geoms, split, rotation = rotation)
splitGeometry['ids'] = range(len(splitGeometry))
ax = splitGeometry.plot(figsize = (7,7), alpha = 0.9, cmap = 'Spectral', edgecolor = 'k', linewidth = 2)
splitGeometry.apply(
lambda x: ax.annotate(
text=f"{x.ids}",
xy=x.geometry.centroid.coords[0],
ha='center',
va='center',
size=10),axis=1
)
return splitGeometry
splitedGeoms = getSplitedGeoms(geomUnion, 10, 120)
splitedGeoms = getSplitedGeoms(geomUnion, 18, 120)
```
# Steady State Material Balances On a Separation Train
This is the second problem of the famous set of [Ten Problems in Chemical Engineering](https://www.polymath-software.com/ASEE/Tenprobs.pdf). Here, the goal is to solve a set of simultaneous linear equations.
Jacob Albrecht, 2019
# Problem Setup
A distillation train fractionates a feed $F$ containing Xylene, Styrene, Toluene, and Benzene. The distillate $D$ and bottoms $B$ from the first column are each distilled again, with stream $D$ yielding $D_1$ and $B_1$ and stream $B$ yielding $D_2$ and $B_2$. The overall material balance for the system is:
Xylene: $0.07D_1+0.18B_1+0.15D_2+0.24B_2 = 0.15F$
Styrene: $0.04D_1+0.24B_1+0.10D_2+0.65B_2 = 0.25F$
Toluene: $0.54D_1+0.42B_1+0.54D_2+0.10B_2 = 0.40F$
Benzene: $0.35D_1+0.16B_1+0.21D_2+0.01B_2 = 0.20F$
# Problem Tasks
a) Calculate the molar flow rates of streams $D_1$, $D_2$, $B_1$ and $B_2$.
b) Determine the molar flow rates and compositions of streams $B$ and $D$
# Solutions
## Solution to part a)
Solving the system of linear equations is very straightforward using the `numpy` package. First, define a matrix of the species concentrations and flowrates:
```
import numpy as np
F = 70
A = np.array([[0.07,0.18,0.15,0.24],
[0.04,0.24,0.10,0.65],
[0.54,0.42,0.54,0.10],
[0.35,0.16,0.21,0.01]])
b = np.array([0.15,0.25,0.40,0.2])*F
X = np.linalg.solve(A,b)
# print the results, the line can be split using "\"
print('Molar flow of D1 is {:0.4}, B1 is {:0.4},\
D2 is {:0.4}, and B2 is {:0.4}'.format(*X))
```
## Solution to part b)
To find the flow rates of streams B and D, we can selectively sum parts of the solution from a). Multiplying each flow rate from part a) by the corresponding composition gives species molar flow rates, which can then be summed and divided by the stream flow rate to give mole fractions.
```
D = X[0:2].sum()
print('Molar flow of D is {:0.4}'.format(D))
D_comps = (A[:,0:2]*X[0:2]).sum(axis=1)/D
print('Mole fraction in D of Xylene is {:0.4},\
Styrene is {:0.4}, Toluene is {:0.4}, and Benzene \
is {:0.4}'.format(*D_comps))
B = X[2:4].sum()
print('Molar flow of B is {:0.4}'.format(B))
B_comps = (A[:,2:4]*X[2:4]).sum(axis=1)/B
print('Mole fraction in B of Xylene is {:0.4},\
Styrene is {:0.4}, Toluene is {:0.4}, and Benzene \
is {:0.4}'.format(*B_comps))
```
# Reference
“The Use of Mathematical Software packages in Chemical Engineering”, Michael B. Cutlip, John J. Hwalek, Eric H.
Nuttal, Mordechai Shacham, Workshop Material from Session 12, Chemical Engineering Summer School, Snowbird,
Utah, Aug., 1997.
```
%load_ext watermark
%watermark -v -p numpy
```
# Sci-Hub coverage of referenced (cited) articles
Based on [OpenCitations](http://opencitations.net/).
```
import json
import pathlib
import pandas
with open('00.configuration.json') as read_file:
config = json.load(read_file)
path = pathlib.Path('data/doi.tsv.xz')
doi_df = pandas.read_table(path, compression='xz')
doi_df.head(2)
url = config['opencitations_url'] + 'data/citations-doi.tsv.xz'
citation_df = pandas.read_table(url)
# Take only outgoing citations from articles published in 2015 or later
valid_source_dois = set(doi_df.query("issued >= '2015'").doi)
citation_df = citation_df.query("source_doi in @valid_source_dois")
citation_df.head(2)
merged_df = citation_df.merge(
doi_df.rename(columns={'doi': 'target_doi'})
)
merged_df.head(2)
# Top cited DOIs
merged_df.target_doi.value_counts().sort_values(ascending=False).head(10)
len(merged_df)
print(f'''
After filtering for valid DOIs (DOIs in the article catalog):
{len(merged_df):,} total DOI-to-DOI citations
{merged_df.source_doi.nunique():,} DOIs with outgoing DOI citations
{merged_df.target_doi.nunique():,} DOIs with incoming DOI citations
'''.strip())
# Raw Coverage
merged_df.sum(numeric_only=True)
# Percent Coverage
100 * merged_df.mean(numeric_only=True)
# Hit rate by article type
pandas.crosstab(merged_df.type, merged_df.in_scihub_dois, margins=True).sort_values('All')
# Hit rate by article type (as percents)
100 * merged_df.groupby('type').in_scihub_dois.mean()
```
## Journals
```
path = pathlib.Path('data/scopus-title-to-doi-map.tsv.xz')
title_map_df = pandas.read_table(path, compression='xz')
title_map_df = title_map_df.rename(columns={'doi': 'target_doi'})
title_map_df.head(2)
url = config['scopus_url'] + 'data/titles.tsv'
journal_df = pandas.read_table(url)
url = config['scopus_url'] + 'data/title-attributes.tsv'
journal_df = (
journal_df
.merge(pandas.read_table(url))
[['scopus_id', 'title_name', 'active', 'open_access']]
)
journal_df.head(2)
def summarize(df):
row = pandas.Series()
row['articles_cited'] = df.target_doi.nunique()
row['incites_scihub'] = sum(df.in_scihub_dois)
row['incites_crossref'] = len(df)
return row
coverage_df = journal_df.merge(
merged_df
.merge(title_map_df)
.groupby('scopus_id')
.apply(summarize)
.reset_index()
)
coverage_df['incite_coverage'] = coverage_df.incites_scihub / coverage_df.incites_crossref
coverage_df.sort_values('incites_crossref', ascending=False).head(3)
path = pathlib.Path('data/journal-incite-coverage.tsv')
with path.open('w') as write_file:
coverage_df.to_csv(write_file, index=False, sep='\t', float_format='%.5g')
```
## Citations to articles in toll access journals
```
toll_df = coverage_df.query("open_access == 0")
scihub = sum(toll_df.incites_scihub)
total = sum(toll_df.incites_crossref)
# Percent Coverage
print(f'''
{total:,} incoming citations from articles published since 2015 to articles in toll-access journals.
{scihub:,} of these cited articles ({scihub / total:.1%}) are in Sci-Hub's repository.
'''.strip())
```
## Citations to articles in open access journals
```
open_df = coverage_df.query("open_access == 1")
scihub = sum(open_df.incites_scihub)
total = sum(open_df.incites_crossref)
# Percent Coverage
print(f'''
{total:,} incoming citations from articles published since 2015 to articles in open access journals.
{scihub:,} of these cited articles ({scihub / total:.1%}) are in Sci-Hub's repository.
'''.strip())
```
<a href="https://colab.research.google.com/github/TangJiahui/6.036_Machine_Learning/blob/main/MIT_6_036_HW08_CNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#MIT 6.036 Fall 2020: Homework 8#
This colab notebook provides code and a framework for [homework 8](https://lms.mitx.mit.edu/courses/course-v1:MITx+6.036+2020_Fall/courseware/Week8/week8_homework/). You can work out your solutions here, then submit your results back on the homework page when ready.
## <section>**Setup**</section>
First, download the code distribution for this homework that contains test cases and helper functions.
Run the next code block to download and import the code for this lab.
```
!rm -rf code_for_hw8*
!rm -rf data
!rm -rf mnist_data
!rm -rf *.zip
!rm -rf test*/
!rm -rf *.py
!rm -rf *.pt
!rm -rf __*
!wget --quiet https://introml.odl.mit.edu/cat-soop/_static/6.036/homework/hw08/code_for_hw8.zip --no-check-certificate
!unzip code_for_hw8.zip
!unzip code_for_hw8/q4.zip
!unzip -q test1.zip
!unzip -q test2.zip
!unzip -q test3.zip
!mv code_for_hw8/* .
import numpy as np
import itertools
import math as m
import torch
from torch import nn
from torch.optim import Adam
from torch.utils.data import DataLoader, TensorDataset
from matplotlib import pyplot as plt
from torchvision.datasets import MNIST
import torchvision
from skimage.io import imread, imshow
from skimage.transform import resize
import os
from code_for_hw8_oop import Module, Linear, Tanh, ReLU, SoftMax, NLL
from code_for_hw8_pytorch import get_image_data_1d
from utils_hw8 import (model_fit, model_evaluate, run_pytorch, call_model,
plot_decision, plot_heat, plot_separator, make_iter,
set_weights, set_bias)
```
# 1) Implementing Mini-batch Gradient Descent and Batch Normalization
**Note:** You can click the arrow on the left of this text block to collapse/expand this optional section and all its code blocks.
Last week we implemented a framework for building neural networks from scratch. We trained our models using *stochastic* gradient descent. In this problem, we explore how we can implement batch normalization as a module `BatchNorm` in our framework. It is the same module which you analyzed in problem 1.
Key to the concept of batch normalization is doing gradient descent on batches of data. So, instead of using last week's stochastic gradient descent, we will first implement the *mini-batch* gradient descent method `mini_gd`, which is a hybrid between *stochastic* gradient descent and *batch* gradient descent. The lecture notes on <a href="https://lms.mitx.mit.edu/courses/course-v1:MITx+6.036+2019_Spring/courseware/Week7/neural_networks_2/1?activate_block_id=block-v1%3AMITx%2B6.036%2B2019_Spring%2Btype%40vertical%2Bblock%40neural_networks_2_optimizing_neural_network_parameters_vert"> optimizing neural network parameters</a> are helpful for this part.
In *mini-batch* gradient descent, for a mini-batch of size $K$, we select $K$ distinct data points uniformly at random from the data set and update the network weights based only on their contributions to the gradient:
$$W := W - \eta\sum_{i=1}^K \nabla_W \mathcal{L}(h(x^{(i)}; W), y^{(i)})\;\;.$$
Our *mini-batch* method `mini_gd` will be implemented within the `Sequential` python class (see homework 7 problem 2) and will take the following as inputs:
* `X`: a standard data array (d by n)
* `y`: a standard labels row vector (1 by n)
* `iters`: the number of updates to perform on weights $W$
* `lrate`: the learning rate used
* `K`: the mini-batch size to be used
One call of `mini_gd` should call `Sequential.backward` for back-propagation and `Sequential.step` for updating the weights, for a total of `iters` times, using `lrate` as the learning rate. As in our implementation of `sgd` from homework 7, we compute the predicted output for a mini-batch of data with the `Sequential.forward` method. We compute the loss between our predictions and the true labels using the assigned `Sequential.loss` method. (Note that in homework 7, `Sequential.step` was called `Sequential.sgd_step`. While the functionality of the step function is the same, it has been renamed for convenience. The same is true for the `module.step` function of each module we implemented, where applicable.)
For picking $K$ unique data points at random from our large data set for each mini-batch, we will implement the following strategy: we first shuffle our data points `X` (and associated labels `y`). Then, we get $\lfloor n/K \rfloor$ different mini-batches by grouping each $K$ consecutive points from this shuffled array. If we end up iterating over all the points but need more mini-batches, we repeat the shuffling and batching process.
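The index strategy above can be sketched as a small generator (the helper name `minibatch_indices` is ours, not part of the homework code):

```python
import numpy as np

def minibatch_indices(n, K, iters, seed=0):
    """Yield index arrays for `iters` mini-batches of size K.

    Shuffles all n indices, walks through them K at a time, and
    reshuffles whenever a pass over the data is exhausted.
    """
    rng = np.random.default_rng(seed)
    indices = np.arange(n)
    produced = 0
    while produced < iters:
        rng.shuffle(indices)
        for j in range(n // K):
            if produced >= iters:
                return
            yield indices[j * K:(j + 1) * K]
            produced += 1

batches = list(minibatch_indices(n=10, K=3, iters=5))
# 5 batches of 3 indices; each pass of 3 batches uses 9 of the 10 points
```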
<b>1A)</b> You need to fill in the missing code below. We have implemented the shuffling of indices and have provided you with the outer and inner loops.
Implement `mini_gd` in `Sequential` below.
**Hint:** The documentation for <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.shuffle.html"> `numpy.random.shuffle`</a> might be helpful for this part. If you have a list of elements `l` and a set of indices `indices`, you can call `numpy.random.shuffle(indices)` and `l = l[indices]` to shuffle the elements of `l`.
**Implementation Note:** The labels are passed in as `Y`, a one-hot encoded matrix, rather than the row vector `y`.
```
import math
class Sequential:
def __init__(self, modules, loss):
self.modules = modules
self.loss = loss
def mini_gd(self, X, Y, iters, lrate, notif_each=None, K=10):
D, N = X.shape
np.random.seed(0)
num_updates = 0
indices = np.arange(N)
while num_updates < iters:
np.random.shuffle(indices)
X = X[:, indices]
Y = Y[:, indices]
for j in range(math.floor(N/K)):
if num_updates >= iters: break
# Implement the main part of mini_gd here
Xt = X[:,j*K:(j+1)*K]
Yt = Y[:,j*K:(j+1)*K]
Ypred = self.forward(Xt)
loss = self.loss.forward(Ypred, Yt)
dLdZ = self.loss.backward()
self.backward(dLdZ)
self.sgd_step(lrate)
num_updates += 1
def forward(self, Xt):
for m in self.modules: Xt = m.forward(Xt)
return Xt
def backward(self, delta):
for m in self.modules[::-1]: delta = m.backward(delta)
def sgd_step(self, lrate):
for m in self.modules: m.sgd_step(lrate)
```
<b>1B)</b> We are now ready to implement batch normalization into our neural network framework! Our module `BatchNorm` will sit between consecutive layers of neurons, such as the $l^{th}$ and $(l+1)^{th}$ layers, acting as a "corrector" which allows $W^l$ to change freely, producing outputs $z^l$, but then the module corrects the covariate shift induced in the signals before they reach the $(l+1)^{th}$ layer, converting $z^l$ to $\widehat{Z}^l$.
The following is a summary of what is described in the <a href="https://lms.mitx.mit.edu/courses/course-v1:MITx+6.036+2019_Spring/courseware/Week7/neural_networks_2/2">lecture notes</a>, and it should guide your implementation of the module.
Any normalization between the $l^{th}$ and $(l+1)^{th}$ layers is done *separately* for each of the $n^l$ input connections leading to the $(l+1)^{th}$ layer. We handle a mini-batch of data of size $K$, so $Z^l$ is $n^l \times K$, and the output $\widehat{Z}^l$ is of the same shape.
We first compute $n^l$ *batchwise* means and
standard deviations. Let $\mu^l$ be the $n^l \times 1$ vector (`self.mus`) where
$$\mu^l_i = \frac{1}{K} \sum_{j = 1}^K Z^l_{ij}\;\;,$$
and let $\sigma^l$ be the $n^l \times 1$ vector of standard deviations where
$$\sigma^l_i = \sqrt{\frac{1}{K} \sum_{j = 1}^K (Z^l_{ij} - \mu^l_i)^2}\;\;.$$
Note that `self.vars` is the variance, or element-wise square of $\sigma^l_i$.
The normalized data `self.norm` is the matrix $\overline{Z}$, where
$$\overline{Z}^l_{ij} = \frac{Z^l_{ij} - \mu^l_i}{\sigma^l_i + \epsilon}\;\;,$$
and where $\epsilon$ is a very small constant to guard against division by
zero.
We define weights $G^l$ (`self.G`) and $B^l$ (`self.B`), each being an $n^l \times 1$ vector, which we use to scale and shift the outputs:
$$\widehat{Z}^l_{ij} = G^l_i \overline{Z}^l_{ij} + B^l_i\;\;.$$
The outputs are finally ready to be passed to the $(l+1)^{th}$ layer.
A slight warning (that we will not worry about here) about `BatchNorm`: during the *test* phase, if the test mini-batch size is too small (imagine deploying a neural network that processes live video frames), the lack of samples would cause the freshly-calculated $\mu^l$ and $\sigma^l$ to be far from the values that the module's parameters $G^l$ and $B^l$ were trained to be compatible with. To fix that, people usually maintain a running average of $\mu^l$ and $\sigma^l$ during training, to be used at test time. We will assume our test mini-batches are large enough.
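The running-average idea can be sketched as follows. This is a minimal illustration, not part of the homework code: the class name `RunningNorm` and the momentum value are our choices.

```python
import numpy as np

class RunningNorm:
    """Sketch of tracking running batch statistics for test-time use.

    During training we update exponential moving averages of the batch
    mean and variance; at test time we normalize with those averages
    instead of the (possibly tiny) test batch's own statistics.
    """
    def __init__(self, m, momentum=0.9, eps=1e-8):
        self.run_mu = np.zeros((m, 1))
        self.run_var = np.ones((m, 1))
        self.momentum, self.eps = momentum, eps

    def observe(self, Z):
        # Called once per training mini-batch (m x K array)
        mu = np.mean(Z, axis=1, keepdims=True)
        var = np.var(Z, axis=1, keepdims=True)
        self.run_mu = self.momentum * self.run_mu + (1 - self.momentum) * mu
        self.run_var = self.momentum * self.run_var + (1 - self.momentum) * var

    def normalize_test(self, Z):
        # Normalize using the accumulated statistics, not Z's own
        return (Z - self.run_mu) / (np.sqrt(self.run_var) + self.eps)
```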
In this problem we only implement the `BatchNorm.forward` and `BatchNorm.sgd_step` methods. We provide you with the implementation for `BatchNorm.backward` and the lecture notes contain the details of the derivations. You will need to fill in the missing code below.
```
class BatchNorm(Module):
def __init__(self, m):
np.random.seed(0)
self.eps = 1e-20
self.m = m # number of input channels
# Init learned shifts and scaling factors
self.B = np.zeros([self.m, 1]) # m x 1
self.G = np.random.normal(0, 1.0 * self.m ** (-.5), [self.m, 1]) # m x 1
def forward(self, Z):
# Z is n^l x K: m input channels and mini-batch size K
# Store last inputs and K for next backward() call
self.Z = Z
self.K = Z.shape[1]
self.mus = np.mean(Z, axis=1, keepdims=True)
self.vars = np.var(Z, axis=1, keepdims=True)
# Normalize inputs using their mean and standard deviation
self.norm = (Z - self.mus)/(np.sqrt(self.vars) + self.eps)
# Return scaled and shifted versions of self.norm
return self.G * self.norm + self.B
def backward(self, dLdZ):
# Re-usable constants
std_inv = 1/np.sqrt(self.vars+self.eps)
Z_min_mu = self.Z-self.mus
dLdnorm = dLdZ * self.G
dLdVar = np.sum(dLdnorm * Z_min_mu * -0.5 * std_inv**3, axis=1, keepdims=True)
dLdMu = np.sum(dLdnorm*(-std_inv), axis=1, keepdims=True) + dLdVar * (-2/self.K) * np.sum(Z_min_mu, axis=1, keepdims=True)
dLdX = (dLdnorm * std_inv) + (dLdVar * (2/self.K) * Z_min_mu) + (dLdMu/self.K)
self.dLdB = np.sum(dLdZ, axis=1, keepdims=True)
self.dLdG = np.sum(dLdZ * self.norm, axis=1, keepdims=True)
return dLdX
def sgd_step(self, lrate):
self.B = self.B - lrate*self.dLdB
self.G = self.G - lrate*self.dLdG
```
# 2) Weight sharing (OPTIONAL)
**Note:** You can click the arrow on the left of this text block to collapse/expand this optional section and all its code blocks.
In the lab we designed a CNN that can count the number of objects in 1 dimensional images, where each black pixel is represented by a value of 0 and each white pixel is represented by a value of 1. Recall that an object is a consecutive sequence of black pixels ($0$'s). For example, the sequence $0100110$ contains three objects.
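The target function is easy to state exactly, which is useful for sanity-checking trained models: an object starts wherever a black pixel follows a white pixel (or the image boundary). A minimal reference counter (our helper, not part of the lab code):

```python
def count_objects(pixels):
    """Count runs of consecutive 0s (black objects) in a 1-D image.

    An object starts wherever a 0 is preceded by a 1 or by the image
    boundary, so the count is the number of 1 -> 0 transitions in the
    sequence padded with a leading white pixel.
    """
    padded = [1] + list(pixels)
    return sum(1 for a, b in zip(padded, padded[1:]) if a == 1 and b == 0)

count_objects([0, 1, 0, 0, 1, 1, 0])  # the example sequence: 3 objects
```

Note that counting 1 → 0 transitions is exactly what a size-2 edge-detecting convolution followed by a sum can compute, which foreshadows the architecture below.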
In this problem we want to see how hard/easy it is to train such a network from data.
Our network architecture will be as follows:
* The first layer is convolutional and you will implement it using the PyTorch `torch.nn.Conv1d` function, with a kernel of size 2 and stride of 1, followed by a ReLU activation (`torch.nn.ReLU`).
* The second layer is a fully connected `torch.nn.Linear` layer which has a scalar output.
Here is sample usage of the `Conv1d` and `Linear` layers.
`layer1=torch.nn.Conv1d(in_channels=?, out_channels=?, kernel_size=?,stride=?,padding=?,bias=True)`
Here, `in_channels` is the number of channels in your data (so for example, RGB images have 3 channels). You can think of the `out_channels` variable as the number of filters you are using.
`layer3 = torch.nn.Linear(in_features=?, out_features=?)`
You need to fill in the parameters marked with `?` based on the problem specifications. Note also that in PyTorch, depending on your implementation, you may be forced to use *three* (four if we count ReLU) layers to implement such a network, where one intermediary `Flatten` layer is used to flatten the output of the convolutional layer, before being passed to the dense layer.
Refer to the <a href="https://pytorch.org/docs/stable/nn.html#conv1d">Conv1D</a>, <a href="https://pytorch.org/docs/stable/nn.html#linear">Linear</a> and <a href="https://pytorch.org/docs/stable/nn.html#flatten">Flatten</a> descriptions in the PyTorch documentation to see the available parameter options.
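When filling in the `?` parameters, the shapes must line up: `Conv1d` produces an output of length $\lfloor (L + 2\cdot\text{padding} - \text{kernel\_size})/\text{stride} \rfloor + 1$. A quick sanity check of the formula (the values here match the template code later in this problem):

```python
def conv1d_out_len(L, kernel_size, stride=1, padding=0):
    """Output length of torch.nn.Conv1d for the given settings (no dilation)."""
    return (L + 2 * padding - kernel_size) // stride + 1

# With this problem's settings (kernel 2, stride 1, padding 1), a
# length-1024 image produces 1025 outputs, so the Linear layer after
# Flatten needs 1025 input features (num_filters * 1025 in general).
conv1d_out_len(1024, kernel_size=2, stride=1, padding=1)  # 1025
```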
In this exercise, we fix the structure and want to learn the best combination of weights from data. In the homework code, we have provided functions `train_neural_counter` and `get_image_data_1d`. You can use them to generate data and train the above neural network in PyTorch to answer the following questions. We assume that the images in our data set are randomly generated. The probability of a pixel being white is $0.1$. We work with mean squared error as the loss function for this problem. We have provided template code which you can fill in, to perform the training.
We have also provided helper functions such as `set_weight`, `set_bias`, which might help you set weights and biases of a particular layer.
<b>2B)</b> What is (approximately) the expected loss of the network on $1024\times 1$ images if the convolutional layer is an averaging filter and second layer is the sum function (without a bias term)? (Note that you can answer the question theoretically or through coding, depending on your preference.)
```
# Code template if you would like to check 2B) through code
tsize = 1000
imsize = 1024
prob_white = 0.1
(X_train,Y_train,X_val,Y_val,X_test,Y_test) = get_image_data_1d(tsize,imsize,prob_white)
test_loader = make_iter(X_test, Y_test)
num_filters = 1
kernel_size = 2
strides = 1
padding = 1
layer_1 = nn.Conv1d(in_channels=1, out_channels=num_filters, kernel_size=kernel_size, stride=strides, padding=padding, bias=False)
num_units = imsize+1 # Your code
layer_3 = nn.Linear(num_units, 1, bias=False)
layers = [layer_1, nn.ReLU(), nn.Flatten(), layer_3]
model = nn.Sequential(*layers)
set_weights(model[0], np.array([1/2,1/2]))
set_weights(model[-1], np.ones(num_units))
model_evaluate(model, test_loader, nn.MSELoss())
```
<b>2C)</b> Now suppose we add a bias term of $-10$ to the last layer. What is (approximately) the expected quadratic loss? (Note that you can answer the question theoretically or through coding, depending on your preference.)
```
# Edit code from 2B) with the bias
bias = -10
set_bias(model[-1], bias)
model_evaluate(model, test_loader, nn.MSELoss())
```
<b>2D)</b> Averaging type filters are abundant and form a nearly flat valley of local minima for this problem. It is difficult for the network to find alternative solutions on its own. We need to force our way out of these bad minima and towards a better solution, i.e., an edge detector. To force the first layer to behave as an edge detector, we need to choose a proper **kernel regularizer**. Consider the following functions
$f_1=\sum_i |w_i|$, $f_2=\sum_i |w_i^2|$, $f_3=|\sum_{i} w_i|$. Which one of the choices is likely to guide the network to find an edge detector at the convolution layer?
<a href="https://lms.mitx.mit.edu/courses/course-v1:MITx+6.036+2020_Fall/courseware/Week8/week8_homework/">Refer to HW8 on MITx.</a>
Implement your choice of regularizers from above in the code (complete the function `filter_reg`). Do not allow any bias in the layers for the rest of the problem. The code generates some random test and training data sets and trains the model on these data. Run a few learning trials (5 or more) for each data set and answer the following questions based on the performance of your model.
**IMPORTANT**: When implementing `filter_reg`, you should use the torch backend operations, imported as "torch" in the code. So for example, `torch.sum` and `torch.abs`, rather than `np.sum` and `np.abs`. This is because the `weights` argument is NOT a numpy object, but rather an internal torch object!
```
# Implement filter_reg
def filter_reg(weights, lam=1000):
    # Scale the regularization term by lam
    filter_result = torch.abs(torch.sum(weights))
    return lam * filter_result
def model_reg(model):
# Don't edit this function!
filter_weights = model[0].weight
return filter_reg(filter_weights)
def train_neural_counter(layers, data, regularize=False, display=False):
(X_train, Y_train, X_val, Y_val, X_test, Y_test) = data
epochs = 10
batch = 1
train_iter, val_iter, test_iter = (make_iter(X_train, Y_train),
make_iter(X_val,Y_val),
make_iter(X_test,Y_test))
model = nn.Sequential(*layers)
optimizer = Adam(model.parameters())
criterion = nn.MSELoss()
model_fit(model, train_iter, epochs, optimizer, criterion, val_iter,
history=None,verbose=True, model_reg=model_reg if regularize else None)
err = model_evaluate(model, test_iter, criterion)
    ws = model[-1].weight.detach().numpy().flatten()
    if display:
        plt.plot(ws)
        plt.show()
return model,err
```
<b>2E)</b> For $1024\times 1$ images and training set of size $1000$, is the network **without any regularization** likely to find models that have a mean square error lower than 8 on the test data?
```
imsize = 1024
prob_white = 0.1
data=get_image_data_1d(1000, imsize, prob_white)
trials=5
for trial in range(trials):
num_filters = 1
kernel_size = 2
strides = 1
padding = 1
layer_1 = nn.Conv1d(in_channels=1, out_channels=num_filters, kernel_size=kernel_size, stride=strides, padding=padding, bias=False)
num_units = imsize+1
layer_3 = nn.Linear(num_units, 1, bias=False)
layers = [layer_1, nn.ReLU(), nn.Flatten(), layer_3]
model, err=train_neural_counter(layers, data)
print(model[0].weight)
print(torch.mean(model[-1].weight))
print(err)
```
#### For parts F) to J), simply edit your code from E) with the necessary changes.
<b>2F)</b> Repeat the same experiment, but now with the regularizer you implemented. Try different regularization parameters. Which choice of regularization parameter gives the best prediction results?
```
# Edit code from 2E), using your regularization turned on in the train_neural_counter function.
# Try setting the lambda parameter to different values in filter_reg
# Set regularize=True
imsize = 1024
prob_white = 0.1
data=get_image_data_1d(1000, imsize, prob_white)
trials=5
def run_nn():
for trial in range(3):
num_filters = 1
kernel_size = 2
strides = 1
padding = 1
layer_1 = nn.Conv1d(in_channels=1, out_channels=num_filters, kernel_size=kernel_size, stride=strides, padding=padding, bias=False)
num_units = imsize+1
layer_3 = nn.Linear(num_units, 1, bias=False)
layers = [layer_1, nn.ReLU(), nn.Flatten(), layer_3]
model, err=train_neural_counter(layers, data, regularize=True)
print(model[0].weight)
print(torch.mean(model[-1].weight))
print(err)
for lam in [0, 1, 10, 1000]:
    # Rebind the global filter_reg so model_reg picks up the new lam
    def filter_reg(weights, lam=lam):
        return lam * torch.abs(torch.sum(weights))
run_nn()
```
<b>2G)</b> With the above choice of regularization parameter, what is the mean square error of the best network that you find on the test data? Try a few trials (5 or more) for each data test and report the value of the best network.
#### We expect the training to be easier when there are fewer parameters to learn. Consider images of size $128\times 1$ for the rest of the problem.
<b>2H)</b> Instead of resorting to regularization again, we may instead find a way to reduce the number of parameters. What additional layer can you add to the output of the convolution layer to reduce the number of parameters to be learned without losing any relevant information?
<a href="https://lms.mitx.mit.edu/courses/course-v1:MITx+6.036+2020_Fall/courseware/Week8/week8_homework/">Refer to HW8 on MITx.</a>
<b>2I)</b> Add the layer you suggested above to your network and run some tests with data sets of size 1000 on $128\times 1$ images. How many parameters are left to learn with the new structure?
You can find the appropriate documentations for the new types of modules mentioned in the previous problem here:
<a href="https://pytorch.org/docs/stable/nn.html#dropout">Dropout</a>
<a href="https://pytorch.org/docs/stable/nn.html#maxpool1d">MaxPool1d</a>
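As a rough sanity check of the parameter budget, note that a pooling layer itself has no learnable weights, so inserting one directly shrinks the dominant term: the fully connected layer's one-weight-per-flattened-unit count. The helper below is a hypothetical sketch under the assumptions of this problem (one input channel, no biases, pool stride equal to its kernel size); the actual count depends on the layer you choose and its settings.

```python
def num_params(imsize, kernel=2, padding=1, pool=None, num_filters=1):
    """Learnable weights in conv (no bias) + optional pool + linear (no bias)."""
    conv_params = num_filters * kernel            # one in-channel, no bias
    L = imsize + 2 * padding - kernel + 1         # conv output length
    if pool:
        L = (L - pool) // pool + 1                # MaxPool1d(kernel_size=pool)
    return conv_params + num_filters * L          # plus the linear weights

num_params(128)           # without pooling
num_params(128, pool=2)   # pooling roughly halves the linear layer's inputs
```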
# 3) MNIST (Digit Classification)
In this section, we'll be looking at the MNIST data set seen already in problem 2. This time, we look at the *complete* MNIST problem where our networks will take an image of *any* digit from $0-9$ as input (recall that problem 2 only looked at digits $0$ and $1$) and try to predict that digit. Note that in general, an image is described as a two-dimensional array of pixels. Here, the image is a <a href="https://en.wikipedia.org/wiki/Grayscale">grayscale</a> image, so each pixel is represented by only one integer value, in the range $0$ to $255$ (compared to RGB images where each pixel is represented by three integer values, encoding intensity levels in red, green, and blue color channels).
Also, we will now use out-of-the-box neural network implementations in PyTorch. State-of-the-art systems have error rates of less than 0.5% on this data set (see <a href="http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#4d4e495354">this list</a>). We'll be happy with an error rate less than 2% since we don't have all year...
You can access the MNIST data for this problem using:
<br><code>train, validation = get_MNIST_data()</code>
```
def shifted(X, shift):
n = X.shape[0]
m = X.shape[1]
size = m + shift
X_sh = np.zeros((n, size, size))
plt.ion()
for i in range(n):
sh1 = np.random.randint(shift)
sh2 = np.random.randint(shift)
X_sh[i, sh1:sh1+m, sh2:sh2+m] = X[i, :, :]
# If you want to see the shifts, uncomment
#plt.figure(1); plt.imshow(X[i])
#plt.figure(2); plt.imshow(X_sh[i])
#plt.show()
#input('Go?')
return X_sh
def get_MNIST_data(shift=0):
train = MNIST(root='./mnist_data', train=True, download=True, transform=None)
val = MNIST(root='./mnist_data', train=False, download=True, transform=None)
(X_train, y1), (X_val, y2) = (train.data.numpy(), train.targets.numpy()), \
(val.data.numpy(), val.targets.numpy())
if shift:
X_train = shifted(X_train, shift)
X_val = shifted(X_val, shift)
return (X_train, y1), (X_val, y2)
```
You can run the fully connected MNIST model, using:
<br><code>run_pytorch_fc_mnist(train, validation, layers, epochs)</code>
And, you can run the CNN MNIST test, using:
<br><code>run_pytorch_cnn_mnist(train, validation, layers, epochs)</code>
You will need to design your own `layers` to feed to `run_pytorch_fc_mnist` and `run_pytorch_cnn_mnist`, which will be different than the ones specified by `archs()`. For instance, `layers=[nn.Linear(in_features=64, out_features=4)]` defines a single layer with 64 inputs and 4 output units.
Note that the training procedure, uses <a href="https://pytorch.org/docs/stable/nn.html#crossentropyloss">PyTorch's CrossEntropyLoss</a>, which handles the softmax activations for you, so adding a softmax layer to the end of your network is not necessary and will produce undesired results.
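`CrossEntropyLoss` expects raw logits because it applies `log_softmax` internally; feeding it already-softmaxed values effectively applies softmax twice and flattens the loss. A small numpy illustration of the computation it performs (mirroring PyTorch's definition rather than calling it):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def cross_entropy_from_logits(logits, target):
    # What nn.CrossEntropyLoss computes for a single example:
    # the negative log-softmax probability of the target class
    return -np.log(softmax(logits)[target])

logits = np.array([2.0, 0.5, -1.0])
good = cross_entropy_from_logits(logits, 0)
# Applying softmax *before* the loss double-compresses the scores:
bad = cross_entropy_from_logits(softmax(logits), 0)
```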
Also, we advise you to use the option `verbose=True` when unsure about the progress made during training of your models.
#### **IMPORTANT:** For this and subsequent questions, use the PyTorch implementation of modules. For example, for a linear layer, use <code>nn.Linear(...)</code>.
<b> 3A)</b> Look at the code and indicate what the difference is between <code>run_pytorch_fc_mnist</code> and <code>run_pytorch_cnn_mnist</code>.
```
def make_deterministic():
torch.manual_seed(10)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(10)
def weight_reset(l):
if isinstance(l, nn.Conv2d) or isinstance(l, nn.Linear):
l.reset_parameters()
def run_pytorch_fc_mnist(train, test, layers, epochs, verbose=True, trials=1, deterministic=True):
'''
train, test = input data
layers = list of PyTorch layers, e.g. [nn.Linear(in_features=784, out_features=10)]
epochs = number of epochs to run the model for each training trial
trials = number of evaluation trials, resetting weights before each trial
'''
if deterministic:
make_deterministic()
(X_train, y1), (X_val, y2) = train, test
# Flatten the images
m = X_train.shape[1]
X_train = X_train.reshape((X_train.shape[0], m * m))
X_val = X_val.reshape((X_val.shape[0], m * m))
val_acc, test_acc = 0, 0
for trial in range(trials):
# Reset the weights
for l in layers:
weight_reset(l)
# Make Dataset Iterables
train_iter, val_iter = make_iter(X_train, y1, batch_size=32), make_iter(X_val, y2, batch_size=32)
# Run the model
model, vacc, tacc = \
run_pytorch(train_iter, val_iter, None, layers, epochs, verbose=verbose)
val_acc += vacc if vacc else 0
test_acc += tacc if tacc else 0
if val_acc:
print("\nAvg. validation accuracy:" + str(val_acc / trials))
if test_acc:
print("\nAvg. test accuracy:" + str(test_acc / trials))
def run_pytorch_cnn_mnist(train, test, layers, epochs, verbose=True, trials=1, deterministic=True):
if deterministic:
make_deterministic()
# Load the dataset
(X_train, y1), (X_val, y2) = train, test
# Add a final dimension indicating the number of channels (only 1 here)
m = X_train.shape[1]
X_train = X_train.reshape((X_train.shape[0], 1, m, m))
X_val = X_val.reshape((X_val.shape[0], 1, m, m))
val_acc, test_acc = 0, 0
for trial in range(trials):
# Reset the weights
for l in layers:
weight_reset(l)
# Make Dataset Iterables
train_iter, val_iter = make_iter(X_train, y1, batch_size=32), make_iter(X_val, y2, batch_size=32)
# Run the model
model, vacc, tacc = \
run_pytorch(train_iter, val_iter, None, layers, epochs, verbose=verbose)
val_acc += vacc if vacc else 0
test_acc += tacc if tacc else 0
if val_acc:
print("\nAvg. validation accuracy:" + str(val_acc / trials))
if test_acc:
print("\nAvg. test accuracy:" + str(test_acc / trials))
```
<b> 3B)</b> Using one epoch of training, what is the accuracy of a network **with no hidden units** (using the <code>run_pytorch_fc_mnist</code> method) on this data?
```
train, validation = get_MNIST_data()
run_pytorch_fc_mnist(train, validation, [nn.Linear(in_features=28*28, out_features=10)], 1, verbose=True)
```
<b> 3C)</b> Now, linearly scale the input data so that the pixel values are between 0 and 1 and repeat your test with the original layer. What is the accuracy now?
```
layers = [nn.Linear(in_features=28*28, out_features=10)]
train, validation = get_MNIST_data()
# Linearly scale pixel values to the range [0, 1]
train = (train[0] / 255, train[1])
validation = (validation[0] / 255, validation[1])
run_pytorch_fc_mnist(train, validation, layers, 1, verbose=True)
```
### Important: <b>Always scale the data like in 3C) for subsequent problems.</b>
<b> 3E)</b> Using this same architecture, what is the accuracy after the 1st, 5th, 10th, and 15th epochs? Note that this colab notebook 0-indexes epoch output. We're looking for the first, fifth, tenth, and fifteenth number outputted by <code>run_pytorch_fc_mnist(train, validation, layers, 15, verbose=True)</code>
```
train, validation = get_MNIST_data()
# Scale the images
train = train[0] / 255, train[1]
validation = validation[0] / 255, validation[1]
layers = [nn.Linear(in_features=28*28, out_features=10)]
run_pytorch_fc_mnist(train, validation, layers, 15, trials = 1, verbose=True)
```
0.9142, 0.923, 0.9237, 0.9241
<b> 3H)</b> Using one epoch of training, try a single hidden layer, followed by a ReLU activation layer before the final output layer, and gradually increase the units; specifically, try (128, 256, 512, 1024) units and observe the results. What are the accuracies?
To define a ReLU layer in pytorch simply use <code>ReLU()</code>.
```
for num in [128,256,512,1024]:
layers = [nn.Linear(in_features=28*28, out_features=num), nn.ReLU(),nn.Linear(in_features=num, out_features=10)]
run_pytorch_fc_mnist(train, validation, layers, epochs=1, verbose=True)
```
<b> 3I)</b> Now, try a network with two hidden layers:
<ul>
<li>A fully connected layer with 512 hidden units
<li> A ReLU activation layer
<li> A fully connected layer with 256 hidden units
<li> A ReLU activation layer
<li> A fully-connected layer with 10 output units
What is the accuracy?
```
layers = [nn.Linear(in_features=28*28, out_features=512),
nn.ReLU(),
nn.Linear(in_features=512, out_features=256),
nn.ReLU(),
nn.Linear(in_features=256, out_features=10)]
run_pytorch_fc_mnist(train, validation, layers, 1, verbose=True)
```
<b> 3J)</b> Build a convolutional network with the following structure:
<ul>
<li> A convolutional layer (without padding) with 32 filters of size 3 × 3
<li> A ReLU activation layer
<li> A max pooling layer with size 2 × 2
<li> A convolutional layer (without padding) with 64 filters of size 3 × 3
<li> A ReLU activation layer
<li> A max pooling layer with size 2 × 2
<li> A flatten layer (<b>will flatten to size 1600 = 5 × 5 × 64</b>; try to figure out why!)
<li> A fully connected layer with 128 units
<li> A ReLU activation layer
<li> A dropout layer with drop probability 0.5
<li> A fully-connected layer with 10 output units
</ul>
To define Convolutional and max pooling layers in PyTorch use the following syntax:
<code> c = Conv2d(in_channels=i, out_channels=o, kernel_size=filter_size); m = MaxPool2d(kernel_size=filter_size) </code>, where <code> i </code> and <code> o</code> are integers and <code>filter_size</code> can either be an integer (for square filters) or a tuple (for non-square filters, e.g. (2, 3) for a 2×3 filter).
Train it on MNIST for one epoch, using <code>run_pytorch_cnn_mnist</code> (this may take a little while). What is the accuracy on the validation set?
```
c_1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3)
c_2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3)
m = nn.MaxPool2d(kernel_size=2)
layers = [c_1,
nn.ReLU(),
m,
c_2,
nn.ReLU(),
m,
nn.Flatten(),
nn.Linear(in_features=1600, out_features=128),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(in_features=128, out_features=10)
]
run_pytorch_cnn_mnist(train, validation, layers, epochs = 1, verbose=True)
```
<b> 3K)</b> Now, let's compare the performance of a fully connected model and a CNN on data where the characters have been shifted randomly so that they are no longer centered.
You can build such a data set by calling: <code>train_20, validation_20 = get_MNIST_data(shift=20)</code>. Remember to scale it appropriately.
<b>Note that each image is now 48x48, so you will need to change your layer definitions (size after Flatten will be 6400)</b>.
Run your two-hidden-layer FC architecture from above (problem 3I) on this data and then run the CNN architecture from above (problem 3J), both for one epoch. Report your results.
```
train_20, validation_20 = get_MNIST_data(shift=20)
# Scale the images
train_20 = (train_20[0]/255, train_20[1])
validation_20 = (validation_20[0]/255, validation_20[1])
layers_fc = [nn.Linear(in_features=48*48, out_features=512),
nn.ReLU(),
nn.Linear(in_features=512, out_features=256),
nn.ReLU(),
nn.Linear(in_features=256, out_features=10)]
run_pytorch_fc_mnist(train_20, validation_20, layers_fc, 1, verbose=True)
c_1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3)
c_2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3)
m = nn.MaxPool2d(kernel_size=2)
layers_cnn = [c_1,
nn.ReLU(),
m,
c_2,
nn.ReLU(),
m,
nn.Flatten(),
nn.Linear(in_features=6400, out_features=128),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(in_features=128, out_features=10)
]
run_pytorch_cnn_mnist(train_20, validation_20, layers_cnn, 1, verbose=True)
```
# 4) Raining Cats and Dogs
In this problem, we are going to explore how a model trained on a particular dataset behaves in the general population. We will use the following functions (and [generator](https://wiki.python.org/moin/Generators)).
```
def load_images(directory, imdir='./'):
imgs = []
labels = []
for i in os.listdir(os.path.join(imdir, directory)):
img = resize(imread(os.path.join(imdir, directory, i)), (224, 224), anti_aliasing=True)
imgs.append(np.moveaxis(img, 2, 0))
if 'cat' in i:
labels.append(0)
else:
labels.append(1)
imgs = np.array(imgs)
return imgs, labels
def data_gen(train_images, train_labels, batch_size, eval=True):
    # np.random.shuffle works in place and returns None, so shuffle the
    # index array first and then reorder the data with it
    all_idxs = np.arange(len(train_labels))
    np.random.shuffle(all_idxs)
    train_images = train_images[all_idxs]
    train_labels = np.array(train_labels)[all_idxs]
i = 0
while i * batch_size + batch_size < train_images.shape[0]:
samples = train_images[i*batch_size: (i+1)*batch_size]
sample_labels = train_labels[i*batch_size: (i+1)*batch_size]
i += 1
yield torch.tensor(samples, dtype=torch.float), torch.tensor(sample_labels, dtype=torch.long)
if eval:
samples = train_images[(i)*batch_size:]
sample_labels = train_labels[(i)*batch_size:]
yield torch.tensor(samples, dtype=torch.float), torch.tensor(sample_labels, dtype=torch.long)
def postproc_output(out):
sm = torch.nn.Softmax(dim=1)
return sm(out).detach().numpy()
```
**4A)** Write code to evaluate the model on each of the three test sets, following this pseudocode:
<ol>
<li> Load model
<ul>
<li> <tt> squeezenet_trained_cats_v_dogs.pt</tt> contains the <tt>state_dict</tt> of the model
<li> You will need to instantiate the model architecture first by running:
<tt>model = torchvision.models.squeezenet1_1(num_classes=2)</tt>
<li> Then use <a href="https://pytorch.org/docs/stable/nn.html?highlight=load_state_dict#torch.nn.Module.load_state_dict"><tt> model.load_state_dict</tt> </a>; make sure to use the parameter <tt>map_location=torch.device('cpu')</tt> when you use <tt>torch.load</tt>. It might also be helpful to read about the general workflow of <a href="https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-for-inference"> saving and loading trained models in Pytorch </a>
<li> Make sure that you are in evaluation mode by using <tt> model.eval() </tt>
</ul>
<li> Load data from <tt>data_path</tt> [Use our function <tt>load_images</tt>]
<li> For each batch of data [Use our generator <tt>data_gen</tt>; note that the batch size doesn't really matter here except to keep from having to multiply matrices that are too large. For ease of implementation we suggest <tt>batch_size=1</tt>]:
<ul>
<li> Pass the batch of <tt>data</tt> through the model using <tt>model(data)</tt>
<li> Convert the predictions of the model to guesses (after softmax) [use our function <tt>postproc_output</tt>]
<li> Compare the guesses to the actual <tt>labels</tt>
</ul>
<li> Total the accuracy
</ol>
```
def evaluate_model(model_path, data_path):
# Load model
model = torchvision.models.squeezenet1_1(num_classes=2) # Instantiate model architecture
model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu'))) # Load trained model
model.eval() # Ensure that you are in evaluation mode
# Load data
batch_size = 1
imgs, labels = load_images(data_path) # Your code here (call load_images on data_path)
data_generator = data_gen(imgs, labels, batch_size=1) # Your code here
correct, total = 0, 0
# Iterate through data, label in data generator
for i, (data, lab) in enumerate(data_generator):
total += 1
predict = np.argmax(postproc_output(model(data))[0])
true_lab = lab.item()
if predict == true_lab:
correct += 1
print(correct/total)
```
Calculate the performance on each of the test sets.
```
model_path = 'squeezenet_trained_cats_v_dogs.pt'
for data_path in ['test1', 'test2', 'test3']:
evaluate_model(model_path, data_path)
```
```
from pitas import power, flipper_tools
from orphics import maps as omaps
from pixell import enplot, enmap, curvedsky
import numpy as np
from cosmikyu import stats, mpi, datasets, config, utils, gan, transforms
from cosmikyu import nn as cnn
import os
from itertools import product
import healpy as hp
import matplotlib.pyplot as plt
import random
from minkfncts2d import MF2D
from itertools import cycle
%matplotlib inline
%load_ext autoreload
%autoreload 2
data_dir = config.default_data_dir
sehgal_dir = os.path.join(data_dir, 'sehgal')
stat_dir = os.path.join(sehgal_dir, "stats")
norm_info_file = "/home/dwhan89/workspace/cosmikyu/data/sehgal/281220_logz_normalization_info_validation.npz"
SDN = transforms.SehgalDataNormalizerScaledLogZShrink(norm_info_file)
SUN = transforms.SehgalDataUnnormalizerScaledLogZShrink(norm_info_file)
compts = ["kappa", "ksz", "tsz", "ir", "rad"]
plot_dir = "/home/dwhan89/scratch/outbox/cosmikyu"
def plot_path(x):
return os.path.join(plot_dir, x)
overwrite=False
STAT_TEST = stats.STATS("sehgal_cosmoganwgpv_minko_281220", output_dir=stat_dir, overwrite=overwrite)
shape = (128,128)
SDS_test = datasets.SehgalDataSet(sehgal_dir, "test281220_fromcat", transforms=[SDN],
dummy_label=False, dtype=np.float64, shape=(5,)+shape)
nsample = 50  # len(SDS_test)
z = np.linspace(-15,15,300)
for i, compt in enumerate(compts):
for j in range(nsample):
if j % 100 == 0: print(compt, j)
st_idx_temp = "%s_{}"%compt
if STAT_TEST.has_data(st_idx_temp.format("chi"), j):
continue
storage = np.zeros((len(z),3))
for k, threshold in enumerate(z):
f, u, chi = MF2D((SDS_test[j][i]).astype(np.float64), threshold)
storage[k,:] = np.array([f, u, chi])
STAT_TEST.add_data(st_idx_temp.format("f"), j, storage[:,0].copy())
STAT_TEST.add_data(st_idx_temp.format("u"), j, storage[:,1].copy())
STAT_TEST.add_data(st_idx_temp.format("chi"), j, storage[:,2].copy())
STAT_TEST.add_data("z", 0, z)
ret = STAT_TEST.get_stats()
STanh = cnn.ScaledTanh(15., 2./15.)
LF = cnn.LinearFeature(5,5)
SC = transforms.SehgalSubcomponets([0])
norm_info_file = "/home/dwhan89/workspace/cosmikyu/data/sehgal/281220_logz_normalization_info_validation.npz"
#experiment_id = "ec72a32f599f4ccda54a556ba56abea4"#"e2d04f98e77a49c5804db3379217986f"
#model_dir = "/home/dwhan89/workspace/cosmikyu/output/sehgal_forse_081020/{}/model".format(experiment_id)
experiment_id = "6c187e10f7ad45c8b6e6ebb7c0b15d31"
model_dir = "/home/dwhan89/workspace/cosmikyu/output/sehgal_forse_281220/{}/model".format(experiment_id)
print(model_dir)
overwrite =False
SDN = transforms.SehgalDataNormalizerScaledLogZShrink(norm_info_file)
SUN = transforms.SehgalDataUnnormalizerScaledLogZShrink(norm_info_file)
SDN_GK = transforms.SehgalDataNormalizerScaledLogZShrink(norm_info_file, channel_idxes=["kappa"])
SDS_input = datasets.SehgalDataSet(sehgal_dir, "train_tertiary281220_fromcat",
transforms=[SDN], dummy_label=False, dtype=np.float32)
STAT_GEN = stats.STATS(experiment_id+"_minko", output_dir=stat_dir, overwrite=overwrite)
#save_points = [20,29,30]
save_points = [3,13]
shape = (128,128)
nsample = 100
cuda = False
z = np.linspace(-15,15,300)
for save_point in save_points:
FORSE = gan.VAEGAN_WGP("sehgal_forse_081020", (5,)+shape, nconv_fcgen=64,
nconv_fcdis=64, cuda=cuda, ngpu=4, nconv_layer_gen=5, nconv_layer_disc=5, kernal_size=4, stride=2,
padding=1, output_padding=0, gen_act=[LF,STanh], nin_channel=5,
nout_channel=5, nthresh_layer_gen=0, nthresh_layer_disc=0, dropout_rate=0)
FORSE.load_states(model_dir, "_{}".format(save_point))
for i, compt in enumerate(compts):
for j in range(nsample):
if j % 6000 == 0: print(compt, j)
st_idx_temp = "%s_{}_%d"%(compt, save_point)
#if STAT_GEN.has_data(st_idx_temp.format("chi"), j):
#continue
sample = FORSE.generate_samples(SDS_input[j], concat=False, train=False).data.numpy()[0].astype(np.float64)
storage = np.zeros((len(z), 3))
for k, threshold in enumerate(z):
f, u, chi = MF2D(sample[i], threshold)
storage[k,:] = np.array([f, u, chi])
STAT_GEN.add_data(st_idx_temp.format("f"), j, storage[:,0].copy())
STAT_GEN.add_data(st_idx_temp.format("u"), j, storage[:,1].copy())
STAT_GEN.add_data(st_idx_temp.format("chi"), j, storage[:,2].copy())
STAT_GEN.add_data("z", 0, z)
ret = STAT_GEN.get_stats()
def key2label(key):
storage = {"kappa":r"$ \kappa $",
"ksz":" kSZ ",
"tsz":" tSZ ",
"ir":" CIB ",
"rad":"Radio",
}
return storage[key]
## ps
def key2label(key):
storage = {"kappa":r"$ \kappa$ (x 100)",
"ksz":"kSZ ",
"tsz":"tSZ ",
"ir":"CIB ",
"rad":"Radio",
}
return storage[key]
def mink2label(key):
storage = {"f":("Minkowski functional F (Area)", "Squared number of pixels"),
"u":("Minkowski functional U (Boundary)", "Number of pixels"),
"chi":("Minkowski functional $\chi$ (Euler characteristic)", "Arbitrary Unit")
}
return storage[key]
flux_cut = False
for mink_idx in ["f", "u", "chi"]:
plt.clf()
title, ylabel = mink2label(mink_idx)
print(ylabel)
fig = plt.figure(figsize=(12,12))
ax = fig.gca()
for i, compt in enumerate(compts):
color = next(ax._get_lines.prop_cycler)['color']
st_idx_temp = "%s_{}"%compt
key = st_idx_temp.format(mink_idx)
plt.plot(z, STAT_TEST.stats[key]["mean"], alpha=1, lw=4, marker="",markersize=8, color=color, ls="--")
st_idx_temp = "%s_{}_%d"%(compt, save_points[-1])
key = st_idx_temp.format(mink_idx)
plt.plot(z, STAT_GEN.stats[key]["mean"], alpha=1, lw=2, marker="",markersize=12, color=color, ls="-")
plt.plot([],[], label=key2label(compt), color=color)
plt.title(title, fontsize=30)
plt.plot([],[], lw=2, marker="", ls="-", label="Network", color="k")
plt.plot([],[], lw=2, marker="", label="S10", color="k", ls="--")
ax.set_ylabel(ylabel, fontsize=33)
ax.set_xlabel("Pixel Threshold", fontsize=35)
plt.legend(fontsize=30, ncol=1)
ax.tick_params(axis='both', which='major', labelsize=25)
ax.tick_params(axis='both', which='minor', labelsize=25)
plt.xlim(-5,5)
ax.grid()
plt.subplots_adjust(wspace=0, hspace=0)
plt.savefig(plot_path(f"141020_minko_{mink_idx}.pdf"), bbox_inches='tight')
plt.show()
for i, compt in enumerate(compts):
plt.clf()
st_idx_temp = "%s_{}"%compt
fig = plt.figure(figsize=(12,7))
ax = fig.gca()
plt.plot(z, STAT_TEST.stats[st_idx_temp.format("f")]["mean"], alpha=0.5, lw=3, label="F (Area)", color="r")
plt.plot(z, STAT_TEST.stats[st_idx_temp.format("u")]["mean"], alpha=0.5,lw=3, label="U (Boundary)", color="b")
plt.plot(z, STAT_TEST.stats[st_idx_temp.format("chi")]["mean"], alpha=0.5,lw=3,
label="$\chi$ (Euler characteristic)", color="g")
st_idx_temp = "%s_{}_%d"%(compt, save_points[-1])
mean, std = STAT_GEN.stats[st_idx_temp.format("f")]["mean"], STAT_GEN.stats[st_idx_temp.format("f")]["std"]
plt.plot(z, mean, ls="--", lw=3, color="r")
mean, std = STAT_GEN.stats[st_idx_temp.format("u")]["mean"], STAT_GEN.stats[st_idx_temp.format("u")]["std"]
plt.plot(z, mean, ls="--", lw=3, color="b")
mean, std = STAT_GEN.stats[st_idx_temp.format("chi")]["mean"], STAT_GEN.stats[st_idx_temp.format("chi")]["std"]
plt.plot(z, mean, ls="--", lw=3, color="g")
plt.plot([],[], lw=3, color="k", label="Reference")
plt.plot([],[], ls="--", lw=3, color="k", label="Generated")
plt.xlim(-5,5)
plt.yscale("linear")
plt.legend(fontsize=20)
ax.tick_params(axis='both', which='major', labelsize=22)
ax.tick_params(axis='both', which='minor', labelsize=22)
plt.xlabel("Threshold", fontsize=20)
plt.show()
plt.clf()
fig, axes = plt.subplots(3,2, figsize=(12,15), sharex='all', sharey='all')
for i, compt in enumerate(compts):
yidx = i // 2
xidx = i % 2
print(i, yidx, xidx)
st_idx_temp = "%s_{}"%compt
axes[yidx,xidx].plot(z, STAT_TEST.stats[st_idx_temp.format("f")]["mean"], ls="--", alpha=0.5, lw=3, label="F (Area)", color="r")
axes[yidx,xidx].plot(z, STAT_TEST.stats[st_idx_temp.format("u")]["mean"], ls="--", alpha=0.5, lw=3, label="U (Boundary)", color="b")
axes[yidx,xidx].plot(z, STAT_TEST.stats[st_idx_temp.format("chi")]["mean"], ls="--", alpha=0.5, lw=3,
label="$\chi$ (Euler characteristic)", color="g")
st_idx_temp = "%s_{}_%d"%(compt, save_points[-1])
mean, std = STAT_GEN.stats[st_idx_temp.format("f")]["mean"], STAT_GEN.stats[st_idx_temp.format("f")]["std"]
axes[yidx,xidx].plot(z, mean, ls="-", lw=3, color="r")
mean, std = STAT_GEN.stats[st_idx_temp.format("u")]["mean"], STAT_GEN.stats[st_idx_temp.format("u")]["std"]
axes[yidx,xidx].plot(z, mean, ls="-", lw=3, color="b")
mean, std = STAT_GEN.stats[st_idx_temp.format("chi")]["mean"], STAT_GEN.stats[st_idx_temp.format("chi")]["std"]
axes[yidx,xidx].plot(z, mean, ls="-", lw=3, color="g")
axes[yidx,xidx].set_xlim([-3,3])
axes[yidx,xidx].set_ylim(-1000, 17000)
axes[yidx,xidx].tick_params(axis='both', which='major', labelsize=25)
axes[yidx,xidx].tick_params(axis='both', which='minor', labelsize=25)
axes[yidx,xidx].grid()
axes[yidx,xidx].text(0.90, 0.8, key2label(compt),
verticalalignment='bottom', horizontalalignment='right',
transform=axes[yidx,xidx].transAxes,
color='k', fontsize=25, bbox={'facecolor': 'white', 'alpha': 0.8, 'pad': 10})
plt.subplots_adjust(wspace=None, hspace=None)
axes[yidx,xidx].set_xlabel("Pixel Threshold", fontsize=23)
#plt.plot([],[], lw=3, color="k", label="Reference")
#plt.plot([],[], ls="--", lw=3, color="k", label="Generated")
plt.suptitle("Minkowski Functionals", fontsize=30)
fig.subplots_adjust(top=0.95)
axes[2,1].plot([],[], lw=3, color="r", label="F (Area)")
axes[2,1].plot([],[], lw=3, color="b", label="U (Boundary)")
axes[2,1].plot([],[], lw=3, color="g", label="$\chi$ (Euler characteristic)")
axes[2,1].plot([],[], ls="-", lw=3, color="k", label="Network")
axes[2,1].plot([],[], ls="--", lw=3, color="k", label="S10")
axes[2,1].tick_params(axis='both', which='major', labelsize=25)
axes[2,1].tick_params(axis='both', which='minor', labelsize=25)
axes[2,1].grid()
axes[2,1].set_xlabel("Pixel Threshold", fontsize=25)
axes[2,1].legend(fontsize=20)
#axes[0,0].get_yaxis().set_visible(False)
#axes[1,0].get_yaxis().set_visible(False)
#axes[2,0].get_yaxis().set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
plt.savefig(plot_path("141020_minko.pdf"))
plt.show()
def mink2label(key):
storage = {"f":("F (Area)"),
"u":("Minkowski functional U (Boundary)", "Number of pixels"),
"chi":("Minkowski functional $\chi$ (Euler characteristic)", "Arbitrary Unit")
}
return storage[key]
def key2label(key):
storage = {"kappa":r"$ \kappa$",
"ksz":"kSZ ",
"tsz":"tSZ ",
"ir":"CIB ",
"rad":"Radio",
}
return storage[key]
import matplotlib.ticker as mtick
f = mtick.ScalarFormatter(useOffset=False, useMathText=True)
g = lambda x,pos : "${}$".format(f._formatSciNotation('%1.10e' % x))
plt.clf()
fig, axes = plt.subplots(2,2, figsize=(12,12), sharex='all', sharey='row')
npix = sample[0].size
for i in np.arange(4):
yidx = i // 2
xidx = i % 2
print(i, yidx, xidx)
mult_fact = 1/ npix if i < 2 else 1/490
if i < 3:
mink_idx = ["f", "u", "chi"][i]
for j, compt in enumerate(compts):
color = next(axes[yidx, xidx]._get_lines.prop_cycler)['color']
st_idx_temp = "%s_{}"%compt
key = st_idx_temp.format(mink_idx)
axes[yidx, xidx].plot(z, STAT_TEST.stats[key]["mean"]*mult_fact, alpha=1, lw=3, marker="",markersize=8, color=color, ls="--")
st_idx_temp = "%s_{}_%d"%(compt, save_points[-1])
key = st_idx_temp.format(mink_idx)
axes[yidx, xidx].plot(z, STAT_GEN.stats[key]["mean"]*mult_fact, alpha=1, lw=3, marker="",markersize=12, color=color, ls="-")
#plt.plot([],[], label=key2label(compt), color=color)
#st_idx_temp = "%s_{}"%compt
#axes[yidx,xidx].plot(z, STAT_TEST.stats[st_idx_temp.format("f")]["mean"], ls="--", alpha=0.5, lw=3, label="F (Area)", color="r")
#axes[yidx,xidx].plot(z, STAT_TEST.stats[st_idx_temp.format("u")]["mean"], ls="--", alpha=0.5, lw=3, label="U (Boundary)", color="b")
#axes[yidx,xidx].plot(z, STAT_TEST.stats[st_idx_temp.format("chi")]["mean"], ls="--", alpha=0.5, lw=3,
# label="$\chi$ (Euler characteristic)", color="g")
#st_idx_temp = "%s_{}_%d"%(compt, save_points[-1])
#mean, std = STAT_GEN.stats[st_idx_temp.format("f")]["mean"], STAT_GEN.stats[st_idx_temp.format("f")]["std"]
#axes[yidx,xidx].plot(z, mean, ls="-", lw=3, color="r")
#mean, std = STAT_GEN.stats[st_idx_temp.format("u")]["mean"], STAT_GEN.stats[st_idx_temp.format("u")]["std"]
#axes[yidx,xidx].plot(z, mean, ls="-", lw=3, color="b")
#mean, std = STAT_GEN.stats[st_idx_temp.format("chi")]["mean"], STAT_GEN.stats[st_idx_temp.format("chi")]["std"]
#axes[yidx,xidx].plot(z, mean, ls="-", lw=3, color="g")
axes[yidx,xidx].set_xlim([-3,3])
#axes[yidx,xidx].set_ylim(-1000, 17000)
axes[yidx,xidx].tick_params(axis='both', which='major', labelsize=30)
axes[yidx,xidx].tick_params(axis='both', which='minor', labelsize=30)
axes[yidx,xidx].grid()
#axes[yidx,xidx].text(0.90, 0.8, key2label(compt),
# verticalalignment='bottom', horizontalalignment='right',
# transform=axes[yidx,xidx].transAxes,
# color='k', fontsize=25, bbox={'facecolor': 'white', 'alpha': 0.8, 'pad': 10})
#axes[yidx,xidx].set_xlabel("Pixel Threshold", fontsize=23)
#axes[yidx,xidx].yaxis.set_major_formatter(mtick.FuncFormatter(g))
#plt.plot([],[], lw=3, color="k", label="Reference")
#plt.plot([],[], ls="--", lw=3, color="k", label="Generated")
plt.suptitle("Minkowski Functionals", fontsize=30)
fig.subplots_adjust(top=0.93)
for j, compt in enumerate(compts):
color = next(axes[1, 1]._get_lines.prop_cycler)['color']
plt.plot([],[], label=key2label(compt), color=color, lw=3)
#axes[0,0].plot([],[], lw=3, color="r", label="F (Area)")
#axes[0,1].plot([],[], lw=3, color="b", label="U (Boundary)")
#axes[1,0].plot([],[], lw=3, color="g", label="$\chi$ (Euler characteristic)")
axes[1,1].plot([],[], ls="-", lw=3, color="k", label="Network")
axes[1,1].plot([],[], ls="--", lw=3, color="k", label="S10")
axes[1,0].set_ylim(-3,5.9)
axes[1,1].tick_params(axis='both', which='major', labelsize=30)
axes[1,1].tick_params(axis='both', which='minor', labelsize=30)
axes[0,0].text(0.95, 0.83, "F (Area)",
verticalalignment='bottom', horizontalalignment='right',
transform=axes[0,0].transAxes,
color='k', fontsize=30, bbox={'facecolor': 'white', 'alpha': 0.8, 'pad': 10})
axes[0,1].text(0.95, 0.83, "U (Boundary)",
verticalalignment='bottom', horizontalalignment='right',
transform=axes[0,1].transAxes,
color='k', fontsize=30, bbox={'facecolor': 'white', 'alpha': 0.8, 'pad': 10})
axes[1,0].text(0.95, 0.83, r"$\chi$ (Euler)",
verticalalignment='bottom', horizontalalignment='right',
transform=axes[1,0].transAxes,
color='k', fontsize=30, bbox={'facecolor': 'white', 'alpha': 0.8, 'pad': 10})
axes[0,0].set_ylabel("Arbitrary Unit", fontsize=30)
axes[1,0].set_ylabel("Arbitrary Unit", fontsize=30)
axes[1,0].set_xlabel("Pixel Threshold", fontsize=30)
axes[1,1].set_xlabel("Pixel Threshold", fontsize=30)
#axes[1,1].plot([],[])
#axes[0,0].legend(fontsize=30)
#axes[1,0].legend(fontsize=30)
#axes[0,1].legend(fontsize=30)
axes[1,1].legend(fontsize=30)
plt.subplots_adjust(hspace=0, wspace=0)
#axes[1,1].grid()
#axes[0,0].get_yaxis().set_visible(False)
#axes[1,0].get_yaxis().set_visible(False)
#axes[2,0].get_yaxis().set_visible(False)
#axes[1,1].yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1e'))
plt.savefig(plot_path("141020_minko_full.pdf"), bbox_inches='tight')
plt.show()
```
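For reference, the first Minkowski functional computed above (F, the area functional) is simply the number of pixels above each threshold; a minimal stand-in for that one component of the `MF2D` call might look like:

```python
import numpy as np

def mf_area(field, threshold):
    # F: count of pixels whose value exceeds the threshold
    return int((field > threshold).sum())

field = np.array([[0.0, 1.0],
                  [2.0, 3.0]])
assert mf_area(field, -1.0) == 4   # every pixel is above -1
assert mf_area(field, 1.5) == 2    # only the 2.0 and 3.0 pixels remain
```

The boundary functional U and the Euler characteristic chi additionally need the connectivity of the thresholded set, which is what the compiled `MF2D` routine provides.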
```
from sklearn import linear_model
import glob
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append("../../")
from mfilter.types import FrequencySamples, TimeSeries, FrequencySeries, TimesSamples
# functions
# read file
def read_file():
# folder MLensing with the files should be placed out
# of the root file of the project (the one with the .git)
return glob.glob("../../../MLensing/*.mjd")
# read data
def read_data(files, j, ini, end, normalize_time=False):
a = pd.read_csv(files[j], skiprows=3, header=None, sep=" ") # read the table
a.columns = ["MJD", "Mag", "err"] # name the columns
times = TimesSamples(a["MJD"][ini:end]) # read the times in MJD
times -= times.min() # normalize to 0; this gives us units of days
times *= days_to_sec # transform to units of seconds
if normalize_time:
print("before normalize, we get an average sampling rate of:",
times.average_fs, "samples per second")
times *= times.average_fs # normalize to units of samples (for digital signals)
print("after normalize, we get an average sampling rate of:", times.average_fs,
"samples per cycle \n with this, perfect reconstruction is guaranteed to be " +
"possible for a bandlimit of: ", times.average_fs/2)
data = TimeSeries(a["Mag"][ini:end], times=times) # get the magnitude
data -= np.median(data) # normalize the magnitude to 0 (the center goes to 0).
err = a["err"].tolist()[ini:end] # get the error
return times, data, err
reg1 = linear_model.SGDRegressor(alpha=1e-8, max_iter=1000, penalty='l2', l1_ratio=0.5, tol=0.001)
reg2 = linear_model.SGDRegressor(alpha=1e-8, max_iter=1000, penalty='l2', l1_ratio=0.5, tol=0.001)
j = 10 # the particular observation to use, j={5, 9, 14} gives bad results
ini = 0 # init of a range of the observations, we use the whole data
end = -1 # end of a range of the observation, we use the whole data
days_to_sec = 1 * 24 * 60 * 60 # transformation from days to seconds
sec_to_days = 1 /(60 * 60 * 24) # transformation from seconds to days
files = read_file()
times0, data0, err = read_data(files, j, ini, end, normalize_time=False)
# times0 *= sec_to_days
fig = plt.figure(figsize=(8,3))
plt.plot(times0, data0, '.')
plt.title("Microlens from MACHO dataset")
plt.xlabel("time (seconds)")
print("average nyquist limit of these time samples is: ", times0.average_fs/2, "Hz")
# print("with a minimum wavelenth of: ", c*2/times0.average_fs, "meters")
print(times0[0]*times0.average_fs, times0[1]* times0.average_fs, times0[2]* times0.average_fs)
# times /=
# using simulated signals (sin)
# time
beta = 1
times = TimesSamples(np.linspace(times0.min(), times0.max(), beta*len(times0)))
times = TimesSamples(np.linspace(0, 9, 10))
# times *= sec_to_days
# nyquist limit (maximum recognizable frequency)
nyq = times.average_fs / 2
print("fs:", times.average_fs, "fs2:", (len(times)-1)/times.duration)
print("nyquist limit frequency is: ", nyq)
# frequencies in Hz; let's set f1 to the Nyquist limit and f2 to half of it
f1 = nyq
f2 = nyq/2
print("from these, f1:", f1, "and f2:", f2)
c = 300000000 # light speed in m/s
print("and their lambdas are lambda1:", c/f1, "and lambda2:", c/f2)
data1 = TimeSeries(np.cos(2 * np.pi * f1 * times), times=times)
data2 = TimeSeries(np.sin(2 * np.pi * f1 * times)*0.8 + np.sin(2 * np.pi * f2 * times)*0.2, times=times)  # needed below by reg1.fit and np.fft.fft
plt.figure(figsize=(10, 4))
plt.plot(times, data1, 'r-', label="data1")
# plt.plot(times, data2, 'b-', label="data2")
plt.legend()
times.value
fs = 10
nyq = fs/2
f1 = nyq
times2 = np.arange(100)/fs
data = np.cos(2 * np.pi * f1 * times2)
plt.plot(times2, data)
# times2
# fourier matrix
#frequencies, using averaged nyquist limit
nyq2 = len(times) / times.duration / 2
minf = 1 / times.duration #min freq, avoid 0
freqs = FrequencySamples(input_time=times, minimum_frequency=minf, maximum_frequency=nyq2, samples_per_peak=1)
freqs = FrequencySamples(initial_array=np.append(np.sort(-freqs.value), freqs.value), df=freqs.basic_df)
# matrix
m = times.value.reshape(-1, 1) * freqs.value
m = np.exp(2j * np.pi * m)
m = np.hstack((m.real, m.imag))
reg1.fit(m, data1)
# save these coefs
coefs1 = np.copy(reg1.coef_)
# clear the coefs in reg1 before refitting on data2
reg1.coef_ = None
reg1.fit(m, data2)
coefs2 = np.copy(reg1.coef_)
def _cast_into_ft(coefs):
n_freqs = int(len(coefs) / 2)
ft = 1j * np.zeros(n_freqs)
for i in range(n_freqs):
ft[i] = coefs[i] - 1j * coefs[i + n_freqs]
return ft
ft1 = _cast_into_ft(coefs1)
ft2 = _cast_into_ft(coefs2)
fig, [ax1, ax2, ax3] = plt.subplots(1, 3, figsize=(16, 4))
ax1.plot(freqs, np.abs(ft1), 'r.')
ax1.plot(freqs, np.abs(ft2), 'b.')
ax2.plot(freqs, np.real(ft1), 'r.')
ax2.plot(freqs, np.real(ft2), 'b.')
ax3.plot(freqs, np.imag(ft1), 'r.')
ax3.plot(freqs, np.imag(ft2), 'b.')
# plt.fill_between(np.arange(len(coefs2)), coefs1, coefs2)
# using fft
fft1 = np.fft.fft(data1)
fft2 = np.fft.fft(data2)
fftfreq = np.fft.fftfreq(len(data1), d=times.duration / len(times))
fig, [ax1, ax2, ax3] = plt.subplots(1, 3, figsize=(16, 4))
ax1.plot(fftfreq, np.abs(fft1)/len(times), 'r.')
ax1.plot(fftfreq, np.abs(fft2)/len(times), 'b.')
ax2.plot(fftfreq, np.real(fft1)/len(times), 'r.')
ax2.plot(fftfreq, np.real(fft2)/len(times), 'b.')
ax3.plot(fftfreq, np.imag(fft1)/len(times), 'r.')
ax3.plot(fftfreq, np.imag(fft2)/len(times), 'b.')
# let's check first if these match: find the peak frequency,
# i.e. the args where the peak happens
peakft1 = np.argmax(np.abs(ft1))
peakfft1 = np.argmax(np.abs(fft1))
print(freqs[peakft1], np.fft.fftfreq(len(times))[peakfft1], fftfreq[peakfft1])
np.fft.fftfreq(len(times))[peakfft1] * len(times) / times.duration
f1
times.average_fs / 2
np.argmax(np.abs(ft1))
f1 / fftfreq[np.argmax(np.abs(fft1))]
```
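The regression-based spectrum above (a cos/sin design matrix, fitted and then repacked into complex form by `_cast_into_ft`) can be reproduced on a toy uniform grid with a plain least-squares solve; the peak of |ft| should land on the injected frequency. The grid and frequencies below are made up for illustration:

```python
import numpy as np

fs = 8.0
times = np.arange(32) / fs
freqs = np.fft.rfftfreq(32, d=1/fs)[1:]       # skip DC, like minf above
signal = np.cos(2 * np.pi * 1.0 * times)      # inject a 1 Hz cosine

m = np.exp(2j * np.pi * times.reshape(-1, 1) * freqs)
m = np.hstack((m.real, m.imag))               # [cos | sin] columns

coefs, *_ = np.linalg.lstsq(m, signal, rcond=None)
n = len(freqs)
ft = coefs[:n] - 1j * coefs[n:]               # same packing as _cast_into_ft
peak = freqs[np.argmax(np.abs(ft))]
print(peak)  # 1.0
```

On irregular samples, as in the notebook, the columns are no longer orthogonal, which is exactly why an explicit regression is used instead of an FFT.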
Nyquist is important and can be a limiting factor. Above all, imposing the Nyquist rate as the maximum frequency can make us lose all of the relevant information.
```
fs = 10000*2
times = np.arange(10) / fs
# times = np.linspace(0, 6e-4, 1000)
data = np.cos(2 * np.pi * 10000 * times)
plt.plot(times, data, '-')
plt.ticklabel_format(axis='x',style='sci',scilimits=(0,0))
np.arange(4)
```
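The cell above is a concrete aliasing check: sampled at rate fs, a tone above the Nyquist limit is indistinguishable from one folded down by fs. A minimal sketch, with made-up frequencies:

```python
import numpy as np

fs = 10.0
t = np.arange(20) / fs
f_true = 12.0             # above the 5 Hz Nyquist limit
f_alias = f_true - fs     # 2 Hz: the frequency we actually observe
assert np.allclose(np.cos(2 * np.pi * f_true * t),
                   np.cos(2 * np.pi * f_alias * t))
```

The two cosines agree at every sample because their phases differ by an exact multiple of 2*pi per sample; only denser sampling can tell them apart.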
# Multiclass Example
This example shows how to use `tsfresh` to extract and select useful features from timeseries in a multiclass classification example.
The underlying control of the false discovery rate (FDR) has been introduced by [Tang et al. (2020, Sec. 3.2)](https://doi.org/10.1140/epjds/s13688-020-00244-9).
We use an example dataset of human activity recognition for this.
The dataset consists of timeseries for 7352 accelerometer readings.
Each reading represents an accelerometer reading for 2.56 sec at 50 Hz (for a total of 128 samples per reading). Furthermore, each reading corresponds to one of six activities (walking, walking upstairs, walking downstairs, sitting, standing and laying).
For more information go to https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones
This notebook follows the example in [the first notebook](./01%20Feature%20Extraction%20and%20Selection.ipynb), so we will go quickly over the extraction and focus on the more interesting feature selection in this case.
```
%matplotlib inline
import matplotlib.pylab as plt
from tsfresh import extract_features, extract_relevant_features, select_features
from tsfresh.utilities.dataframe_functions import impute
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import pandas as pd
import numpy as np
```
## Load and visualize data
```
from tsfresh.examples.har_dataset import download_har_dataset, load_har_dataset, load_har_classes
# fetch dataset from uci
download_har_dataset()
df = load_har_dataset()
df.head()
y = load_har_classes()
```
The data is not in a typical time series format so far:
the columns are the time steps whereas each row is a measurement of a different person.
Therefore we bring it to a format where the time series of different persons are identified by an `id` and are ordered by time vertically.
```
df["id"] = df.index
df = df.melt(id_vars="id", var_name="time").sort_values(["id", "time"]).reset_index(drop=True)
df.head()
plt.title('accelerometer reading')
plt.plot(df[df["id"] == 0].set_index("time").value)
plt.show()
```
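The wide-to-long reshaping above can be verified on a toy frame (two readings, three time steps):

```python
import pandas as pd

wide = pd.DataFrame({0: [1.0, 4.0], 1: [2.0, 5.0], 2: [3.0, 6.0]})
wide["id"] = wide.index
long = (wide.melt(id_vars="id", var_name="time")
            .sort_values(["id", "time"])
            .reset_index(drop=True))
assert list(long.columns) == ["id", "time", "value"]
assert long.loc[long["id"] == 0, "value"].tolist() == [1.0, 2.0, 3.0]
```

Each wide row becomes a contiguous, time-sorted block of long rows sharing one `id`, which is the format `extract_features` expects.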
## Extract Features
```
# only use the first 500 ids to speed up the processing
X = extract_features(df[df["id"] < 500], column_id="id", column_sort="time", impute_function=impute)
X.head()
```
## Train and evaluate classifier
For later comparison, we train a decision tree on all features (without selection):
```
X_train, X_test, y_train, y_test = train_test_split(X, y[:500], test_size=.2)
classifier_full = DecisionTreeClassifier()
classifier_full.fit(X_train, y_train)
print(classification_report(y_test, classifier_full.predict(X_test)))
```
# Multiclass feature selection
We will now select a subset of relevant features using the `tsfresh` select features method.
However, it only works for binary classification or regression tasks.
For a six-label multiclass classification we therefore split the selection problem into six binary one-versus-all classification problems.
For each of them we can do a binary classification feature selection:
```
relevant_features = set()
for label in y.unique():
y_train_binary = y_train == label
X_train_filtered = select_features(X_train, y_train_binary)
print("Number of relevant features for class {}: {}/{}".format(label, X_train_filtered.shape[1], X_train.shape[1]))
relevant_features = relevant_features.union(set(X_train_filtered.columns))
len(relevant_features)
```
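The same one-versus-all pattern can be sketched on synthetic data with a toy selector (a crude mean-gap test standing in for the hypothesis tests that `select_features` actually runs):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = np.repeat([0, 1, 2], 20)
X[:, 0] += (y == 0) * 3.0   # feature 0 separates class 0 from the rest
X[:, 1] += (y == 1) * 3.0   # feature 1 separates class 1 from the rest

def toy_select(X, y_binary, gap=1.5):
    # keep features whose means differ strongly between the two groups
    diff = np.abs(X[y_binary].mean(axis=0) - X[~y_binary].mean(axis=0))
    return set(np.flatnonzero(diff > gap))

relevant = set()
for label in np.unique(y):
    relevant |= toy_select(X, y == label)   # union over the binary problems
assert 2 not in relevant                     # the pure-noise feature is dropped
```

The union keeps any feature that is relevant for at least one class, mirroring the `relevant_features.union(...)` accumulation above.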
we keep only those features that we selected above, for both the train and test set
```
X_train_filtered = X_train[list(relevant_features)]
X_test_filtered = X_test[list(relevant_features)]
```
and train again:
```
classifier_selected = DecisionTreeClassifier()
classifier_selected.fit(X_train_filtered, y_train)
print(classification_report(y_test, classifier_selected.predict(X_test_filtered)))
```
It worked! The precision improved by removing irrelevant features.
## Improved Multiclass feature selection
We can instead specify the number of classes for which a feature should be a relevant predictor in order to pass through the filtering process. This is as simple as setting the `multiclass` parameter to `True` and setting `n_significant` to the required number of classes. We will try with a requirement of being relevant for 5 classes.
```
X_train_filtered_multi = select_features(X_train, y_train, multiclass=True, n_significant=5)
X_train_filtered_multi.shape
```
We can see that the number of relevant features is lower than with the previous implementation.
```
classifier_selected_multi = DecisionTreeClassifier()
classifier_selected_multi.fit(X_train_filtered_multi, y_train)
X_test_filtered_multi = X_test[X_train_filtered_multi.columns]
print(classification_report(y_test, classifier_selected_multi.predict(X_test_filtered_multi)))
```
We now get slightly better classification performance, especially for classes where the previous classifier performed poorly. The parameter `n_significant` can be tuned for best results.
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
import sys
sys.path.insert(0, "/home/husein/parsing/self-attentive-parser/src")
sys.path.append("/home/husein/parsing/self-attentive-parser")
import tensorflow as tf
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained(
'huseinzol05/tiny-bert-bahasa-cased',
unk_token = '[UNK]',
pad_token = '[PAD]',
do_lower_case = False,
)
import json
with open('vocab-tiny.json') as fopen:
data = json.load(fopen)
LABEL_VOCAB = data['label']
TAG_VOCAB = data['tag']
with tf.gfile.GFile('export/model-tiny.pb', 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
input_ids = graph.get_tensor_by_name('import/input_ids:0')
word_end_mask = graph.get_tensor_by_name('import/word_end_mask:0')
charts = graph.get_tensor_by_name('import/charts:0')
tags = graph.get_tensor_by_name('import/tags:0')
sess = tf.InteractiveSession(graph = graph)
BERT_MAX_LEN = 512
import numpy as np
from parse_nk import BERT_TOKEN_MAPPING
def make_feed_dict_bert(sentences):
all_input_ids = np.zeros((len(sentences), BERT_MAX_LEN), dtype=int)
all_word_end_mask = np.zeros((len(sentences), BERT_MAX_LEN), dtype=int)
subword_max_len = 0
for snum, sentence in enumerate(sentences):
tokens = []
word_end_mask = []
tokens.append(u"[CLS]")
word_end_mask.append(1)
cleaned_words = []
for word in sentence:
word = BERT_TOKEN_MAPPING.get(word, word)
# BERT is pre-trained with a tokenizer that doesn't split off
# n't as its own token
if word == u"n't" and cleaned_words:
cleaned_words[-1] = cleaned_words[-1] + u"n"
word = u"'t"
cleaned_words.append(word)
for word in cleaned_words:
word_tokens = tokenizer.tokenize(word)
if not word_tokens:
# The tokenizer used in conjunction with the parser may not
# align with BERT; in particular spaCy will create separate
# tokens for whitespace when there is more than one space in
# a row, and will sometimes separate out characters of
# unicode category Mn (which BERT strips when do_lower_case
# is enabled). Substituting UNK is not strictly correct, but
# it's better than failing to return a valid parse.
word_tokens = ["[UNK]"]
for _ in range(len(word_tokens)):
word_end_mask.append(0)
word_end_mask[-1] = 1
tokens.extend(word_tokens)
tokens.append(u"[SEP]")
word_end_mask.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
subword_max_len = max(subword_max_len, len(input_ids))
all_input_ids[snum, :len(input_ids)] = input_ids
all_word_end_mask[snum, :len(word_end_mask)] = word_end_mask
all_input_ids = all_input_ids[:, :subword_max_len]
all_word_end_mask = all_word_end_mask[:, :subword_max_len]
return all_input_ids, all_word_end_mask
s = 'Saya sedang membaca buku tentang Perlembagaan'.split()
sentences = [s]
i, m = make_feed_dict_bert(sentences)
i, m
charts_val, tags_val = sess.run((charts, tags), {input_ids: i, word_end_mask: m})
charts_val, tags_val
for snum, sentence in enumerate(sentences):
chart_size = len(sentence) + 1
chart = charts_val[snum,:chart_size,:chart_size,:]
# !wget https://raw.githubusercontent.com/michaeljohns2/self-attentive-parser/michaeljohns2-support-tf2-patch/benepar/chart_decoder.pyx
import chart_decoder_py
chart_decoder_py.decode(chart)
import nltk
from nltk import Tree
PTB_TOKEN_ESCAPE = {u"(": u"-LRB-",
u")": u"-RRB-",
u"{": u"-LCB-",
u"}": u"-RCB-",
u"[": u"-LSB-",
u"]": u"-RSB-"}
def make_nltk_tree(sentence, tags, score, p_i, p_j, p_label):
# Python 2 doesn't support "nonlocal", so wrap idx in a list
idx_cell = [-1]
def make_tree():
idx_cell[0] += 1
idx = idx_cell[0]
i, j, label_idx = p_i[idx], p_j[idx], p_label[idx]
label = LABEL_VOCAB[label_idx]
if (i + 1) >= j:
word = sentence[i]
tag = TAG_VOCAB[tags[i]]
tag = PTB_TOKEN_ESCAPE.get(tag, tag)
word = PTB_TOKEN_ESCAPE.get(word, word)
tree = Tree(tag, [word])
for sublabel in label[::-1]:
tree = Tree(sublabel, [tree])
return [tree]
else:
left_trees = make_tree()
right_trees = make_tree()
children = left_trees + right_trees
if label:
tree = Tree(label[-1], children)
for sublabel in reversed(label[:-1]):
tree = Tree(sublabel, [tree])
return [tree]
else:
return children
tree = make_tree()[0]
tree.score = score
return tree
tree = make_nltk_tree(s, tags_val[0], *chart_decoder_py.decode(chart))
print(str(tree))
def make_str_tree(sentence, tags, score, p_i, p_j, p_label):
idx_cell = [-1]
def make_str():
idx_cell[0] += 1
idx = idx_cell[0]
i, j, label_idx = p_i[idx], p_j[idx], p_label[idx]
label = LABEL_VOCAB[label_idx]
if (i + 1) >= j:
word = sentence[i]
tag = TAG_VOCAB[tags[i]]
tag = PTB_TOKEN_ESCAPE.get(tag, tag)
word = PTB_TOKEN_ESCAPE.get(word, word)
s = u"({} {})".format(tag, word)
else:
children = []
while ((idx_cell[0] + 1) < len(p_i)
and i <= p_i[idx_cell[0] + 1]
and p_j[idx_cell[0] + 1] <= j):
children.append(make_str())
s = u" ".join(children)
for sublabel in reversed(label):
s = u"({} {})".format(sublabel, s)
return s
return make_str()
make_str_tree(s, tags_val[0], *chart_decoder_py.decode(chart))
```
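The preorder reconstruction in `make_str_tree` can be exercised standalone with toy spans; the sentence, tags, span indices, and labels below are made up for illustration (labels are tuples so that unary chains can be wrapped innermost-first):

```python
def spans_to_str(sentence, tags, p_i, p_j, p_label):
    # rebuild a bracketed parse from preorder (start, end, label) spans
    idx = [-1]  # wrap the cursor in a list so the closure can mutate it
    def rec():
        idx[0] += 1
        k = idx[0]
        i, j, label = p_i[k], p_j[k], p_label[k]
        if i + 1 >= j:
            s = "({} {})".format(tags[i], sentence[i])   # leaf: (TAG word)
        else:
            children = []
            while (idx[0] + 1 < len(p_i)
                   and i <= p_i[idx[0] + 1]
                   and p_j[idx[0] + 1] <= j):            # child span nested in ours
                children.append(rec())
            s = " ".join(children)
        for sub in reversed(label):                      # wrap unary chain
            s = "({} {})".format(sub, s)
        return s
    return rec()

out = spans_to_str(["the", "cat"], ["DT", "NN"],
                   [0, 0, 1], [2, 1, 2], [("S",), (), ()])
print(out)  # (S (DT the) (NN cat))
```

The while-condition is the key trick: a following span belongs to the current node exactly when it is nested inside the current (i, j) interval, so the flat preorder arrays decode into a tree without any explicit parent pointers.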
### Setup
```
# IMPORTS & OTHER SETTINGS
%run 'settings.py'
%matplotlib inline
!pwd
from scipy.spatial import distance
from sklearn.preprocessing import QuantileTransformer
# File paths
data_path = '../data/'
pickle_path = os.path.join(data_path, 'pickles')
if not os.path.exists(data_path):
raise Exception('Hold your horses, partner, you need data!')
# Trained LSI models
tf_lsi_path = os.path.join(pickle_path, 'tf_lsi.pkl')
cv_lsi_path = os.path.join(pickle_path, 'cv_lsi.pkl')
# Corpora
tfidf_corpus_path = os.path.join(pickle_path, 'tfidf_corpus.pkl')
tfidf_corpus = pickle.load(open(tfidf_corpus_path, 'rb'))
cv_corpus_path = os.path.join(pickle_path, 'cv_corpus.pkl')
cv_corpus = pickle.load(open(cv_corpus_path, 'rb'))
if not os.path.exists(tf_lsi_path):
# Main text
id2word_tf_path = os.path.join(pickle_path, 'id2word_tf.pkl')
id2word_tf = pickle.load(open(id2word_tf_path, 'rb'))
# Secondary text
id2word_path = os.path.join(pickle_path, 'id2word.pkl')
id2word = pickle.load(open(id2word_path, 'rb'))
trained = False
if os.path.exists(tf_lsi_path):
tf_lsi = pickle.load(open(tf_lsi_path, 'rb'))
cv_lsi = pickle.load(open(cv_lsi_path, 'rb'))
trained = True
# Main df
no318_listings = os.path.join(data_path, 'neworleans/listings.csv')
sf318_listings = os.path.join(data_path, 'sanfrancisco/listings.csv')
pd218_listings = os.path.join(data_path, 'portland/listings.csv')
sy118_listings = os.path.join(data_path, 'sydney/listings.csv')
ny318_listings = os.path.join(data_path, 'newyork/listings.csv')
def get_data(cities):
''' Concatenate the listings into one dataframe '''
dfs = []
for city in cities:
dfs.append(pd.read_csv(city, infer_datetime_format=True,
parse_dates =
['last_scraped', 'host_since', 'calendar_last_scraped',
'first_review', 'last_review']))
return pd.concat(dfs)
cities = [no318_listings, sf318_listings, pd218_listings, sy118_listings, ny318_listings]
df = get_data(cities).reset_index(drop=True).reset_index()
```
### Last Minute Cleaning
```
def clean_price(x):
y = x.split('.')[0] \
.replace('$', '') \
.replace(',', '')
return y
def replace_tf(x):
x = x.replace('t', '1')
y = x.replace('f', '0')
return y
# Clean columns
df['price'] = df.price.apply(clean_price).astype('uint16')  # uint16: nightly prices commonly exceed the uint8 max of 255
df['instant_bookable'] = df.instant_bookable.apply(replace_tf).astype('uint8')
df['is_business_travel_ready'] = df.is_business_travel_ready.apply(replace_tf).astype('uint8')
```
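As a quick sanity check, the cleaning helpers above can be exercised on representative values. This is a minimal standalone sketch (the helpers are re-stated so the cell is self-contained):

```python
def clean_price(x):
    # Strip the currency symbol, thousands separators, and cents: '$1,250.00' -> '1250'
    return x.split('.')[0].replace('$', '').replace(',', '')

def replace_tf(x):
    # Airbnb encodes booleans as 't'/'f'; map them to '1'/'0'
    return x.replace('t', '1').replace('f', '0')

assert clean_price('$1,250.00') == '1250'
assert replace_tf('t') == '1'
assert replace_tf('f') == '0'
# Note: a value like 1250 does not fit in a uint8 (max 255), so a wider integer dtype is safer.
```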
### Training LSI Model for Engineered Text Features
```
%%time
# Fitting LSI models
if not trained:
num = 2**9
cv_lsi = models.LsiModel(corpus=cv_corpus, id2word=id2word, num_topics=num,
power_iters=3, extra_samples=250)
tf_lsi = models.LsiModel(corpus=tfidf_corpus, id2word=id2word_tf, num_topics=num,
power_iters=3, extra_samples=250)
pickle.dump(cv_lsi, open(cv_lsi_path, 'wb'))
pickle.dump(tf_lsi, open(tf_lsi_path, 'wb'))
# Plotting the singular values received from SVD/ LSI
plt.figure(figsize=(12,6))
plt.bar(range(len(tf_lsi.projection.s)), tf_lsi.projection.s)
plt.xlabel('Latent Space')
plt.ylabel('Relative Strengths/ Singular Values')
plt.title('Relative Strengths of Latent Variables');
%%time
num = 2**9
# Retrieve vectors for the original corpus in the LS space ("transform" in sklearn)
lsi_tfidf_corpus = tf_lsi[tfidf_corpus]
lsi_cv_corpus = cv_lsi[cv_corpus]
# Dump the resulting document vectors into a list
doc_tfidf_vecs = [doc for doc in lsi_tfidf_corpus]
doc_cv_vecs = [doc for doc in lsi_cv_corpus]
# Create an index transformer that calculates similarity based on our space
indx = similarities.MatrixSimilarity(doc_tfidf_vecs, num_features=num)
indx2 = similarities.MatrixSimilarity(doc_cv_vecs, num_features=num)
```
### Functions Used by Recommender
```
def haversine(lat1, lon1, lat2, lon2, R=3963.1905919):
'''
Finds the distance of 2 latitudes and longitudes
R is the Earth's radius in miles
'''
dLat = radians(lat2 - lat1)
dLon = radians(lon2 - lon1)
lat1 = radians(lat1)
lat2 = radians(lat2)
a = sin(dLat/2)**2 + cos(lat1)*cos(lat2)*sin(dLon/2)**2
c = 2*asin(sqrt(a))
return R * c
def calc_distances(rec_df):
''' Applies the haversine formula to the recommendation dataframe '''
loc_cols = ['latitude', 'longitude']
lat1 = rec_df.latitude.loc[0]
lon1 = rec_df.longitude.loc[0]
rec_df['loc_dist'] = rec_df[loc_cols].apply(lambda x:
haversine(lat1, lon1, x.latitude, x.longitude), axis=1)
return rec_df
def get_info(df, sims, reach=1000):
''' Combines the top similar listings with the main data '''
columns = \
[
'index','id','name','host_name','host_is_superhost',
'neighbourhood_cleansed','latitude','longitude','property_type',
'room_type','accommodates','beds','price','minimum_nights',
'maximum_nights','has_availability','instant_bookable',
'is_business_travel_ready'
]
df = df[columns]
columns = ['index','similarity'] # index = position in original dataframe
top_df = pd.DataFrame(sims[:reach], columns=columns)
return pd.merge(top_df, df, how='left', on='index')
def merge_other(rec_df, df, sims):
''' Combines the other similarity scores with the main recommender df'''
columns = ['index', 'similarity']
secondary_df = pd.DataFrame(sims, columns=columns)
return pd.merge(rec_df, secondary_df)
```
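The `haversine` helper above can be sanity-checked against a known great-circle distance. This sketch re-states it (with `R` in miles, as in the cell above) and checks New Orleans to San Francisco, which is roughly 1,925 miles:

```python
from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lon1, lat2, lon2, R=3963.1905919):
    # R is the Earth's radius in miles
    dLat = radians(lat2 - lat1)
    dLon = radians(lon2 - lon1)
    lat1, lat2 = radians(lat1), radians(lat2)
    a = sin(dLat / 2)**2 + cos(lat1) * cos(lat2) * sin(dLon / 2)**2
    return 2 * R * asin(sqrt(a))

d = haversine(29.9511, -90.0715, 37.7749, -122.4194)  # New Orleans -> San Francisco
assert 1900 < d < 1950
```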
### The Meat
```
def recommendation(listing, indx, indx2, doc_tfidf_vecs, doc_cv_vecs, df, n_return=15):
'''
Takes a listing ID and the dataframe of listing information
to compute the top 15 similar listings
'''
# Get the listing position for the similarity matrix
listing = list(df.id).index(listing)
# Return the sorted list of cosine similarities to the listing document
sims = sorted(enumerate(indx[doc_tfidf_vecs[listing]]),
key=lambda item: -item[1])
sims2 = sorted(enumerate(indx2[doc_cv_vecs[listing]]),
key=lambda item: -item[1])
# Assemble data and calculate location distances
rec_df = get_info(df, sims)
secondary_df = get_info(df, sims2, reach=100000)
rec_df = pd.merge(rec_df, secondary_df.iloc[:,1:3], how='left', on='id', suffixes=('', '_2'))
rec_df = calc_distances(rec_df)
# Buffer on guests accommodated and nights available
guests = rec_df.accommodates.loc[0] - 3
min_ni = rec_df.minimum_nights.loc[0] - 3
max_ni = rec_df.maximum_nights.loc[0] + 3
# Applying buffers/ filters on data
rec_df = rec_df[(rec_df.accommodates >= guests)]
rec_df = rec_df[(rec_df.minimum_nights >= min_ni)]
rec_df = rec_df[(rec_df.maximum_nights <= max_ni)]
rec_df = rec_df[(rec_df.has_availability == 't')]
# Maintain positioning information
index_id_map = rec_df[['index', 'id']] # index = position in main dataframe
# Grab & clean columns for 'distance' comparison
columns = ['similarity', 'similarity_2', 'loc_dist', 'accommodates', 'beds',
'price', 'is_business_travel_ready', 'instant_bookable'
]
rec_df = rec_df[columns]
rec_df.beds.fillna(0, inplace=True)
# Using a robust quantile transformer for the similarity calculation
qt = QuantileTransformer(random_state=42, output_distribution='uniform')
rec_df = qt.fit_transform(rec_df)
# User decides the weights!
# Listing description (w)
try:
desc_score = input('Are you interested in finding a similar ' +
'listing based on the description? (Y/N)\t')
if 'y' in desc_score.lower():
desc_score = 1.5
elif 'n' in desc_score.lower():
desc_score = 0.5
else:
raise Exception
except:
raise ValueError("I'm looking for some kind of yes or no answer.")
# Host description (w)
try:
host_score = input('Are you interested in finding a similar ' +
'listing based on this host? (Y/N)\t')
if 'y' in host_score.lower():
host_score = 1.5
elif 'n' in host_score.lower():
host_score = 0.5
else:
raise Exception
except:
raise ValueError("I'm looking for some kind of yes or no answer.")
# Business trip (w)
try:
business_score = input('Is this a business trip? (Y/N)\t')
if 'y' in business_score.lower():
business_score = 1.5
elif 'n' in business_score.lower():
business_score = 0.2
else:
raise Exception
except:
raise ValueError("I'm looking for some kind of yes or no answer.")
# Price weighting
try:
price_score = \
int(input('How important is the price? Rate [1-5]\n' +
"1 means you can explore other prices.\n" +
'5 means you found the price you want!\t'))
if price_score < 1 or price_score > 5:
raise Exception
price_score /= 3
except:
raise ValueError('Please score this on a scale from 1 to 5!')
# Location weighting
try:
dist_score = \
int(input('How important is the location? Rate [1-5]\n' +
'1 means you are open to other neighborhoods\n' +
'5 means you want to stay close!\t'))
if dist_score < 1 or dist_score > 5:
raise ValueError('Please score this on a scale from 1 to 5!')
dist_score /= 1
except:
raise ValueError("That's an incorrect input!")
columns = ['similarity', 'similarity_2', 'loc_dist', 'accommodates', 'beds',
'price', 'is_business_travel_ready', 'instant_bookable']
# Dampen control variables
bed_score = .25
accomodates_score = .5
book_score = .25
# Dynamic columns for rescoring
desc_similarity = columns.index('similarity')
host_similarity = columns.index('similarity_2')
business = columns.index('is_business_travel_ready')
pricing = columns.index('price')
location = columns.index('loc_dist')
beds = columns.index('beds')
accommodation = columns.index('accommodates')
instant_book = columns.index('instant_bookable')
rec_df[:, desc_similarity] = rec_df[:, desc_similarity] * desc_score
rec_df[:, host_similarity] = rec_df[:, host_similarity] * host_score
rec_df[:, business] = rec_df[:, business] * business_score
rec_df[:, pricing] = rec_df[:, pricing] * price_score
rec_df[:, location] = rec_df[:, location] * dist_score
rec_df[:, beds] = rec_df[:, beds] * bed_score
rec_df[:, accommodation] = rec_df[:, accommodation] * accomodates_score
rec_df[:, instant_book] = rec_df[:, instant_book] * book_score
print(pd.DataFrame(rec_df, columns = columns).describe())
# Calculate similarity of the related listings
rec_df = distance.cdist(rec_df, rec_df, 'minkowski', p=2)[0]
rec_df = pd.DataFrame(rec_df, columns=['distance'])
rec_df = pd.concat([index_id_map.reset_index(drop=True), rec_df],
axis=1, ignore_index=True)
rec_df.rename({0:'position', 1:'id', 2:'distance'}, axis='columns', inplace=True)
rec_df.sort_values(by='distance', inplace=True)
return rec_df.head(n_return)
```
### Results
```
%%time
recommendations = recommendation(18997233, indx, indx2, doc_tfidf_vecs, doc_cv_vecs, df)
recommendations.sort_values('id')
%%time
recommendations2 = recommendation(18997233, indx, indx2, doc_tfidf_vecs, doc_cv_vecs, df)
recommendations2.sort_values('id')
```
```
import pandas as pd
pd.set_option("display.max_rows", None)
pd.set_option("display.max_columns", None)
import os
import shutil
shutil.rmtree("lending-club-data", ignore_errors=True)  # tolerate a missing directory on the first run
os.mkdir("lending-club-data")
os.mkdir("lending-club-data/risk-engine")
```
# Original data
```
df = pd.read_csv("source-data/loan.csv", dtype=str, parse_dates=["issue_d"])
df.shape
```
# Enriching original data
```
bad_loan = [
"Charged Off",
"Default",
"Does not meet the credit policy. Status:Charged Off",
"In Grace Period",
"Late (16-30 days)",
"Late (31-120 days)",
]
def classification(x):
return 1.0 if x["loan_status"] in bad_loan else 0.0
import datetime
def def_date(x):
dd = ""
per = int(x["term"].strip()[:2])
if x["loan_status"] in bad_loan:
# dd = random.choice(pd.date_range(x["issue_date"], periods=per, freq="M"))
# random_num = float(x["id"]) % per # random number from 0 to 36 or 60
# random_month = (random_num / 4) ** 2 if random_num < 18 else random_num
random_month = float(x["id"]) % per
dd = x["issue_d"] + datetime.timedelta(weeks=(random_month * 4))
return dd
from dateutil.relativedelta import relativedelta
def maturity_date(x):
per = int(x["term"].strip()[:2])
md = x["issue_d"] + relativedelta(months=per)
return md
df["Opening Year"] = df["issue_d"].dt.year
df["Opening Month"] = df["issue_d"].dt.month
df["Opening Day"] = df["issue_d"].dt.day
df["maturity_date"] = df.apply(lambda x: maturity_date(x), axis=1)
df["loan_class"] = df.apply(lambda x: classification(x), axis=1)
df["default_date"] = df.apply(lambda x: def_date(x), axis=1)
```
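Because the synthetic default date above is derived from `id % term` rather than a true random draw, it is fully reproducible. A small standalone sketch of that logic (the function name and values are illustrative):

```python
import datetime

def synthetic_default_date(loan_id, term_months, issue_date):
    # Deterministic pseudo-random month offset: loan id modulo the loan term
    random_month = loan_id % term_months
    return issue_date + datetime.timedelta(weeks=random_month * 4)

# A 36-month loan with id 10 "defaults" 40 weeks after issuance.
dd = synthetic_default_date(10, 36, datetime.date(2015, 1, 1))
assert dd == datetime.date(2015, 10, 8)
```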
# Computing historical PDs
```
import numpy as np
df = df.replace(np.nan, "")
opening_pds = pd.pivot_table(
df,
index=["sub_grade", "emp_length", "home_ownership"],
values=["loan_class"],
aggfunc=np.mean,
).reset_index()
opening_pds.rename(columns={"loan_class": "Opening PD12"}, inplace=True)
opening_pds["Opening PDLT"] = opening_pds["Opening PD12"] * 1.2
opening_pds.head(3)
```
# Generating risk reports
```
import random
def loan_matured(x, reporting_date):
return reporting_date > x["maturity_date"]
def just_issued(x, reporting_date):
return x["issue_d"] == reporting_date
def initial_stage(x):
return 1 if x["PD12"] < 0.05 else 2
def initial_pd(x):
# print(x['id'], x["sub_grade"], x["emp_length"],x["home_ownership"] )
init_pd = opening_pds[
(opening_pds.sub_grade == x["sub_grade"])
& (opening_pds.emp_length == x["emp_length"])
& (opening_pds.home_ownership == x["home_ownership"])
]["Opening PD12"].iloc[0]
return init_pd if init_pd > 0 else 0.0001
def initial_pdlt(x):
return opening_pds[
(opening_pds.sub_grade == x["sub_grade"])
& (opening_pds.emp_length == x["emp_length"])
& (opening_pds.home_ownership == x["home_ownership"])
]["Opening PDLT"].iloc[0]
def pd_one(x, reporting_date):
if x["just_issued"]:
pd_one = initial_pd(x)
else:
pd_one = x["Previous PD12"]
pd_one = max(min(pd_one * random.gauss(1, 0.2), 0.9), 0.001)
return pd_one
def pd_lt(x, reporting_date):
pd_lt = x["PD12"] * 1.2
return min(pd_lt, 0.9)
def stage(x, reporting_date):
pd_one = x["PD12"]
op_pd = x["Opening PD12"]
stage = x["Previous Stage"]
if pd_one / op_pd > 1.7:
stage = min(2, stage + 1)
if isinstance(x["default_date"], datetime.date):
if reporting_date >= x["default_date"]:
stage = 3
if pd_one > 0.7:
stage = 3
return stage
def dayspastdue(x, reporting_date):
npl = ""
if isinstance(x["default_date"], datetime.date):
if reporting_date >= x["default_date"]:
delta = reporting_date - x["default_date"]
npl = delta.days
return npl
```
# Risk data reports
```
df_copy = df.copy()
df = df.sample(15000)
import pandas as pd
temp = pd.DataFrame()
reporting_dates = sorted(list(set(df["issue_d"])))
def diff_month(d1, d2):
return (d1.year - d2.year) * 12 + d1.month - d2.month
for rd in reporting_dates:
# Old
temp = temp.copy()
if not temp.empty:
# temp.rename(columns = {'PD12': 'Previous PD12', 'PDLT': 'Previous PDLT', 'EAD': 'Previous EAD', 'LGD': 'Previous LGD', 'Stage': 'Previous Stage'}, inplace = True)
temp["Previous PD12"] = temp["PD12"]
temp["Previous PDLT"] = temp["PDLT"]
temp["Previous EAD"] = temp["EAD"]
temp["Previous LGD"] = temp["LGD"]
temp["Previous Stage"] = temp["Stage"]
temp["just_issued"] = False
temp["PD12"] = temp.apply(lambda x: pd_one(x, rd), axis=1)
temp["PDLT"] = temp.apply(lambda x: pd_lt(x, rd), axis=1)
temp["LGD"] = random.choice([0.6, 0.7, 0.8, 0.81])
temp["EAD"] = temp["Previous EAD"] * random.gauss(1, 0.05)
temp["Stage"] = temp.apply(lambda x: stage(x, rd), axis=1)
# New
sub = df[df.issue_d == rd].copy()
sub["just_issued"] = True
sub["Opening PD12"] = sub.apply(lambda x: initial_pd(x), axis=1)
sub["Opening PDLT"] = sub.apply(lambda x: initial_pdlt(x), axis=1)
sub["PD12"] = sub["Opening PD12"]
sub["PDLT"] = sub["Opening PDLT"]
sub["EAD"] = sub["loan_amnt"].astype(float)
sub["LGD"] = 0.8
sub["Stage"] = sub.apply(lambda x: initial_stage(x), axis=1)
sub["Previous PD12"] = sub["PD12"]
sub["Previous PDLT"] = sub["PDLT"]
sub["Previous EAD"] = sub["EAD"]
sub["Previous LGD"] = sub["LGD"]
sub["Previous Stage"] = sub["Stage"]
temp = pd.concat([temp, sub], ignore_index=True)
temp["DaysPastDue"] = temp.apply(lambda x: dayspastdue(x, rd), axis=1)
temp["Reporting Date"] = rd.strftime("%Y-%m-%d")
temp['Months Since Inception'] = temp.apply(lambda x: diff_month(rd, x['issue_d']), axis = 1)
temp["loan_matured"] = temp.apply(lambda x: loan_matured(x, rd), axis=1)
temp = temp[temp.loan_matured == False].copy()
print(rd, temp.shape[0])
temp[
[
"id",
"PD12",
"PDLT",
"EAD",
"LGD",
"Stage",
"Previous PD12",
"Previous PDLT",
"Previous EAD",
"Previous LGD",
"Previous Stage",
"Reporting Date",
"DaysPastDue",
"Months Since Inception"
]
].to_csv("lending-club-data/risk-engine/" + rd.strftime("%Y-%m-%d") + ".csv", index=False)
```
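`diff_month` above counts whole calendar months between two dates; a quick standalone check:

```python
import datetime

def diff_month(d1, d2):
    # Difference in calendar months; the day-of-month fields are ignored
    return (d1.year - d2.year) * 12 + d1.month - d2.month

assert diff_month(datetime.date(2020, 3, 15), datetime.date(2019, 12, 1)) == 3
assert diff_month(datetime.date(2020, 1, 1), datetime.date(2020, 1, 31)) == 0
```

Note that because only the year and month fields are compared, partial months count as zero or one full month regardless of the day.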
# Static data
```
df_c = pd.merge(
left=df[["id", "default_date", "sub_grade", "emp_length", "home_ownership"]],
right=opening_pds,
on=["sub_grade", "emp_length", "home_ownership"],
how="left",
)
df_c[["id", "default_date", "Opening PD12", "Opening PDLT"]].to_csv(
"lending-club-data/static.csv", index=False
)
df[['id', 'member_id', 'loan_amnt', 'funded_amnt', 'funded_amnt_inv',
'term', 'int_rate', 'installment', 'grade', 'sub_grade', 'emp_title',
'emp_length', 'home_ownership', 'annual_inc', 'verification_status',
'issue_d', 'loan_status', 'pymnt_plan', 'url', 'desc', 'purpose',
'title', 'zip_code', 'addr_state', 'dti', 'delinq_2yrs',
'earliest_cr_line', 'inq_last_6mths', 'mths_since_last_delinq',
'mths_since_last_record', 'open_acc', 'pub_rec', 'revol_bal',
'revol_util', 'total_acc', 'initial_list_status', 'out_prncp',
'out_prncp_inv', 'total_pymnt', 'total_pymnt_inv', 'total_rec_prncp',
'total_rec_int', 'total_rec_late_fee', 'recoveries',
'collection_recovery_fee', 'last_pymnt_d', 'last_pymnt_amnt',
'next_pymnt_d', 'last_credit_pull_d', 'collections_12_mths_ex_med',
'mths_since_last_major_derog', 'policy_code', 'application_type',
'annual_inc_joint', 'dti_joint', 'verification_status_joint',
'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal', 'open_acc_6m',
'open_il_6m', 'open_il_12m', 'open_il_24m', 'mths_since_rcnt_il',
'total_bal_il', 'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc',
'all_util', 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl', 'inq_last_12m',
'Opening Year', 'Opening Month', 'Opening Day', 'maturity_date']].to_csv("lending-club-data/loans.csv", index = False)
```
# Zipping
```
!cd lending-club-data && zip -r "loans.zip" "loans.csv" -x "*.ipynb_checkpoints*"
!cd lending-club-data && rm loans.csv
!cd lending-club-data && zip -r "static.zip" "static.csv" -x "*.ipynb_checkpoints*"
!cd lending-club-data && rm static.csv
!cd lending-club-data && zip -r -q "risk-engine.zip" "risk-engine/" -x "*.ipynb_checkpoints*"
!cd lending-club-data && rm -r -f risk-engine/
!cd lending-club-data && yes | unzip risk-engine.zip
# !cd lending-club-data && rm -rf find -type d -name .ipynb_checkpoints`
!cd lending-club-data/risk-engine && ls -a
!cd lending-club-data && ls -lrt
!zip -r bitvolution.zip ./bitvolution
```
##### Copyright 2021 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Migrate single-worker multiple-GPU training
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/mirrored_strategy">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/mirrored_strategy.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/mirrored_strategy.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/mirrored_strategy.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This guide demonstrates how to migrate the single-worker multiple-GPU workflows from TensorFlow 1 to TensorFlow 2.
To perform synchronous training across multiple GPUs on one machine:
- In TensorFlow 1, you use the `tf.estimator.Estimator` APIs with `tf.distribute.MirroredStrategy`.
- In TensorFlow 2, you can use [Keras Model.fit](https://www.tensorflow.org/tutorials/distribute/keras) or [a custom training loop](https://www.tensorflow.org/tutorials/distribute/custom_training) with `tf.distribute.MirroredStrategy`. Learn more in the [Distributed training with TensorFlow](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) guide.
## Setup
Start with imports and a simple dataset for demonstration purposes:
```
import tensorflow as tf
import tensorflow.compat.v1 as tf1
features = [[1., 1.5], [2., 2.5], [3., 3.5]]
labels = [[0.3], [0.5], [0.7]]
eval_features = [[4., 4.5], [5., 5.5], [6., 6.5]]
eval_labels = [[0.8], [0.9], [1.]]
```
## TensorFlow 1: Single-worker distributed training with tf.estimator.Estimator
This example demonstrates the TensorFlow 1 canonical workflow of single-worker multiple-GPU training. You need to set the distribution strategy (`tf.distribute.MirroredStrategy`) through the `config` parameter of the `tf.estimator.Estimator`:
```
def _input_fn():
return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1)
def _eval_input_fn():
return tf1.data.Dataset.from_tensor_slices(
(eval_features, eval_labels)).batch(1)
def _model_fn(features, labels, mode):
logits = tf1.layers.Dense(1)(features)
loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits)
optimizer = tf1.train.AdagradOptimizer(0.05)
train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
return tf1.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
strategy = tf1.distribute.MirroredStrategy()
config = tf1.estimator.RunConfig(
train_distribute=strategy, eval_distribute=strategy)
estimator = tf1.estimator.Estimator(model_fn=_model_fn, config=config)
train_spec = tf1.estimator.TrainSpec(input_fn=_input_fn)
eval_spec = tf1.estimator.EvalSpec(input_fn=_eval_input_fn)
tf1.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```
## TensorFlow 2: Single-worker training with Keras
When migrating to TensorFlow 2, you can use the Keras APIs with `tf.distribute.MirroredStrategy`.
If you use the `tf.keras` APIs for model building and Keras `Model.fit` for training, the main difference is instantiating the Keras model, an optimizer, and metrics in the context of `Strategy.scope`, instead of defining a `config` for `tf.estimator.Estimator`.
If you need to use a custom training loop, check out the [Using tf.distribute.Strategy with custom training loops](https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_custom_training_loops) guide.
```
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)
eval_dataset = tf.data.Dataset.from_tensor_slices(
(eval_features, eval_labels)).batch(1)
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
model.compile(optimizer=optimizer, loss='mse')
model.fit(dataset)
model.evaluate(eval_dataset, return_dict=True)
```
## Next steps
To learn more about distributed training with `tf.distribute.MirroredStrategy` in TensorFlow 2, check out the following documentation:
- The [Distributed training on one machine with Keras](../../tutorials/distribute/keras) tutorial
- The [Distributed training on one machine with a custom training loop](../../tutorials/distribute/custom_training) tutorial
- The [Distributed training with TensorFlow](../../guide/distributed_training) guide
- The [Using multiple GPUs](../../guide/gpu#using_multiple_gpus) guide
- The [Optimize the performance on the multi-GPU single host (with the TensorFlow Profiler)](../../guide/gpu_performance_analysis#2_optimize_the_performance_on_the_multi-gpu_single_host) guide
```
from fit.datamodules.super_res import MNIST_SResFITDM, CelebA_SResFITDM
from fit.utils import convert2DFT, pol2cart
from fit.utils.tomo_utils import get_polar_rfft_coords_2D
from fit.transformers.PositionalEncoding2D import PositionalEncoding2D
from matplotlib import pyplot as plt
import torch
import numpy as np
```
# FIT for Super-Resolution
We train FIT for super-resolution auto-regressively. Therefore, our data has to be ordered such that each prefix describes the whole image at reduced resolution.
```
# dm = MNIST_SResFITDM(root_dir='./data/', batch_size=4)
dm = CelebA_SResFITDM(root_dir='./data/CelebA/', batch_size=4)
dm.prepare_data()
dm.setup()
```
Transformers require a positional encoding to make use of location information. We provide the Fourier coefficient locations as a positional encoding based on 2D polar coordinates.
In addition to `r` and `phi`, the polar coordinates of the Fourier coefficients, we get `flatten_order`, which describes how the Fourier coefficients have to be arranged so that they run from lowest to highest frequency. `fourier_rings` is the array on which `flatten_order` is computed.
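Conceptually, this ordering can be sketched in plain NumPy: compute a radial frequency for every rFFT coefficient and argsort it. This is a simplified illustration of what `get_polar_rfft_coords_2D` provides, not the library's exact implementation:

```python
import numpy as np

n = 8
fy = np.fft.fftfreq(n)[:, None]     # vertical frequencies of an n x n image
fx = np.fft.rfftfreq(n)[None, :]    # horizontal frequencies (half spectrum, rFFT)
r = np.sqrt(fy**2 + fx**2).ravel()  # radial frequency of each coefficient
flatten_order = np.argsort(r)       # lowest to highest frequency
# Any prefix of this ordering keeps a low-frequency disc, i.e. a low-resolution image.
assert r[flatten_order][0] == 0.0   # the DC component comes first
```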
```
r, phi, flatten_order, fourier_rings = get_polar_rfft_coords_2D(dm.gt_shape)
train_dl = dm.train_dataloader()
# x_fc holds the normalized magnitudes and phases of the Fourier coefficients
for x_fc, (amp_min, amp_max) in train_dl:
break
# Here we sort the normalized amplitudes and phases from lowest to highest frequency.
x_fc = x_fc[:, flatten_order]
x_fc.shape
```
Here we can choose any prefix of the encoded sequence, e.g. `k=100` selects the first 100 encoded Fourier coefficients.
```
k = 39
prefix = torch.zeros_like(x_fc)
prefix[...,0] += x_fc[...,0].min()
prefix[:,:k] = x_fc[:,:k]
```
Now we convert both sequences (the full and the prefix) back into Fourier spectra and then compute the inverse Fourier transform.
```
x_dft = convert2DFT(x_fc, amp_min, amp_max, flatten_order, img_shape=dm.gt_shape)
prefix_dft = convert2DFT(prefix, amp_min, amp_max, flatten_order, img_shape=dm.gt_shape)
plt.figure(figsize=(10,10))
plt.subplot(2,2,1)
plt.imshow(torch.roll(torch.log(prefix_dft[0].abs()), dm.gt_shape//2, 0))
plt.title('Prefix Fourier Spectrum')
plt.subplot(2,2,2)
plt.imshow(torch.roll(torch.log(x_dft[0].abs()), dm.gt_shape//2, 0))
plt.title('Full Fourier Spectrum')
plt.subplot(2,2,3)
plt.imshow(torch.fft.irfftn(prefix_dft[0], s=(dm.gt_shape,dm.gt_shape)), cmap='gray')
plt.title('Prefix');
plt.subplot(2,2,4)
plt.imshow(torch.fft.irfftn(x_dft[0], s=(dm.gt_shape,dm.gt_shape)), cmap='gray')
plt.title('Full Sequence');
```
# Positional Encoding
Since we work with transformers and want to utilize the location information of our data, we have to provide it via a positional encoding. Fourier spectra are best described by polar coordinates, hence we use a polar-coordinate-based positional encoding.
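As a rough illustration (our own sketch, not the `PositionalEncoding2D` implementation), a sinusoidal encoding over polar coordinates can be built by passing `r` and `phi` through sin/cos pairs at geometrically spaced frequencies:

```python
import numpy as np

def polar_positional_encoding(r, phi, d_model=8):
    # Half the channels encode the radius, half the angle,
    # each as sin/cos pairs at geometrically spaced frequencies.
    n_freq = d_model // 4
    freqs = 10000.0 ** (-np.arange(n_freq) / n_freq)
    def enc(x):
        ang = x[:, None] * freqs[None, :]
        return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)
    return np.concatenate([enc(r), enc(phi)], axis=1)  # shape: (n_coeffs, d_model)

pe = polar_positional_encoding(np.linspace(0, 1, 16), np.linspace(0, 2 * np.pi, 16))
assert pe.shape == (16, 8)
```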
```
pos_enc = PositionalEncoding2D(8, (r, phi), flatten_order)
plt.figure(figsize=(8,7))
plt.subplot(1,2,1)
plt.scatter(*pol2cart(r[flatten_order], phi[flatten_order]), c=pos_enc.pe[0,:,0], s=15)
plt.axis('equal');
plt.title('Polar-Coordinate Positional Encoding\n Target-Radius');
plt.subplot(1,2,2)
plt.scatter(*pol2cart(r[flatten_order], phi[flatten_order]), c=pos_enc.pe[0,:,4], s=15)
plt.axis('equal');
plt.title('Polar-Coordinate Positional Encoding\n Target-Angle');
```
```
import os
import shutil
# We do not need a dataset so we load the fake input
from meta_learning.backend.tensorflow.dataset import noop as _dataset_noop
# We load the optimizers we care about.
from meta_learning.backend.tensorflow import meta_optimizer as _meta_optimizer
# The memory types we want to use.
from meta_learning.backend.tensorflow import memory as _memory
# Our memory requires some base optimizer implementations
from meta_learning.backend.tensorflow import base_optimizer as _base_optimizer
# The model we care about.
from meta_learning.backend.tensorflow import model as _model
# We want to save our model as easily as possible.
from meta_learning.backend.tensorflow import saver as _saver
from meta_learning.plot import rosenbrock as _plot_rosenbrock
from meta_learning.plot import utils as _plot_utils
from tf_utils import summaries as _summaries
GLOBALS = {}
# Some global variables for this notebook.
LOGS_DIR = '../logs'
PLOTS_DIR = '../plots'
MODEL_DIR_META = os.path.join(LOGS_DIR, 'example_rosenbrock/meta_sgd')
MODEL_DIR_SGD = os.path.join(LOGS_DIR, 'example_rosenbrock/sgd')
LEARNING_RATE = 0.001
MEM_LEARNING_RATE = 0.001
CLIP_GRAD = 10
STEPS_PER_TRAIN_CALL = 100
LM_SCALE = 0.5
NUM_CENTERS = 100
def model_dir_iteration(model_dir, iteration):
return os.path.join(model_dir, str(iteration))
def clean_model_dir(model_dir):
if os.path.exists(model_dir):
shutil.rmtree(model_dir)
# An example training loop.
def train(model_dir, optimizer_fn, saver_fn):
model = _model.Rosenbrock(model_dir=model_dir,
joined=False,
trainable=None) # Trainable only matters if we want to optimize only one dim.
model_fn = model.create_model_fn(optimizer_fn=optimizer_fn, saver_fn=saver_fn)
estimator = model.create_estimator(model_fn=model_fn)
dataset = _dataset_noop.Noop(num_epochs=1,
batch_size=1,
shard_name='train')
estimator.train(input_fn=dataset.create_input_fn(),
steps=STEPS_PER_TRAIN_CALL)
def run(base_model_dir, optimizer_fn, saver_fn):
clean_model_dir(base_model_dir)
model_dir = model_dir_iteration(base_model_dir, 1)
train(model_dir, optimizer_fn, saver_fn)
# Just an easy way to plot the saved data.
def plot(base_model_dir):
# We make sure that this is called again
%matplotlib inline
experiment_results = _summaries.get_summary_save_tensor_dict(base_model_dir, 'rosenbrock')
model_name = os.path.relpath(base_model_dir, LOGS_DIR)
for exp_name, exp_data in sorted(experiment_results.items()):
print('plot ', exp_name)
_plot_rosenbrock.plot_func(PLOTS_DIR,
model_name + exp_name,
-1.0, 2.0, 100,
exp_data['x1'], exp_data['x2'])
plots = {}
for exp_name, exp_data in sorted(experiment_results.items()):
plots[exp_name] = {'step': exp_data['step'], 'loss':exp_data['loss']}
_plot_utils.plot_values_by_step(PLOTS_DIR,
model_name,
'loss',
plots, logscale=True)
# We need to specify if we want to use the memory with a specified gradient-descent type.
# We specify our adam and sgd memory models:
meta_sgd_fn = _meta_optimizer.Memory.init_fn(
memory_init_fn=_memory.AdamStatic.init_fn(
LEARNING_RATE,
memory_clip_grad=CLIP_GRAD,
memory_lm_scale=LM_SCALE,
memory_num_centers=NUM_CENTERS,
memory_learning_rate=MEM_LEARNING_RATE),
base_optimizer_init_fn=_base_optimizer.GradientDescent.init_fn(
LEARNING_RATE,
clip_by_value=CLIP_GRAD))
sgd_fn = _meta_optimizer.Reference.init_fn(
base_optimizer_init_fn=_base_optimizer.GradientDescent.init_fn(
LEARNING_RATE,
clip_by_value=CLIP_GRAD))
run(MODEL_DIR_META,
meta_sgd_fn,
_saver.Standard.init_fn())
run(MODEL_DIR_SGD,
sgd_fn,
_saver.Standard.init_fn())
plot(MODEL_DIR_SGD)
plot(MODEL_DIR_META)
```
```
import sys
import pandas
# local imports
sys.path.insert(0, '../')
import utils
```
## Read DO Slim
```
commit = '72614ade9f1cc5a5317b8f6836e1e464b31d5587'
url = utils.rawgit('dhimmel', 'disease-ontology', commit, 'data/slim-terms.tsv')
disease_df = pandas.read_table(url)
disease_df = disease_df.rename(columns={'doid': 'doid_id', 'name': 'doid_name'})
disease_df = disease_df[['doid_id', 'doid_name']]
disease_df.head(2)
```
## Read Entrez Gene
```
commit = '6e133f9ef8ce51a4c5387e58a6cc97564a66cec8'
url = utils.rawgit('dhimmel', 'entrez-gene', commit, 'data/genes-human.tsv')
gene_df = pandas.read_table(url)
gene_df = gene_df[gene_df.type_of_gene == 'protein-coding']
gene_df = gene_df.rename(columns={'GeneID': 'entrez_gene_id', 'Symbol': 'gene_symbol'})
gene_df = gene_df[['entrez_gene_id', 'gene_symbol']]
gene_df.head(2)
```
## Read datasets
```
# DISEASES
commit = 'e0089ef89a56348d7d4e0684a9c51c5747b16237'
url = utils.rawgit('dhimmel', 'diseases', commit, 'data/merged-slim.tsv')
diseases_df = pandas.read_table(url)
diseases_df.head(2)
# DOAF
commit = 'bbe1c326aa385416e36d02b144e89e2b99e700b6'
url = utils.rawgit('dhimmel', 'doaf', commit, 'data/doaf.tsv')
doaf_df = pandas.read_table(url)
doaf_df = doaf_df.rename(columns={'doid_code': 'doid_id', 'GeneID': 'entrez_gene_id'})
doaf_df.head(3)
# DisGeNET
commit = 'fdc5f42f2da745cbf71d7b4cc5021de5685e4a11'
url = utils.rawgit('dhimmel', 'disgenet', commit, 'data/consolidated.tsv')
disgenet_df = pandas.read_table(url)
disgenet_df = disgenet_df.rename(columns={'doid_code': 'doid_id', 'geneId': 'entrez_gene_id'})
disgenet_df.head(2)
# hetio GWAS
commit = '0617ea7ea8268f21f5ca1b8dbe487dd12671fc7b'
url = utils.rawgit('dhimmel', 'gwas-catalog', commit, 'data/gene-associations.tsv')
gwas_df = pandas.read_table(url)
gwas_df = gwas_df.rename(columns={'doid_code': 'doid_id', 'gene': 'entrez_gene_id'})
gwas_df.head(2)
```
## Filters
```
diseases_df = diseases_df.query('score_integrated_no_distild >= 2')
doaf_df = doaf_df.query('count >= 3')
disgenet_df = disgenet_df.query('score_max >= 0.06')
gwas_df = gwas_df[gwas_df.status == 'HC-P']
```
## Combine
```
diseases_df['provenance'] = 'DISEASES'
doaf_df['provenance'] = 'DOAF'
disgenet_df['provenance'] = 'DisGeNET'
gwas_df['provenance'] = 'GWAS Catalog'
diseases_df['license'] = 'CC BY 4.0'
doaf_df['license'] = ''
disgenet_df['license'] = 'ODbL 1.0'
gwas_df['license'] = 'CC BY 4.0'
dfs = [df[['doid_id', 'entrez_gene_id', 'provenance', 'license']]
for df in (diseases_df, doaf_df, disgenet_df, gwas_df)]
concat_df = pandas.concat(dfs)
concat_df = disease_df.merge(gene_df.merge(concat_df))
concat_df.provenance.value_counts()
def condense(df):
"""Consolidate multiple associations into a single Series."""
row = pandas.Series()
row['sources'] = '|'.join(df.provenance)
licenses = set(df.license)
licenses.discard('')
try:
row['license'], = licenses
except ValueError:
row['license'] = None
return row
short_df = concat_df.groupby(['doid_id', 'entrez_gene_id']).apply(condense).reset_index()
short_df = disease_df.merge(gene_df.merge(short_df))
short_df.head()
short_df.to_csv('DaG-association.tsv', sep='\t', index=False)
```
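The `condense` group-and-apply pattern above can be illustrated on toy data; the following is a slight variant (hypothetical rows, not the real associations):

```python
import pandas as pd

toy = pd.DataFrame({
    'doid_id': ['DOID:1', 'DOID:1', 'DOID:2'],
    'entrez_gene_id': [10, 10, 20],
    'provenance': ['DISEASES', 'DOAF', 'GWAS Catalog'],
    'license': ['CC BY 4.0', '', 'CC BY 4.0'],
})

def condense(df):
    """Consolidate multiple associations into a single Series."""
    row = {'sources': '|'.join(df.provenance)}
    licenses = set(df.license)
    licenses.discard('')  # ignore missing licenses
    # keep the license only if it is unambiguous within the group
    row['license'] = licenses.pop() if len(licenses) == 1 else None
    return pd.Series(row)

short = toy.groupby(['doid_id', 'entrez_gene_id']).apply(condense).reset_index()
print(short)
```

Each (disease, gene) pair collapses to one row, with provenance sources pipe-joined and a single unambiguous license retained.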
| github_jupyter |
```
from data import load_data
clinical, _, genes, treatments, outcome = load_data()
clinical.head()
treatments.columns = [c.replace('therapy_first_line_Non-therapy',
'therapy_first_line_Non-treatment') for c in treatments.columns]
treatments.head()
```
# THERAPY SENSITIVITY MODELLING
```
outcome
from pipeline import NMLA
from sklearn.model_selection import StratifiedKFold
from evaluation import optimize_threshold, classification_metrics
from sklearn.metrics import roc_auc_score, log_loss, confusion_matrix
from constants import N_FOLDS, RANDOM_STATE
from util import join_values
import lightgbm as lgb
import pickle as pkl
import pandas as pd
import numpy as np
import time
import os
# creating analyser object to compute classification metrics,
# grouped by training/validation dataset and by experiment id
# analyser = Analyser()
# Creating 10-fold CV splits stratified by treatment and outcome
kfold = StratifiedKFold(N_FOLDS, shuffle=True, random_state=RANDOM_STATE)
split = kfold.split(np.zeros(outcome.shape[0]), join_values([treatments, outcome]))
# creating result structure
simulation = {c: [] for c in ['ACTUAL_TREATMENT']}
# creating feature matrix x and label vector y from clinical data
x, y = clinical.values[:, 1:], clinical.values[:, 0]
for experiment, (train_index, valid_index) in enumerate(split):
initial_time = time.time()
print('{}\n\n'.format(experiment))
#######################################################################################################
# Split train & valid
#######################################################################################################
y_train = outcome.iloc[train_index, 0]
y_valid = outcome.iloc[valid_index, 0]
clinical_train = clinical.iloc[train_index, :]
clinical_valid = clinical.iloc[valid_index, :]
treatments_train = treatments.iloc[train_index, :]
treatments_valid = treatments.iloc[valid_index, :]
genes_train = genes.iloc[train_index, :]
genes_valid = genes.iloc[valid_index, :]
#######################################################################################################
# NMLA load
#######################################################################################################
with open('output/nmla/trained_model_{}.pkl'.format(experiment), 'rb') as file:
nmla = pkl.load(file)
#######################################################################################################
# Treatment Simulation
#######################################################################################################
actual_treatment = treatments_valid.idxmax(axis=1)
actual_treatment = actual_treatment.str.replace('Non-therapy', 'Non-treatment').to_list()
simulation['ACTUAL_TREATMENT'] += actual_treatment
for t1 in treatments.columns:
if t1 not in simulation:
simulation[t1] = []
# t1 is the current treatment
# this loop sets 1 for the current treatment
# and 0 for the remainder
for t2 in treatments.columns:
treatments_valid[t2] = float(t1 == t2)
simulation[t1] += list(nmla.predict(clinical_valid, genes_valid, treatments_valid))
del nmla
```
# With Non-treatment
```
simulation = pd.DataFrame(simulation)
# simulation = simulation[simulation['ACTUAL_TREATMENT'] != 'therapy_first_line_Non-treatment']
simulation['SIMULATED_TREATMENT'] = simulation[treatments.columns].idxmax(axis=1)
simulation['SIMULATED_TREATMENT'] = simulation.apply(lambda x: x['ACTUAL_TREATMENT'] if x[x['ACTUAL_TREATMENT']] == x[treatments.columns].max() else x['SIMULATED_TREATMENT'], axis=1)
simulation.head()
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns; sns.set()
def change_proportion(x):
vvv = x['ACTUAL_TREATMENT'].tolist()
return x.shape[0] / (simulation['ACTUAL_TREATMENT'] == np.unique(vvv)[0]).sum()
h = pd.DataFrame(simulation[['SIMULATED_TREATMENT', 'ACTUAL_TREATMENT']].groupby(
['ACTUAL_TREATMENT', 'SIMULATED_TREATMENT']).apply(change_proportion).unstack())
# h['non-therapy'] = 0
h.columns = [c.replace('therapy_first_line_', '') for c in h.columns]
h.index = [c.replace('therapy_first_line_', '') for c in h.index]
h = h.sort_index()[sorted(h.columns)].fillna(0)
fig, ax = plt.subplots(1, 1, figsize = (5, 5), dpi=80)
sns.heatmap(h, vmin=0.0, vmax=1, square=True, linewidths=.5, annot=True)
ax.set_ylabel('Actual Therapy')
ax.set_xlabel('Simulated Therapy')
plt.show()
(simulation['ACTUAL_TREATMENT'] != simulation['SIMULATED_TREATMENT']).sum() / simulation.shape[0]
```
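The tie-breaking step above keeps the actual treatment whenever its predicted score equals the row maximum; a toy illustration with made-up scores:

```python
import pandas as pd

# Toy predicted-outcome scores; row 0 ties between A and B, row 1 favors B
scores = pd.DataFrame({
    'ACTUAL_TREATMENT': ['B', 'B'],
    'A': [0.7, 0.4],
    'B': [0.7, 0.9],
})
cols = ['A', 'B']
scores['SIMULATED_TREATMENT'] = scores[cols].idxmax(axis=1)
# Prefer the actual treatment whenever its score equals the row maximum
scores['SIMULATED_TREATMENT'] = scores.apply(
    lambda x: x['ACTUAL_TREATMENT']
    if x[x['ACTUAL_TREATMENT']] == x[cols].max()
    else x['SIMULATED_TREATMENT'], axis=1)
print(scores['SIMULATED_TREATMENT'].tolist())  # ['B', 'B']
```

Without the second step, `idxmax` alone would pick `A` in the first row, since ties resolve to the first column.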
# Without Non-treatment
```
simulation_ = pd.DataFrame(simulation)
simulation_ = simulation_[simulation_['ACTUAL_TREATMENT'] != 'therapy_first_line_Non-treatment']
simulation_['SIMULATED_TREATMENT'] = simulation_[treatments.columns[:-1]].idxmax(axis=1)
simulation_['SIMULATED_TREATMENT'] = simulation_.apply(lambda x: x['ACTUAL_TREATMENT'] if x[x['ACTUAL_TREATMENT']] == x[treatments.columns[:-1]].max() else x['SIMULATED_TREATMENT'], axis=1)
simulation_.head()
from matplotlib import pyplot as plt
import seaborn as sns; sns.set()
def change_proportion(x):
vvv = x['ACTUAL_TREATMENT'].tolist()
return x.shape[0] / (simulation_['ACTUAL_TREATMENT'] == np.unique(vvv)[0]).sum()
h = pd.DataFrame(simulation_[['SIMULATED_TREATMENT', 'ACTUAL_TREATMENT']].groupby(
['ACTUAL_TREATMENT', 'SIMULATED_TREATMENT']).apply(change_proportion).unstack())
# h['non-therapy'] = 0
h.columns = [c.replace('therapy_first_line_', '') for c in h.columns]
h.index = [c.replace('therapy_first_line_', '') for c in h.index]
h = h.sort_index()[sorted(h.columns)].fillna(0)
fig, ax = plt.subplots(1, 1, figsize = (5, 5), dpi=80)
sns.heatmap(h, vmin=0.0, vmax=1, square=True, linewidths=.5, annot=True)
ax.set_ylabel('Actual Therapy')
ax.set_xlabel('Simulated Therapy')
plt.show()
(simulation_['ACTUAL_TREATMENT'] != simulation_['SIMULATED_TREATMENT']).sum() / simulation_.shape[0]
simulation_.columns = [c.replace('therapy_first_line_', '').lower() for c in simulation_.columns]
simulation_['actual_treatment'] = simulation_['actual_treatment'].str.replace('therapy_first_line_', '')
simulation_['simulated_treatment'] = simulation_['simulated_treatment'].str.replace('therapy_first_line_', '')
if 'Non-treatment' in simulation_:
del simulation_['Non-treatment']
simulation_.to_csv('output/simulation/simulation.csv', sep=',', index=False)
simulation_.head()
simulation_['actual_treatment'].unique()
```
This workbook is a Python coding example utilizing the SAS QKB for data quality, enrichment, and entity resolution
```
# import swat (SAS Scripting Language for Analytics Transfer),
# and pandas
import swat
import pandas as pd
# Create a connection to CAS, specifying host name or url
# the SAS Viya server needs SAS Data Quality or SAS Data Preparation installed and QKB configured
viya=swat.CAS('http://99.999.99.99/cas-shared-default-http/', 5570, 'yourusername', 'yourpw', protocol='http')
# Set an active library for this session
viya.setsessopt(caslib='Public')
# To upload the Excel file, uncomment the line below. Not necessary if it's already uploaded; you can just use the existing table.
#testdata = viya.read_excel('C:\Testdata\CDN_ACCOUNTS.xlsx', casout=dict(name='CDN_Accounts',caslib='Public', promote='true'))
# use the swat CASTable object to treat a CAS Table like a pandas DataFrame
testdata = viya.CASTable('CDN_Accounts')
testdata.head(20)
# run dataStep code invoking dq function to Identify contents of City, Province, PostCode fields
viya.dataStep.runCode(
code=''' data public.testexcel_dq_from_Python ;
set public.CDN_Accounts ;
City_Ident = dqIdentify(City,'Field Content','ENCAN');
Prov_Ident = dqIdentify(Prov,'Field Content','ENCAN');
Post_Ident = dqIdentify(PostCode,'Field Content','ENCAN');
Phone_Ident = dqIdentify(Phone,'Field Content','ENCAN');
run;''')
# let's look at just the first six rows of our data ...
dq = viya.CASTable('testexcel_dq_from_Python')
dq.head(6)
# using the above results of the SAS QKB Identity Analysis,
# run dataStep code to move stray phone, email, Postal code info to their correct places
viya.dataStep.runCode(
code=''' data public.testexcel_dq_from_Python;
set public.testexcel_dq_from_Python;
if Phone_Ident = 'E-MAIL' then do;
Email = Phone;
Phone = '';
Phone_Ident = 'EMPTY';
end;
if Post_Ident = 'PHONE' then do;
Phone = PostCode;
PostCode = '';
Phone_Ident = 'PHONE';
Post_Ident = 'EMPTY';
end;
if Prov_Ident = 'POSTAL CODE' then do;
PostCode = Prov;
Prov = '';
Post_Ident = 'POSTAL CODE';
Prov_Ident = 'EMPTY';
end;
run;''')
dq.head(6)
# run dataStep code to combine City, Province, PostCode fields for problem rows and parse out correct info
# using the SAS QKB parse definition for City-State/Province-Postal Code.
viya.dataStep.runCode(
code=''' data public.testexcel_dq_2(drop=City_Ident Prov_Ident Post_Ident Phone_Ident parsedCPP);
set public.testexcel_dq_from_Python;
if City_Ident ^= 'CITY' and (Prov_Ident='EMPTY' or Post_Ident='EMPTY') then do;
parsedCPP = dqParse(CATX(' ',City,Prov,PostCode), 'City - State/Province - Postal Code', 'ENCAN');
City = dqParseTokenGet(parsedCPP, 'City', 'City - State/Province - Postal Code', 'ENCAN');
Prov = dqParseTokenGet(parsedCPP, 'State/Province', 'City - State/Province - Postal Code', 'ENCAN');
PostCode = dqParseTokenGet(parsedCPP, 'Postal Code', 'City - State/Province - Postal Code', 'ENCAN');
end;
run;''')
dq2 = viya.CASTable('testexcel_dq_2')
dq2.to_frame()
# use the SAS QKB Standardize definitions for Province, PostCode, and Phone to standardize those columns in place
# use the SAS QKB Gender Analysis definition to enrich the data with a gender field, based on the Name
viya.dataStep.runCode(
code=''' data public.testexcel_dq_2;
set public.testexcel_dq_2;
Prov = dqStandardize(Prov,'State/Province (Postal Standard)','ENCAN');
Phone = dqStandardize(Phone,'Phone');
PostCode= dqStandardize(PostCode,'Postal Code','ENCAN');
gender = dqGender(Name,'Name','ENCAN') ;
run;''')
dq2.to_frame()
# using the SAS QKB matchcode definitions create matchcodes on Name, Address, City.
viya.dataStep.runCode(
code=''' data public.testexcel_dq_3;
set public.testexcel_dq_2;
Name_matchcode55 = dqMatch(Name,'Name',55,'ENCAN');
Address_matchcode70 = dqMatch(Address,'Address',70,'ENCAN');
City_matchcode85 = dqMatch(City,'City',85,'ENCAN');
if length(Phone) > 7 then Phone7 = substr(Phone,length(Phone)-7,8);
run;''')
# let's look at results for rows 7 to 10 ...
dq3 = viya.CASTable('testexcel_dq_3')
dq3[['ID','Name','Address','City','Name_matchcode55','Address_matchcode70','City_matchcode85','Phone7']].fetch(from_=7,to=10)
#load Entity Resolution CAS action set
viya.loadactionset(actionset="entityRes")
# use the entityres.match CAS action to match rows on:
# (Name & Address & Postal Code) OR (Name & City & Province) OR (Name & Phone) OR (Name & Email)
#
dq_clustered = viya.CASTable('test_Clustered', groupBy='CLUSTERID', replace=True)
viya.entityres.match(clusterid='CLUSTERID',
intable=dq3,
matchrules=[{'rule':[{'columns':['Name_matchcode55','Address_matchcode70','Postcode']},]},
{'rule':[{'columns':['Name_matchcode55','City_matchcode85','Prov']}]},
{'rule':[{'columns':['Name_matchcode55','Phone7']}]},
{'rule':[{'columns':['Name_matchcode55','Email']}]}
],
outtable=dq_clustered)
# display cluster results. CLUSTERID is a 24-byte character string
dq_clustered[['CLUSTERID','ID','Name','Address','City','Phone','Email']].sort_values('CLUSTERID').to_frame()
# to make the clusters easier to see, use simple.groupByInfo CAS action to generate numeric GroupID from alphanumeric CLUSTERID
dq_clust_nums = viya.CASTable('test_Clust_nums',replace=True)
dq_clustered.simple.groupByInfo(generatedColumns='GROUPID',
casout=dq_clust_nums,
includeDuplicates=True)
dq_clust_nums[['_GroupID_','ID','Name','Address','City','Prov','PostCode','Amount','Phone','Email']].sort_values('_GroupID_').to_frame()
# run datastep code to create golden record for each cluster with best info, and total amount
viya.dataStep.runCode(
code=''' data public.test_done
(drop= max_Name min_gender max_Address max_City max_Postal tot_Amount max_Phone max_Email
CLUSTERID ID Name_matchcode55 Address_matchcode70 City_matchcode85 Phone7);
set public.test_Clust_Nums;
by _GroupID_;
retain max_Name min_gender max_Address max_City max_Postal tot_Amount max_Phone max_Email;
if first._GroupID_ then do;
max_Name = Name;
min_gender = gender;
max_Address = Address;
max_City = City;
max_Postal = PostCode;
tot_Amount = Amount;
max_Phone = Phone;
max_Email = Email;
end;
else do;
if length(Name) < length(max_Name) then Name = max_Name; else max_Name = Name;
if gender > min_gender then gender = min_gender; else min_gender = gender;
if length(Address) < length(max_Address) then Address = max_Address; else max_Address = Address;
if length(City) < length(max_City) then City = max_City; else max_City = City;
if length(PostCode) < length(max_Postal) then PostCode = max_Postal; else max_Postal = PostCode;
tot_Amount = Amount + tot_Amount;
Amount = tot_Amount;
if length(Phone) < length(max_Phone) then Phone = max_Phone; else max_Phone = Phone;
if length(Email) < length(max_Email) then Email = max_Email; else max_Email = Email;
end;
if last._GroupID_ then output;
run;''')
dq_done = viya.CASTable('test_done')
dq_done[['_GroupID_','Name','gender','Address','City','Prov','PostCode','Amount','Phone','Email']].sort_values('_GROUPID_').head(20)
```
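The golden-record data step above retains the longest value per field and totals the amounts within each cluster; a rough pandas analogue on made-up rows (not the SAS implementation):

```python
import pandas as pd

# Toy clusters: within each group, keep the longest name variant and total
# the amounts, mirroring the retained max_*/tot_Amount variables above
rows = pd.DataFrame({
    'GroupID': [1, 1, 2],
    'Name': ['Jon Smith', 'Jonathan Smith', 'Ann Lee'],
    'Amount': [100.0, 50.0, 75.0],
})

def golden(df):
    return pd.Series({
        'Name': max(df['Name'], key=len),  # longest variant wins
        'Amount': df['Amount'].sum(),      # cluster total
    })

golden_df = rows.groupby('GroupID').apply(golden).reset_index()
print(golden_df)
```

Using the longest string as "best" is a crude survivorship heuristic; production entity resolution would usually weigh source reliability as well.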
# Lecture 33: AlexNet
```
%matplotlib inline
import tqdm
import copy
import time
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import matplotlib.pyplot as plt
import torchvision
from torchvision import transforms,datasets, models
print(torch.__version__) # This code has been updated for PyTorch 1.0.0
```
## Load Data:
```
apply_transform = transforms.Compose([transforms.Resize(224),transforms.ToTensor()])
BatchSize = 4
trainset = datasets.CIFAR10(root='./CIFAR10', train=True, download=True, transform=apply_transform)
trainLoader = torch.utils.data.DataLoader(trainset, batch_size=BatchSize,
shuffle=True, num_workers=4) # Creating dataloader
testset = datasets.CIFAR10(root='./CIFAR10', train=False, download=True, transform=apply_transform)
testLoader = torch.utils.data.DataLoader(testset, batch_size=BatchSize,
shuffle=False, num_workers=4) # Creating dataloader
# Size of train and test datasets
print('No. of samples in train set: '+str(len(trainLoader.dataset)))
print('No. of samples in test set: '+str(len(testLoader.dataset)))
```
## Define network architecture
```
net = models.AlexNet()
print(net)
# Counting number of trainable parameters
totalParams = 0
for name,params in net.named_parameters():
print(name,'-->',params.size())
totalParams += np.sum(np.prod(params.size()))
print('Total number of parameters: '+str(totalParams))
# Copying initial weights for visualization
init_weightConv1 = copy.deepcopy(net.features[0].weight.data) # 1st conv layer
init_weightConv2 = copy.deepcopy(net.features[3].weight.data) # 2nd conv layer
# Check availability of GPU
use_gpu = torch.cuda.is_available()
if use_gpu:
print('GPU is available!')
device = "cuda"
else:
print('GPU is not available!')
device = "cpu"
net = net.to(device)
```
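As a sanity check on the parameter count printed above, the first conv layer's parameters can be counted by hand (assuming torchvision's AlexNet layout, `Conv2d(3, 64, kernel_size=11, stride=4, padding=2)`):

```python
# Hand-count of the first conv layer's parameters: each of the 64 filters
# has 3 * 11 * 11 weights, plus one bias per filter
in_channels, out_channels, k = 3, 64, 11
conv1_params = in_channels * out_channels * k * k + out_channels
print(conv1_params)  # 23296
```

The same arithmetic applied layer by layer reproduces the total printed by the loop over `named_parameters()`.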
## Define loss function and optimizer
```
criterion = nn.NLLLoss() # Negative Log-likelihood
optimizer = optim.Adam(net.parameters(), lr=1e-4) # Adam
```
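As an aside, the `NLLLoss`/`log_softmax` pairing used in the training loop below is mathematically equivalent to cross-entropy on the raw logits; a quick NumPy check (not part of the lecture code):

```python
import numpy as np

# NLLLoss applied to log_softmax outputs equals cross-entropy on raw logits
z = np.array([2.0, 0.5, -1.0])           # logits for one sample
target = 0
log_probs = z - np.log(np.exp(z).sum())  # log_softmax
nll = -log_probs[target]                 # what nn.NLLLoss computes
cross_entropy = -z[target] + np.log(np.exp(z).sum())
print(np.isclose(nll, cross_entropy))  # True
```

This is why PyTorch's `CrossEntropyLoss` can be used directly on logits as an alternative to the explicit `log_softmax` + `NLLLoss` combination.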
## Train the network
```
iterations = 5
trainLoss = []
testAcc = []
start = time.time()
for epoch in range(iterations):
epochStart = time.time()
runningLoss = 0
net.train() # For training
for data in tqdm.tqdm_notebook(trainLoader):
inputs,labels = data
inputs, labels = inputs.to(device), labels.to(device)
# Initialize gradients to zero
optimizer.zero_grad()
# Feed-forward input data through the network
outputs = net(inputs)
# Compute loss/error
loss = criterion(F.log_softmax(outputs,dim=1), labels)
# Backpropagate loss and compute gradients
loss.backward()
# Update the network parameters
optimizer.step()
# Accumulate loss per batch
runningLoss += loss.item()
avgTrainLoss = runningLoss/(len(trainset)/BatchSize)
trainLoss.append(avgTrainLoss)
# Evaluating performance on test set for each epoch
net.eval() # For testing [Affects batch-norm and dropout layers (if any)]
running_correct = 0
with torch.no_grad():
for data in testLoader:
inputs,labels = data
inputs = inputs.to(device)
outputs = net(inputs)
_, predicted = torch.max(outputs.data, 1)
if use_gpu:
predicted = predicted.cpu()
running_correct += (predicted == labels).sum()
avgTestAcc = float(running_correct)*100/10000.0
testAcc.append(avgTestAcc)
# Plotting training loss vs Epochs
fig1 = plt.figure(1)
plt.plot(range(epoch+1),trainLoss,'r-',label='train')
if epoch==0:
plt.legend(loc='upper left')
plt.xlabel('Epochs')
plt.ylabel('Training loss')
# Plotting testing accuracy vs Epochs
fig2 = plt.figure(2)
plt.plot(range(epoch+1),testAcc,'g-',label='test')
if epoch==0:
plt.legend(loc='upper left')
plt.xlabel('Epochs')
plt.ylabel('Testing accuracy')
epochEnd = time.time()-epochStart
print('Iteration: {:.0f} /{:.0f} ; Training Loss: {:.6f} ; Testing Acc: {:.3f} ; Time consumed: {:.0f}m {:.0f}s '\
.format(epoch + 1,iterations,avgTrainLoss,avgTestAcc,epochEnd//60,epochEnd%60))
end = time.time()-start
print('Training completed in {:.0f}m {:.0f}s'.format(end//60,end%60))
# Copying trained weights for visualization
trained_weightConv1 = copy.deepcopy(net.features[0].weight.data)
trained_weightConv2 = copy.deepcopy(net.features[3].weight.data)
if use_gpu:
trained_weightConv1 = trained_weightConv1.cpu()
trained_weightConv2 = trained_weightConv2.cpu()
```
## Visualization of weights
```
# functions to show an image
def imshow(img, strlabel):
npimg = img.numpy()
npimg = np.abs(npimg)
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 10
fig_size[1] = 10
plt.rcParams["figure.figsize"] = fig_size
plt.figure()
plt.title(strlabel)
plt.imshow(np.transpose(npimg, (1, 2, 0)))
imshow(torchvision.utils.make_grid(init_weightConv1,nrow=8,normalize=True),'Initial weights: conv1')
imshow(torchvision.utils.make_grid(trained_weightConv1,nrow=8,normalize=True),'Trained weights: conv1')
imshow(torchvision.utils.make_grid(init_weightConv1-trained_weightConv1,nrow=8,normalize=True),'Difference of weights: conv1')
imshow(torchvision.utils.make_grid(init_weightConv2[0].unsqueeze(1),nrow=8,normalize=True),'Initial weights: conv2')
imshow(torchvision.utils.make_grid(trained_weightConv2[0].unsqueeze(1),nrow=8,normalize=True),'Trained weights: conv2')
imshow(torchvision.utils.make_grid(init_weightConv2[0].unsqueeze(1)-trained_weightConv2[0].unsqueeze(1),nrow=8,normalize=True),'Difference of weights: conv2')
```
# Advanced Ray Tutorial - Exercise Solutions
© 2019-2020, Anyscale. All Rights Reserved

First, import everything we'll need and start Ray:
```
import ray, time, sys
import numpy as np
sys.path.append("../..")
from util.printing import pd, pnd # convenience methods for printing results.
!../../tools/start-ray.sh --check --verbose
ray.init(address='auto', ignore_reinit_error=True)
```
## Exercise 1 in 01: Ray Tasks Revisited
You were asked to convert the regular Python code to Ray code. Here are the three cells appropriately modified.
First, we need the appropriate imports and `ray.init()`.
```
@ray.remote
def slow_square(n):
time.sleep(n)
return n*n
start = time.time()
ids = [slow_square.remote(n) for n in range(4)]
squares = ray.get(ids)
duration = time.time() - start
assert squares == [0, 1, 4, 9]
# should fail until the code modifications are made:
assert duration < 4.1, f'duration = {duration}'
```
## Exercise 2 in 01: Ray Tasks Revisited
You were asked to use `ray.wait()` with a shorter timeout, `2.5` seconds. First we need to redefine in this notebook the remote functions we used in that lesson:
```
@ray.remote
def make_array(n):
time.sleep(n/10.0)
return np.random.standard_normal(n)
@ray.remote
def add_arrays(a1, a2):
time.sleep(a1.size/10.0)
return np.add(a1, a2)
start = time.time()
array_ids = [make_array.remote(n*10) for n in range(5)]
added_array_ids = [add_arrays.remote(id, id) for id in array_ids]
arrays = []
waiting_ids = list(added_array_ids)  # Copy the full list of ids into a working list
while len(waiting_ids) > 0: # Loop until all tasks have completed
# Call ray.wait with:
# 1. the list of ids we're still waiting to complete,
# 2. tell it to return immediately as soon as TWO of them complete,
# 3. tell it to wait up to 2.5 seconds before timing out.
return_n = 2 if len(waiting_ids) > 1 else 1
ready_ids, remaining_ids = ray.wait(waiting_ids, num_returns=return_n, timeout=2.5)
print('Returned {:3d} completed tasks. (elapsed time: {:6.3f})'.format(len(ready_ids), time.time() - start))
new_arrays = ray.get(ready_ids)
arrays.extend(new_arrays)
for array in new_arrays:
print(f'{array.size}: {array}')
waiting_ids = remaining_ids # Reset this list; don't include the completed ids in the list again!
print(f"\nall arrays: {arrays}")
pd(time.time() - start, prefix="Total time:")
```
For a timeout of `2.5` seconds, the second call to `ray.wait()` times out before two tasks finish, so it only returns one completed task. Why did the third and last iteration not time out? (That is, they both successfully returned two items.) It's because all the tasks were running in parallel so they had time to finish. If you use a shorter timeout, you'll see more time outs, where zero or one items are returned.
Try `1.5` seconds, where all but one iteration times out and returns one item. The first iteration returns two items.
Try `0.5` seconds, where you'll get several iterations that time out and return zero items, while all the other iterations time out and return one item.
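`ray.wait`'s timeout behavior is analogous to the standard library's `concurrent.futures.wait`; a stdlib-only illustration (not Ray code):

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait

# Two tasks, one fast and one slow, waited on for up to 0.3 seconds:
# the fast one completes, the slow one is still pending at the timeout
with ThreadPoolExecutor() as ex:
    futures = [ex.submit(time.sleep, t) for t in (0.05, 1.0)]
    done, pending = wait(futures, timeout=0.3)
    n_done, n_pending = len(done), len(pending)
print(n_done, n_pending)  # 1 1
```

Just as with `ray.wait`, a shorter timeout means more iterations return fewer completed items than requested.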
## Exercise 3 in 01: Ray Tasks Revisited
You were asked to convert the code to use Ray, especially `ray.wait()`.
```
@ray.remote
def slow_square(n):
time.sleep(n)
return n*n
start = time.time()
ids = [slow_square.remote(n) for n in range(4)]
squares = []
waiting_ids = ids
while len(waiting_ids) > 0:
finished_ids, waiting_ids = ray.wait(waiting_ids) # We just assign the second list to waiting_ids...
squares.extend(ray.get(finished_ids))
duration = time.time() - start
assert squares == [0, 1, 4, 9]
assert duration < 4.1, f'duration = {duration}'
```
## Exercise - "Homework" - in 02: Ray Actors Revisited
Since profiling shows that `live_neighbors` is the bottleneck, what could be done to reduce its execution time? The new implementation shown here reduces its overhead by about 40%. Not bad.
The solution also implements parallel invocation of grid updates, rather than updating the whole grid in sequential steps.
As discussed in lesson 4, these kinds of optimizations make sense when you _really_ have a compelling reason to squeeze optimal performance out of the code. Hence, this optimization exercise will mostly appeal to those of you with such requirements or who enjoy low-level performance optimizations like this.
This solution for optimizing `live_neighbors` was developed using [micro-perf-tests.py](micro-perf-tests.py). The changes to the game code can be found in [game-of-life-2-exercise.py](game-of-life-2-exercise.py) rather than repeating them in cells here. Both scripts run standalone and both have a `--help` flag for more information.
If you tried the "easier experiments" suggested, such as enhancing `RayConwaysRules.step()` to accept a `num_steps` argument, you probably found that they didn't improve performance. As for the non-Ray game, this change only moves processing around but doesn't parallelize it more than before, so performance is about the same.
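One common way to speed up `live_neighbors` is to vectorize the neighbor count with array shifts. The following sketch illustrates the idea on a toroidal grid; it is not the repository's actual implementation:

```python
import numpy as np

def live_neighbors(grid):
    # Count the 8 neighbors of every cell by summing shifted copies of the
    # grid; np.roll wraps around, giving toroidal boundary conditions
    total = np.zeros_like(grid)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                total += np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
    return total

blinker = np.array([[0, 1, 0],
                    [0, 1, 0],
                    [0, 1, 0]])
print(live_neighbors(blinker))
```

Replacing a per-cell Python loop with whole-array operations like this is typically where the bulk of such a speedup comes from.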
```
from game_of_life_2_exercise import RayGame, apply_rules_block, time_ray_games
```
For comparison, one set of test runs with the exercise code before improvements took about 25 seconds.
If you look at `RayGame2.step`, it calls `RayConwaysRules.step` one step at a time, using remote calls. This seems like a good place for improvement. Let's extend `RayConwaysRules.step` to do more than one step, just like `RayGame2.step` already supports.
Changes are indicated with comments.
```
time_ray_games(
num_games = 1,
max_steps = 400,
batch_size = 1,
grid_dimensions = (100,100),
use_block_updates = False)
```
In a test run, this ran in about 15.5 seconds, about 9.5 seconds faster than the first version! Hence, as expected, optimizing `live_neighbors` provided significant improvement.
What about using block updates? Let's try a bigger grid, but fewer steps. First, without the block updates:
```
time_ray_games(
num_games = 1,
max_steps = 100,
batch_size = 1,
grid_dimensions = (200,200),
use_block_updates = False)
time_ray_games(
num_games = 1,
max_steps = 100,
batch_size = 1,
grid_dimensions = (200,200),
use_block_updates = True,
block_size = 50) # The default block size is -1, so no blocks are used!
```
In a test run, this performed about twice as fast! So block processing definitely helps.
Finally, does batching help? We'll use fewer steps and the original 100x100 grid. First without batching and then with batching:
```
%time time_ray_games(num_games = 1, max_steps = 100, batch_size = 1, grid_dimensions = (100,100), use_block_updates=False)
%time time_ray_games(num_games = 1, max_steps = 100, batch_size = 50, grid_dimensions = (100,100), use_block_updates=False)
```
Batching doesn't make much difference, and in fact we don't expect it to matter, because it doesn't change the parallelism, like blocking does, and it doesn't make the algorithm more efficient, like the new `live_neighbors` does.
To conclude, the new implementation of `live_neighbors` has a noticeable benefit. Batching doesn't make much difference, but using parallel blocks helps a lot.
# Applying Deep Batch Active Learning to your own learning task
In this notebook, we show how our implemented batch mode deep active learning (BMDAL) methods can be applied to a custom NN. We will first change the working directory from the examples subfolder to the main folder, which is required for the imports to work correctly.
```
import os
os.chdir('..') # change directory inside the notebook to the main directory
```
## Example with artificial data and custom model
We first generate some artificial 2-D training data, which will be plotted below.
```
import torch
import torch.nn as nn
n_train = 100
n_pool = 2000
torch.manual_seed(1234)
x = torch.randn(n_train+n_pool, 3)
theta = 3*(x[:, 1] + 0.1 * x[:, 0])
x = (0.2 * x[:, 2] + x[:, 1] + 2)[:, None] * torch.stack([torch.sin(theta), torch.cos(theta)], dim=1)
y = torch.exp(x[:, 0])
y = y[:, None]
x_train = x[:n_train]
y_train = y[:n_train]
x_pool = x[n_train:]
y_pool = y[n_train:]
```
Note that for the labels, we have used the deterministic function $y = e^{x_1}$. In order to learn this function, it would be better to have more training points on the right of the domain. We can visualize the pool set (in gray) and the train set (in black) without the labels as follows:
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(x_pool[:, 0].numpy(), x_pool[:, 1].numpy(), '.', color='#BBBBBB')
plt.plot(x_train[:, 0].numpy(), x_train[:, 1].numpy(), '.', color='k')
plt.show()
```
Next, we train a NN on the given training data. For this, we use a simple three-layer NN built from standard PyTorch layers. Since we only have 100 training points, we use all of the data in each step. We optimize the NN using Adam for 256 epochs with a fixed learning rate. These hyperparameters are only an example; you may use other values.
```
custom_model = nn.Sequential(nn.Linear(2, 100), nn.SiLU(), nn.Linear(100, 100), nn.SiLU(), nn.Linear(100, 1))
opt = torch.optim.Adam(custom_model.parameters(), lr=2e-2)
for epoch in range(256):
y_pred = custom_model(x_train)
loss = ((y_pred - y_train)**2).mean()
train_rmse = loss.sqrt().item()
pool_rmse = ((custom_model(x_pool) - y_pool)**2).mean().sqrt().item()
print(f'train RMSE: {train_rmse:5.3f}, pool RMSE: {pool_rmse:5.3f}')
loss.backward()
opt.step()
opt.zero_grad()
```
Next, we want to select 50 new points to label using Deep Batch Active Learning. For this, we wrap our training and pool inputs in the `TensorFeatureData` class and call `bmdal.algorithms.select_batch()` with the batch size (50), our trained model, training and pool data, as well as our desired configuration of selection method, mode (`sel_with_train`), base kernel and kernel transformations. Here, we use the LCMD-TP method proposed in our paper. For an extensive overview of the available options, we refer to the docstring in the code for `bmdal.algorithms.select_batch()`. Finally, we plot the selected samples from the pool set.
Note: If your training / pool data is so large that it does not fit on the device (CPU/GPU), you may need to write a custom `FeatureData` subclass, where the `get_tensor_impl_()` method acts as a data loader. Please contact the library author in this case.
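As a rough illustration of such a data-loading subclass, the following hypothetical sketch materializes rows on demand; the real `bmdal.feature_data.FeatureData` interface may differ, so consult the library source before implementing:

```python
import numpy as np

# Hypothetical sketch only: the actual FeatureData interface in
# bmdal.feature_data may differ; this merely illustrates the idea of
# loading requested rows on demand instead of holding everything in memory.
class LazyFeatureData:
    def __init__(self, array):
        self._array = array  # could be an np.memmap backed by a file on disk

    def __len__(self):
        return len(self._array)

    def get_tensor_impl_(self, idxs):
        # Materialize only the requested rows
        return np.asarray(self._array[idxs])

data = LazyFeatureData(np.arange(12).reshape(6, 2))
print(data.get_tensor_impl_([0, 3]).tolist())  # [[0, 1], [6, 7]]
```

With an `np.memmap` (or an HDF5 dataset) as the backing store, only the selected rows would ever be read from disk.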
The function `select_batch()` supports the computation of gradient features for layers of type `nn.Linear` by default. In order to support gradients for other types of layers, there are two options:
- The first option is that the other layers inherit from `bmdal.layer_features.LayerGradientComputation` and implement the corresponding methods. For example, the class `bmdal.layer_features.LinearLayer` does this. In contrast to `nn.Linear`, it supports the factors $\sigma_w/\sqrt{d_l}$ and $\sigma_b$ from the paper, and it is used in our benchmarking code.
- The second option is that a matching of the other layer types to corresponding subclasses of `bmdal.layer_features.LayerGradientComputation` is provided via the `layer_grad_dict` argument of `select_batch()`. For more details, we refer to the documentation of `select_batch()`.
```
from bmdal.feature_data import TensorFeatureData
from bmdal.algorithms import select_batch
train_data = TensorFeatureData(x_train)
pool_data = TensorFeatureData(x_pool)
new_idxs, _ = select_batch(batch_size=50, models=[custom_model],
data={'train': train_data, 'pool': pool_data}, y_train=y_train,
selection_method='lcmd', sel_with_train=True,
base_kernel='grad', kernel_transforms=[('rp', [512])])
plt.plot(x_pool[:, 0].numpy(), x_pool[:, 1].numpy(), '.', color='#BBBBBB')
plt.plot(x_train[:, 0].numpy(), x_train[:, 1].numpy(), '.', color='k')
plt.plot(x_pool[new_idxs, 0].numpy(), x_pool[new_idxs, 1].numpy(), '.', color='b')
plt.show()
```
We can observe that the BMDAL method selected more pool samples from the right of the domain. This is desirable since the target function is $y = e^{x_1}$, which is steeper at the right of the domain. This behavior would not arise from a network-independent base kernel like 'linear', 'nngp' or 'laplace'.
## Example using our benchmark data and training code
In the following, we will give a different example, utilizing a data set from our benchmark and our functions for creating and fitting fully-connected NN models. The code is adapted from `ModelTrainer.__call__()` in `train.py`. First, we need to choose a data set. We choose the `road_network` data set since it contains 2D inputs representing locations in north Denmark, which can be nicely visualized. We choose an initial training set size of `n_train=256`. The argument `al_batch_sizes` is not relevant to us here since we will choose the batch size later manually. We then create a random split of the task with `id=0`, which also serves as a random seed for the split. Finally, we convert the index arrays for the respective parts of the data set to PyTorch tensors.
To be able to run the following code, it is required to follow the data download instructions from the README.md file in the repository.
```
from data import Task, TaskSplit
import torch
task = Task.get_tabular_tasks(n_train=256, al_batch_sizes=[256] * 16, ds_names=['road_network'])[0]
task_split = TaskSplit(task, id=0)
train_idxs = torch.as_tensor(task_split.train_idxs, dtype=torch.int64)
valid_idxs = torch.as_tensor(task_split.valid_idxs, dtype=torch.int64)
pool_idxs = torch.as_tensor(task_split.pool_idxs, dtype=torch.int64)
test_idxs = torch.as_tensor(task_split.test_idxs, dtype=torch.int64)
data = task_split.data
```
Now, we visualize the pool points (gray) and training points (black). The shape of north Denmark can clearly be recognized.
```
%matplotlib inline
import matplotlib.pyplot as plt
x_train = data.tensors['X'][train_idxs]
x_pool = data.tensors['X'][pool_idxs]
y_pool = data.tensors['y'][pool_idxs]
plt.plot(x_pool[:, 0].numpy(), x_pool[:, 1].numpy(), '.', color='#BBBBBB')
plt.plot(x_train[:, 0].numpy(), x_train[:, 1].numpy(), '.', color='k')
plt.show()
```
Next, we create a fully-connected NN model with NTK Parametrization using the function `create_tabular_model` and train it using `fit_model`. Both functions contain more optional arguments to modify hyperparameters like the width and number of layers, the learning rate, and so on. We refer to the respective implementations for more details.
Unlike typical PyTorch code, the two functions mentioned above work with NNs that are vectorized such that multiple ensemble members are trained separately. The number of ensemble members is given by the `n_models` parameter, which we set to $1$. In order to obtain a non-vectorized NN in the end, which is required for BMDAL, we call the function `get_single_model()`.
```
from train import fit_model
from models import create_tabular_model
n_models = 1
n_features = data.tensors['X'].shape[1] # will be 2 in our case
vectorized_model = create_tabular_model(n_models=n_models, n_features=n_features)
fit_model(vectorized_model, data, n_models, train_idxs, valid_idxs)
model = vectorized_model.get_single_model(0)
```
The model structure, printed below, contains some custom layer types. The `ParallelSequential` and `ParallelLayerWrapper` layers come from the vectorization, but for the purpose of BMDAL `ParallelSequential` could be replaced by `nn.Sequential` and `ParallelLayerWrapper` could be omitted. The class `bmdal.layer_features.LinearLayer` implements the abstract class `bmdal.layer_features.LayerGradientComputation` and is therefore automatically used for computing gradient features. Compared to `nn.Linear`, `LinearLayer` supports weight and bias factors $\sigma_w/\sqrt{d_l}$ and $\sigma_b$ from the Neural Tangent Parameterization as used in the paper.
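As a rough illustration of this parametrization, here is a minimal sketch of a linear layer with the factors $\sigma_w/\sqrt{d_l}$ and $\sigma_b$ applied in the forward pass. This is not the library's actual `LinearLayer` implementation, and the default values for `sigma_w` and `sigma_b` below are assumptions chosen for illustration.

```python
import math
import torch
import torch.nn as nn

class SketchNTKLinear(nn.Module):
    """Sketch of a linear layer with NTK-style factors sigma_w/sqrt(d_in) and sigma_b."""
    def __init__(self, d_in, d_out, sigma_w=math.sqrt(2.0), sigma_b=1.0):
        super().__init__()
        # weights and biases are initialized from a standard normal;
        # the scaling factors are applied in the forward pass instead
        self.weight = nn.Parameter(torch.randn(d_out, d_in))
        self.bias = nn.Parameter(torch.randn(d_out))
        self.w_factor = sigma_w / math.sqrt(d_in)
        self.b_factor = sigma_b

    def forward(self, x):
        return self.w_factor * (x @ self.weight.t()) + self.b_factor * self.bias

layer = SketchNTKLinear(d_in=2, d_out=4)
out = layer(torch.randn(8, 2))
print(tuple(out.shape))  # (8, 4)
```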
```
print(model)
```
Now, we apply BMDAL to select a subset of elements from the pool set as in the example above.
```
from bmdal.feature_data import TensorFeatureData
from bmdal.algorithms import select_batch
X = TensorFeatureData(data.tensors['X'])
feature_data = {'train': X[train_idxs],
'pool': X[pool_idxs]}
y_train = data.tensors['y'][train_idxs]
new_idxs, al_stats = select_batch(batch_size=128, models=[model], data=feature_data, y_train=y_train,
selection_method='lcmd', sel_with_train=True,
base_kernel='grad', kernel_transforms=[('rp', [512])])
# move new_idxs from the pool set to the training set
# therefore, we first create a boolean array that is True at the indices in new_idxs and False elsewhere
logical_new_idxs = torch.zeros(pool_idxs.shape[-1], dtype=torch.bool)
logical_new_idxs[new_idxs] = True
# We now append the new indices to the training set
train_idxs = torch.cat([train_idxs, pool_idxs[logical_new_idxs]], dim=-1)
# and remove them from the pool set
pool_idxs = pool_idxs[~logical_new_idxs]
```
The `al_stats` object contains some information about the execution of BMDAL. Here, the time for building the features and running the selection was about 19 seconds on a CPU, which is not much considering that the initial pool set contains 198,720 samples! The `selection_status` entry may contain a warning message in case the normal selection failed; usually, the batch is then filled up with random samples. Here, we have no warning.
```
print(al_stats)
```
Next, we plot the selected points:
```
plt.plot(x_pool[:, 0].numpy(), x_pool[:, 1].numpy(), '.', color='#BBBBBB')
plt.plot(x_train[:, 0].numpy(), x_train[:, 1].numpy(), '.', color='k')
plt.plot(x_pool[new_idxs, 0].numpy(), x_pool[new_idxs, 1].numpy(), '.', color='b')
plt.show()
```
The selected points concentrate in some regions of the space. We can see that the labels (which correspond to the elevation of the land in this data set) vary more strongly in these regions:
```
plt.scatter(x_pool[:, 0].numpy(), x_pool[:, 1].numpy(), c=y_pool[:, 0].numpy())
plt.gray()
plt.show()
```
# Recommendation System
Student Name: Dacheng Wen (dachengw)
## Introduction
This tutorial will introduce an approach to building a simple recommendation system.
According to the definition from Wikipedia, a recommendation system is a subclass of information filtering system that seeks to predict the "rating" or "preference" that a user would give to an item.
A daily example is the Amazon's recommendation engine:
[<img src="http://netdna.webdesignerdepot.com/uploads/amazon//recommended.jpg">](http://netdna.webdesignerdepot.com/uploads/amazon//recommended.jpg)
Theoretically, Amazon analyzes users' information (purchase history, browse history and more) to recommend what the users may want to buy.
## Tutorial content
In this tutorial, we will build a simple offline recommendation system to recommend movies. This recommendation system is not a practical or sophisticated one for commercial use, but working through this tutorial gives a sense of how a recommendation system works.
We will cover the following topics in this tutorial:
- [Expectation](#Expectation)
- [Downloading and loading data](#Downloading-and-loading-data)
- [Item-based collaborative filtering](#Item-based-collaborative-filtering)
- [Recommendation for new users](#Recommendation-for-new-users)
- [Summary](#Summary)
## Expectation
The recommendation system we will build can:
1. Take the existing rating data as input.
2. For each user, recommend at most k (k = 5 in this tutorial) movies that the user has not yet rated.
```
k = 5
```
## Downloading and loading data
We are going to use the open dataset provided by MovieLens (https://movielens.org/).
The dataset can be downloaded from http://grouplens.org/datasets/movielens/.
For this tutorial, we will use the u.data file from the smallest dataset (100K records).
According to the ReadMe (http://files.grouplens.org/datasets/movielens/ml-100k/README), this file contains ratings by 943 users on 1682 items. Each user has rated at least 20 movies. Users and items are numbered consecutively from 1. The data is randomly ordered.
This is a tab separated list of:
user id | item id | rating | timestamp.
Note:
1. An item is a movie, so the item id is the movie id. We consider item and movie interchangeable in this tutorial.
2. For the simple recommendation system we are going to build, we only use the first three fields: user id, item id and rating. That is to say, we ignore the timestamp. The timestamp is indeed valuable information, but we ignore it in this tutorial for simplicity.
3. The range of ratings is 1-5, where 5 means the best.
Although not necessary, it would be nice to be able to get a movie title by its id. Therefore we also need to download the u.item file. The first two fields of every record in this file are
movie id | movie title | ...
Let's download these files:
```
import requests
def download_file(link_address, filename):
    response = requests.get(link_address, stream=True)
    if response.status_code == requests.codes.ok:
        with open(filename, 'wb') as handle:
            for block in response.iter_content(1024):
                handle.write(block)
        print("Successfully downloaded " + filename)
        return True
    else:
        print("Sorry, " + filename + " download failed")
        return False
# download user - movie ratings
download_file('http://files.grouplens.org/datasets/movielens/ml-100k/u.data', 'u.data')
# download movie id - movie map
download_file('http://files.grouplens.org/datasets/movielens/ml-100k/u.item', 'u.item')
```
Then read the files to memory:
```
# read u.data
user_rating_raw = []
with open('u.data') as f:
    for line in f:
        fields = line.split('\t')
        user_rating_raw.append([int(fields[0]),
                                int(fields[1]),
                                float(fields[2]),
                                int(fields[3])])
print("Read u.data, got " + str(len(user_rating_raw)) + " rating records.")
print()
print("The first 5 records are:")
for row_index in range(5):
    print(user_rating_raw[row_index])
print()
# read u.item
movie_title_map = {}
# u.item is encoded in ISO-8859-1 (latin-1), not UTF-8
with open('u.item', encoding='latin-1') as f:
    for line in f:
        fields = line.split('|')
        movie_title_map[int(fields[0])] = fields[1]
print("Read id-title map for " + str(len(movie_title_map)) + " movies.")
print()
print("The first 5 movies in the map are:")
for movie_id in range(1, 6):
    print((movie_id, movie_title_map[movie_id]))
print()
```
## Item-based collaborative filtering
Among the many recommendation algorithms, item-based collaborative filtering is one of the most popular. The recommendation algorithms used by Amazon and other websites are based on item-based collaborative filtering (https://en.wikipedia.org/wiki/Item-item_collaborative_filtering). *
We are going to implement simple item-based collaborative filtering in this tutorial.
The idea of item-based collaborative filtering is to find similar items, and then recommend items similar to those in a user's history.
Let's say we found that _Star Wars (1977)_ is similar to _Return of the Jedi (1983)_; we assume that users who like _Star Wars (1977)_ are going to enjoy _Return of the Jedi (1983)_ too. Therefore, if we find a user who has watched (rated) _Star Wars (1977)_ but hasn't watched (rated) _Return of the Jedi (1983)_, we will recommend _Return of the Jedi (1983)_ to that user.
For our MovieLens scenario, we need to:
1. Compute the similarity between movies based on the ratings
2. For each user, recommend movies that are similar to the movies rated by that user, excluding movies the user has already rated.
Reference:
* Linden, G., Smith, B., & York, J. (2003). Amazon. com recommendations: Item-to-item collaborative filtering. IEEE Internet computing, 7(1), 76-80.
Before computing the similarity between movies, let's convert the raw data, user_rating_raw, into a matrix (numpy 2d array), movie_user_mat.
Each element in movie_user_mat stores a rating. movie_user_mat is of size num_movie by num_user: movie_user_mat\[i\]\[j\] is the j-th user's rating for the i-th movie. Therefore, each row stores the ratings for one movie from all users, and each column stores one user's ratings.
Note that the range of ratings is 1-5, so we can use 0 to indicate that a user hasn't rated a movie.
```
import numpy as np
# number of movies and number of users,
# these two numbers are from ReadMe (http://files.grouplens.org/datasets/movielens/ml-100k/README)
num_user = 943
num_movie = 1682
movie_user_mat = np.zeros((num_movie, num_user))
for user_rating_record in user_rating_raw:
    # minus 1 to convert the index (id) to 0-based
    user_index = user_rating_record[0] - 1
    movie_index = user_rating_record[1] - 1
    rating = user_rating_record[2]
    movie_user_mat[movie_index][user_index] = rating
```
Now that we have the movie-user matrix, we can perform the first step: computing the similarity between movies. We will use cosine similarity (https://en.wikipedia.org/wiki/Cosine_similarity). Because each row represents the ratings for one movie from all users, we treat rows as the input vectors. Note that the similarity matrix, movie_similarity_mat, is a symmetric matrix (movie_similarity_mat\[i\]\[j\] = movie_similarity_mat\[j\]\[i\]).
```
import scipy.spatial as scp
movie_similarity_mat = np.zeros((num_movie, num_movie))
for i in range(num_movie):
    movie_i_rating = movie_user_mat[i]
    for j in range(i, num_movie):
        movie_j_rating = movie_user_mat[j]
        cos_similarity = 1.0 - scp.distance.cosine(movie_i_rating, movie_j_rating)
        movie_similarity_mat[i][j] = cos_similarity
        movie_similarity_mat[j][i] = cos_similarity
```
Finally, we can compute which movies should be recommended to each user.
In order to achieve this goal, for each user we need to compute his/her interest in each movie. We represent the interest using a coefficient.
The coefficient indicates the j-th user's interest in the i-th movie (a larger coefficient means the user is more interested in that movie):
$$ coefficient[i][j]= \sum_{k=1}^n similarity[k-1][i] * rating[k-1][j]$$
where $n$ is the number of movies, similarity\[k-1\]\[i\] is movie_similarity_mat\[k-1\]\[i\] (the similarity between the (k-1)-th movie and the i-th movie) and rating\[k-1\]\[j\] is movie_user_mat\[k-1\]\[j\] (the j-th user's rating of the (k-1)-th movie).
Note that this equation is equivalent to
$$ coefficient[i][j]= \sum_{k=1}^n similarity[i][k-1] * rating[k-1][j]$$
because movie_similarity_mat is symmetric.
It may look confusing, so let's take a small dataset (stored in test_rat) as an example.
```
test_rat = np.asarray([[0,1,5],
                       [1,0,5],
                       [5,0,0],
                       [0,5,3]])
test_simi = np.zeros((4, 4))
for i in range(4):
    movie_i_rating = test_rat[i]
    for j in range(i, 4):
        movie_j_rating = test_rat[j]
        cos_similarity = 1.0 - scp.distance.cosine(movie_i_rating, movie_j_rating)
        test_simi[i][j] = cos_similarity
        test_simi[j][i] = cos_similarity
print("movie-rating:")
print(test_rat)
print()
print("similarities:")
print(test_simi)
```
For the first user (the 0-th user), his/her interest in the first movie (the 0-th movie) should be:
$$ coefficient[0][0] = rating[0][0] * similarity[0][0] + rating[1][0] * similarity[1][0] + rating[2][0] * similarity[2][0] + rating[3][0] * similarity[3][0] $$
$$ coefficient[0][0] = 0 * 1 + 1 * 0.96153846 + 5 * 0 + 0 * 0.67267279 = 0.96153846 $$
His/her interest in the last movie (the 3-rd movie) should be:
$$ coefficient[3][0] = 0 * 0.67267279 + 1 * 0.5045046 + 5 * 0 + 0 * 1 = 0.5045046 $$
Because 0.96153846 > 0.5045046, we should recommend the first movie instead of the last movie if we can only recommend one movie.
Note that the equation
$$ coefficient[i][j]= \sum_{k=1}^n similarity[i][k-1] * rating[k-1][j]$$
is simply a matrix dot operation:
$$coefficient = similarity.dot(rating)$$
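We can check this numerically on the small example above; the snippet below recomputes `test_rat` and `test_simi` so that it is self-contained:

```python
import numpy as np
import scipy.spatial as scp

test_rat = np.asarray([[0., 1., 5.],
                       [1., 0., 5.],
                       [5., 0., 0.],
                       [0., 5., 3.]])
test_simi = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        test_simi[i][j] = 1.0 - scp.distance.cosine(test_rat[i], test_rat[j])

# one matrix product gives all coefficients at once
test_coef = test_simi.dot(test_rat)
print(test_coef[0][0])  # ~0.96153846, matches the manual computation above
print(test_coef[3][0])  # ~0.5045046
```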
The last detail we need to take care of is that we shouldn't recommend a movie that has already been rated. If a user has already rated the movie _Star Wars (1977)_, we should not recommend _Star Wars (1977)_ to this user.
We store the coefficients in recommendation_coefficient_mat, and store the ids of the recommended movies for each user in a dictionary, recommendation_per_user.
```
import heapq
# find the n elements with the largest values in a dictionary
# http://www.pataprogramming.com/2010/03/python-dict-n-largest/
def dict_nlargest(d, n):
    return heapq.nlargest(n, d, key=lambda t: d[t])
# num_movie by num_user = (num_movie by num_movie) * (num_movie by num_user)
recommendation_coefficient_mat = movie_similarity_mat.dot(movie_user_mat)
recommendation_per_user = {}
for user_index in range(num_user):
    recommendation_coefficient_vector = recommendation_coefficient_mat.T[user_index]
    # remove the movies that have already been rated
    unrated_movie = (movie_user_mat.T[user_index] == 0)
    recommendation_coefficient_vector *= unrated_movie
    recommendation_coefficient_dict = {movie_id: coefficient
                                       for movie_id, coefficient
                                       in enumerate(recommendation_coefficient_vector)}
    recommendation_per_user[user_index] = dict_nlargest(recommendation_coefficient_dict, k)
```
So the recommended movies for the first user are:
```
print("(movie id, title)")
for movie_id in recommendation_per_user[0]:
    # movie_id + 1 to convert it back to 1-based instead of 0-based
    print((movie_id, movie_title_map[movie_id + 1]))
print()
```
## Recommendation for new users
We mentioned that we can use users' information to recommend movies, but what if we have a new user about whom we have no information? The coefficients for that user will be all zeros, and it is not reasonable to pick the top-5 elements of an all-zero array.
What movies should we recommend then? One option is to recommend the movies rated by the largest number of users. This is similar to recommending "best sellers" on Amazon.com to new users.
```
import collections
movie_rated_counter = collections.Counter([rating_record[1]
                                           for rating_record in user_rating_raw])
most_rated_movies = movie_rated_counter.most_common(k)
print("The most rated 5 movies are:\n")
for movie_id, rated_count in most_rated_movies:
    print((movie_id, movie_title_map[movie_id], rated_count))
print()
```
So we can recommend these five movies to new users: Star Wars (1977), Contact (1997), Fargo (1996), Return of the Jedi (1983), and Liar Liar (1997).
## Summary
In short, we implemented a simple recommendation system using item-based collaborative filtering :)
But the truth is that this recommendation system is too simple; there are many details we haven't taken care of. For example, if a movie hasn't been rated by any user, it's likely that this movie will never be recommended, which means the coverage (recall rate) of this system is not satisfying.
For the curious minds, you are welcome to explore more sophisticated systems :)
<!-- dom:TITLE: Data Analysis and Machine Learning: Linear Regression and more Advanced Regression Analysis -->
# Data Analysis and Machine Learning: Linear Regression and more Advanced Regression Analysis
<!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University -->
<!-- Author: -->
**Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: **May 22, 2019**
Copyright 1999-2019, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
## Why Linear Regression (aka Ordinary Least Squares and family)
Fitting a continuous function with linear parameterization in terms of the parameters $\boldsymbol{\beta}$.
* Method of choice for fitting a continuous function!
* Gives an excellent introduction to central Machine Learning features with **understandable pedagogical** links to other methods like **Neural Networks**, **Support Vector Machines** etc
* Analytical expression for the fitting parameters $\boldsymbol{\beta}$
* Analytical expressions for statistical propertiers like mean values, variances, confidence intervals and more
* Analytical relation with probabilistic interpretations
* Easy to introduce basic concepts like bias-variance tradeoff, cross-validation, resampling and regularization techniques and many other ML topics
* Easy to code! And links well with classification problems and logistic regression and neural networks
* Allows for **easy** hands-on understanding of gradient descent methods
* and many more features
For more discussions of Ridge and Lasso regression, [Wessel van Wieringen's](https://arxiv.org/abs/1509.09169) article is highly recommended.
Similarly, [Mehta et al's article](https://arxiv.org/abs/1803.08823) is also recommended.
## Regression analysis, overarching aims
Regression modeling deals with the description of the sampling distribution of a given random variable $y$ and how it varies as function of another variable or a set of such variables $\boldsymbol{x} =[x_0, x_1,\dots, x_{n-1}]^T$.
The first variable is called the **dependent**, the **outcome** or the **response** variable while the set of variables $\boldsymbol{x}$ is called the independent variable, or the predictor variable or the explanatory variable.
A regression model aims at finding a likelihood function $p(\boldsymbol{y}\vert \boldsymbol{x})$, that is the conditional distribution for $\boldsymbol{y}$ with a given $\boldsymbol{x}$. The estimation of $p(\boldsymbol{y}\vert \boldsymbol{x})$ is made using a data set with
* $n$ cases $i = 0, 1, 2, \dots, n-1$
* Response (target, dependent or outcome) variable $y_i$ with $i = 0, 1, 2, \dots, n-1$
* $p$ so-called explanatory (independent or predictor) variables $\boldsymbol{x}_i=[x_{i0}, x_{i1}, \dots, x_{ip-1}]$ with $i = 0, 1, 2, \dots, n-1$ and explanatory variables running from $0$ to $p-1$. See below for more explicit examples.
The goal of the regression analysis is to extract/exploit the relationship between $\boldsymbol{y}$ and $\boldsymbol{X}$ in order to infer causal dependencies, approximations to the likelihood functions, functional relationships, and to make predictions, fits and many other things.
## Regression analysis, overarching aims II
Consider an experiment in which $p$ characteristics of $n$ samples are
measured. The data from this experiment, for various explanatory variables $p$ are normally represented by a matrix
$\mathbf{X}$.
The matrix $\mathbf{X}$ is called the *design
matrix*. Additional information of the samples is available in the
form of $\boldsymbol{y}$ (also as above). The variable $\boldsymbol{y}$ is
generally referred to as the *response variable*. The aim of
regression analysis is to explain $\boldsymbol{y}$ in terms of
$\boldsymbol{X}$ through a functional relationship like $y_i =
f(\mathbf{X}_{i,\ast})$. When no prior knowledge on the form of
$f(\cdot)$ is available, it is common to assume a linear relationship
between $\boldsymbol{X}$ and $\boldsymbol{y}$. This assumption gives rise to
the *linear regression model* where $\boldsymbol{\beta} = [\beta_0, \ldots,
\beta_{p-1}]^{T}$ are the *regression parameters*.
Linear regression gives us a set of analytical equations for the parameters $\beta_j$.
## Examples
In order to understand the relation among the predictors $p$, the set of data $n$ and the target (outcome, output etc) $\boldsymbol{y}$,
consider the model we discussed for describing nuclear binding energies.
There we assumed that we could parametrize the data using a polynomial approximation based on the liquid drop model.
Assuming
$$
BE(A) = a_0+a_1A+a_2A^{2/3}+a_3A^{-1/3}+a_4A^{-1},
$$
we have five predictors, that is the intercept, the $A$ dependent term, the $A^{2/3}$ term and the $A^{-1/3}$ and $A^{-1}$ terms.
This gives $p=5$ predictors, indexed $0,1,2,3,4$. Furthermore we have $n$ entries for each predictor. It means that our design matrix is an
$n\times p$ matrix $\boldsymbol{X}$.
Here the predictors are based on a model we have made. A popular data set which is widely encountered in ML applications is the
so-called [credit card default data from Taiwan](https://www.sciencedirect.com/science/article/pii/S0957417407006719?via%3Dihub). The data set contains data on $n=30000$ credit card holders with predictors like gender, marital status, age, profession, education, etc. In total there are $24$ such predictors or attributes, leading to a design matrix of dimensionality $30000 \times 24$.
## General linear models
Before we proceed let us study a case from linear algebra where we aim at fitting a set of data $\boldsymbol{y}=[y_0,y_1,\dots,y_{n-1}]$. We could think of these data as a result of an experiment or a complicated numerical experiment. These data are functions of a series of variables $\boldsymbol{x}=[x_0,x_1,\dots,x_{n-1}]$, that is $y_i = y(x_i)$ with $i=0,1,2,\dots,n-1$. The variables $x_i$ could represent physical quantities like time, temperature, position etc. We assume that $y(x)$ is a smooth function.
Since obtaining these data points may not be trivial, we want to use these data to fit a function which can allow us to make predictions for values of $y$ which are not in the present set. The perhaps simplest approach is to assume we can parametrize our function in terms of a polynomial of degree $n-1$ with $n$ points, that is
$$
y=y(x) \rightarrow y(x_i)=\tilde{y}_i+\epsilon_i=\sum_{j=0}^{n-1} \beta_j x_i^j+\epsilon_i,
$$
where $\epsilon_i$ is the error in our approximation.
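As a quick illustration (with made-up synthetic data), such a polynomial model can be fitted by least squares with NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 20)
# hypothetical true model: y = 1 + 2x - 3x^2 plus noise epsilon_i
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.standard_normal(20)

# np.polyfit returns coefficients with the highest degree first:
# [beta_2, beta_1, beta_0]
coeffs = np.polyfit(x, y, deg=2)
print(coeffs)  # approximately [-3, 2, 1]
```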
## Rewriting the fitting procedure as a linear algebra problem
For every set of values $y_i,x_i$ we have thus the corresponding set of equations
$$
\begin{align*}
y_0&=\beta_0+\beta_1x_0^1+\beta_2x_0^2+\dots+\beta_{n-1}x_0^{n-1}+\epsilon_0\\
y_1&=\beta_0+\beta_1x_1^1+\beta_2x_1^2+\dots+\beta_{n-1}x_1^{n-1}+\epsilon_1\\
y_2&=\beta_0+\beta_1x_2^1+\beta_2x_2^2+\dots+\beta_{n-1}x_2^{n-1}+\epsilon_2\\
\dots & \dots \\
y_{n-1}&=\beta_0+\beta_1x_{n-1}^1+\beta_2x_{n-1}^2+\dots+\beta_{n-1}x_{n-1}^{n-1}+\epsilon_{n-1}.\\
\end{align*}
$$
## Rewriting the fitting procedure as a linear algebra problem, more details
Defining the vectors
$$
\boldsymbol{y} = [y_0,y_1, y_2,\dots, y_{n-1}]^T,
$$
and
$$
\boldsymbol{\beta} = [\beta_0,\beta_1, \beta_2,\dots, \beta_{n-1}]^T,
$$
and
$$
\boldsymbol{\epsilon} = [\epsilon_0,\epsilon_1, \epsilon_2,\dots, \epsilon_{n-1}]^T,
$$
and the design matrix
$$
\boldsymbol{X}=
\begin{bmatrix}
1& x_{0}^1 &x_{0}^2& \dots & \dots &x_{0}^{n-1}\\
1& x_{1}^1 &x_{1}^2& \dots & \dots &x_{1}^{n-1}\\
1& x_{2}^1 &x_{2}^2& \dots & \dots &x_{2}^{n-1}\\
\dots& \dots &\dots& \dots & \dots &\dots\\
1& x_{n-1}^1 &x_{n-1}^2& \dots & \dots &x_{n-1}^{n-1}\\
\end{bmatrix}
$$
we can rewrite our equations as
$$
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
$$
The above design matrix is called a [Vandermonde matrix](https://en.wikipedia.org/wiki/Vandermonde_matrix).
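A short sketch of constructing this Vandermonde design matrix with NumPy (the sample points here are made up):

```python
import numpy as np

x = np.array([0.0, 0.5, 1.0, 1.5])   # n = 4 sample points
n = len(x)
# columns are x^0, x^1, ..., x^{n-1}, matching the matrix above
X = np.vander(x, N=n, increasing=True)
print(X)
```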
## Generalizing the fitting procedure as a linear algebra problem
We are obviously not limited to the above polynomial expansions. We
could replace the various powers of $x$ with elements of Fourier
series or instead of $x_i^j$ we could have $\cos{(j x_i)}$ or $\sin{(j
x_i)}$, or time series or other orthogonal functions. For every set
of values $y_i,x_i$ we can then generalize the equations to
$$
\begin{align*}
y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\\
y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\\
y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\\
\dots & \dots \\
y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\\
\dots & \dots \\
y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,1}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.\\
\end{align*}
$$
**Note that we have $p=n$ here, so the matrix $\boldsymbol{X}$ is square. This is generally not the case!**
## Generalizing the fitting procedure as a linear algebra problem
We redefine in turn the matrix $\boldsymbol{X}$ as
$$
\boldsymbol{X}=
\begin{bmatrix}
x_{00}& x_{01} &x_{02}& \dots & \dots &x_{0,n-1}\\
x_{10}& x_{11} &x_{12}& \dots & \dots &x_{1,n-1}\\
x_{20}& x_{21} &x_{22}& \dots & \dots &x_{2,n-1}\\
\dots& \dots &\dots& \dots & \dots &\dots\\
x_{n-1,0}& x_{n-1,1} &x_{n-1,2}& \dots & \dots &x_{n-1,n-1}\\
\end{bmatrix}
$$
and without loss of generality we rewrite again our equations as
$$
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
$$
The left-hand side of this equation is known. Our error vector $\boldsymbol{\epsilon}$ and the parameter vector $\boldsymbol{\beta}$ are our unknown quantities. How can we obtain the optimal set of $\beta_i$ values?
## Optimizing our parameters
We have defined the matrix $\boldsymbol{X}$ via the equations
$$
\begin{align*}
y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\\
y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\\
y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\\
\dots & \dots \\
y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\\
\dots & \dots \\
y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,1}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.\\
\end{align*}
$$
As we noted above, we stayed with a system with the design matrix
$\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$, that is we have $p=n$. For reasons to come later (algorithmic arguments) we will hereafter define
our matrix as $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors referring to the column numbers and the entries $n$ being the row elements.
## Our model for the nuclear binding energies
In our [introductory notes](https://compphysics.github.io/MachineLearningMSU/doc/pub/Introduction/html/Introduction.html) we looked at the so-called [liquid drop model](https://en.wikipedia.org/wiki/Semi-empirical_mass_formula). Let us remind ourselves of what we did by looking at the code.
We restate the parts of the code we are most interested in.
```
%matplotlib inline
# Common imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
    os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
    os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
    os.makedirs(DATA_ID)
def image_path(fig_id):
    return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
    return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
    plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("MassEval2016.dat"),'r')
# Read the experimental data with Pandas
Masses = pd.read_fwf(infile, usecols=(2,3,4,6,11),
names=('N', 'Z', 'A', 'Element', 'Ebinding'),
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
header=39,
index_col=False)
# Extrapolated values are indicated by '#' in place of the decimal place, so
# the Ebinding column won't be numeric. Coerce to float and drop these entries.
Masses['Ebinding'] = pd.to_numeric(Masses['Ebinding'], errors='coerce')
Masses = Masses.dropna()
# Convert from keV to MeV.
Masses['Ebinding'] /= 1000
# Group the DataFrame by nucleon number, A.
Masses = Masses.groupby('A')
# Find the rows of the grouped DataFrame with the maximum binding energy.
Masses = Masses.apply(lambda t: t[t.Ebinding==t.Ebinding.max()])
A = Masses['A']
Z = Masses['Z']
N = Masses['N']
Element = Masses['Element']
Energies = Masses['Ebinding']
# Now we set up the design matrix X
X = np.zeros((len(A),5))
X[:,0] = 1
X[:,1] = A
X[:,2] = A**(2.0/3.0)
X[:,3] = A**(-1.0/3.0)
X[:,4] = A**(-1.0)
# Then nice printout using pandas
DesignMatrix = pd.DataFrame(X)
DesignMatrix.index = A
DesignMatrix.columns = ['1', 'A', 'A^(2/3)', 'A^(-1/3)', '1/A']
display(DesignMatrix)
```
With $\boldsymbol{\beta}\in {\mathbb{R}}^{p\times 1}$, it means that we will hereafter write our equations for the approximation as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
throughout these lectures.
## Optimizing our parameters, more details
With the above we use the design matrix to define the approximation $\boldsymbol{\tilde{y}}$ via the unknown quantity $\boldsymbol{\beta}$ as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
and in order to find the optimal parameters $\beta_i$ instead of solving the above linear algebra problem, we define a function which gives a measure of the spread between the values $y_i$ (which represent hopefully the exact values) and the parameterized values $\tilde{y}_i$, namely
$$
C(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
$$
or using the matrix $\boldsymbol{X}$ and in a more compact matrix-vector notation as
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
$$
This function is one possible way to define the so-called cost function.
It is also common to define
the cost function as
$$
C(\boldsymbol{\beta})=\frac{1}{2n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2,
$$
since when taking the first derivative with respect to the unknown parameters $\beta$, the factor of $2$ cancels out.
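As a minimal sketch (with hypothetical toy data and a helper name, `cost`, of our own choosing), the cost function can be coded directly:

```python
import numpy as np

def cost(beta, X, y):
    """Mean squared error cost C(beta) = (1/n) (y - X beta)^T (y - X beta)."""
    residual = y - X @ beta
    return residual @ residual / len(y)

# Toy data lying exactly on y = 1 + 2x, so the true parameters give zero cost
X = np.c_[np.ones(4), np.arange(4.0)]
y = X @ np.array([1.0, 2.0])
print(cost(np.array([1.0, 2.0]), X, y))  # 0.0
```

Any other choice of parameters gives a strictly positive cost on these data.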
## Interpretations and optimizing our parameters
The function
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\},
$$
can be linked to the variance of the quantity $y_i$ if we interpret the latter as the mean value.
When linking with the maximum likelihood approach below, we will indeed interpret $y_i$ as a mean value (see exercises)
$$
y_{i}=\langle y_i \rangle = \beta_0x_{i,0}+\beta_1x_{i,1}+\beta_2x_{i,2}+\dots+\beta_{p-1}x_{i,p-1}+\epsilon_i,
$$
where $\langle y_i \rangle$ is the mean value. Keep in mind also that
till now we have treated $y_i$ as the exact value. Normally, the
response (dependent or outcome) variable $y_i$ is the outcome of a
numerical experiment or another type of experiment and is thus only an
approximation to the true value. It is then always accompanied by an
error estimate, often limited to a statistical error estimate given by
the standard deviation discussed earlier. In the discussion here we
will treat $y_i$ as our exact value for the response variable.
In order to find the parameters $\beta_i$ we will then minimize the spread of $C(\boldsymbol{\beta})$, that is we are going to solve the problem
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
$$
In practical terms it means we will require
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{p-1}x_{i,p-1}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_{ij}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{p-1}x_{i,p-1}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right).
$$
## Interpretations and optimizing our parameters
We can rewrite
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{X}^T\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}.
$$
We note also that since our design matrix is defined as $\boldsymbol{X}\in
{\mathbb{R}}^{n\times p}$, the product $\boldsymbol{X}^T\boldsymbol{X} \in
{\mathbb{R}}^{p\times p}$. In the above case we have $p \ll n$; with
$p=5$ we end up inverting a small $5\times 5$ matrix. This is a
rather common situation: in many cases we end up with low-dimensional
matrices to invert. The methods discussed here, like many other
supervised learning algorithms such as classification with logistic
regression or support vector machines, exhibit dimensionalities which
allow for the use of direct linear algebra methods such as **LU** decomposition or **Singular Value Decomposition** (SVD) for finding the inverse of the matrix
$\boldsymbol{X}^T\boldsymbol{X}$.
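As a side note, on synthetic data (made up here purely for illustration) one can check that solving the normal equations with `np.linalg.solve` agrees with the explicit inverse, while avoiding forming the inverse at all:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
y = X @ np.arange(1.0, p + 1) + 0.01 * rng.normal(size=n)

# Explicit inverse versus solving the linear system X^T X beta = X^T y;
# the latter is cheaper and numerically more stable
beta_inv = np.linalg.inv(X.T @ X) @ X.T @ y
beta_solve = np.linalg.solve(X.T @ X, X.T @ y)
print(np.allclose(beta_inv, beta_solve))  # True
```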
## Interpretations and optimizing our parameters
The residuals $\boldsymbol{\epsilon}$ are in turn given by
$$
\boldsymbol{\epsilon} = \boldsymbol{y}-\boldsymbol{\tilde{y}} = \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta},
$$
and with
$$
\boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0,
$$
we have
$$
\boldsymbol{X}^T\boldsymbol{\epsilon}=\boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0,
$$
meaning that the solution for $\boldsymbol{\beta}$ is the one which minimizes the residuals. Later we will link this with the maximum likelihood approach.
Let us now return to our nuclear binding energies and simply code the above equations.
## Own code for Ordinary Least Squares
It is rather straightforward to implement the matrix inversion and obtain the parameters $\boldsymbol{\beta}$. After having defined the matrix $\boldsymbol{X}$ we simply need to
write
```
# matrix inversion to find beta
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(Energies)
# and then make the prediction
ytilde = X @ beta
```
Alternatively, you can use the least squares functionality in **Numpy** as
```
fit = np.linalg.lstsq(X, Energies, rcond =None)[0]
ytildenp = np.dot(fit,X.T)
```
Finally we plot our fit and compare with the data
```
Masses['Eapprox'] = ytilde
# Generate a plot comparing the experimental and fitted values.
fig, ax = plt.subplots()
ax.set_xlabel(r'$A = N + Z$')
ax.set_ylabel(r'$E_\mathrm{bind}\,/\mathrm{MeV}$')
ax.plot(Masses['A'], Masses['Ebinding'], alpha=0.7, lw=2,
label='Ame2016')
ax.plot(Masses['A'], Masses['Eapprox'], alpha=0.7, lw=2, c='m',
label='Fit')
ax.legend()
save_fig("Masses2016OLS")
plt.show()
```
## Adding error analysis and training set up
We can easily test our fit by computing the $R^2$ score that we discussed in connection with the functionality of **Scikit-Learn** in the introductory slides.
Since we are not using **Scikit-Learn** here we can define our own $R^2$ function as
```
def R2(y_data, y_model):
    return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
```
and we would be using it as
```
print(R2(Energies,ytilde))
```
We can easily add our **MSE** score as
```
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
print(MSE(Energies,ytilde))
```
and finally the relative error as
```
def RelativeError(y_data,y_model):
return abs((y_data-y_model)/y_data)
print(RelativeError(Energies, ytilde))
```
## The $\chi^2$ function
Normally, the response (dependent or outcome) variable $y_i$ is the
outcome of a numerical experiment or another type of experiment and is
thus only an approximation to the true value. It is then always
accompanied by an error estimate, often limited to a statistical error
estimate given by the standard deviation discussed earlier. In the
discussion here we will treat $y_i$ as our exact value for the
response variable.
Introducing the standard deviation $\sigma_i$ for each measurement
$y_i$, we define now the $\chi^2$ function
as
$$
\chi^2(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\frac{\left(y_i-\tilde{y}_i\right)^2}{\sigma_i^2}=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\boldsymbol{\Sigma}^{-2}\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
$$
where the matrix $\boldsymbol{\Sigma}$ is a diagonal matrix with $\sigma_i$ as matrix elements.
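A small sketch of this weighted least-squares recipe on synthetic data (the straight-line model and the per-point error bars are invented for illustration): rescaling each row of $\boldsymbol{X}$ and each $y_i$ by $1/\sigma_i$ reduces the problem to ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0.0, 1.0, n)
X = np.c_[np.ones(n), x]                   # straight-line model
sigma = 0.1 + 0.2 * rng.random(n)          # hypothetical per-point errors
y = 2.0 + 3.0 * x + sigma * rng.normal(size=n)

# Rescale: A = X / Sigma, b = y / Sigma, then solve A^T A beta = A^T b
A = X / sigma[:, None]
b = y / sigma
beta = np.linalg.solve(A.T @ A, A.T @ b)
print(beta)  # close to the true parameters [2, 3]
```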
## The $\chi^2$ function
In order to find the parameters $\beta_i$ we will then minimize the spread of $\chi^2(\boldsymbol{\beta})$ by requiring
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}\frac{x_{ij}}{\sigma_i}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right).
$$
where we have defined the matrix $\boldsymbol{A} =\boldsymbol{X}/\boldsymbol{\Sigma}$ with matrix elements $a_{ij} = x_{ij}/\sigma_i$ and the vector $\boldsymbol{b}$ with elements $b_i = y_i/\sigma_i$.
## The $\chi^2$ function
We can rewrite
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{A}^T\boldsymbol{b} = \boldsymbol{A}^T\boldsymbol{A}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{A}^T\boldsymbol{A}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}\boldsymbol{A}^T\boldsymbol{b}.
$$
## The $\chi^2$ function
If we then introduce the matrix
$$
\boldsymbol{H} = \left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1},
$$
we have then the following expression for the parameters $\beta_j$ (the matrix elements of $\boldsymbol{H}$ are $h_{ij}$)
$$
\beta_j = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}\frac{y_i}{\sigma_i}\frac{x_{ik}}{\sigma_i} = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}b_ia_{ik}
$$
We state without proof the expression for the uncertainty in the parameters $\beta_j$ as (we leave this as an exercise)
$$
\sigma^2(\beta_j) = \sum_{i=0}^{n-1}\sigma_i^2\left( \frac{\partial \beta_j}{\partial y_i}\right)^2,
$$
resulting in
$$
\sigma^2(\beta_j) = \sum_{k=0}^{p-1}\sum_{l=0}^{p-1}h_{jk}h_{jl}\sum_{i=0}^{n-1}a_{ik}a_{il} = \sum_{k=0}^{p-1}\sum_{l=0}^{p-1}h_{jk}\left(\boldsymbol{A}^T\boldsymbol{A}\right)_{kl}h_{jl} = h_{jj}.
$$
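The result $\sigma^2(\beta_j)=h_{jj}$ can be checked numerically; here is a sketch with synthetic data (all names and numbers are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = np.linspace(0.0, 1.0, n)
X = np.c_[np.ones(n), x]
sigma = np.full(n, 0.1)                    # constant measurement errors
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)

A = X / sigma[:, None]
b = y / sigma
H = np.linalg.inv(A.T @ A)
beta = H @ A.T @ b
beta_err = np.sqrt(np.diag(H))             # sigma(beta_j) = sqrt(h_jj)
print(beta)
print(beta_err)
```

The fitted parameters land within a few `beta_err` of the true values $(1, 2)$ used to generate the data.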
## The $\chi^2$ function
The first step here is to approximate the function $y$ with a first-order polynomial, that is we write
$$
y=y(x) \rightarrow y(x_i) \approx \beta_0+\beta_1 x_i.
$$
By computing the derivatives of $\chi^2$ with respect to $\beta_0$ and $\beta_1$ show that these are given by
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_0} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0,
$$
and
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_1} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_i\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0.
$$
## The $\chi^2$ function
For a linear fit (a first-order polynomial) we don't need to invert a matrix!!
Defining
$$
\gamma = \sum_{i=0}^{n-1}\frac{1}{\sigma_i^2},
$$
$$
\gamma_x = \sum_{i=0}^{n-1}\frac{x_{i}}{\sigma_i^2},
$$
$$
\gamma_y = \sum_{i=0}^{n-1}\left(\frac{y_i}{\sigma_i^2}\right),
$$
$$
\gamma_{xx} = \sum_{i=0}^{n-1}\frac{x_ix_{i}}{\sigma_i^2},
$$
$$
\gamma_{xy} = \sum_{i=0}^{n-1}\frac{y_ix_{i}}{\sigma_i^2},
$$
we obtain
$$
\beta_0 = \frac{\gamma_{xx}\gamma_y-\gamma_x\gamma_{xy}}{\gamma\gamma_{xx}-\gamma_x^2},
$$
$$
\beta_1 = \frac{\gamma_{xy}\gamma-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}.
$$
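The closed-form expressions above translate directly into code (the function name and toy data are ours; the formulas are the standard weighted straight-line fit). On data lying exactly on a line, the fit is exact:

```python
import numpy as np

def linear_fit(x, y, sigma):
    """Weighted straight-line fit y = beta0 + beta1*x without matrix inversion."""
    w = 1.0 / sigma**2
    g, gx, gy = np.sum(w), np.sum(w * x), np.sum(w * y)
    gxx, gxy = np.sum(w * x * x), np.sum(w * x * y)
    denom = g * gxx - gx**2
    beta0 = (gxx * gy - gx * gxy) / denom
    beta1 = (gxy * g - gx * gy) / denom
    return beta0, beta1

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x                          # exact straight line
print(linear_fit(x, y, np.ones_like(x)))   # (1.0, 2.0)
```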
This approach, for both linear and non-linear regression, often
suffers from the problem being either under- or overdetermined in the
unknown coefficients $\beta_i$. A better approach is to use the
Singular Value Decomposition (SVD) method discussed below, or
Lasso and Ridge regression, also discussed below.
## Fitting an Equation of State for Dense Nuclear Matter
Before we continue, let us introduce yet another example. We are going to fit the
nuclear equation of state using results from many-body calculations.
The equation of state we have made available here, as function of
density, has been derived using modern nucleon-nucleon potentials with
[the addition of three-body
forces](https://www.sciencedirect.com/science/article/pii/S0370157399001106). This
time the file is presented as a standard **csv** file.
The beginning of the Python code here is similar to what you have seen before,
with the same initializations and declarations. We use also **pandas**
again, rather extensively in order to organize our data.
The difference now is that we use **Scikit-Learn's** regression tools
instead of our own matrix inversion implementation. Furthermore, we
sneak in **Ridge** regression (to be discussed below) which includes a
hyperparameter $\lambda$, also to be explained below.
## The code
```
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as csv file and organize the data into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytrops
X = np.zeros((len(Density),4))
X[:,3] = Density**(4.0/3.0)
X[:,2] = Density
X[:,1] = Density**(2.0/3.0)
X[:,0] = 1
# We use now Scikit-Learn's linear regressor and ridge regressor
# OLS part
clf = skl.LinearRegression().fit(X, Energies)
ytilde = clf.predict(X)
EoS['Eols'] = ytilde
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, ytilde))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, ytilde))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, ytilde))
print(clf.coef_, clf.intercept_)
# The Ridge regression with a hyperparameter lambda = 0.1
_lambda = 0.1
clf_ridge = skl.Ridge(alpha=_lambda).fit(X, Energies)
yridge = clf_ridge.predict(X)
EoS['Eridge'] = yridge
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, yridge))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, yridge))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, yridge))
print(clf_ridge.coef_, clf_ridge.intercept_)
fig, ax = plt.subplots()
ax.set_xlabel(r'$\rho[\mathrm{fm}^{-3}]$')
ax.set_ylabel(r'Energy per particle')
ax.plot(EoS['Density'], EoS['Energy'], alpha=0.7, lw=2,
label='Theoretical data')
ax.plot(EoS['Density'], EoS['Eols'], alpha=0.7, lw=2, c='m',
label='OLS')
ax.plot(EoS['Density'], EoS['Eridge'], alpha=0.7, lw=2, c='g',
label=r'Ridge $\lambda = 0.1$')
ax.legend()
save_fig("EoSfitting")
plt.show()
```
The above simple polynomial in density $\rho$ gives an excellent fit
to the data. Can you give an interpretation of the various powers of $\rho$?
We note also that there is a small deviation between the
standard OLS and the Ridge regression at higher densities. We discuss this in more detail
below.
## Splitting our Data in Training and Test data
It is normal in essentially all Machine Learning studies to split the
data in a training set and a test set (sometimes also an additional
validation set). **Scikit-Learn** has an own function for this. There
is no explicit recipe for how much data should be included as training
data and say test data. An accepted rule of thumb is to use
approximately $2/3$ to $4/5$ of the data as training data. We will
postpone a discussion of this splitting to the end of these notes and
our discussion of the so-called **bias-variance** tradeoff. Here we
limit ourselves to repeat the above equation of state fitting example
but now splitting the data into a training set and a test set.
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
def R2(y_data, y_model):
    return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as csv file and organize the data into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytrops
X = np.zeros((len(Density),5))
X[:,0] = 1
X[:,1] = Density**(2.0/3.0)
X[:,2] = Density
X[:,3] = Density**(4.0/3.0)
X[:,4] = Density**(5.0/3.0)
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, Energies, test_size=0.2)
# matrix inversion to find beta
beta = np.linalg.inv(X_train.T.dot(X_train)).dot(X_train.T).dot(y_train)
# and then make the prediction
ytilde = X_train @ beta
print("Training R2")
print(R2(y_train,ytilde))
print("Training MSE")
print(MSE(y_train,ytilde))
ypredict = X_test @ beta
print("Test R2")
print(R2(y_test,ypredict))
print("Test MSE")
print(MSE(y_test,ypredict))
```
## The singular value decomposition
The examples we have looked at so far are cases where we normally can
invert the matrix $\boldsymbol{X}^T\boldsymbol{X}$. Using a polynomial expansion as we
did both for the masses and the fitting of the equation of state,
leads to column vectors of the design matrix which are essentially
linearly independent due to the polynomial character of our model. This may
however not be the case in general and a standard matrix inversion
algorithm based on say LU decomposition may lead to singularities. We will see an example of this below when we try to fit
the coupling constant of the widely used Ising model.
There is however a way to partially circumvent this problem and also gain some insight about the ordinary least squares approach.
This is given by the **Singular Value Decomposition** algorithm, perhaps
the most powerful linear algebra algorithm. Let us look at a
different example where we may have problems with the standard matrix
inversion algorithm. Thereafter we dive into the math of the SVD.
## The Ising model
The one-dimensional Ising model with nearest neighbor interaction, no
external field and a constant coupling constant $J$ is given by
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
H = -J \sum_{k=1}^{L} s_k s_{k + 1},
\label{_auto1} \tag{1}
\end{equation}
$$
where $s_k \in \{-1, 1\}$ and $s_{L + 1} = s_1$, i.e., the chain is periodic. The number of spins
in the system is determined by $L$. For the one-dimensional system
there is no phase transition.
We will look at a system of $L = 40$ spins with a coupling constant of
$J = 1$. To get enough training data we will generate 10000 states
with their respective energies.
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
import scipy.linalg as scl
from sklearn.model_selection import train_test_split
import tqdm
sns.set(color_codes=True)
cmap_args=dict(vmin=-1., vmax=1., cmap='seismic')
L = 40
n = int(1e4)
spins = np.random.choice([-1, 1], size=(n, L))
J = 1.0
energies = np.zeros(n)
for i in range(n):
energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1))
```
Here we use ordinary least squares
regression to predict the energy for the nearest neighbor
one-dimensional Ising model on a ring, i.e., the endpoints wrap
around. We will use linear regression to fit a value for
the coupling constant to achieve this.
## Reformulating the problem to suit regression
A more general form for the one-dimensional Ising model is
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
H = - \sum_{j=1}^{L} \sum_{k=1}^{L} s_j s_k J_{jk}.
\label{_auto2} \tag{2}
\end{equation}
$$
Here we allow for interactions beyond the nearest neighbors and a state dependent
coupling constant. This latter expression can be formulated as
a matrix-product
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
\boldsymbol{H} = \boldsymbol{X} J,
\label{_auto3} \tag{3}
\end{equation}
$$
where $X_{jk} = s_j s_k$ and $J$ is a matrix which consists of the
elements $-J_{jk}$. This form of writing the energy fits perfectly
with the form utilized in linear regression, that is
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}.
\label{_auto4} \tag{4}
\end{equation}
$$
We split the data in training and test data as discussed in the previous example
```
X = np.zeros((n, L ** 2))
for i in range(n):
X[i] = np.outer(spins[i], spins[i]).ravel()
y = energies
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
```
## Linear regression
In the ordinary least squares method we choose the cost function
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
C(\boldsymbol{X}, \boldsymbol{\beta})= \frac{1}{n}\left\{(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})\right\}.
\label{_auto5} \tag{5}
\end{equation}
$$
We then find the extremal point of $C$ by taking the derivative with respect to $\boldsymbol{\beta}$ as discussed above.
This yields the expression for $\boldsymbol{\beta}$ to be
$$
\boldsymbol{\beta} = \left(\boldsymbol{X}^T \boldsymbol{X}\right)^{-1}\boldsymbol{X}^T \boldsymbol{y},
$$
which immediately imposes some requirements on $\boldsymbol{X}$ as there must exist
an inverse of $\boldsymbol{X}^T \boldsymbol{X}$. If the expression we are modeling contains an
intercept, i.e., a constant term, we must make sure that the
first column of $\boldsymbol{X}$ consists of ones. We do this here
```
X_train_own = np.concatenate(
(np.ones(len(X_train))[:, np.newaxis], X_train),
axis=1
)
X_test_own = np.concatenate(
(np.ones(len(X_test))[:, np.newaxis], X_test),
axis=1
)
def ols_inv(x: np.ndarray, y: np.ndarray) -> np.ndarray:
return scl.inv(x.T @ x) @ (x.T @ y)
beta = ols_inv(X_train_own, y_train)
```
## Singular Value decomposition
Doing the inversion directly turns out to be a bad idea since the matrix
$\boldsymbol{X}^T\boldsymbol{X}$ is singular. An alternative approach is to use the **singular
value decomposition**. Using the definition of the Moore-Penrose
pseudoinverse we can write the equation for $\boldsymbol{\beta}$ as
$$
\boldsymbol{\beta} = \boldsymbol{X}^{+}\boldsymbol{y},
$$
where the pseudoinverse of $\boldsymbol{X}$ is given by
$$
\boldsymbol{X}^{+} = \left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T.
$$
Using the singular value decomposition we can decompose the matrix as $\boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma} \boldsymbol{V}^T$,
where $\boldsymbol{U}$ and $\boldsymbol{V}$ are orthogonal (unitary) matrices and $\boldsymbol{\Sigma}$ contains the singular values (more details below).
The pseudoinverse is then $\boldsymbol{X}^{+} = \boldsymbol{V}\boldsymbol{\Sigma}^{+} \boldsymbol{U}^T$. This reduces the equation for
$\boldsymbol{\beta}$ to
<!-- Equation labels as ordinary links -->
<div id="_auto6"></div>
$$
\begin{equation}
\boldsymbol{\beta} = \boldsymbol{V}\boldsymbol{\Sigma}^{+} \boldsymbol{U}^T \boldsymbol{y}.
\label{_auto6} \tag{6}
\end{equation}
$$
Note that solving this equation by actually doing the pseudoinverse
(which is what we will do) is not a good idea as this operation scales
as $\mathcal{O}(n^3)$, where $n$ is the number of elements in a
general matrix. Instead, doing $QR$-factorization and solving the
linear system as an equation would reduce this down to
$\mathcal{O}(n^2)$ operations.
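A QR-based alternative along those lines can be sketched as follows, for a design matrix with full column rank (the function name and toy data are our own):

```python
import numpy as np

def ols_qr(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least squares via the reduced QR factorization: solve R beta = Q^T y."""
    q, r = np.linalg.qr(x)   # q: (n, p) orthonormal columns, r: (p, p) upper triangular
    return np.linalg.solve(r, q.T @ y)

# Hypothetical well-conditioned example
rng = np.random.default_rng(3)
Xdemo = rng.normal(size=(50, 3))
ydemo = Xdemo @ np.array([1.0, -2.0, 0.5])
print(ols_qr(Xdemo, ydemo))  # recovers [1, -2, 0.5]
```

Note that for a rank-deficient design matrix like the Ising one, this plain QR solve fails; the SVD-based pseudoinverse handles that case.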
```
def ols_svd(x: np.ndarray, y: np.ndarray) -> np.ndarray:
u, s, v = scl.svd(x)
return v.T @ scl.pinv(scl.diagsvd(s, u.shape[0], v.shape[0])) @ u.T @ y
beta = ols_svd(X_train_own,y_train)
```
When extracting the $J$-matrix we need to make sure that we remove the intercept, as is done here
```
J = beta[1:].reshape(L, L)
```
A way of looking at the coefficients in $J$ is to plot the matrices as images.
```
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J, **cmap_args)
plt.title("OLS", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
```
It is interesting to note that OLS
considers both $J_{j, j + 1} = -0.5$ and $J_{j, j - 1} = -0.5$ as
valid matrix elements for $J$.
In our discussion below on hyperparameters and Ridge and Lasso regression we will see that
this problem can be removed, though only with Lasso regression.
In this case the SVD-based pseudoinverse made the computation possible even though $\boldsymbol{X}^T\boldsymbol{X}$ is singular. The obvious question now is: what is the mathematics behind the SVD?
## Linear Regression Problems
One of the typical problems we encounter with linear regression, in particular
when the matrix $\boldsymbol{X}$ (our so-called design matrix) is high-dimensional,
are problems with near singular or singular matrices. The column vectors of $\boldsymbol{X}$
may be linearly dependent, normally referred to as super-collinearity.
This means that the matrix may be rank deficient and it is basically impossible
to model the data using linear regression. As an example, consider the matrix
$$
\begin{align*}
\mathbf{X} & = \left[
\begin{array}{rrr}
1 & -1 & 2
\\
1 & 0 & 1
\\
1 & 2 & -1
\\
1 & 1 & 0
\end{array} \right]
\end{align*}
$$
The columns of $\boldsymbol{X}$ are linearly dependent. We see this easily since
the first column is the row-wise sum of the other two columns. The rank (more precisely,
the column rank) of a matrix is the dimension of the space spanned by the
column vectors. Hence, the rank of $\mathbf{X}$ is equal to the number
of linearly independent columns. In this particular case the matrix has rank 2.
Super-collinearity of an $(n \times p)$-dimensional design matrix $\mathbf{X}$ implies
that the matrix $\boldsymbol{X}^T\boldsymbol{X}$ (the matrix we need to invert to solve the linear regression equations) is non-invertible. If we have a square matrix that does not have an inverse, we say this matrix is singular. The example here demonstrates this
$$
\begin{align*}
\boldsymbol{X} & = \left[
\begin{array}{rr}
1 & -1
\\
1 & -1
\end{array} \right].
\end{align*}
$$
We see easily that $\mbox{det}(\boldsymbol{X}) = x_{11} x_{22} - x_{12} x_{21} = 1 \times (-1) - 1 \times (-1) = 0$. Hence, $\mathbf{X}$ is singular and its inverse is undefined.
This is equivalent to saying that the matrix $\boldsymbol{X}$ has at least one eigenvalue which is zero.
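Both examples can be checked numerically; a quick sketch using NumPy:

```python
import numpy as np

# The 4x3 example: the first column equals the sum of the other two
X1 = np.array([[1., -1.,  2.],
               [1.,  0.,  1.],
               [1.,  2., -1.],
               [1.,  1.,  0.]])
print(np.linalg.matrix_rank(X1))   # 2

# The singular 2x2 example: zero determinant, hence no inverse
X2 = np.array([[1., -1.],
               [1., -1.]])
print(np.linalg.det(X2))           # 0.0
```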
## Fixing the singularity
If our design matrix $\boldsymbol{X}$ which enters the linear regression problem
<!-- Equation labels as ordinary links -->
<div id="_auto7"></div>
$$
\begin{equation}
\boldsymbol{\beta} = (\boldsymbol{X}^{T} \boldsymbol{X})^{-1} \boldsymbol{X}^{T} \boldsymbol{y},
\label{_auto7} \tag{7}
\end{equation}
$$
has linearly dependent column vectors, we will not be able to compute the inverse
of $\boldsymbol{X}^T\boldsymbol{X}$ and we cannot find the parameters (estimators) $\beta_i$.
The estimators are only well-defined if $(\boldsymbol{X}^{T}\boldsymbol{X})^{-1}$ exists.
This is more likely to happen when the matrix $\boldsymbol{X}$ is high-dimensional. In this case it is likely to encounter a situation where
the regression parameters $\beta_i$ cannot be estimated.
A cheap *ad hoc* approach is simply to add a small diagonal component to the matrix to invert, that is we change
$$
\boldsymbol{X}^{T} \boldsymbol{X} \rightarrow \boldsymbol{X}^{T} \boldsymbol{X}+\lambda \boldsymbol{I},
$$
where $\boldsymbol{I}$ is the identity matrix. When we discuss **Ridge** regression this is actually what we end up evaluating. The parameter $\lambda$ is called a hyperparameter. More about this later.
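A quick sketch of this ad hoc fix on the singular matrix from the previous section ($\lambda$ and the right-hand side below are arbitrary choices for illustration):

```python
import numpy as np

X = np.array([[1., -1.],
              [1., -1.]])
y = np.array([1., 2.])
XtX = X.T @ X                      # singular: np.linalg.inv(XtX) would raise an error

lam = 1e-3                         # small hyperparameter lambda
beta_ridge = np.linalg.solve(XtX + lam * np.eye(2), X.T @ y)
print(beta_ridge)                  # finite parameters despite the singular X^T X
```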
## Basic math of the SVD
From standard linear algebra we know that a square matrix $\boldsymbol{X}$ can be diagonalized if and only if it is
a so-called [normal matrix](https://en.wikipedia.org/wiki/Normal_matrix), that is if $\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$
we have $\boldsymbol{X}\boldsymbol{X}^T=\boldsymbol{X}^T\boldsymbol{X}$ or if $\boldsymbol{X}\in {\mathbb{C}}^{n\times n}$ we have $\boldsymbol{X}\boldsymbol{X}^{\dagger}=\boldsymbol{X}^{\dagger}\boldsymbol{X}$.
The matrix has then a set of eigenpairs
$$
(\lambda_1,\boldsymbol{u}_1),\dots, (\lambda_n,\boldsymbol{u}_n),
$$
and the eigenvalues are given by the diagonal matrix
$$
\boldsymbol{\Sigma}=\mathrm{Diag}(\lambda_1, \dots,\lambda_n).
$$
The matrix $\boldsymbol{X}$ can then be written in terms of an orthogonal/unitary transformation $\boldsymbol{U}$
$$
\boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{U}^T,
$$
with $\boldsymbol{U}\boldsymbol{U}^T=\boldsymbol{I}$ or $\boldsymbol{U}\boldsymbol{U}^{\dagger}=\boldsymbol{I}$.
Not all square matrices are diagonalizable. A matrix like the one discussed above
$$
\boldsymbol{X} = \begin{bmatrix}
1& -1 \\
1& -1\\
\end{bmatrix}
$$
is not diagonalizable, it is a so-called [defective matrix](https://en.wikipedia.org/wiki/Defective_matrix). It is easy to see that the condition
$\boldsymbol{X}\boldsymbol{X}^T=\boldsymbol{X}^T\boldsymbol{X}$ is not fulfilled.
## The SVD, a Fantastic Algorithm
However, and this is the strength of the SVD algorithm, any general
matrix $\boldsymbol{X}$ can be decomposed in terms of a diagonal matrix and
two orthogonal/unitary matrices. The [Singular Value Decomposition
(SVD) theorem](https://en.wikipedia.org/wiki/Singular_value_decomposition)
states that a general $m\times n$ matrix $\boldsymbol{X}$ can be written in
terms of a rectangular diagonal matrix $\boldsymbol{\Sigma}$ of dimensionality $m\times n$
and two orthogonal matrices $\boldsymbol{U}$ and $\boldsymbol{V}$, where the first has
dimensionality $m \times m$ and the last dimensionality $n\times n$.
We have then
$$
\boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T
$$
As an example, the above defective matrix can be decomposed as
$$
\boldsymbol{X} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1& 1 \\ 1& -1\\ \end{bmatrix} \begin{bmatrix} 2& 0 \\ 0& 0\\ \end{bmatrix} \frac{1}{\sqrt{2}}\begin{bmatrix} 1& -1 \\ 1& 1\\ \end{bmatrix}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T,
$$
with singular values $\sigma_1=2$ and $\sigma_2=0$.
The SVD always exists!
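We can verify this decomposition with NumPy (a quick sketch):

```python
import numpy as np

X = np.array([[1., -1.],
              [1., -1.]])
u, s, vt = np.linalg.svd(X)
print(s)                                     # singular values: [2, 0]
print(np.allclose(u @ np.diag(s) @ vt, X))   # True
```

The signs of the columns of $\boldsymbol{U}$ and $\boldsymbol{V}$ returned by NumPy may differ from the hand-written factors above; the decomposition is only unique up to such sign choices.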
## Another Example
Consider the following matrix which can be SVD decomposed as
$$
\boldsymbol{X} = \frac{1}{15}\begin{bmatrix} 14 & 2\\ 4 & 22\\ 16 & 13\end{bmatrix}=\frac{1}{3}\begin{bmatrix} 1& 2 & 2 \\ 2& -2 & 1\\ 2 & 1& -2\end{bmatrix} \begin{bmatrix} 2& 0 \\ 0& 1\\ 0 & 0\end{bmatrix}\frac{1}{5}\begin{bmatrix} 3& 4 \\ 4& -3\end{bmatrix}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T.
$$
This is a $3\times 2$ matrix which is decomposed in terms of a
$3\times 3$ matrix $\boldsymbol{U}$, a $3\times 2$ matrix $\boldsymbol{\Sigma}$ and a $2\times 2$ matrix $\boldsymbol{V}$. It is easy to see
that $\boldsymbol{U}$ and $\boldsymbol{V}$ are orthogonal (how?).
The singular values are ordered, $\sigma_i\geq\sigma_{i+1}$; for this
$3\times 2$ matrix there are only two singular values, $\sigma_1=2$ and
$\sigma_2=1$, and the remaining entries of $\boldsymbol{\Sigma}$ are zero.
In the general case, where our design matrix $\boldsymbol{X}$ has dimension
$n\times p$, the matrix is thus decomposed into an $n\times n$
orthogonal matrix $\boldsymbol{U}$, a $p\times p$ orthogonal matrix $\boldsymbol{V}$
and a diagonal matrix $\boldsymbol{\Sigma}$ with $r=\mathrm{min}(n,p)$
singular values $\sigma_i\geq 0$ on the main diagonal and zeros filling
the rest of the matrix. There are at most $p$ singular values
assuming that $n > p$. In our regression examples for the nuclear
masses and the equation of state this is indeed the case, while for
the Ising model we have $p > n$. These are often cases that lead to
near singular or singular matrices.
The columns of $\boldsymbol{U}$ are called the left singular vectors while the columns of $\boldsymbol{V}$ are the right singular vectors.
## Economy-size SVD
If we assume that $n > p$, then our matrix $\boldsymbol{U}$ has dimension $n
\times n$. The last $n-p$ columns of $\boldsymbol{U}$ become however
irrelevant in our calculations since they are multiplied with the
zeros in $\boldsymbol{\Sigma}$.
The economy-size decomposition removes extra rows or columns of zeros
from the diagonal matrix of singular values, $\boldsymbol{\Sigma}$, along with the columns
in either $\boldsymbol{U}$ or $\boldsymbol{V}$ that multiply those zeros in the expression.
Removing these zeros and columns can improve execution time
and reduce storage requirements without compromising the accuracy of
the decomposition.
If $n > p$, we keep only the first $p$ columns of $\boldsymbol{U}$ and $\boldsymbol{\Sigma}$ has dimension $p\times p$.
If $p > n$, then only the first $n$ columns of $\boldsymbol{V}$ are computed and $\boldsymbol{\Sigma}$ has dimension $n\times n$.
The $n=p$ case is obvious, we retain the full SVD.
In general the economy-size SVD requires fewer FLOPs while conserving the desired accuracy.
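As a sketch, using NumPy's `np.linalg.svd` with the `full_matrices` flag (the matrix here is just random test data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))  # n > p

# Full SVD: U is 6x6, but its last n-p columns multiply zeros in Sigma
U, s, VT = np.linalg.svd(X)
print(U.shape, s.shape, VT.shape)   # (6, 6) (3,) (3, 3)

# Economy-size SVD: keep only the first p columns of U
Ue, se, VTe = np.linalg.svd(X, full_matrices=False)
print(Ue.shape)                     # (6, 3)

# Both reconstruct X exactly; Ue * se scales column j of Ue by sigma_j
print(np.allclose(Ue * se @ VTe, X))  # True
```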
## Mathematical Properties
There are several interesting mathematical properties which will be
relevant when we are going to discuss the differences between say
ordinary least squares (OLS) and **Ridge** regression.
We have from OLS that the parameters of the linear approximation are given by
$$
\boldsymbol{\tilde{y}} = \boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}.
$$
The matrix to invert can be rewritten in terms of our SVD decomposition as
$$
\boldsymbol{X}^T\boldsymbol{X} = \boldsymbol{V}\boldsymbol{\Sigma}^T\boldsymbol{U}^T\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T.
$$
Using the orthogonality properties of $\boldsymbol{U}$ we have
$$
\boldsymbol{X}^T\boldsymbol{X} = \boldsymbol{V}\boldsymbol{\Sigma}^T\boldsymbol{\Sigma}\boldsymbol{V}^T = \boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T,
$$
with $\boldsymbol{D}$ being a diagonal matrix with values along the diagonal given by the singular values squared.
This means that
$$
(\boldsymbol{X}^T\boldsymbol{X})\boldsymbol{V} = \boldsymbol{V}\boldsymbol{D},
$$
that is the eigenvectors of $(\boldsymbol{X}^T\boldsymbol{X})$ are given by the columns of the right singular matrix of $\boldsymbol{X}$ and the eigenvalues are the squared singular values. It is easy to show (show this) that
$$
(\boldsymbol{X}\boldsymbol{X}^T)\boldsymbol{U} = \boldsymbol{U}\boldsymbol{D},
$$
that is, the eigenvectors of $\boldsymbol{X}\boldsymbol{X}^T$ are the columns of the left singular matrix and the eigenvalues are the same.
Going back to our OLS equation we have
$$
\boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\boldsymbol{U}\boldsymbol{U}^T\boldsymbol{y}.
$$
We will come back to this expression when we discuss Ridge regression.
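The last identity, $\boldsymbol{X}\boldsymbol{\beta}=\boldsymbol{U}\boldsymbol{U}^T\boldsymbol{y}$, is easy to verify numerically; a small sketch with random test data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# OLS prediction via the normal equations
beta = np.linalg.solve(X.T @ X, X.T @ y)
ytilde = X @ beta

# Same prediction from the economy-size SVD: X beta = U U^T y
U, s, VT = np.linalg.svd(X, full_matrices=False)
print(np.allclose(ytilde, U @ (U.T @ y)))  # True
```

Geometrically, $\boldsymbol{U}\boldsymbol{U}^T$ projects $\boldsymbol{y}$ onto the column space of $\boldsymbol{X}$, which is exactly what the OLS fit does.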
## Ridge and LASSO Regression
Let us remind ourselves about the expression for the standard Mean Squared Error (MSE) which we used to define our cost function and the equations for the ordinary least squares (OLS) method, that is
our optimization problem is
$$
{\displaystyle \min_{\boldsymbol{\beta}\in {\mathbb{R}}^{p}}}\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right\}.
$$
or we can state it as
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2,
$$
where we have used the definition of a norm-2 vector, that is
$$
\vert\vert \boldsymbol{x}\vert\vert_2 = \sqrt{\sum_i x_i^2}.
$$
By minimizing the above equation with respect to the parameters
$\boldsymbol{\beta}$ we could then obtain an analytical expression for the
parameters $\boldsymbol{\beta}$. We can add a regularization parameter $\lambda$ by
defining a new cost function to be optimized, that is
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_2^2
$$
which leads to the Ridge regression minimization problem where we
require that $\vert\vert \boldsymbol{\beta}\vert\vert_2^2\le t$, where $t$ is
a finite number larger than zero. By defining
$$
C(\boldsymbol{X},\boldsymbol{\beta})=\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_1,
$$
we have a new optimization equation
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\vert\vert \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\vert\vert_2^2+\lambda\vert\vert \boldsymbol{\beta}\vert\vert_1
$$
which leads to Lasso regression. Lasso stands for least absolute shrinkage and selection operator.
Here we have defined the norm-1 as
$$
\vert\vert \boldsymbol{x}\vert\vert_1 = \sum_i \vert x_i\vert.
$$
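As a practical note, Ridge has a closed-form solution (derived in the next section), while Lasso has no closed form and is normally solved iteratively (scikit-learn's `Lasso`, for example, uses coordinate descent). A minimal NumPy sketch of the Ridge case with random toy data, showing that the $L_2$ penalty shrinks the coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 5
X = rng.standard_normal((n, p))
y = X @ np.arange(1.0, p + 1) + 0.1 * rng.standard_normal(n)

lmbda = 1.0
# Ridge closed form: (X^T X + lambda I)^{-1} X^T y
beta_ridge = np.linalg.solve(X.T @ X + lmbda * np.eye(p), X.T @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# The penalty shrinks the L2 norm of the coefficients
print(np.linalg.norm(beta_ridge) < np.linalg.norm(beta_ols))  # True
```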
## More on Ridge Regression
Using the matrix-vector expression for Ridge regression,
$$
C(\boldsymbol{X},\boldsymbol{\beta})=\frac{1}{n}\left\{(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})^T(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})\right\}+\lambda\boldsymbol{\beta}^T\boldsymbol{\beta},
$$
by taking the derivatives with respect to $\boldsymbol{\beta}$ we obtain then
a slightly modified matrix inversion problem which for finite values
of $\lambda$ does not suffer from singularity problems. We obtain
$$
\boldsymbol{\beta}^{\mathrm{Ridge}} = \left(\boldsymbol{X}^T\boldsymbol{X}+\lambda\boldsymbol{I}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y},
$$
with $\boldsymbol{I}$ being a $p\times p$ identity matrix with the constraint that
$$
\sum_{i=0}^{p-1} \beta_i^2 \leq t,
$$
with $t$ a finite positive number.
We see that Ridge regression is nothing but the standard
OLS with a modified diagonal term added to $\boldsymbol{X}^T\boldsymbol{X}$. The
consequences, in particular for our discussion of the bias-variance
tradeoff, are rather interesting.
Furthermore, if we use the result above in terms of the SVD decomposition (our analysis was done for the OLS method), we had
$$
(\boldsymbol{X}\boldsymbol{X}^T)\boldsymbol{U} = \boldsymbol{U}\boldsymbol{D}.
$$
We can analyse the OLS solutions in terms of the eigenvectors (the columns) of the left singular matrix $\boldsymbol{U}$ as
$$
\boldsymbol{X}\boldsymbol{\beta} = \boldsymbol{X}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\boldsymbol{U}\boldsymbol{U}^T\boldsymbol{y}
$$
For Ridge regression this becomes
$$
\boldsymbol{X}\boldsymbol{\beta}^{\mathrm{Ridge}} = \boldsymbol{U\Sigma V^T}\left(\boldsymbol{V}\boldsymbol{D}\boldsymbol{V}^T+\lambda\boldsymbol{I} \right)^{-1}(\boldsymbol{U\Sigma V^T})^T\boldsymbol{y}=\sum_{j=0}^{p-1}\boldsymbol{u}_j\boldsymbol{u}_j^T\frac{\sigma_j^2}{\sigma_j^2+\lambda}\boldsymbol{y},
$$
with the vectors $\boldsymbol{u}_j$ being the columns of $\boldsymbol{U}$.
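The two expressions for the Ridge fit, direct matrix inversion and the SVD shrinkage sum, agree numerically; a small sketch with random test data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 4
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lmbda = 0.5

# Direct Ridge prediction via the closed-form solution
beta = np.linalg.solve(X.T @ X + lmbda * np.eye(p), X.T @ y)
pred_direct = X @ beta

# Same prediction from the shrinkage factors sigma_j^2 / (sigma_j^2 + lambda)
U, s, VT = np.linalg.svd(X, full_matrices=False)
shrink = s**2 / (s**2 + lmbda)
pred_svd = U @ (shrink * (U.T @ y))
print(np.allclose(pred_direct, pred_svd))  # True
```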
## Interpreting the Ridge results
Since $\lambda \geq 0$, it means that compared to OLS, we have
$$
\frac{\sigma_j^2}{\sigma_j^2+\lambda} \leq 1.
$$
Ridge regression finds the coordinates of $\boldsymbol{y}$ with respect to the
orthonormal basis $\boldsymbol{U}$, it then shrinks the coordinates by
$\frac{\sigma_j^2}{\sigma_j^2+\lambda}$. Recall that the SVD has
singular values ordered in a descending way, that is $\sigma_i \geq
\sigma_{i+1}$.
For small singular values $\sigma_i$ it means that their contributions become less important, a fact which can be used to reduce the number of degrees of freedom.
Actually, calculating the variance of $\boldsymbol{X}\boldsymbol{v}_j$ shows that this quantity is equal to $\sigma_j^2/n$.
With a parameter $\lambda$ we can thus shrink the role of specific parameters.
## More interpretations
For the sake of simplicity, let us assume that the design matrix is orthonormal, that is
$$
\boldsymbol{X}^T\boldsymbol{X}=(\boldsymbol{X}^T\boldsymbol{X})^{-1} =\boldsymbol{I}.
$$
In this case the standard OLS results in
$$
\boldsymbol{\beta}^{\mathrm{OLS}} = \boldsymbol{X}^T\boldsymbol{y}=\sum_{j=0}^{p-1}\boldsymbol{u}_j\boldsymbol{u}_j^T\boldsymbol{y},
$$
and
$$
\boldsymbol{\beta}^{\mathrm{Ridge}} = \left(\boldsymbol{I}+\lambda\boldsymbol{I}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}=\left(1+\lambda\right)^{-1}\boldsymbol{\beta}^{\mathrm{OLS}},
$$
that is the Ridge estimator scales the OLS estimator by the inverse of a factor $1+\lambda$, and
the Ridge estimator converges to zero when the hyperparameter goes to
infinity.
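A quick numerical illustration, where we manufacture an orthonormal design matrix from a QR factorization (a toy construction, just for this check):

```python
import numpy as np

rng = np.random.default_rng(4)
# Q has orthonormal columns, so Q^T Q = I
Q, _ = np.linalg.qr(rng.standard_normal((30, 4)))
y = rng.standard_normal(30)
lmbda = 2.0

beta_ols = Q.T @ y  # OLS for an orthonormal design
beta_ridge = np.linalg.solve(Q.T @ Q + lmbda * np.eye(4), Q.T @ y)

# Ridge is OLS scaled by 1/(1 + lambda)
print(np.allclose(beta_ridge, beta_ols / (1 + lmbda)))  # True
```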
We will come back to more interpretations after we have gone through some of the statistical analysis part.
For more discussions of Ridge and Lasso regression, [Wessel van Wieringen's](https://arxiv.org/abs/1509.09169) article is highly recommended.
Similarly, [Mehta et al's article](https://arxiv.org/abs/1803.08823) is also recommended.
## Where are we going?
Before we proceed, we need to rethink what we have been doing. In our
eagerness to fit the data, we have omitted several important elements in
our regression analysis. In what follows we will
1. look at statistical properties, including a discussion of mean values, variance and the so-called bias-variance tradeoff
2. introduce resampling techniques like cross-validation, bootstrapping and jackknife and more
This will allow us to link the standard linear algebra methods we have discussed above to a statistical interpretation of the methods.
## Resampling methods
Resampling methods are an indispensable tool in modern
statistics. They involve repeatedly drawing samples from a training
set and refitting a model of interest on each sample in order to
obtain additional information about the fitted model. For example, in
order to estimate the variability of a linear regression fit, we can
repeatedly draw different samples from the training data, fit a linear
regression to each new sample, and then examine the extent to which
the resulting fits differ. Such an approach may allow us to obtain
information that would not be available from fitting the model only
once using the original training sample.
## Resampling approaches can be computationally expensive
Resampling approaches can be computationally expensive, because they
involve fitting the same statistical method multiple times using
different subsets of the training data. However, due to recent
advances in computing power, the computational requirements of
resampling methods generally are not prohibitive. In this chapter, we
discuss two of the most commonly used resampling methods,
cross-validation and the bootstrap. Both methods are important tools
in the practical application of many statistical learning
procedures. For example, cross-validation can be used to estimate the
test error associated with a given statistical learning method in
order to evaluate its performance, or to select the appropriate level
of flexibility. The process of evaluating a model’s performance is
known as model assessment, whereas the process of selecting the proper
level of flexibility for a model is known as model selection. The
bootstrap is used in several contexts, most commonly to provide a
measure of accuracy of a parameter estimate or of a given statistical
learning method.
## Why resampling methods ?
**Statistical analysis.**
* Our simulations can be treated as *computer experiments*. This is particularly the case for Monte Carlo methods
* The results can be analysed with the same statistical tools as we would use analysing experimental data.
* As in all experiments, we are looking for expectation values and an estimate of how accurate they are, i.e., possible sources for errors.
## Statistical analysis
* As in other experiments, many numerical experiments have two classes of errors:
* Statistical errors
  * Systematic errors
* Statistical errors can be estimated using standard tools from statistics
* Systematic errors are method specific and must be treated differently from case to case.
## Statistics
The *probability distribution function (PDF)* is a function
$p(x)$ on the domain which, in the discrete case, gives us the
probability or relative frequency with which these values of $X$ occur:
$$
p(x) = \mathrm{prob}(X=x)
$$
In the continuous case, the PDF does not directly depict the
actual probability. Instead we define the probability for the
stochastic variable to assume any value on an infinitesimal interval
around $x$ to be $p(x)dx$. The continuous function $p(x)$ then gives us
the *density* of the probability rather than the probability
itself. The probability for a stochastic variable to assume any value
on a non-infinitesimal interval $[a,\,b]$ is then just the integral:
$$
\mathrm{prob}(a\leq X\leq b) = \int_a^b p(x)dx
$$
Qualitatively speaking, a stochastic variable represents the values of
numbers chosen as if by chance from some specified PDF so that the
selection of a large set of these numbers reproduces this PDF.
## Statistics, moments
A particularly useful class of special expectation values are the
*moments*. The $n$-th moment of the PDF $p$ is defined as
follows:
$$
\langle x^n\rangle \equiv \int\! x^n p(x)\,dx
$$
The zero-th moment $\langle 1\rangle$ is just the normalization condition of
$p$. The first moment, $\langle x\rangle$, is called the *mean* of $p$
and often denoted by the letter $\mu$:
$$
\langle x\rangle = \mu \equiv \int\! x p(x)\,dx
$$
## Statistics, central moments
A special version of the moments is the set of *central moments*,
the n-th central moment defined as:
$$
\langle (x-\langle x \rangle )^n\rangle \equiv \int\! (x-\langle x\rangle)^n p(x)\,dx
$$
The zero-th and first central moments are both trivial, equal $1$ and
$0$, respectively. But the second central moment, known as the
*variance* of $p$, is of particular interest. For the stochastic
variable $X$, the variance is denoted as $\sigma^2_X$ or $\mathrm{var}(X)$:
<!-- Equation labels as ordinary links -->
<div id="_auto8"></div>
$$
\begin{equation}
\sigma^2_X\ \ =\ \ \mathrm{var}(X) = \langle (x-\langle x\rangle)^2\rangle =
\int\! (x-\langle x\rangle)^2 p(x)\,dx
\label{_auto8} \tag{8}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto9"></div>
$$
\begin{equation}
= \int\! \left(x^2 - 2 x \langle x\rangle +
\langle x\rangle^2\right)p(x)\,dx
\label{_auto9} \tag{9}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto10"></div>
$$
\begin{equation}
= \langle x^2\rangle - 2 \langle x\rangle\langle x\rangle + \langle x\rangle^2
\label{_auto10} \tag{10}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto11"></div>
$$
\begin{equation}
= \langle x^2\rangle - \langle x\rangle^2
\label{_auto11} \tag{11}
\end{equation}
$$
The square root of the variance, $\sigma =\sqrt{\langle (x-\langle x\rangle)^2\rangle}$ is called the *standard deviation* of $p$. It is clearly just the RMS (root-mean-square)
value of the deviation of the PDF from its mean value, interpreted
qualitatively as the *spread* of $p$ around its mean.
## Statistics, covariance
Another important quantity is the so called covariance, a variant of
the above defined variance. Consider again the set $\{X_i\}$ of $n$
stochastic variables (not necessarily uncorrelated) with the
multivariate PDF $P(x_1,\dots,x_n)$. The *covariance* of two
of the stochastic variables, $X_i$ and $X_j$, is defined as follows:
$$
\mathrm{cov}(X_i,\,X_j) \equiv \langle (x_i-\langle x_i\rangle)(x_j-\langle x_j\rangle)\rangle
\nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="eq:def_covariance"></div>
$$
\begin{equation}
=
\int\!\cdots\!\int\!(x_i-\langle x_i \rangle)(x_j-\langle x_j \rangle)\,
P(x_1,\dots,x_n)\,dx_1\dots dx_n
\label{eq:def_covariance} \tag{12}
\end{equation}
$$
with
$$
\langle x_i\rangle =
\int\!\cdots\!\int\!x_i\,P(x_1,\dots,x_n)\,dx_1\dots dx_n
$$
## Statistics, more covariance
If we consider the above covariance as a matrix $C_{ij}=\mathrm{cov}(X_i,\,X_j)$, then the diagonal elements are just the familiar
variances, $C_{ii} = \mathrm{cov}(X_i,\,X_i) = \mathrm{var}(X_i)$. It turns out that
all the off-diagonal elements are zero if the stochastic variables are
uncorrelated. This is easy to show, keeping in mind the linearity of
the expectation value. Consider the stochastic variables $X_i$ and
$X_j$, ($i\neq j$):
<!-- Equation labels as ordinary links -->
<div id="_auto12"></div>
$$
\begin{equation}
\mathrm{cov}(X_i,\,X_j) = \langle(x_i-\langle x_i\rangle)(x_j-\langle x_j\rangle)\rangle
\label{_auto12} \tag{13}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto13"></div>
$$
\begin{equation}
=\langle x_i x_j - x_i\langle x_j\rangle - \langle x_i\rangle x_j + \langle x_i\rangle\langle x_j\rangle\rangle
\label{_auto13} \tag{14}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto14"></div>
$$
\begin{equation}
=\langle x_i x_j\rangle - \langle x_i\langle x_j\rangle\rangle - \langle \langle x_i\rangle x_j\rangle +
\langle \langle x_i\rangle\langle x_j\rangle\rangle
\label{_auto14} \tag{15}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto15"></div>
$$
\begin{equation}
=\langle x_i x_j\rangle - \langle x_i\rangle\langle x_j\rangle - \langle x_i\rangle\langle x_j\rangle +
\langle x_i\rangle\langle x_j\rangle
\label{_auto15} \tag{16}
\end{equation}
$$
<!-- Equation labels as ordinary links -->
<div id="_auto16"></div>
$$
\begin{equation}
=\langle x_i x_j\rangle - \langle x_i\rangle\langle x_j\rangle
\label{_auto16} \tag{17}
\end{equation}
$$
## Statistics, independent variables
If $X_i$ and $X_j$ are independent, we get
$\langle x_i x_j\rangle =\langle x_i\rangle\langle x_j\rangle$, resulting in $\mathrm{cov}(X_i, X_j) = 0\ \ (i\neq j)$.
Also useful for us is the covariance of linear combinations of
stochastic variables. Let $\{X_i\}$ and $\{Y_i\}$ be two sets of
stochastic variables. Let also $\{a_i\}$ and $\{b_i\}$ be two sets of
scalars. Consider the linear combination:
$$
U = \sum_i a_i X_i \qquad V = \sum_j b_j Y_j
$$
By the linearity of the expectation value
$$
\mathrm{cov}(U, V) = \sum_{i,j}a_i b_j \mathrm{cov}(X_i, Y_j)
$$
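This bilinearity holds exactly also for sample covariances; a small NumPy check with arbitrary data and coefficients:

```python
import numpy as np

rng = np.random.default_rng(5)
# Samples of the stochastic variables X_i and Y_j (one row per variable)
Xs = rng.standard_normal((3, 1000))
Ys = rng.standard_normal((2, 1000))
a = np.array([1.0, -2.0, 0.5])
b = np.array([3.0, 1.0])

# Sample covariance of the linear combinations U = sum_i a_i X_i, V = sum_j b_j Y_j
Us = a @ Xs
Vs = b @ Ys
cov_UV = np.cov(Us, Vs)[0, 1]

# Bilinearity: cov(U, V) = sum_ij a_i b_j cov(X_i, Y_j)
C = np.cov(np.vstack([Xs, Ys]))[:3, 3:]   # cross-covariances cov(X_i, Y_j)
print(np.allclose(cov_UV, a @ C @ b))  # True
```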
## Statistics, more variance
Now, since the variance is just $\mathrm{var}(X_i) = \mathrm{cov}(X_i, X_i)$, we get
the variance of the linear combination $U = \sum_i a_i X_i$:
<!-- Equation labels as ordinary links -->
<div id="eq:variance_linear_combination"></div>
$$
\begin{equation}
\mathrm{var}(U) = \sum_{i,j}a_i a_j \mathrm{cov}(X_i, X_j)
\label{eq:variance_linear_combination} \tag{18}
\end{equation}
$$
And in the special case when the stochastic variables are
uncorrelated, the off-diagonal elements of the covariance are as we
know zero, resulting in:
$$
\mathrm{var}(\sum_i a_i X_i) = \sum_i a_i^2 \mathrm{var}(X_i)
$$
which will become very useful in our study of the error in the mean
value of a set of measurements.
## Statistics and stochastic processes
A *stochastic process* is a process that produces sequentially a
chain of values:
$$
\{x_1, x_2,\dots\,x_k,\dots\}.
$$
We will call these
values our *measurements* and the entire set as our measured
*sample*. The action of measuring all the elements of a sample
we will call a stochastic *experiment* since, operationally,
they are often associated with results of empirical observation of
some physical or mathematical phenomena; precisely an experiment. We
assume that these values are distributed according to some
PDF $p_X^{\phantom X}(x)$, where $X$ is just the formal symbol for the
stochastic variable whose PDF is $p_X^{\phantom X}(x)$. Instead of
trying to determine the full distribution $p$ we are often only
interested in finding the few lowest moments, like the mean
$\mu_X^{\phantom X}$ and the variance $\sigma_X^{2}$.
<!-- !split -->
## Statistics and sample variables
In practical situations a sample is always of finite size. Let that
size be $n$. The expectation value of a sample, the *sample mean*, is then defined as follows:
$$
\bar{x}_n \equiv \frac{1}{n}\sum_{k=1}^n x_k
$$
The *sample variance* is:
$$
\mathrm{var}(x) \equiv \frac{1}{n}\sum_{k=1}^n (x_k - \bar{x}_n)^2
$$
its square root being the *standard deviation of the sample*. The
*sample covariance* is:
$$
\mathrm{cov}(x)\equiv\frac{1}{n}\sum_{kl}(x_k - \bar{x}_n)(x_l - \bar{x}_n)
$$
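In NumPy the first two of these sample quantities are one-liners; note the $1/n$ convention used in the text, which corresponds to `np.var` with `ddof=0`:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])
n = len(x)

xbar = np.sum(x) / n             # sample mean
var = np.sum((x - xbar)**2) / n  # sample variance, 1/n convention

print(xbar)                                # 3.5
print(var)                                 # 5.25
print(np.isclose(var, np.var(x, ddof=0)))  # True
```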
## Statistics, sample variance and covariance
Note that the sample variance is the sample covariance without the
cross terms. In a similar manner as the covariance in Eq. ([12](#eq:def_covariance)) is a measure of the correlation between
two stochastic variables, the above defined sample covariance is a
measure of the sequential correlation between succeeding measurements
of a sample.
These quantities, being known experimental values, differ
significantly from and must not be confused with the similarly named
quantities for stochastic variables, mean $\mu_X$, variance $\mathrm{var}(X)$
and covariance $\mathrm{cov}(X,Y)$.
## Statistics, law of large numbers
The law of large numbers
states that as the size of our sample grows to infinity, the sample
mean approaches the true mean $\mu_X^{\phantom X}$ of the chosen PDF:
$$
\lim_{n\to\infty}\bar{x}_n = \mu_X^{\phantom X}
$$
The sample mean $\bar{x}_n$ works therefore as an estimate of the true
mean $\mu_X^{\phantom X}$.
What we need to find out is how good an approximation $\bar{x}_n$ is to
$\mu_X^{\phantom X}$. In any stochastic measurement, an estimated
mean is of no use to us without a measure of its error, a quantity
that tells us how well we can reproduce it in another experiment. We
are therefore interested in the PDF of the sample mean itself. Its
standard deviation will be a measure of the spread of sample means,
and we will simply call it the *error* of the sample mean, or
just sample error, and denote it by $\mathrm{err}_X^{\phantom X}$. In
practice, we will only be able to produce an *estimate* of the
sample error since the exact value would require the knowledge of the
true PDFs behind, which we usually do not have.
## Statistics, more on sample error
Let us first take a look at what happens to the sample error as the
size of the sample grows. In a sample, each of the measurements $x_i$
can be associated with its own stochastic variable $X_i$. The
stochastic variable $\overline X_n$ for the sample mean $\bar{x}_n$ is
then just a linear combination, already familiar to us:
$$
\overline X_n = \frac{1}{n}\sum_{i=1}^n X_i
$$
All the coefficients are just equal to $1/n$. The PDF of $\overline X_n$,
denoted by $p_{\overline X_n}(x)$ is the desired PDF of the sample
means.
## Statistics
The probability density of obtaining a sample mean $\bar x_n$
is the product of probabilities of obtaining arbitrary values $x_1,
x_2,\dots,x_n$ with the constraint that the mean of the set $\{x_i\}$
is $\bar x_n$:
$$
p_{\overline X_n}(x) = \int p_X^{\phantom X}(x_1)\cdots
\int p_X^{\phantom X}(x_n)\
\delta\!\left(x - \frac{x_1+x_2+\dots+x_n}{n}\right)dx_n \cdots dx_1
$$
And in particular we are interested in its variance $\mathrm{var}(\overline X_n)$.
## Statistics, central limit theorem
It is generally not possible to express $p_{\overline X_n}(x)$ in a
closed form given an arbitrary PDF $p_X^{\phantom X}$ and a number
$n$. But for the limit $n\to\infty$ it is possible to make an
approximation. The very important result is called *the central limit theorem*. It tells us that as $n$ goes to infinity,
$p_{\overline X_n}(x)$ approaches a Gaussian distribution whose mean
and variance equal the true mean and variance, $\mu_{X}^{\phantom X}$
and $\sigma_{X}^{2}$, respectively:
<!-- Equation labels as ordinary links -->
<div id="eq:central_limit_gaussian"></div>
$$
\begin{equation}
\lim_{n\to\infty} p_{\overline X_n}(x) =
\left(\frac{n}{2\pi\mathrm{var}(X)}\right)^{1/2}
e^{-\frac{n(x-\bar x_n)^2}{2\mathrm{var}(X)}}
\label{eq:central_limit_gaussian} \tag{19}
\end{equation}
$$
## Statistics, more technicalities
The desired variance
$\mathrm{var}(\overline X_n)$, i.e. the sample error squared
$\mathrm{err}_X^2$, is given by:
<!-- Equation labels as ordinary links -->
<div id="eq:error_exact"></div>
$$
\begin{equation}
\mathrm{err}_X^2 = \mathrm{var}(\overline X_n) = \frac{1}{n^2}
\sum_{ij} \mathrm{cov}(X_i, X_j)
\label{eq:error_exact} \tag{20}
\end{equation}
$$
We see now that in order to calculate the exact error of the sample
with the above expression, we would need the true means
$\mu_{X_i}^{\phantom X}$ of the stochastic variables $X_i$. To
calculate these requires that we know the true multivariate PDF of all
the $X_i$. But this PDF is unknown to us; we only have the measurements of
one sample. The best we can do is to let the sample itself be an
estimate of the PDF of each of the $X_i$, estimating all properties of
$X_i$ through the measurements of the sample.
## Statistics
Our estimate of $\mu_{X_i}^{\phantom X}$ is then the sample mean $\bar x$
itself, in accordance with the central limit theorem:
$$
\mu_{X_i}^{\phantom X} = \langle x_i\rangle \approx \frac{1}{n}\sum_{k=1}^n x_k = \bar x
$$
Using $\bar x$ in place of $\mu_{X_i}^{\phantom X}$ we can give an
*estimate* of the covariance in Eq. ([20](#eq:error_exact))
$$
\mathrm{cov}(X_i, X_j) = \langle (x_i-\langle x_i\rangle)(x_j-\langle x_j\rangle)\rangle
\approx\langle (x_i - \bar x)(x_j - \bar{x})\rangle,
$$
resulting in
$$
\frac{1}{n} \sum_{l}^n \left(\frac{1}{n}\sum_{k}^n (x_k -\bar x_n)(x_l - \bar x_n)\right)=\frac{1}{n}\frac{1}{n} \sum_{kl} (x_k -\bar x_n)(x_l - \bar x_n)=\frac{1}{n}\mathrm{cov}(x)
$$
## Statistics and sample variance
By the same procedure we can use the sample variance as an
estimate of the variance of any of the stochastic variables $X_i$
$$
\mathrm{var}(X_i)=\langle (x_i - \langle x_i\rangle)^2\rangle \approx \langle (x_i - \bar x_n)^2\rangle\nonumber,
$$
which is approximated as
<!-- Equation labels as ordinary links -->
<div id="eq:var_estimate_i_think"></div>
$$
\begin{equation}
\mathrm{var}(X_i)\approx \frac{1}{n}\sum_{k=1}^n (x_k - \bar x_n)^2=\mathrm{var}(x)
\label{eq:var_estimate_i_think} \tag{21}
\end{equation}
$$
Now we can calculate an estimate of the error
$\mathrm{err}_X^{\phantom X}$ of the sample mean $\bar x_n$:
$$
\mathrm{err}_X^2
=\frac{1}{n^2}\sum_{ij} \mathrm{cov}(X_i, X_j) \nonumber
$$
$$
\approx\frac{1}{n^2}\sum_{ij}\frac{1}{n}\mathrm{cov}(x) =\frac{1}{n^2}n^2\frac{1}{n}\mathrm{cov}(x)\nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="eq:error_estimate"></div>
$$
\begin{equation}
=\frac{1}{n}\mathrm{cov}(x)
\label{eq:error_estimate} \tag{22}
\end{equation}
$$
which is nothing but the sample covariance divided by the number of
measurements in the sample.
## Statistics, uncorrelated results
In the special case that the measurements of the sample are
uncorrelated (equivalently the stochastic variables $X_i$ are
uncorrelated) we have that the off-diagonal elements of the covariance
are zero. This gives the following estimate of the sample error:
$$
\mathrm{err}_X^2=\frac{1}{n^2}\sum_{ij} \mathrm{cov}(X_i, X_j) =
\frac{1}{n^2} \sum_i \mathrm{var}(X_i),
$$
resulting in
<!-- Equation labels as ordinary links -->
<div id="eq:error_estimate_uncorrel"></div>
$$
\begin{equation}
\mathrm{err}_X^2\approx \frac{1}{n^2} \sum_i \mathrm{var}(x)= \frac{1}{n}\mathrm{var}(x)
\label{eq:error_estimate_uncorrel} \tag{23}
\end{equation}
$$
where in the second step we have used Eq. ([21](#eq:var_estimate_i_think)).
The error of the sample is then just its standard deviation divided by
the square root of the number of measurements the sample contains.
This is a very useful formula which is easy to compute. It acts as a
first approximation to the error, but in numerical experiments, we
cannot overlook the always present correlations.
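A small simulation sketch of this formula with Gaussian toy data; the second check re-draws many independent samples and compares the spread of their means with the estimate:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(loc=5.0, scale=2.0, size=n)

# Estimated error of the sample mean for uncorrelated data:
# err_X = std(x) / sqrt(n)
err = np.std(x) / np.sqrt(n)
print(err)  # close to 2.0 / sqrt(1000)

# Sanity check: the spread of many independent sample means
means = rng.normal(5.0, 2.0, size=(2000, n)).mean(axis=1)
print(np.std(means))  # comparable to err
```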
## Statistics, computations
For computational purposes one usually splits up the estimate of
$\mathrm{err}_X^2$, given by Eq. ([22](#eq:error_estimate)), into two
parts
$$
\mathrm{err}_X^2 = \frac{1}{n}\mathrm{var}(x) + \frac{1}{n}(\mathrm{cov}(x)-\mathrm{var}(x)),
$$
which equals
<!-- Equation labels as ordinary links -->
<div id="eq:error_estimate_split_up"></div>
$$
\begin{equation}
\frac{1}{n^2}\sum_{k=1}^n (x_k - \bar x_n)^2 +\frac{2}{n^2}\sum_{k<l} (x_k - \bar x_n)(x_l - \bar x_n)
\label{eq:error_estimate_split_up} \tag{24}
\end{equation}
$$
The first term is the same as the error in the uncorrelated case,
Eq. ([23](#eq:error_estimate_uncorrel)). This means that the second
term accounts for the error correction due to correlation between the
measurements. For uncorrelated measurements this second term is zero.
## Statistics, more on computations of errors
Computationally the uncorrelated first term is much easier to treat
efficiently than the second.
$$
\mathrm{var}(x) = \frac{1}{n}\sum_{k=1}^n (x_k - \bar x_n)^2 =
\left(\frac{1}{n}\sum_{k=1}^n x_k^2\right) - \bar x_n^2
$$
We just accumulate separately the values $x^2$ and $x$ for every
measurement $x$ we receive. The correlation term, though, has to be
calculated at the end of the experiment since we need all the
measurements to calculate the cross terms. Therefore, all measurements
have to be stored throughout the experiment.
## Statistics, wrapping up 1
Let us analyze the problem by splitting up the correlation term into
partial sums of the form:
$$
f_d = \frac{1}{n-d}\sum_{k=1}^{n-d}(x_k - \bar x_n)(x_{k+d} - \bar x_n)
$$
The correlation term of the error can now be rewritten in terms of
$f_d$
$$
\frac{2}{n}\sum_{k<l} (x_k - \bar x_n)(x_l - \bar x_n) =
2\sum_{d=1}^{n-1} f_d
$$
The value of $f_d$ reflects the correlation between measurements
separated by the distance $d$ in the sample. Notice that for
$d=0$, $f_d$ is just the sample variance, $\mathrm{var}(x)$. If we divide $f_d$
by $\mathrm{var}(x)$, we arrive at the so called *autocorrelation function*
$$
\kappa_d = \frac{f_d}{\mathrm{var}(x)}
$$
which gives us a useful measure of pairwise correlations
starting always at $1$ for $d=0$.
## Statistics, final expression
The sample error (see eq. ([24](#eq:error_estimate_split_up))) can now be
written in terms of the autocorrelation function:
$$
\mathrm{err}_X^2 =
\frac{1}{n}\mathrm{var}(x)+\frac{2}{n}\cdot\mathrm{var}(x)\sum_{d=1}^{n-1}
\frac{f_d}{\mathrm{var}(x)}\nonumber
$$
$$
=
\left(1+2\sum_{d=1}^{n-1}\kappa_d\right)\frac{1}{n}\mathrm{var}(x)\nonumber
$$
<!-- Equation labels as ordinary links -->
<div id="eq:error_estimate_corr_time"></div>
$$
\begin{equation}
=\frac{\tau}{n}\cdot\mathrm{var}(x)
\label{eq:error_estimate_corr_time} \tag{25}
\end{equation}
$$
and we see that $\mathrm{err}_X^2$ can be expressed in terms of the
uncorrelated sample variance times a correction factor $\tau$ which
accounts for the correlation between measurements. We call this
correction factor the *autocorrelation time*:
<!-- Equation labels as ordinary links -->
<div id="eq:autocorrelation_time"></div>
$$
\begin{equation}
\tau = 1+2\sum_{d=1}^{n-1}\kappa_d
\label{eq:autocorrelation_time} \tag{26}
\end{equation}
$$
## Statistics, effective number of correlations
For a correlation free experiment, $\tau$
equals 1. From the point of view of
eq. ([25](#eq:error_estimate_corr_time)) we can interpret a sequential
correlation as an effective reduction of the number of measurements by
a factor $\tau$. The effective number of measurements becomes:
$$
n_\mathrm{eff} = \frac{n}{\tau}
$$
To neglect the autocorrelation time $\tau$ will always cause our
simple uncorrelated estimate of $\mathrm{err}_X^2\approx \mathrm{var}(x)/n$ to
be less than the true sample error. The estimate of the error will be
too *good*. On the other hand, the calculation of the full
autocorrelation time poses an efficiency problem if the set of
measurements is very large.
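A sketch of this analysis on synthetic correlated data, here an AR(1) chain. Truncating the sum over $d$ at a cutoff `d_max` is our own practical choice for this example, since the large-$d$ partial sums $f_d$ are dominated by noise:

```python
import numpy as np

def autocorrelation_time(x, d_max):
    """tau = 1 + 2*sum_d kappa_d, truncating the noisy large-d tail at d_max."""
    n = len(x)
    xbar, var = np.mean(x), np.var(x)
    f = np.array([np.mean((x[: n - d] - xbar) * (x[d:] - xbar))
                  for d in range(1, d_max + 1)])
    return 1.0 + 2.0 * np.sum(f) / var

# Generate a correlated chain: AR(1) process with coefficient 0.9
rng = np.random.default_rng(7)
n, rho = 5000, 0.9
x = np.empty(n)
x[0] = rng.standard_normal()
for k in range(1, n):
    x[k] = rho * x[k - 1] + rng.standard_normal()

tau = autocorrelation_time(x, d_max=100)
n_eff = n / tau
print(tau > 1.0)    # sequential correlation inflates the error estimate
print(n_eff < n)    # effective number of measurements n_eff = n / tau
```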
<!-- !split -->
## Linking the regression analysis with a statistical interpretation
Finally, we discuss several statistical properties which can be
obtained in terms of analytical expressions: an advantage of doing
linear regression is that we actually end up with analytical
expressions for several statistical quantities.
Standard least squares and Ridge regression allow us to
derive quantities like the variance and other expectation values in a
rather straightforward way.
It is assumed that $\varepsilon_i
\sim \mathcal{N}(0, \sigma^2)$ and the $\varepsilon_{i}$ are
independent, i.e.:
$$
\begin{align*}
\mbox{Cov}(\varepsilon_{i_1},
\varepsilon_{i_2}) & = \left\{ \begin{array}{lcc} \sigma^2 & \mbox{if}
& i_1 = i_2, \\ 0 & \mbox{if} & i_1 \not= i_2. \end{array} \right.
\end{align*}
$$
The randomness of $\varepsilon_i$ implies that
$\mathbf{y}_i$ is also a random variable. In particular,
$\mathbf{y}_i$ is normally distributed, because $\varepsilon_i \sim
\mathcal{N}(0, \sigma^2)$ and $\mathbf{X}_{i,\ast} \, \boldsymbol{\beta}$ is a
non-random scalar. To specify the parameters of the distribution of
$\mathbf{y}_i$ we need to calculate its first two moments.
Recall that $\boldsymbol{X}$ is a matrix of dimensionality $n\times p$. The
notation $\mathbf{X}_{i,\ast}$ above refers to row number $i$ of
$\mathbf{X}$; the product $\mathbf{X}_{i,\ast}\,\boldsymbol{\beta}$ is thus a sum over all $p$ columns.
## Assumptions made
The assumption we have made here can be summarized as (and this is going to be useful when we discuss the bias-variance trade-off)
that there exists a function $f(\boldsymbol{x})$ and a normally distributed error $\boldsymbol{\varepsilon}\sim \mathcal{N}(0, \sigma^2)$
which describes our data
$$
\boldsymbol{y} = f(\boldsymbol{x})+\boldsymbol{\varepsilon}
$$
We approximate this function with our model from the solution of the linear regression equations, that is our
function $f$ is approximated by $\boldsymbol{\tilde{y}}$ where we want to minimize $(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2$, our MSE, with
$$
\boldsymbol{\tilde{y}} = \boldsymbol{X}\boldsymbol{\beta}.
$$
## Expectation value and variance
We can calculate the expectation value of $\boldsymbol{y}$ for a given element $i$
$$
\begin{align*}
\mathbb{E}(y_i) & =
\mathbb{E}(\mathbf{X}_{i, \ast} \, \boldsymbol{\beta}) + \mathbb{E}(\varepsilon_i)
\, \, \, = \, \, \, \mathbf{X}_{i, \ast} \, \boldsymbol{\beta},
\end{align*}
$$
while
its variance is
$$
\begin{align*} \mbox{Var}(y_i) & = \mathbb{E} \{ [y_i
- \mathbb{E}(y_i)]^2 \} \, \, \, = \, \, \, \mathbb{E} ( y_i^2 ) -
[\mathbb{E}(y_i)]^2 \\ & = \mathbb{E} [ ( \mathbf{X}_{i, \ast} \,
\boldsymbol{\beta} + \varepsilon_i )^2] - ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 \\ &
= \mathbb{E} [ ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 + 2 \varepsilon_i
\mathbf{X}_{i, \ast} \, \boldsymbol{\beta} + \varepsilon_i^2 ] - ( \mathbf{X}_{i,
\ast} \, \boldsymbol{\beta})^2 \\ & = ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2 + 2
\mathbb{E}(\varepsilon_i) \mathbf{X}_{i, \ast} \, \boldsymbol{\beta} +
\mathbb{E}(\varepsilon_i^2 ) - ( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta})^2
\\ & = \mathbb{E}(\varepsilon_i^2 ) \, \, \, = \, \, \,
\mbox{Var}(\varepsilon_i) \, \, \, = \, \, \, \sigma^2.
\end{align*}
$$
Hence, $y_i \sim \mathcal{N}( \mathbf{X}_{i, \ast} \, \boldsymbol{\beta}, \sigma^2)$, that is $\boldsymbol{y}$ follows a normal distribution with
mean value $\boldsymbol{X}\boldsymbol{\beta}$ and variance $\sigma^2$ (not to be confused with the singular values of the SVD).
## Expectation value and variance for $\boldsymbol{\beta}$
With the OLS expressions for the parameters $\boldsymbol{\beta}$ we can evaluate the expectation value
$$
\mathbb{E}(\boldsymbol{\beta}) = \mathbb{E}[ (\mathbf{X}^{T} \mathbf{X})^{-1}\mathbf{X}^{T} \mathbf{Y}]=(\mathbf{X}^{T} \mathbf{X})^{-1}\mathbf{X}^{T} \mathbb{E}[ \mathbf{Y}]=(\mathbf{X}^{T} \mathbf{X})^{-1} \mathbf{X}^{T}\mathbf{X}\boldsymbol{\beta}=\boldsymbol{\beta}.
$$
This means that the estimator of the regression parameters is unbiased.
We can also calculate the variance of $\boldsymbol{\beta}$:
$$
\begin{eqnarray*}
\mbox{Var}(\boldsymbol{\beta}) & = & \mathbb{E} \{ [\boldsymbol{\beta} - \mathbb{E}(\boldsymbol{\beta})] [\boldsymbol{\beta} - \mathbb{E}(\boldsymbol{\beta})]^{T} \}
\\
& = & \mathbb{E} \{ [(\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y} - \boldsymbol{\beta}] \, [(\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y} - \boldsymbol{\beta}]^{T} \}
\\
% & = & \mathbb{E} \{ [(\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y}] \, [(\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y}]^{T} \} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T}
% \\
% & = & \mathbb{E} \{ (\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \mathbf{Y} \, \mathbf{Y}^{T} \, \mathbf{X} \, (\mathbf{X}^{T} \mathbf{X})^{-1} \} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T}
% \\
& = & (\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \, \mathbb{E} \{ \mathbf{Y} \, \mathbf{Y}^{T} \} \, \mathbf{X} \, (\mathbf{X}^{T} \mathbf{X})^{-1} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T}
\\
& = & (\mathbf{X}^{T} \mathbf{X})^{-1} \, \mathbf{X}^{T} \, \{ \mathbf{X} \, \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} \, \mathbf{X}^{T} + \sigma^2 \, \mathbf{I}_{nn} \} \, \mathbf{X} \, (\mathbf{X}^{T} \mathbf{X})^{-1} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T}
% \\
% & = & (\mathbf{X}^T \mathbf{X})^{-1} \, \mathbf{X}^T \, \mathbf{X} \, \boldsymbol{\beta} \, \boldsymbol{\beta}^T \, \mathbf{X}^T \, \mathbf{X} \, (\mathbf{X}^T % \mathbf{X})^{-1}
% \\
% & & + \, \, \sigma^2 \, (\mathbf{X}^T \mathbf{X})^{-1} \, \mathbf{X}^T \, \mathbf{X} \, (\mathbf{X}^T \mathbf{X})^{-1} - \boldsymbol{\beta} \boldsymbol{\beta}^T
\\
& = & \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} + \sigma^2 \, (\mathbf{X}^{T} \mathbf{X})^{-1} - \boldsymbol{\beta} \, \boldsymbol{\beta}^{T}
\, \, \, = \, \, \, \sigma^2 \, (\mathbf{X}^{T} \mathbf{X})^{-1},
\end{eqnarray*}
$$
where we have used that $\mathbb{E} (\mathbf{Y} \mathbf{Y}^{T}) =
\mathbf{X} \, \boldsymbol{\beta} \, \boldsymbol{\beta}^{T} \, \mathbf{X}^{T} +
\sigma^2 \, \mathbf{I}_{nn}$. From $\mbox{Var}(\boldsymbol{\beta}) = \sigma^2
\, (\mathbf{X}^{T} \mathbf{X})^{-1}$, one obtains an estimate of the
variance of the estimate of the $j$-th regression coefficient:
$\hat{\sigma}^2 (\hat{\beta}_j ) = \hat{\sigma}^2
[(\mathbf{X}^{T} \mathbf{X})^{-1}]_{jj}$. Its square root may be used to
construct a confidence interval for the estimates.
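Both results, the unbiasedness $\mathbb{E}(\boldsymbol{\beta})=\boldsymbol{\beta}$ and $\mbox{Var}(\boldsymbol{\beta}) = \sigma^2 (\mathbf{X}^{T}\mathbf{X})^{-1}$, can be checked by a quick Monte Carlo sketch; the design matrix and the true parameters below are made up for illustration only:

```python
import numpy as np

np.random.seed(0)

# Made-up design matrix and true parameters for illustration
n, p, sigma = 100, 3, 0.5
X = np.random.randn(n, p)
beta_true = np.array([1.0, -2.0, 0.5])

# Repeat the experiment many times with fresh noise eps ~ N(0, sigma^2)
reps = 5000
betas = np.zeros((reps, p))
for r in range(reps):
    y = X @ beta_true + sigma * np.random.randn(n)
    betas[r] = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimate

# Empirical mean of the estimates: close to beta_true (unbiasedness)
print(betas.mean(axis=0))

# Empirical covariance vs the analytical sigma^2 (X^T X)^{-1}
emp_cov = np.cov(betas, rowvar=False)
theory_cov = sigma**2 * np.linalg.inv(X.T @ X)
print(np.max(np.abs(emp_cov - theory_cov)))
```

The empirical covariance of the repeated estimates should agree with the closed-form matrix to within Monte Carlo noise.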
In a similar way, we can obtain analytical expressions for say the
expectation values of the parameters $\boldsymbol{\beta}$ and their variance
when we employ Ridge regression, and thereby a confidence interval.
It is rather straightforward to show that
$$
\mathbb{E} \big[ \boldsymbol{\beta}^{\mathrm{Ridge}} \big]=(\mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} (\mathbf{X}^{T} \mathbf{X})\boldsymbol{\beta}^{\mathrm{OLS}}.
$$
We see clearly that
$\mathbb{E} \big[ \boldsymbol{\beta}^{\mathrm{Ridge}} \big] \not= \boldsymbol{\beta}^{\mathrm{OLS}}$ for any $\lambda > 0$. We say then that the ridge estimator is biased.
We can also compute the variance as
$$
\mbox{Var}[\boldsymbol{\beta}^{\mathrm{Ridge}}]=\sigma^2[ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1} \mathbf{X}^{T} \mathbf{X} \{ [ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1}\}^{T},
$$
and it is easy to see that if the parameter $\lambda$ goes to infinity then the variance of Ridge parameters $\boldsymbol{\beta}$ goes to zero.
With this, we can compute the difference
$$
\mbox{Var}[\boldsymbol{\beta}^{\mathrm{OLS}}]-\mbox{Var}(\boldsymbol{\beta}^{\mathrm{Ridge}})=\sigma^2 [ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1}[ 2\lambda\mathbf{I} + \lambda^2 (\mathbf{X}^{T} \mathbf{X})^{-1} ] \{ [ \mathbf{X}^{T} \mathbf{X} + \lambda \mathbf{I} ]^{-1}\}^{T}.
$$
The difference is non-negative definite since each component of the
matrix product is non-negative definite.
This means that the variance we obtain with standard OLS will, for $\lambda > 0$, always be larger than the variance of $\boldsymbol{\beta}$ obtained with the Ridge estimator. This has interesting consequences when we discuss the so-called bias-variance trade-off below.
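These closed-form expressions are easy to verify numerically. A sketch (with a made-up design matrix and an arbitrary $\lambda$, chosen only for illustration) checking both the shrinkage of the Ridge expectation value and the positive semi-definiteness of the variance difference:

```python
import numpy as np

np.random.seed(1)

# Made-up design matrix and penalty for illustration
n, p, sigma, lam = 50, 3, 1.0, 2.0
X = np.random.randn(n, p)
XtX = X.T @ X
W = np.linalg.inv(XtX + lam * np.eye(p))

# E[beta_ridge] = W XtX beta_OLS is a shrunken version of beta_OLS
beta_ols = np.array([1.0, 1.0, 1.0])   # stand-in for the OLS expectation
beta_ridge_mean = W @ XtX @ beta_ols
print(np.linalg.norm(beta_ridge_mean), np.linalg.norm(beta_ols))

# Closed-form covariance matrices
var_ols = sigma**2 * np.linalg.inv(XtX)
var_ridge = sigma**2 * W @ XtX @ W.T

# Var[OLS] - Var[Ridge] should be positive semi-definite for lambda > 0
eigvals = np.linalg.eigvalsh(var_ols - var_ridge)
print(eigvals)
```

All eigenvalues of the difference come out non-negative (up to round-off), in line with the analytical argument above.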
<!-- !split -->
## Cross-validation
Instead of choosing the penalty parameter to balance model fit against
model complexity, cross-validation selects the penalty parameter that
yields a model with good prediction performance. Commonly, this
performance is evaluated on novel data. Novel data need not be easy to
come by, and one has to make do with the data at hand.
The setting of **original** and novel data is
then mimicked by sample splitting: the data set is divided into two
(groups of samples). One of these two data sets, called the
*training set*, plays the role of **original** data on which the model is
built. The second of these data sets, called the *test set*, plays the
role of the **novel** data and is used to evaluate the prediction
performance (often operationalized as the log-likelihood or the
prediction error or its square or the R2 score) of the model built on the training data set. This
procedure (model building and prediction evaluation on training and
test set, respectively) is done for a collection of possible penalty
parameter choices. The penalty parameter that yields the model with
the best prediction performance is to be preferred. The thus obtained
performance evaluation depends on the actual split of the data set. To
remove this dependence the data set is split many times into a
training and test set. For each split the model parameters are
estimated for all choices of $\lambda$ using the training data and
estimated parameters are evaluated on the corresponding test set. The
penalty parameter that on average over the test sets performs best (in
some sense) is then selected.
## Computationally expensive
The validation set approach is conceptually simple and is easy to implement. But it has two potential drawbacks:
* The validation estimate of the test error rate can be highly variable, depending on precisely which observations are included in the training set and which observations are included in the validation set.
* In the validation approach, only a subset of the observations, those that are included in the training set rather than in the validation set, are used to fit the model. Since statistical methods tend to perform worse when trained on fewer observations, this suggests that the validation set error rate may tend to overestimate the test error rate for the model fit on the entire data set.
<!-- !split -->
## Various steps in cross-validation
When the repetitive splitting of the data set is done randomly,
samples may accidentally end up in a vast majority of the splits in
either the training or the test set. Such samples may have an unbalanced
influence on either model building or prediction evaluation. To avoid
this, $k$-fold cross-validation structures the data splitting. The
samples are divided into $k$ more or less equally sized exhaustive and
mutually exclusive subsets. In turn (at each split) one of these
subsets plays the role of the test set while the union of the
remaining subsets constitutes the training set. Such a splitting
warrants a balanced representation of each sample in both training and
test set over the splits. Still the division into the $k$ subsets
involves a degree of randomness. This may be fully excluded when
choosing $k=n$. This particular case is referred to as leave-one-out
cross-validation (LOOCV).
<!-- !split -->
## How to set up the cross-validation for Ridge and/or Lasso
* Define a range of interest for the penalty parameter.
* Divide the data set into training and test set comprising samples $\{1, \ldots, n\} \setminus i$ and $\{ i \}$, respectively.
* Fit the linear regression model by means of ridge estimation for each $\lambda$ in the grid using the training set, and the corresponding estimate of the error variance $\boldsymbol{\sigma}_{-i}^2(\lambda)$, as
$$
\begin{align*}
\boldsymbol{\beta}_{-i}(\lambda) & = ( \boldsymbol{X}_{-i, \ast}^{T}
\boldsymbol{X}_{-i, \ast} + \lambda \boldsymbol{I}_{pp})^{-1}
\boldsymbol{X}_{-i, \ast}^{T} \boldsymbol{y}_{-i}
\end{align*}
$$
* Evaluate the prediction performance of these models on the test set by $\log\{L[y_i, \boldsymbol{X}_{i, \ast}; \boldsymbol{\beta}_{-i}(\lambda), \boldsymbol{\sigma}_{-i}^2(\lambda)]\}$. Or, by the prediction error $|y_i - \boldsymbol{X}_{i, \ast} \boldsymbol{\beta}_{-i}(\lambda)|$, the relative error, the error squared or the R2 score function.
* Repeat the first three steps such that each sample plays the role of the test set once.
* Average the prediction performances of the test sets at each grid point of the penalty bias/parameter by computing the *cross-validated log-likelihood*. It is an estimate of the prediction performance of the model corresponding to this value of the penalty parameter on novel data. It is defined as
$$
\begin{align*}
\frac{1}{n} \sum_{i = 1}^n \log\{L[y_i, \mathbf{X}_{i, \ast}; \boldsymbol{\beta}_{-i}(\lambda), \boldsymbol{\sigma}_{-i}^2(\lambda)]\}.
\end{align*}
$$
* The value of the penalty parameter that maximizes the cross-validated log-likelihood is the value of choice. Or we can use the MSE or the R2 score functions.
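The steps above can be sketched in a few lines, here using the squared prediction error in place of the log-likelihood; the data set and the $\lambda$ grid are made up for illustration:

```python
import numpy as np

np.random.seed(3155)

# Made-up data and a second-order polynomial design matrix
n = 30
x = np.random.randn(n)
y = 3 * x**2 + np.random.randn(n)
X = np.column_stack([np.ones(n), x, x**2])
p = X.shape[1]

# Range of interest for the penalty parameter
lambdas = np.logspace(-4, 2, 25)
cv_mse = np.zeros(len(lambdas))

for k, lam in enumerate(lambdas):
    errs = np.zeros(n)
    for i in range(n):                       # leave observation i out
        mask = np.arange(n) != i
        Xm, ym = X[mask], y[mask]
        # Ridge estimate on the training set
        beta = np.linalg.solve(Xm.T @ Xm + lam * np.eye(p), Xm.T @ ym)
        # prediction error on the held-out point
        errs[i] = (y[i] - X[i] @ beta)**2
    cv_mse[k] = errs.mean()                  # average over all splits

best_lambda = lambdas[np.argmin(cv_mse)]
print(best_lambda, cv_mse.min())
```

The $\lambda$ minimizing the cross-validated MSE plays the role of the value maximizing the cross-validated log-likelihood in the description above.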
## Resampling methods: Jackknife and Bootstrap
Two famous
resampling methods are the **independent bootstrap** and **the jackknife**.
The jackknife is a special case of the independent bootstrap, yet it was made
popular prior to the independent bootstrap. As the popularity of
the independent bootstrap soared, new variants emerged, such as **the dependent bootstrap**.
The Jackknife and independent bootstrap work for
independent, identically distributed random variables.
If these conditions are not
satisfied, the methods will fail. Yet, it should be said that if the data are
independent, identically distributed, and we only want to estimate the
variance of $\overline{X}$ (which often is the case), then there is no
need for bootstrapping.
## Resampling methods: Jackknife
The Jackknife works by making many replicas of the estimator $\widehat{\theta}$.
The jackknife is a resampling method where we systematically leave out one observation from the vector of observed values $\boldsymbol{x} = (x_1,x_2,\cdots,x_n)$.
Let $\boldsymbol{x}_i$ denote the vector
$$
\boldsymbol{x}_i = (x_1,x_2,\cdots,x_{i-1},x_{i+1},\cdots,x_n),
$$
which equals the vector $\boldsymbol{x}$ with the exception that observation
number $i$ is left out. Using this notation, define
$\widehat{\theta}_i$ to be the estimator
$\widehat{\theta}$ computed using $\boldsymbol{x}_i$.
## Jackknife code example
```
import numpy as np
from time import time

def jackknife(data, stat):
    n = len(data)
    t = np.zeros(n)
    t0 = time()
    # 'jackknifing': recompute the statistic leaving out observation i
    for i in range(n):
        t[i] = stat(np.delete(data, i))
    # analysis: jackknife estimates of bias and standard error
    bias = (n - 1) * (np.mean(t) - stat(data))
    std_error = np.sqrt((n - 1) * np.var(t))
    print("Runtime: %g sec" % (time() - t0))
    print("Jackknife Statistics :")
    print("original           bias      std. error")
    print("%8g %14g %15g" % (stat(data), bias, std_error))
    return t

# Returns mean of data samples
def stat(data):
    return np.mean(data)

mu, sigma = 100, 15
datapoints = 10000
x = mu + sigma * np.random.randn(datapoints)
# jackknife returns the leave-one-out estimates
t = jackknife(x, stat)
```
## Resampling methods: Bootstrap
Bootstrapping is a nonparametric approach to statistical inference
that substitutes computation for more traditional distributional
assumptions and asymptotic results. Bootstrapping offers a number of
advantages:
1. The bootstrap is quite general, although there are some cases in which it fails.
2. Because it does not require distributional assumptions (such as normally distributed errors), the bootstrap can provide more accurate inferences when the data are not well behaved or when the sample size is small.
3. It is possible to apply the bootstrap to statistics with sampling distributions that are difficult to derive, even asymptotically.
4. It is relatively simple to apply the bootstrap to complex data-collection plans (such as stratified and clustered samples).
## Resampling methods: Bootstrap background
Since $\widehat{\theta} = \widehat{\theta}(\boldsymbol{X})$ is a function of random variables,
$\widehat{\theta}$ itself must be a random variable. Thus it has
a pdf, call this function $p(\boldsymbol{t})$. The aim of the bootstrap is to
estimate $p(\boldsymbol{t})$ by the relative frequency of
$\widehat{\theta}$. You can think of this as using a histogram
in the place of $p(\boldsymbol{t})$. If the relative frequency closely
resembles $p(\boldsymbol{t})$, then using numerics, it is straightforward to
estimate all the interesting parameters of $p(\boldsymbol{t})$ using point
estimators.
## Resampling methods: More Bootstrap background
In the case that $\widehat{\theta}$ has
more than one component, and the components are independent, we use the
same estimator on each component separately. If the probability
density function of $X_i$, $p(x)$, had been known, then it would have
been straightforward to do this by:
1. Drawing lots of numbers from $p(x)$, suppose we call one such set of numbers $(X_1^*, X_2^*, \cdots, X_n^*)$.
2. Then using these numbers, we could compute a replica of $\widehat{\theta}$ called $\widehat{\theta}^*$.
By repeated use of (1) and (2), many
estimates of $\widehat{\theta}$ could have been obtained. The
idea is to use the relative frequency of $\widehat{\theta}^*$
(think of a histogram) as an estimate of $p(\boldsymbol{t})$.
## Resampling methods: Bootstrap approach
But
unless there is enough information available about the process that
generated $X_1,X_2,\cdots,X_n$, $p(x)$ is in general
unknown. Therefore, [Efron in 1979](https://projecteuclid.org/euclid.aos/1176344552) asked the
question: What if we replace $p(x)$ by the relative frequency
of the observation $X_i$; if we draw observations in accordance with
the relative frequency of the observations, will we obtain the same
result in some asymptotic sense? The answer is yes.
Instead of generating the histogram for the relative
frequency of the observation $X_i$, just draw the values
$(X_1^*,X_2^*,\cdots,X_n^*)$ with replacement from the vector
$\boldsymbol{X}$.
## Resampling methods: Bootstrap steps
The independent bootstrap works like this:
1. Draw with replacement $n$ numbers for the observed variables $\boldsymbol{x} = (x_1,x_2,\cdots,x_n)$.
2. Define a vector $\boldsymbol{x}^*$ containing the values which were drawn from $\boldsymbol{x}$.
3. Using the vector $\boldsymbol{x}^*$ compute $\widehat{\theta}^*$ by evaluating $\widehat \theta$ under the observations $\boldsymbol{x}^*$.
4. Repeat this process $k$ times.
When you are done, you can draw a histogram of the relative frequency
of $\widehat \theta^*$. This is your estimate of the probability
distribution $p(t)$. Using this probability distribution you can
estimate any statistics thereof. In principle you never draw the
histogram of the relative frequency of $\widehat{\theta}^*$. Instead
you use the estimators corresponding to the statistic of interest. For
example, if you are interested in estimating the variance of $\widehat
\theta$, apply the estimator $\widehat \sigma^2$ to the values
$\widehat \theta ^*$.
## Code example for the Bootstrap method
The following code starts with a Gaussian distribution with mean value
$\mu =100$ and standard deviation $\sigma=15$. We use this to generate the data
used in the bootstrap analysis. The bootstrap analysis returns a data
set after a given number of bootstrap operations (as many as we have
data points). This data set consists of estimated mean values for each
bootstrap operation. The histogram generated by the bootstrap method
shows that the distribution of these mean values is also a Gaussian,
centered around the mean value $\mu=100$ but with standard deviation
$\sigma/\sqrt{n}$, where $n$ is the number of original data points (in
this case the same as the number of bootstrap samples). This value
of the standard deviation is what we expect from the central limit
theorem.
```
import numpy as np
from time import time
import matplotlib.pyplot as plt
from scipy.stats import norm

# Returns mean of bootstrap samples
def stat(data):
    return np.mean(data)

# Bootstrap algorithm
def bootstrap(data, statistic, R):
    t = np.zeros(R)
    n = len(data)
    t0 = time()
    # non-parametric bootstrap: draw n values with replacement, R times
    for i in range(R):
        t[i] = statistic(data[np.random.randint(0, n, n)])
    # analysis
    print("Runtime: %g sec" % (time() - t0))
    print("Bootstrap Statistics :")
    print("original           bias      std. error")
    print("%8g %14g %15g" % (statistic(data), np.mean(t) - statistic(data), np.std(t)))
    return t

mu, sigma = 100, 15
datapoints = 10000
x = mu + sigma * np.random.randn(datapoints)
# bootstrap returns the sample of bootstrapped mean values
t = bootstrap(x, stat, datapoints)

# the histogram of the bootstrapped data
n, binsboot, patches = plt.hist(t, 50, density=True, facecolor='red', alpha=0.75)

# add a 'best fit' Gaussian
y = norm.pdf(binsboot, np.mean(t), np.std(t))
plt.plot(binsboot, y, 'r--', linewidth=1)
plt.xlabel('Bootstrapped mean')
plt.ylabel('Probability density')
plt.grid(True)
plt.show()
```
## Code Example for Cross-validation and $k$-fold Cross-validation
The code here uses Ridge regression with cross-validation (CV) resampling and $k$-fold CV in order to fit a specific polynomial.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

# A seed just to ensure that the random numbers are the same for every run.
# Useful for eventual debugging.
np.random.seed(3155)

# Generate the data.
nsamples = 100
x = np.random.randn(nsamples)
y = 3*x**2 + np.random.randn(nsamples)

## Cross-validation on Ridge regression using KFold only

# Decide degree of polynomial to fit
poly = PolynomialFeatures(degree = 6)

# Decide which values of lambda to use
nlambdas = 500
lambdas = np.logspace(-3, 5, nlambdas)

# Initialize a KFold instance
k = 5
kfold = KFold(n_splits = k)

# Perform the cross-validation to estimate MSE
scores_KFold = np.zeros((nlambdas, k))
i = 0
for lmb in lambdas:
    ridge = Ridge(alpha = lmb)
    j = 0
    for train_inds, test_inds in kfold.split(x):
        xtrain = x[train_inds]
        ytrain = y[train_inds]
        xtest = x[test_inds]
        ytest = y[test_inds]

        Xtrain = poly.fit_transform(xtrain[:, np.newaxis])
        ridge.fit(Xtrain, ytrain[:, np.newaxis])

        Xtest = poly.fit_transform(xtest[:, np.newaxis])
        ypred = ridge.predict(Xtest)

        scores_KFold[i,j] = np.sum((ypred - ytest[:, np.newaxis])**2)/np.size(ypred)
        j += 1
    i += 1

estimated_mse_KFold = np.mean(scores_KFold, axis = 1)

## Cross-validation using cross_val_score from sklearn along with KFold

# kfold is the instance initialized above as kfold = KFold(n_splits = k)
estimated_mse_sklearn = np.zeros(nlambdas)
i = 0
for lmb in lambdas:
    ridge = Ridge(alpha = lmb)
    X = poly.fit_transform(x[:, np.newaxis])
    estimated_mse_folds = cross_val_score(ridge, X, y[:, np.newaxis], scoring='neg_mean_squared_error', cv=kfold)
    # cross_val_score returns an array containing the estimated negative mse for every fold.
    # we have to take the mean of this array in order to get an estimate of the mse of the model
    estimated_mse_sklearn[i] = np.mean(-estimated_mse_folds)
    i += 1

## Plot and compare the slightly different ways to perform cross-validation
plt.figure()
plt.plot(np.log10(lambdas), estimated_mse_sklearn, label = 'cross_val_score')
plt.plot(np.log10(lambdas), estimated_mse_KFold, 'r--', label = 'KFold')
plt.xlabel('log10(lambda)')
plt.ylabel('mse')
plt.legend()
plt.show()
```
## The bias-variance tradeoff
We will discuss the bias-variance tradeoff in the context of
continuous predictions such as regression. However, many of the
intuitions and ideas discussed here also carry over to classification
tasks. Consider a dataset $\mathcal{L}$ consisting of the data
$\mathbf{X}_\mathcal{L}=\{(y_j, \boldsymbol{x}_j), j=0\ldots n-1\}$.
Let us assume that the true data is generated from a noisy model
$$
\boldsymbol{y}=f(\boldsymbol{x}) + \boldsymbol{\epsilon}
$$
where $\epsilon$ is normally distributed with mean zero and variance $\sigma^2$.
In our derivation of the ordinary least squares method we defined then
an approximation to the function $f$ in terms of the parameters
$\boldsymbol{\beta}$ and the design matrix $\boldsymbol{X}$ which embody our model,
that is $\boldsymbol{\tilde{y}}=\boldsymbol{X}\boldsymbol{\beta}$.
Thereafter we found the parameters $\boldsymbol{\beta}$ by optimizing the mean squared error via the so-called cost function
$$
C(\boldsymbol{X},\boldsymbol{\beta}) =\frac{1}{n}\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2=\mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right].
$$
We can rewrite this as
$$
\mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\frac{1}{n}\sum_i(f_i-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2+\frac{1}{n}\sum_i(\tilde{y}_i-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2+\sigma^2.
$$
The first term represents the square of the bias of the learning
method, which can be thought of as the error caused by the simplifying
assumptions built into the method. The second term represents the
variance of the chosen model and finally the last term is the variance of
the error $\boldsymbol{\epsilon}$.

To derive this equation, we need to recall that the variance of $\boldsymbol{y}$ and $\boldsymbol{\epsilon}$ are both equal to $\sigma^2$. The mean value of $\boldsymbol{\epsilon}$ is by definition equal to zero. Furthermore, the function $f$ is not a stochastic variable, idem for $\boldsymbol{\tilde{y}}$.
We use a more compact notation in terms of the expectation value
$$
\mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{f}+\boldsymbol{\epsilon}-\boldsymbol{\tilde{y}})^2\right],
$$
and adding and subtracting $\mathbb{E}\left[\boldsymbol{\tilde{y}}\right]$ we get
$$
\mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{f}+\boldsymbol{\epsilon}-\boldsymbol{\tilde{y}}+\mathbb{E}\left[\boldsymbol{\tilde{y}}\right]-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2\right],
$$
which, using the abovementioned expectation values can be rewritten as
$$
\mathbb{E}\left[(\boldsymbol{y}-\boldsymbol{\tilde{y}})^2\right]=\mathbb{E}\left[(\boldsymbol{f}-\mathbb{E}\left[\boldsymbol{\tilde{y}}\right])^2\right]+\mathrm{Var}\left[\boldsymbol{\tilde{y}}\right]+\sigma^2,
$$
that is the rewriting in terms of the so-called bias, the variance of the model $\boldsymbol{\tilde{y}}$ and the variance of $\boldsymbol{\epsilon}$.
## Example code for Bias-Variance tradeoff
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.utils import resample

np.random.seed(2018)

n = 500
n_bootstraps = 100
degree = 18  # A quite high value, just to show.
noise = 0.1

# Make data set.
x = np.linspace(-1, 3, n).reshape(-1, 1)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2) + np.random.normal(0, noise, x.shape)

# Hold out some test data that is never used in training.
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)

# Combine x transformation and model into one operation.
# Not necessary, but convenient.
model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression(fit_intercept=False))

# The following (m x n_bootstraps) matrix holds the column vectors y_pred
# for each bootstrap iteration.
y_pred = np.empty((y_test.shape[0], n_bootstraps))
for i in range(n_bootstraps):
    x_, y_ = resample(x_train, y_train)

    # Evaluate the new model on the same test data each time.
    y_pred[:, i] = model.fit(x_, y_).predict(x_test).ravel()

# Note: Expectations and variances taken w.r.t. different training
# data sets, hence the axis=1. Subsequent means are taken across the test data
# set in order to obtain a total value, but before this we have error/bias/variance
# calculated per data point in the test set.
# Note 2: The use of keepdims=True is important in the calculation of bias as this
# maintains the column vector form. Dropping this yields very unexpected results.
error = np.mean( np.mean((y_test - y_pred)**2, axis=1, keepdims=True) )
bias = np.mean( (y_test - np.mean(y_pred, axis=1, keepdims=True))**2 )
variance = np.mean( np.var(y_pred, axis=1, keepdims=True) )
print('Error:', error)
print('Bias^2:', bias)
print('Var:', variance)
print('{} >= {} + {} = {}'.format(error, bias, variance, bias+variance))

plt.plot(x[::5, :], y[::5, :], label='Data (every 5th point)')
plt.scatter(x_test, y_test, label='Test data')
plt.scatter(x_test, np.mean(y_pred, axis=1), label='Pred')
plt.legend()
plt.show()
```
## Understanding what happens
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.utils import resample

np.random.seed(2018)

n = 40
n_bootstraps = 100
maxdegree = 14

# Make data set.
x = np.linspace(-3, 3, n).reshape(-1, 1)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2) + np.random.normal(0, 0.1, x.shape)
error = np.zeros(maxdegree)
bias = np.zeros(maxdegree)
variance = np.zeros(maxdegree)
polydegree = np.zeros(maxdegree)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)

for degree in range(maxdegree):
    model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression(fit_intercept=False))
    y_pred = np.empty((y_test.shape[0], n_bootstraps))
    for i in range(n_bootstraps):
        x_, y_ = resample(x_train, y_train)
        y_pred[:, i] = model.fit(x_, y_).predict(x_test).ravel()

    polydegree[degree] = degree
    error[degree] = np.mean( np.mean((y_test - y_pred)**2, axis=1, keepdims=True) )
    bias[degree] = np.mean( (y_test - np.mean(y_pred, axis=1, keepdims=True))**2 )
    variance[degree] = np.mean( np.var(y_pred, axis=1, keepdims=True) )
    print('Polynomial degree:', degree)
    print('Error:', error[degree])
    print('Bias^2:', bias[degree])
    print('Var:', variance[degree])
    print('{} >= {} + {} = {}'.format(error[degree], bias[degree], variance[degree], bias[degree]+variance[degree]))

plt.plot(polydegree, error, label='Error')
plt.plot(polydegree, bias, label='Bias^2')
plt.plot(polydegree, variance, label='Variance')
plt.legend()
plt.show()
```
<!-- !split -->
## Summing up
The bias-variance tradeoff summarizes the fundamental tension in
machine learning, particularly supervised learning, between the
complexity of a model and the amount of training data needed to train
it. Since data is often limited, in practice it is often useful to
use a less-complex model with higher bias, that is a model whose asymptotic
performance is worse than another model because it is easier to
train and less sensitive to sampling noise arising from having a
finite-sized training dataset (smaller variance).
The above equations tell us that in
order to minimize the expected test error, we need to select a
statistical learning method that simultaneously achieves low variance
and low bias. Note that variance is inherently a nonnegative quantity,
and squared bias is also nonnegative. Hence, we see that the expected
test MSE can never lie below $Var(\epsilon)$, the irreducible error.
What do we mean by the variance and bias of a statistical learning
method? The variance refers to the amount by which our model would change if we
estimated it using a different training data set. Since the training
data are used to fit the statistical learning method, different
training data sets will result in a different estimate. But ideally the
estimate for our model should not vary too much between training
sets. However, if a method has high variance then small changes in
the training data can result in large changes in the model. In general, more
flexible statistical methods have higher variance.
## Another Example from Scikit-Learn's Repository
```
"""
============================
Underfitting vs. Overfitting
============================
This example demonstrates the problems of underfitting and overfitting and
how we can use linear regression with polynomial features to approximate
nonlinear functions. The plot shows the function that we want to approximate,
which is a part of the cosine function. In addition, the samples from the
real function and the approximations of different models are displayed. The
models have polynomial features of different degrees. We can see that a
linear function (polynomial with degree 1) is not sufficient to fit the
training samples. This is called **underfitting**. A polynomial of degree 4
approximates the true function almost perfectly. However, for higher degrees
the model will **overfit** the training data, i.e. it learns the noise of the
training data.
We evaluate quantitatively **overfitting** / **underfitting** by using
cross-validation. We calculate the mean squared error (MSE) on the validation
set, the higher, the less likely the model generalizes correctly from the
training data.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
def true_fun(X):
    return np.cos(1.5 * np.pi * X)
np.random.seed(0)
n_samples = 30
degrees = [1, 4, 15]
X = np.sort(np.random.rand(n_samples))
y = true_fun(X) + np.random.randn(n_samples) * 0.1
plt.figure(figsize=(14, 5))
for i in range(len(degrees)):
    ax = plt.subplot(1, len(degrees), i + 1)
    plt.setp(ax, xticks=(), yticks=())
    polynomial_features = PolynomialFeatures(degree=degrees[i],
                                             include_bias=False)
    linear_regression = LinearRegression()
    pipeline = Pipeline([("polynomial_features", polynomial_features),
                         ("linear_regression", linear_regression)])
    pipeline.fit(X[:, np.newaxis], y)
    # Evaluate the models using cross-validation
    scores = cross_val_score(pipeline, X[:, np.newaxis], y,
                             scoring="neg_mean_squared_error", cv=10)
    X_test = np.linspace(0, 1, 100)
    plt.plot(X_test, pipeline.predict(X_test[:, np.newaxis]), label="Model")
    plt.plot(X_test, true_fun(X_test), label="True function")
    plt.scatter(X, y, edgecolor='b', s=20, label="Samples")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.xlim((0, 1))
    plt.ylim((-2, 2))
    plt.legend(loc="best")
    plt.title("Degree {}\nMSE = {:.2e}(+/- {:.2e})".format(
        degrees[i], -scores.mean(), scores.std()))
plt.show()
```
## The one-dimensional Ising model
Let us bring back the Ising model again, but now with an additional
focus on Ridge and Lasso regression as well. We repeat some of the
basic parts of the Ising model and the setup of the training and test
data. The one-dimensional Ising model with nearest neighbor
interaction, no external field and a constant coupling constant $J$ is
given by
<!-- Equation labels as ordinary links -->
<div id="_auto17"></div>
$$
\begin{equation}
H = -J \sum_{k}^L s_k s_{k + 1},
\label{_auto17} \tag{27}
\end{equation}
$$
where $s_i \in \{-1, 1\}$ and $s_{N + 1} = s_1$. The number of spins in the system is determined by $L$. For the one-dimensional system there is no phase transition.
We will look at a system of $L = 40$ spins with a coupling constant of $J = 1$. To get enough training data we will generate 10000 states with their respective energies.
```
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
import scipy.linalg as scl
from sklearn.model_selection import train_test_split
import sklearn.linear_model as skl
import tqdm
sns.set(color_codes=True)
cmap_args=dict(vmin=-1., vmax=1., cmap='seismic')
L = 40
n = int(1e4)
spins = np.random.choice([-1, 1], size=(n, L))
J = 1.0
energies = np.zeros(n)
for i in range(n):
    energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1))
```
A more general form for the one-dimensional Ising model is
<!-- Equation labels as ordinary links -->
<div id="_auto18"></div>
$$
\begin{equation}
H = - \sum_j^L \sum_k^L s_j s_k J_{jk}.
\label{_auto18} \tag{28}
\end{equation}
$$
Here we allow for interactions beyond the nearest neighbors and a more
adaptive coupling matrix. This latter expression can be formulated as
a matrix-product on the form
<!-- Equation labels as ordinary links -->
<div id="_auto19"></div>
$$
\begin{equation}
H = X J,
\label{_auto19} \tag{29}
\end{equation}
$$
where $X_{jk} = s_j s_k$ and $J$ is the matrix consisting of the
elements $-J_{jk}$. This form of writing the energy fits perfectly
with the form utilized in linear regression, viz.
<!-- Equation labels as ordinary links -->
<div id="_auto20"></div>
$$
\begin{equation}
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}.
\label{_auto20} \tag{30}
\end{equation}
$$
We organize the data as we did above
```
X = np.zeros((n, L ** 2))
for i in range(n):
    X[i] = np.outer(spins[i], spins[i]).ravel()
y = energies
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.96)
X_train_own = np.concatenate(
(np.ones(len(X_train))[:, np.newaxis], X_train),
axis=1
)
X_test_own = np.concatenate(
(np.ones(len(X_test))[:, np.newaxis], X_test),
axis=1
)
```
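As a sanity check, we can verify numerically that the matrix form $H = XJ$ above reproduces the nearest-neighbour energies exactly. The sketch below regenerates a smaller, self-contained sample (the sizes and seed are our own choices) and builds the true coupling matrix with $-J$ on the periodic first off-diagonal:

```python
import numpy as np

rng = np.random.default_rng(42)
L, n, J = 40, 100, 1.0
spins = rng.choice([-1, 1], size=(n, L))
# Energies with periodic boundary conditions, as in the loop above
energies = -J * np.einsum('ij,ij->i', spins, np.roll(spins, 1, axis=1))

# Design matrix of all pair products s_j s_k, flattened row by row
X = np.einsum('ni,nj->nij', spins, spins).reshape(n, L * L)

# True coupling matrix: -J on the (periodic) first off-diagonal
J_true = np.zeros((L, L))
for k in range(L):
    J_true[k, (k + 1) % L] = -J

# H = X J reproduces the energies exactly
assert np.allclose(X @ J_true.ravel(), energies)
print("matrix form reproduces the energies")
```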
We will do all fitting with **Scikit-Learn**,
```
clf = skl.LinearRegression().fit(X_train, y_train)
```
When extracting the $J$-matrix we make sure to remove the intercept
```
J_sk = clf.coef_.reshape(L, L)
```
And then we plot the results
```
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_sk, **cmap_args)
plt.title("LinearRegression from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
```
The results agree perfectly with our previous discussion, where we used our own code.
## Ridge regression
Having explored the ordinary least squares we move on to ridge
regression. In ridge regression we include a **regularizer**. This
involves a new cost function which leads to a new estimate for the
weights $\boldsymbol{\beta}$. This results in a penalized regression problem. The
cost function is given by
<!-- Equation labels as ordinary links -->
<div id="_auto21"></div>
$$
\begin{equation}
C(\boldsymbol{X}, \boldsymbol{\beta}; \lambda) = (\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y}) + \lambda \boldsymbol{\beta}^T\boldsymbol{\beta}.
\label{_auto21} \tag{31}
\end{equation}
$$
```
_lambda = 0.1
clf_ridge = skl.Ridge(alpha=_lambda).fit(X_train, y_train)
J_ridge_sk = clf_ridge.coef_.reshape(L, L)
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_ridge_sk, **cmap_args)
plt.title("Ridge from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
```
## LASSO regression
In the **Least Absolute Shrinkage and Selection Operator** (LASSO)-method we get a third cost function.
<!-- Equation labels as ordinary links -->
<div id="_auto22"></div>
$$
\begin{equation}
C(\boldsymbol{X}, \boldsymbol{\beta}; \lambda) = (\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y})^T(\boldsymbol{X}\boldsymbol{\beta} - \boldsymbol{y}) + \lambda \sqrt{\boldsymbol{\beta}^T\boldsymbol{\beta}}.
\label{_auto22} \tag{32}
\end{equation}
$$
Finding the extremal point of this cost function is not so straight-forward as in least squares and ridge. We will therefore rely solely on the function ``Lasso`` from **Scikit-Learn**.
```
clf_lasso = skl.Lasso(alpha=_lambda).fit(X_train, y_train)
J_lasso_sk = clf_lasso.coef_.reshape(L, L)
fig = plt.figure(figsize=(20, 14))
im = plt.imshow(J_lasso_sk, **cmap_args)
plt.title("Lasso from Scikit-learn", fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
cb = fig.colorbar(im)
cb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)
plt.show()
```
It is quite striking how LASSO breaks the symmetry of the coupling
constant as opposed to ridge and OLS. We get a sparse solution with
$J_{j, j + 1} = -1$.
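A quick numerical check of this sparsity pattern, run on a smaller, freshly generated system (the system size, sample count, and seed below are our own choices, picked for speed): the total learned coupling for each nearest-neighbour bond should be close to $-1$, and everything off the periodic first off-diagonals should be driven to zero.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
L, n = 20, 2000
spins = rng.choice([-1, 1], size=(n, L))
energies = -np.einsum('ij,ij->i', spins, np.roll(spins, 1, axis=1))
X = np.einsum('ni,nj->nij', spins, spins).reshape(n, L * L)

J_lasso = Lasso(alpha=0.1).fit(X, energies).coef_.reshape(L, L)

# Total learned coupling per nearest-neighbour bond; the two symmetric
# entries J[k, k+1] and J[k+1, k] may share the weight
bonds = np.array([J_lasso[k, (k + 1) % L] + J_lasso[(k + 1) % L, k]
                  for k in range(L)])

# Mask selecting everything outside the diagonal and the periodic
# first off-diagonals: these couplings should be (near) zero
mask = np.ones((L, L), dtype=bool)
np.fill_diagonal(mask, False)
for k in range(L):
    mask[k, (k + 1) % L] = mask[(k + 1) % L, k] = False

print(bonds.round(2))
print(np.abs(J_lasso[mask]).max())
```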
## Performance as function of the regularization parameter
We now study how the different models perform for different values of $\lambda$.
```
lambdas = np.logspace(-4, 5, 10)
train_errors = {
"ols_sk": np.zeros(lambdas.size),
"ridge_sk": np.zeros(lambdas.size),
"lasso_sk": np.zeros(lambdas.size)
}
test_errors = {
"ols_sk": np.zeros(lambdas.size),
"ridge_sk": np.zeros(lambdas.size),
"lasso_sk": np.zeros(lambdas.size)
}
plot_counter = 1
fig = plt.figure(figsize=(32, 54))
for i, _lambda in enumerate(tqdm.tqdm(lambdas)):
    for key, method in zip(
        ["ols_sk", "ridge_sk", "lasso_sk"],
        [skl.LinearRegression(), skl.Ridge(alpha=_lambda), skl.Lasso(alpha=_lambda)]
    ):
        method = method.fit(X_train, y_train)
        train_errors[key][i] = method.score(X_train, y_train)
        test_errors[key][i] = method.score(X_test, y_test)
        omega = method.coef_.reshape(L, L)
        plt.subplot(10, 5, plot_counter)
        plt.imshow(omega, **cmap_args)
        plt.title(r"%s, $\lambda = %.4f$" % (key, _lambda))
        plot_counter += 1
plt.show()
```
We see that LASSO reaches a good solution for low
values of $\lambda$, but will "wither" when we increase $\lambda$ too
much. Ridge is more stable over a larger range of values for
$\lambda$, but eventually also fades away.
## Finding the optimal value of $\lambda$
To determine which value of $\lambda$ is best we plot the accuracy of
the models when predicting the training and the testing set. We expect
the accuracy of the training set to be quite good, but if the accuracy
of the testing set is much lower this tells us that we might be
subject to an overfit model. The ideal scenario is an accuracy on the
testing set that is close to the accuracy of the training set.
```
fig = plt.figure(figsize=(20, 14))
colors = {
"ols_sk": "r",
"ridge_sk": "y",
"lasso_sk": "c"
}
for key in train_errors:
    plt.semilogx(
        lambdas,
        train_errors[key],
        colors[key],
        label="Train {0}".format(key),
        linewidth=4.0
    )
for key in test_errors:
    plt.semilogx(
        lambdas,
        test_errors[key],
        colors[key] + "--",
        label="Test {0}".format(key),
        linewidth=4.0
    )
plt.legend(loc="best", fontsize=18)
plt.xlabel(r"$\lambda$", fontsize=18)
plt.ylabel(r"$R^2$", fontsize=18)
plt.tick_params(labelsize=18)
plt.show()
```
From the above figure we can see that LASSO with $\lambda = 10^{-2}$
achieves a very good accuracy on the test set. This by far surpasses the
other models for all values of $\lambda$.
## Further Exercises
### Exercise 1
We will generate our own dataset for a function $y(x)$ where $x \in [0,1]$ and defined by random numbers computed with the uniform distribution. The function $y$ is a quadratic polynomial in $x$ with added stochastic noise according to the normal distribution $\cal {N}(0,1)$.
The following simple Python instructions define our $x$ and $y$ values (with 100 data points).
```
x = np.random.rand(100,1)
y = 5*x*x+0.1*np.random.randn(100,1)
```
1. Write your own code (following the examples above) for computing the parametrization of the data set fitting a second-order polynomial.
2. Use thereafter **scikit-learn** (see again the examples in the regression slides) and compare with your own code.
3. Using scikit-learn, compute also the mean square error, a risk metric corresponding to the expected value of the squared (quadratic) error defined as
$$
MSE(\hat{y},\hat{\tilde{y}}) = \frac{1}{n}
\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2,
$$
and the $R^2$ score function.
If $\tilde{\hat{y}}_i$ is the predicted value of the $i$-th sample and $y_i$ is the corresponding true value, then the score $R^2$ is defined as
$$
R^2(\hat{y}, \tilde{\hat{y}}) = 1 - \frac{\sum_{i=0}^{n - 1} (y_i - \tilde{y}_i)^2}{\sum_{i=0}^{n - 1} (y_i - \bar{y})^2},
$$
where we have defined the mean value of $\hat{y}$ as
$$
\bar{y} = \frac{1}{n} \sum_{i=0}^{n - 1} y_i.
$$
You can use the functionality included in scikit-learn. If you feel
for it, you can use your own program and define functions which
compute the above two functions. Discuss the meaning of these
results. Try also to vary the coefficient in front of the added
stochastic noise term and discuss the quality of the fits.
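As a sketch of what parts 1–3 ask for (the variable names and seed below are our own), one can compare the MSE and $R^2$ written out from the definitions above against scikit-learn's implementations:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
x = rng.random((100, 1))
y = 5 * x * x + 0.1 * rng.standard_normal((100, 1))

# Second-order polynomial fit via the normal equations
X = np.hstack([np.ones_like(x), x, x ** 2])
beta = np.linalg.solve(X.T @ X, X.T @ y)
y_tilde = X @ beta

# MSE and R^2 written out directly from the definitions
mse = np.mean((y - y_tilde) ** 2)
r2 = 1.0 - np.sum((y - y_tilde) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Both agree with scikit-learn's implementations
assert np.isclose(mse, mean_squared_error(y, y_tilde))
assert np.isclose(r2, r2_score(y, y_tilde))
print(mse, r2)
```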
### Exercise 2, variance of the parameters $\beta$ in linear regression
Show that the variance of the parameters $\beta$ in the linear regression method (chapter 3, equation (3.8) of [Trevor Hastie, Robert Tibshirani, Jerome H. Friedman, The Elements of Statistical Learning, Springer](https://www.springer.com/gp/book/9780387848570)) is given as
$$
\mathrm{Var}(\hat{\beta}) = \left(\hat{X}^T\hat{X}\right)^{-1}\sigma^2,
$$
with
$$
\sigma^2 = \frac{1}{N-p-1}\sum_{i=1}^{N} (y_i-\tilde{y}_i)^2,
$$
where we have assumed that we fit a function of degree $p-1$ (for example a polynomial in $x$).
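A Monte Carlo sanity check of this result is straightforward (the design matrix, coefficients, and noise level below are illustrative choices; we use the known noise variance $\sigma^2$ rather than its estimate): the empirical variance of $\hat{\beta}$ over many noise realisations should match the diagonal of $(\hat{X}^T\hat{X})^{-1}\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3                           # fit a polynomial of degree p - 1 = 2
x = rng.random(n)
X = np.vander(x, p, increasing=True)    # columns: 1, x, x^2
beta_true = np.array([1.0, -2.0, 3.0])
sigma = 0.5

# Empirical variance of the OLS estimator over many noise draws
betas = np.empty((2000, p))
for r in range(2000):
    y = X @ beta_true + sigma * rng.standard_normal(n)
    betas[r] = np.linalg.solve(X.T @ X, X.T @ y)
emp_var = betas.var(axis=0)

# Theoretical result: diagonal of (X^T X)^{-1} sigma^2
theo_var = np.diag(np.linalg.inv(X.T @ X)) * sigma ** 2
print(emp_var / theo_var)               # ratios close to 1
```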
### Exercise 3
This exercise is a continuation of exercise 1. We will
use the same function to generate our data set, still staying with a
simple function $y(x)$ which we want to fit using linear regression,
but now extending the analysis to include the Ridge and the Lasso
regression methods. You can use the code under the Regression as an example on how to use the Ridge and the Lasso methods.
We will thus again generate our own dataset for a function $y(x)$ where
$x \in [0,1]$ and defined by random numbers computed with the uniform
distribution. The function $y$ is a quadratic polynomial in $x$ with
added stochastic noise according to the normal distribution $\cal{N}(0,1)$.
The following simple Python instructions define our $x$ and $y$ values (with 100 data points).
```
x = np.random.rand(100,1)
y = 5*x*x+0.1*np.random.randn(100,1)
```
1. Write your own code for the Ridge method and compute the parametrization for different values of $\lambda$. Compare and analyze your results with those from exercise 1. Study the dependence on $\lambda$ while also varying the strength of the noise in your expression for $y(x)$.
2. Repeat the above but using the functionality of **scikit-learn**. Compare your code with the results from **scikit-learn**. Remember to run with the same random numbers for generating $x$ and $y$.
3. Our next step is to study the variance of the parameters $\beta_1$ and $\beta_2$ (assuming that we are parametrizing our function with a second-order polynomial). We will use standard linear regression and Ridge regression. You can now opt for either writing your own function that calculates the variance of these parameters (recall that this is equal to the diagonal elements of the matrix $(\hat{X}^T\hat{X}+\lambda\hat{I})^{-1}$) or use the functionality of **scikit-learn** and compute their variances. Discuss the results of these variances as functions of $\lambda$.
4. Repeat the previous step but add now the Lasso method. Discuss your results and compare with standard regression and the Ridge regression results.
5. Try to implement the cross-validation as well.
6. Finally, using **scikit-learn** or your own code, compute also the mean square error, a risk metric corresponding to the expected value of the squared (quadratic) error defined as
$$
MSE(\hat{y},\hat{\tilde{y}}) = \frac{1}{n}
\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2,
$$
and the $R^2$ score function.
If $\tilde{\hat{y}}_i$ is the predicted value of the $i$-th sample and $y_i$ is the corresponding true value, then the score $R^2$ is defined as
$$
R^2(\hat{y}, \tilde{\hat{y}}) = 1 - \frac{\sum_{i=0}^{n - 1} (y_i - \tilde{y}_i)^2}{\sum_{i=0}^{n - 1} (y_i - \bar{y})^2},
$$
where we have defined the mean value of $\hat{y}$ as
$$
\bar{y} = \frac{1}{n} \sum_{i=0}^{n - 1} y_i.
$$
Discuss these quantities as functions of the variable $\lambda$ in the Ridge and Lasso regression methods.
### Exercise 4
We will study how
to fit polynomials to a specific two-dimensional function called
[Franke's
function](http://www.dtic.mil/dtic/tr/fulltext/u2/a081688.pdf). This
is a function which has been widely used when testing various interpolation and fitting
algorithms. Furthermore, after having established the model and the
method, we will employ resampling techniques such as cross-validation and/or
the bootstrap method in order to perform a proper assessment of our models.
The Franke function, which is a weighted sum of four exponentials, reads as follows
$$
\begin{align*}
f(x,y) &= \frac{3}{4}\exp{\left(-\frac{(9x-2)^2}{4} - \frac{(9y-2)^2}{4}\right)}+\frac{3}{4}\exp{\left(-\frac{(9x+1)^2}{49}- \frac{(9y+1)}{10}\right)} \\
&+\frac{1}{2}\exp{\left(-\frac{(9x-7)^2}{4} - \frac{(9y-3)^2}{4}\right)} -\frac{1}{5}\exp{\left(-(9x-4)^2 - (9y-7)^2\right) }.
\end{align*}
$$
The function will be defined for $x,y\in [0,1]$. Our first step will
be to perform an OLS regression analysis of this function, trying out
a polynomial fit with an $x$ and $y$ dependence of the form $[x, y,
x^2, y^2, xy, \dots]$. We will also include cross-validation and
bootstrap as resampling techniques. As in homeworks 1 and 2, we
can use a uniform distribution to set up the arrays of values for $x$
and $y$, or, as in the example below, just fixed values for $x$ and $y$ with a given step size.
In this case we will have two predictors and need to fit a
function (for example a polynomial) of $x$ and $y$. Thereafter we will
repeat much of the same procedure using the Ridge and
Lasso regression methods, introducing thus a dependence on the bias
(penalty) $\lambda$.
The Python function for the Franke function is included here (it also produces a three-dimensional plot of it)
```
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
from random import random, seed
fig = plt.figure()
ax = fig.gca(projection='3d')
# Make data.
x = np.arange(0, 1, 0.05)
y = np.arange(0, 1, 0.05)
x, y = np.meshgrid(x,y)
def FrankeFunction(x,y):
    term1 = 0.75*np.exp(-(0.25*(9*x-2)**2) - 0.25*((9*y-2)**2))
    term2 = 0.75*np.exp(-((9*x+1)**2)/49.0 - 0.1*(9*y+1))
    term3 = 0.5*np.exp(-(9*x-7)**2/4.0 - 0.25*((9*y-3)**2))
    term4 = -0.2*np.exp(-(9*x-4)**2 - (9*y-7)**2)
    return term1 + term2 + term3 + term4
z = FrankeFunction(x, y)
# Plot the surface.
surf = ax.plot_surface(x, y, z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# Customize the z axis.
ax.set_zlim(-0.10, 1.40)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
# Add a color bar which maps values to colors.
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
```
We will thus again generate our own dataset for a function $\mathrm{FrankeFunction}(x,y)$ where
$x,y \in [0,1]$ could be defined by random numbers computed with the uniform
distribution. The function $f(x,y)$ is the Franke function. You should also explore the addition
of stochastic noise to this function using the normal distribution $\cal{N}(0,1)$.
Write your own code (using either a matrix inversion or a singular value decomposition from e.g., **numpy** ) or use your code from exercises 1 and 3
and perform a standard least square regression analysis using polynomials in $x$ and $y$ up to fifth order. Find the confidence intervals of the parameters $\beta$ by computing their variances, evaluate the Mean Squared error (MSE)
$$
MSE(\hat{y},\hat{\tilde{y}}) = \frac{1}{n}
\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2,
$$
and the $R^2$ score function.
If $\tilde{\hat{y}}_i$ is the predicted value of the $i$-th sample and $y_i$ is the corresponding true value, then the score $R^2$ is defined as
$$
R^2(\hat{y}, \tilde{\hat{y}}) = 1 - \frac{\sum_{i=0}^{n - 1} (y_i - \tilde{y}_i)^2}{\sum_{i=0}^{n - 1} (y_i - \bar{y})^2},
$$
where we have defined the mean value of $\hat{y}$ as
$$
\bar{y} = \frac{1}{n} \sum_{i=0}^{n - 1} y_i.
$$
Perform a resampling of the data where you split the data in training data and test data. Implement the $k$-fold cross-validation algorithm and/or the bootstrap algorithm
and evaluate again the MSE and the $R^2$ functions resulting from the test data. Evaluate also the bias and variance of the final models.
Write then your own code for the Ridge method, either using matrix
inversion or the singular value decomposition as done for standard OLS. Perform the same analysis as in the
previous exercise (for the same polynomials and include resampling
techniques) but now for different values of $\lambda$. Compare and
analyze your results with those obtained with standard OLS. Study the
dependence on $\lambda$ while also varying eventually the strength of
the noise in your expression for $\mathrm{FrankeFunction}(x,y)$.
Then perform the same studies but now with Lasso regression. Use the functionalities of
**scikit-learn**. Give a critical discussion of the three methods and a
judgement of which model fits the data best.
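A possible starting point for setting up the two-dimensional design matrix $[1, x, y, x^2, xy, y^2, \dots]$ up to a given degree (the helper name `create_X` is our own):

```python
import numpy as np

def create_X(x, y, degree):
    """Design matrix containing all monomials x^i y^j with i + j <= degree."""
    x, y = np.ravel(x), np.ravel(y)
    cols = [x ** i * y ** (d - i)
            for d in range(degree + 1)
            for i in range(d + 1)]
    return np.column_stack(cols)

xv = np.arange(0, 1, 0.05)
yv = np.arange(0, 1, 0.05)
xm, ym = np.meshgrid(xv, yv)
X = create_X(xm, ym, degree=5)
print(X.shape)   # 20*20 = 400 grid points, (5+1)(5+2)/2 = 21 monomials
```

The number of columns for degree $d$ is $(d+1)(d+2)/2$, the number of monomials with combined degree at most $d$.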
## FlightPredict package management
Run these cells only when you need to install the package or update it. Otherwise go directly to the next section.
1. !pip show flightPredict: provides information about the flightPredict package
2. !pip uninstall --yes flightPredict: uninstall the flight predict package. Run before installing a new version of the package
3. !pip install --user --exists-action=w --egg git+https://github.com/ibm-watson-data-lab/simple-data-pipe-connector-flightstats.git#egg=flightPredict: Install the flightPredict package directly from GitHub
```
!pip show flightPredict
!pip uninstall --yes flightPredict
!pip install --user --exists-action=w --egg git+https://github.com/ibm-watson-data-lab/simple-data-pipe-connector-flightstats.git#egg=flightPredict
```
# Import required python package and set the Cloudant credentials
flightPredict is a helper package used to load data into an RDD of LabeledPoint objects
```
%matplotlib inline
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.linalg import Vectors
from numpy import array
import numpy as np
import math
from datetime import datetime
from dateutil import parser
import flightPredict
sqlContext=SQLContext(sc)
flightPredict.sqlContext = sqlContext
flightPredict.cloudantHost='XXXX'
flightPredict.cloudantUserName='XXXX'
flightPredict.cloudantPassword='XXXX'
flightPredict.weatherUrl='https://XXXX:XXXXb@twcservice.mybluemix.net'
```
# load data from training data set and print the schema
```
dbName = "flightstats_training_data_for_flight_delay_predictive_app_mega_set"
cloudantdata = flightPredict.loadDataSet(dbName,"training")
cloudantdata.printSchema()
cloudantdata.count()
```
# Visualize classes in scatter plot based on 2 features
```
flightPredict.scatterPlotForFeatures(cloudantdata, \
"departureWeather.temp","arrivalWeather.temp","Departure Airport Temp", "Arrival Airport Temp")
flightPredict.scatterPlotForFeatures(cloudantdata,\
"departureWeather.pressure","arrivalWeather.pressure","Departure Airport Pressure", "Arrival Airport Pressure")
flightPredict.scatterPlotForFeatures(cloudantdata,\
"departureWeather.wspd","arrivalWeather.wspd","Departure Airport Wind Speed", "Arrival Airport Wind Speed")
```
# Load the training data as an RDD of LabeledPoint
```
computeClassification = (lambda deltaDeparture: 0 if deltaDeparture<13 else (1 if deltaDeparture < 41 else 2))
def customFeatureHandler(s):
    if(s==None):
        return ["departureTime"]
    dt=parser.parse(s.departureTime)
    features=[]
    for i in range(0,7):
        features.append(1 if dt.weekday()==i else 0)
    return features
computeClassification=None
customFeatureHandler=None
numClasses = 5
trainingData = flightPredict.loadLabeledDataRDD("training", computeClassification, customFeatureHandler)
trainingData.take(5)
```
# Train multiple classification models
```
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
logRegModel = LogisticRegressionWithLBFGS.train(trainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, iterations=100, validateData=False, intercept=True)
print(logRegModel)
from pyspark.mllib.classification import NaiveBayes
#NaiveBayes requires non negative features, set them to 0 for now
modelNaiveBayes = NaiveBayes.train(trainingData.map(lambda lp: LabeledPoint(lp.label, \
np.fromiter(map(lambda x: x if x>0.0 else 0.0,lp.features.toArray()),dtype=np.int)\
))\
)
print(modelNaiveBayes)
from pyspark.mllib.tree import DecisionTree
modelDecisionTree = DecisionTree.trainClassifier(trainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, numClasses=numClasses, categoricalFeaturesInfo={})
print(modelDecisionTree)
from pyspark.mllib.tree import RandomForest
modelRandomForest = RandomForest.trainClassifier(trainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, numClasses=numClasses, categoricalFeaturesInfo={},numTrees=100)
print(modelRandomForest)
```
# Load Blind data from Cloudant database
```
dbTestName="flightstats_test_data_for_flight_delay_predictive_app"
testCloudantdata = flightPredict.loadDataSet(dbTestName,"test")
testCloudantdata.count()
testData = flightPredict.loadLabeledDataRDD("test",computeClassification, customFeatureHandler)
flightPredict.displayConfusionTable=True
flightPredict.runMetrics(trainingData,modelNaiveBayes,modelDecisionTree,logRegModel,modelRandomForest)
```
# Run the predictive model
runModel(departureAirportCode, departureDateTime, arrivalAirportCode, arrivalDateTime)
Note: all DateTime must use UTC format
```
from flightPredict import run
run.useModels(modelDecisionTree,modelRandomForest)
run.runModel('BOS', "2016-02-08 20:15-0500", 'LAX', "2016-01-08 22:30-0500" )
rdd = sqlContext.sql("select deltaDeparture from training").map(lambda s: s.deltaDeparture)\
.filter(lambda s: s < 50 and s > 12)
print(rdd.count())
histo = rdd.histogram(50)
#print(histo[0])
#print(histo[1])
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
bins = [i for i in histo[0]]
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2.5, plSize[1]*2) )
plt.ylabel('Number of records')
plt.xlabel('Bin')
plt.title('Histogram')
intervals = [abs(j-i) for i,j in zip(bins[:-1], bins[1:])]
values=[sum(intervals[:i]) for i in range(0,len(intervals))]
plt.bar(values, histo[1], intervals, color='b', label = "Bins")
plt.xticks(bins[:-1],[int(i) for i in bins[:-1]])
plt.legend()
plt.show()
```
```
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import pandas as pd
from Bio import Entrez
from Rules_Class import Rules
import functions as fn
from sklearn.metrics import confusion_matrix
import os
import sys
start=0
end=10
input_directory=os.path.realpath('../Data')
result_directory=os.path.realpath('../Data')
list_raw=pd.read_csv(input_directory+"/trrust_conflict.csv",sep=',',header=(0),dtype=object)
list_raw=list_raw.iloc[start:end,]
list_raw_result=pd.DataFrame(columns=['src_entrez','trg_entrez','srcname','trgname','find_pmid','all_pmids','mode','score','evi_pmid','evi_sent','report'])
print(list_raw)
Positive=[]
[Positive.append(line.strip().upper()) for line in open(input_directory+"/Positive.txt")]
Negative=[]
[Negative.append(line.strip().upper()) for line in open(input_directory+"/Negative.txt")]
genes_ents=input_directory + "/ALL_Human_Genes_Info.csv" #NCBI
genes=pd.read_csv(genes_ents,sep=',',header=(0))
genes.fillna('', inplace=True)
lookup_ids=pd.read_csv(input_directory+"/ncbi_id_lookup.csv",sep='\t',header=(0))
for row in list_raw.itertuples():
    #try:
    query_id=[int(row.srcentz),int(row.trgentz)] #NCBI ID MUST [TF,Target]
    query_genes=[row.srcname,row.trgname] #Symbol MUST [TF,Target]
    print(query_genes)
    query_genes, query_id, single_gene, single_id = fn.make_query(genes,lookup_ids,query_genes,query_id)
    status=''
    mesh="humans"
    PubIDs=[]  # make sure PubIDs is defined even if the Entrez query fails
    try:
        myterm=fn.term_maker(single_gene,genes,mesh)
        #### ESearch: Searching the Entrez databases
        Entrez.email="saman.farahmand001@umb.edu"
        handle=Entrez.esearch(db="pubmed", term=myterm, retmax=100000000)
        record=Entrez.read(handle)
        PubIDs = record["IdList"]
    except:
        status+="Entrez Fetch Error|||"
        #print(status)
        list_raw_result=list_raw_result.append({'src_entrez':single_id[0],'trg_entrez':single_id[1],'srcname':single_gene[0],'trgname':single_gene[1],'find_pmid':None,'all_pmids':None,'mode':None,'score':None,'evi_pmid':None,'evi_sent':None,'report':status},ignore_index=True)
    if(len(PubIDs)>0):
        print(len(PubIDs))
        sum_ranks=[]
        evi_sentence=[]
        evi_pmids=[]
        all_pmids=';'.join(PubIDs)
        for PubID in PubIDs:
            abstract=''
            ranks=[]
            annot_df=pd.DataFrame(columns=['type','id','text','offset','length'])
            try:
                annot_df, abstract=fn.pubtator_annot(annot_df,PubID)
            except:
                abstract=fn.ret_abstract(PubID)
                if(abstract=='?'):
                    #print("PMID=["+PubID+"] does not exist any more!")
                    continue # remove it from the output results in TRRUST
                else:
                    status+="PMID=["+PubID+"] PubTator response is not readable, try to annotate manually..."
                    #print(status)
            # try:
            #     beCAS_lookup_full=fn.beCAS_lookup(PubID,query_id)
            #     beCAS_lookup=beCAS_lookup_full[['type','id','text','offset','length']]
            #     annot_df=pd.concat([annot_df,beCAS_lookup], ignore_index=True)
            # except:
            #     status+="beCAS Server error|||"
            lookup_results=fn.lookup_annot(abstract,query_genes,query_id,lookup_ids)
            annot_df=annot_df.append(lookup_results)
            # surface_annot=fn.surface_similarity(abstract, genes, query_genes, query_id,lookup_ids,single_id)
            # annot_df=annot_df.append(surface_annot)
            annot_df=annot_df.drop_duplicates(subset=['id','offset'])
            annot_df=fn.multiple_taxonomy(annot_df, query_id)
            annot_df=annot_df.reset_index(drop=True)
            candidate_sentences, covered=fn.candidate_sentence(annot_df,abstract,query_id)
            if(len(candidate_sentences.index)==0):
                status+="PMID=["+PubID+"] No co-existing sentences found in the abstract...!"
                #print(status)
                continue
            for sentence in candidate_sentences.itertuples():
                obj=Rules(Positive,Negative,annot_df,covered,abstract,query_genes,query_id,sentence)
                depgraphs=fn.dep_parser('8000',sentence,annot_df,query_id,single_id,Positive,Negative,2)
                if(depgraphs):
                    try:
                        obj.multiplication_score(depgraphs, single_id)
                    except:
                        status+="PMID=["+PubID+"] dependency graph score error...!"
                else:
                    status+="PMID=["+PubID+"] dependency graph co-occurrence of single ids error...!"
                    continue
                #obj.search_ranking()
                ranks.append(obj.rank)
                if(obj.rank!=0):
                    evi_sentence.append('['+PubID+']'+sentence.sentence)
                    evi_pmids.append(PubID)
            if(len(ranks)!=0):
                sum_ranks.append(sum(ranks))
        mode=''
        rank_T=sum(sum_ranks)
        if(rank_T>0):
            mode='positive'
        if(rank_T<0):
            mode='negative'
        evi_sentence='|||'.join(evi_sentence)
        evi_pmids=';'.join(evi_pmids)
        # report the number of PMIDs found, not the length of the joined string
        list_raw_result=list_raw_result.append({'src_entrez':single_id[0],'trg_entrez':single_id[1],'srcname':single_gene[0],'trgname':single_gene[1],'find_pmid':str(len(PubIDs)),'all_pmids':all_pmids,'mode':mode,'score':str(rank_T),'evi_pmid':evi_pmids,'evi_sent':evi_sentence,'report':status},ignore_index=True)
    else:
        status+="Not found any PMIDs for this interaction"
        #print(status)
        list_raw_result=list_raw_result.append({'src_entrez':single_id[0],'trg_entrez':single_id[1],'srcname':single_gene[0],'trgname':single_gene[1],'find_pmid':'','all_pmids':'','mode':'','score':'','evi_pmid':'','evi_sent':'','report':status},ignore_index=True)
    #except:
    #    print("general Error!!")
    #    continue
# to_csv needs a single-character separator, and start/end must be converted to strings
list_raw_result.to_csv(result_directory+"/"+"output_"+str(start)+"_"+str(end)+".csv", sep='$')
```
# Counting Objects: Part III
In our [last notebook](https://github.com/JoshVarty/ImageClassification/blob/master/3_CountingAgain.ipynb) we saw that with enough data and sensible transforms we can train convolutional neural networks to classify pictures according to the number of circles in them. Even after adding circles of various sizes our network had no challenge reaching 100% accuracy.
The problem with approaching this problem as a classification problem is that it doesn't handle classes outside of the ones we've trained it on. In our last example we classified images with 45-49 objects in them. If we showed this network an image with 30 elements in it, it wouldn't know what to do.
For this reason, this problem is likely better formulated as a regression problem.
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.vision import *
from fastai.metrics import mean_squared_error
import random
import numpy as np
import matplotlib.pyplot as plt
import os
from tqdm import tnrange
def createNonOverlappingPoints(numElements):
    x = np.zeros((numElements)) + 2  #Place the circles offscreen
    y = np.zeros((numElements)) + 2  #Place the circles offscreen
    for i in range(0, numElements):
        #Loop until we find a non-overlapping position for our new circle
        while True:
            randX = random.uniform(-1, 1)
            randY = random.uniform(-1, 1)
            distanceFromOtherCircles = np.sqrt(np.square(x - randX) + np.square(y - randY))
            # Ensure that this circle is far enough away from all others
            if np.all(distanceFromOtherCircles > 0.2):
                break
        x[i] = randX
        y[i] = randY
    return x, y
def generateImages(numberOfPoints):
directory = 'data/counting/' + str(numberOfPoints) + '/'
os.makedirs(directory, exist_ok=True)
    #Create 100 images of this class
for j in range(100):
path = directory + str(j) + '.png'
#Get points
x, y = createNonOverlappingPoints(numberOfPoints)
#Create plot
plt.clf()
axes = plt.gca()
axes.set_xlim([-2,2])
axes.set_ylim([-2, 2])
plt.scatter(x,y, s=200)
        axes.set_aspect('equal', 'datalim')  # use the axes we already have; plt.axes() may create a new one
#Save to disk
plt.savefig(path)
for i in tnrange(1, 50):
generateImages(i)
path = 'data/counting'
np.random.seed(42)
def parseObjectCount(x):
return float(str(x).split('/')[2])
# Only transform is flipping
transforms = get_transforms(flip_vert=True,
max_zoom=0,
max_rotate=0,
max_lighting=0,
max_warp=0,
p_affine=0,
p_lighting=0)
#Hack, I don't know how to remove crop and pad
flip_transform = transforms[0][1]
only_flip_transforms = [[flip_transform],[]]
print(only_flip_transforms)
data = (ImageItemList.from_folder(path)
.random_split_by_pct(seed=42)
.label_from_func(parseObjectCount, label_cls=FloatList)
.transform(only_flip_transforms, size=224)
.databunch())
print(help(data))
data.normalize(imagenet_stats)
data.show_batch(rows=3, figsize=(7,8))
learner = create_cnn(data, models.resnet34, metrics=mean_squared_error)
learner.unfreeze()
learner.lr_find()
learner.recorder.plot()
learner.fit_one_cycle(30, max_lr=slice(1e-5, 1e-2))
def generateTestImages(numberOfPoints):
directory = 'data/counting/test/'
os.makedirs(directory, exist_ok=True)
path = directory + str(numberOfPoints) + '.png'
#Get points
x, y = createNonOverlappingPoints(numberOfPoints)
#Create plot
plt.clf()
axes = plt.gca()
axes.set_xlim([-2,2])
axes.set_ylim([-2, 2])
plt.scatter(x,y, s=200)
    axes.set_aspect('equal', 'datalim')  # use the axes we already have; plt.axes() may create a new one
#Save to disk
plt.savefig(path)
plt.clf()
generateTestImages(10)
img = open_image('data/counting/test/10.png')
img
pred_class, pred_ix, outputs = learner.predict(img)
pred_class
generateTestImages(2)
img = open_image('data/counting/test/2.png')
pred_class, pred_ix, outputs = learner.predict(img)
pred_class
generateTestImages(50)
img = open_image('data/counting/test/50.png')
pred_class, pred_ix, outputs = learner.predict(img)
pred_class
generateTestImages(65)
img = open_image('data/counting/test/65.png')
pred_class, pred_ix, outputs = learner.predict(img)
pred_class
```
| github_jupyter |
<table width = "100%">
<tr style="background-color:white;">
<!-- QWorld Logo -->
<td style="text-align:left; padding:0px; width:200px;">
<img src="images/QWorld.png"> </td>
<!-- Padding -->
<td width="*">     </td>
<td style="padding:0px;width:250px;"></td>
<!-- Social Media Links -->
<td> <a href="https://qworld.net" target="_blank">
<img align="right" src="images/Website.png" style="width:50px;" > </a></td>
<td> <a href="https://discord.gg/akCvr7U87g" target="_blank">
<img align="right" src="images/Discord.png" style="width:50px;"> </a></td>
<td> <a href="https://twitter.com/QWorld19" target="_blank">
<img align="right" src="images/Twitter.png" width=50 > </a></td>
<td> <a href="https://www.youtube.com/qworld19" target="_blank">
<img align="right" src="images/YouTube.png" width=50 > </a></td>
<td> <a href="https://www.facebook.com/qworld19/" target="_blank">
<img align="right" src="images/Facebook.png" width=50 > </a></td>
<td> <a href="https://www.linkedin.com/company/qworld19/" target="_blank">
<img align="right" src="images/LinkedIn.png" width=50 > </a></td>
</tr>
</table>
<hr>
## Credits
Most of this tutorial is developed during [QIntern 2021](https://qworld.net/qintern-2021/) organized by [QWorld](https://qworld.net/) under the project "Educational material development for quantum annealing". We would like to thank the organizers for this event.
The project is led by [Dr. Özlem Salehi Köken](https://www.cmpe.boun.edu.tr/~ozlem.salehi/). The notebooks are created by [Akash Narayanan B](https://gitlab.com/AkashNarayanan), [Paul Joseph Robin](https://gitlab.com/pjr1363), [Sabah Ud Din Ahmad](https://gitlab.com/sabahuddin.ahmad) and [Sourabh Nutakki](https://gitlab.com/Sourabh499). [Manan Sood](https://gitlab.com/Manan-Sood) contributed to one notebook.
---
## References
[D-Wave Documentation](https://docs.dwavesys.com/docs/latest/c_gs_1.html)
Glover, Fred, Gary Kochenberger, and Yu Du. "[Quantum Bridge Analytics I: a tutorial on formulating and using QUBO models.](http://meta-analytics.net/wp-content/uploads/2020/08/Quantum-Bridge-Analytics-Part-1-Glover2019.pdf)" 4OR 17.4 (2019): 335-371.
Lucas, Andrew. "[Ising formulations of many NP problems.](https://www.frontiersin.org/articles/10.3389/fphy.2014.00005/full)" Frontiers in physics 2 (2014): 5.
McGeoch, Catherine C. "[Adiabatic quantum computation and quantum annealing: Theory and practice.](https://ieeexplore.ieee.org/document/7055969)" Synthesis Lectures on Quantum Computing 5.2 (2014): 1-93.
Salehi, Özlem, Adam Glos, and Jarosław Adam Miszczak. "[Unconstrained Binary Models of the Travelling Salesman Problem Variants for Quantum Optimization.](https://arxiv.org/pdf/2106.09056.pdf)" arXiv preprint arXiv:2106.09056 (2021).
| github_jupyter |
# Latent Semantic Indexing
Here, we apply the technique *Latent Semantic Indexing* to capture the similarity of words. We are given a list of words and their frequencies in 9 documents, found on [GitHub](https://github.com/ppham27/MLaPP-solutions/blob/master/chap12/lsiDocuments.pdf).
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
plt.rcParams['font.size'] = 16
words_list = list()
with open('lsiWords.txt') as f:
for line in f:
words_list.append(line.strip())
words = pd.Series(words_list, name="words")
word_frequencies = pd.read_csv('lsiMatrix.txt', sep=' ', index_col=False,
header=None, names=words)
word_frequencies.T.head(20)
```
Now as per part (a), we compute the SVD and use the first two singular values. Recall the model is that
\begin{equation}
\mathbf{x} \sim \mathcal{N}\left(W\mathbf{z},\Psi\right),
\end{equation}
where $\Psi$ is diagonal. If the SVD is $X = UDV^\intercal,$ $W$ will be the first two columns of $V$.
```
X = word_frequencies.to_numpy().astype(np.float64)  # .as_matrix() was removed in newer pandas
U, D, V = np.linalg.svd(X.T) # in matlab the matrix is read in as its transpose
```
In this way, we let $Z = UD$, so $X = ZV^\intercal$. Now, let $\tilde{Z}$ be the approximation from using 2 singular values, so $\tilde{X} = \tilde{Z}W^\intercal$, and $\tilde{Z} = \tilde{U}\tilde{D}$. For some reason, the textbook chooses not to scale by $\tilde{D}$, so we just have $\tilde{U}$. Recall that the roles of the variables are swapped because we used the transpose.
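As a quick sanity check on this truncation (a sketch on a small synthetic matrix, not the alien documents), the rank-2 reconstruction error is exactly the norm of the discarded singular values:

```python
import numpy as np

# Synthetic stand-in for a term-document matrix (values are made up).
rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(9, 20)).astype(np.float64)

U, D, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
X_tilde = U[:, :k] @ np.diag(D[:k]) @ Vt[:k, :]   # best rank-2 approximation

# Eckart-Young: the Frobenius error equals the norm of the dropped singular values.
err = np.linalg.norm(X - X_tilde)
print(np.isclose(err, np.linalg.norm(D[k:])))   # True
```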
```
Z = V.T[:,:2]
Z
```
Now, let's plot these results.
```
plt.figure(figsize=(8,8))
def plot_latent_variables(Z, ax=None):
if ax == None:
ax = plt.gca()
ax.plot(Z[:,0], Z[:,1], 'o', markerfacecolor='none')
for i in range(len(Z)):
ax.text(Z[i,0] + 0.005, Z[i,1], i,
verticalalignment='center')
ax.set_xlabel('$z_1$')
ax.set_ylabel('$z_2$')
ax.set_title('PCA with $L = 2$ for Alien Documents')
ax.grid(True)
plot_latent_variables(Z)
plt.show()
```
I, respectfully, disagree with the book for this reason. The optimal latent representation $Z = XW$ (observations are rows here), should be chosen such that
\begin{equation}
J(W,Z) = \frac{1}{N}\left\lVert X - ZW^\intercal\right\rVert^2
\end{equation}
is minimized, where $W$ is orthonormal.
```
U, D, V = np.linalg.svd(X)
V = V.T  # NumPy's svd factors X = U D V^T, i.e. it returns V already transposed
```
By section 12.2.3 of the book, $W$ is the first $2$ columns of $V$. Thus, our actual plot should be below.
```
W = V[:,:2]
Z = np.dot(X, W)
plt.figure(figsize=(8,8))
ax = plt.gca();
plot_latent_variables(Z, ax=ax)
ax.set_aspect('equal')
plt.show()
```
Note that this is very similar, with the $y$-axis flipped. That part does not actually matter. What matters is the scaling by the singular values: before that scaling, the proximity of points may not mean much if the corresponding singular value is actually very large.
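To make the relationship concrete, here is a small check (on random data, independent of the matrices above) that the projected scores $Z = XW$ really are the left singular vectors scaled by their singular values, $U_k D_k$ — exactly the scaling the book's plot omits:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(9, 20))   # 9 observations (rows), 20 features

U, D, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt.T[:, :2]                # first two right singular vectors

# X W = U D V^T V_k = U_k D_k: the latent scores are U scaled column-wise by D.
Z = X @ W
print(np.allclose(Z, U[:, :2] * D[:2]))   # True
```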
Now, the second part asks us to see if we can properly identify documents related to abductions by using a document with the single word *abducted* as a probe.
```
probe_document = np.zeros_like(words, dtype=np.float64)
abducted_idx = (words == 'abducted').to_numpy()
probe_document[abducted_idx] = 1
X[0:3,abducted_idx]
```
Note that despite the first document being about abductions, it doesn't contain the word *abducted*.
Let's look at the latent variable representation. We'll use cosine similarity to account for the difference in magnitude.
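For reference, the cosine similarity between the probe's latent vector $\mathbf{z}$ and a document's latent vector $\mathbf{z}_i$ is
\begin{equation}
\operatorname{sim}(\mathbf{z}, \mathbf{z}_i) = \frac{\mathbf{z}^\intercal \mathbf{z}_i}{\lVert\mathbf{z}\rVert\,\lVert\mathbf{z}_i\rVert},
\end{equation}
which is what `1 - distance.cosine(z, Z[i,:])` computes.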
```
from scipy.spatial import distance
z = np.dot(probe_document, W)
similarities = list(map(lambda i : (i, 1 - distance.cosine(z,Z[i,:])), range(len(Z))))
similarities.sort(key=lambda similarity_tuple : -similarity_tuple[1])
similarities
```
Indeed, we find the three alien abduction documents, $0$, $2$, and $1$ are most similar to our probe.
| github_jupyter |
# Gluon example with DALI
## Overview
This is a modified [DCGAN example](https://gluon.mxnet.io/chapter14_generative-adversarial-networks/dcgan.html), which uses DALI for reading and augmenting images.
## Sample
```
import os.path
import matplotlib as mpl
import tarfile
import matplotlib.image as mpimg
from matplotlib import pyplot as plt
import mxnet as mx
from mxnet import gluon
from mxnet import ndarray as nd
from mxnet.gluon import nn, utils
from mxnet import autograd
import numpy as np
epochs = 10 # Set low by default for tests, set higher when you actually run this code.
batch_size = 64
latent_z_size = 100
use_gpu = True
ctx = mx.gpu() if use_gpu else mx.cpu()
lr = 0.0002
beta1 = 0.5
lfw_url = 'http://vis-www.cs.umass.edu/lfw/lfw-deepfunneled.tgz'
data_path = 'lfw_dataset'
if not os.path.exists(data_path):
os.makedirs(data_path)
data_file = utils.download(lfw_url)
with tarfile.open(data_file) as tar:
tar.extractall(path=data_path)
target_wd = 64
target_ht = 64
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
import numpy as np
class HybridPipe(Pipeline):
def __init__(self, batch_size, num_threads, device_id):
super(HybridPipe, self).__init__(batch_size,
num_threads,
device_id,
seed = 12)
self.input = ops.FileReader(file_root=data_path + "/lfw-deepfunneled/", random_shuffle = True,
pad_last_batch=True)
self.decode = ops.ImageDecoder(device = "mixed", output_type = types.RGB)
self.resize = ops.Resize(device = "gpu",
resize_x = target_wd, resize_y = target_ht,
interp_type = types.INTERP_LINEAR)
self.rotate = ops.Rotate(device = "gpu", interp_type = types.INTERP_LINEAR)
self.cmnp = ops.CropMirrorNormalize(device = "gpu",
dtype = types.FLOAT,
crop = (target_wd, target_ht),
mean = [127.5, 127.5, 127.5],
std = [127.5, 127.5, 127.5])
self.uniform = ops.random.Uniform(range = (-10., 10.))
self.iter = 0
def define_graph(self):
inputs, labels = self.input(name = "Reader")
images = self.decode(inputs)
angle = self.uniform()
images = self.resize(images)
images = self.rotate(images, angle = angle)
output = self.cmnp(images)
return (output)
def iter_setup(self):
pass
pipe = HybridPipe(batch_size=batch_size, num_threads=4, device_id = 0)
pipe.build()
pipe_out = pipe.run()
pipe_out_cpu = pipe_out[0].as_cpu()
img_chw = pipe_out_cpu.at(20)
%matplotlib inline
plt.imshow((np.transpose(img_chw, (1,2,0))+1.0)/2.0)
from nvidia.dali.plugin.mxnet import DALIGenericIterator, LastBatchPolicy
# recreate pipeline to avoid mixing simple with iterator API
pipe = HybridPipe(batch_size=batch_size, num_threads=4, device_id = 0)
pipe.build()
dali_iter = DALIGenericIterator(pipe, [("data", DALIGenericIterator.DATA_TAG)],
reader_name="Reader", last_batch_policy=LastBatchPolicy.PARTIAL)
# build the generator
nc = 3
ngf = 64
netG = nn.Sequential()
with netG.name_scope():
# input is Z, going into a convolution
netG.add(nn.Conv2DTranspose(ngf * 8, 4, 1, 0, use_bias=False))
netG.add(nn.BatchNorm())
netG.add(nn.Activation('relu'))
# state size. (ngf*8) x 4 x 4
netG.add(nn.Conv2DTranspose(ngf * 4, 4, 2, 1, use_bias=False))
netG.add(nn.BatchNorm())
netG.add(nn.Activation('relu'))
    # state size. (ngf*4) x 8 x 8
netG.add(nn.Conv2DTranspose(ngf * 2, 4, 2, 1, use_bias=False))
netG.add(nn.BatchNorm())
netG.add(nn.Activation('relu'))
    # state size. (ngf*2) x 16 x 16
netG.add(nn.Conv2DTranspose(ngf, 4, 2, 1, use_bias=False))
netG.add(nn.BatchNorm())
netG.add(nn.Activation('relu'))
    # state size. (ngf) x 32 x 32
netG.add(nn.Conv2DTranspose(nc, 4, 2, 1, use_bias=False))
netG.add(nn.Activation('tanh'))
# state size. (nc) x 64 x 64
# build the discriminator
ndf = 64
netD = nn.Sequential()
with netD.name_scope():
# input is (nc) x 64 x 64
netD.add(nn.Conv2D(ndf, 4, 2, 1, use_bias=False))
netD.add(nn.LeakyReLU(0.2))
# state size. (ndf) x 32 x 32
netD.add(nn.Conv2D(ndf * 2, 4, 2, 1, use_bias=False))
netD.add(nn.BatchNorm())
netD.add(nn.LeakyReLU(0.2))
    # state size. (ndf*2) x 16 x 16
netD.add(nn.Conv2D(ndf * 4, 4, 2, 1, use_bias=False))
netD.add(nn.BatchNorm())
netD.add(nn.LeakyReLU(0.2))
    # state size. (ndf*4) x 8 x 8
netD.add(nn.Conv2D(ndf * 8, 4, 2, 1, use_bias=False))
netD.add(nn.BatchNorm())
netD.add(nn.LeakyReLU(0.2))
    # state size. (ndf*8) x 4 x 4
netD.add(nn.Conv2D(1, 4, 1, 0, use_bias=False))
# loss
loss = gluon.loss.SigmoidBinaryCrossEntropyLoss()
# initialize the generator and the discriminator
netG.initialize(mx.init.Normal(0.02), ctx=ctx)
netD.initialize(mx.init.Normal(0.02), ctx=ctx)
# trainer for the generator and the discriminator
trainerG = gluon.Trainer(netG.collect_params(), 'adam', {'learning_rate': lr, 'beta1': beta1})
trainerD = gluon.Trainer(netD.collect_params(), 'adam', {'learning_rate': lr, 'beta1': beta1})
from datetime import datetime
import time
import logging
real_label = nd.ones((batch_size,), ctx=ctx)
fake_label = nd.zeros((batch_size,),ctx=ctx)
def facc(label, pred):
pred = pred.ravel()
label = label.ravel()
return ((pred > 0.5) == label).mean()
metric = mx.metric.CustomMetric(facc)
stamp = datetime.now().strftime('%Y_%m_%d-%H_%M')
logging.basicConfig(level=logging.DEBUG)
for epoch in range(epochs):
tic = time.time()
btic = time.time()
iter = 0
for batches in dali_iter: # Using DALI iterator
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
data = batches[0].data[0] # extracting the batch for device 0
latent_z = mx.nd.random_normal(0, 1, shape=(batch_size, latent_z_size, 1, 1), ctx=ctx)
with autograd.record():
# train with real image
output = netD(data).reshape((-1, 1))
errD_real = loss(output, real_label)
metric.update([real_label,], [output,])
# train with fake image
fake = netG(latent_z)
output = netD(fake.detach()).reshape((-1, 1))
errD_fake = loss(output, fake_label)
errD = errD_real + errD_fake
errD.backward()
metric.update([fake_label,], [output,])
trainerD.step(data.shape[0])
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
with autograd.record():
fake = netG(latent_z)
output = netD(fake).reshape((-1, 1))
errG = loss(output, real_label)
errG.backward()
trainerG.step(data.shape[0])
        # Print log information every hundred batches
if iter % 100 == 0:
name, acc = metric.get()
logging.info('speed: {} samples/s'.format(batch_size / (time.time() - btic)))
logging.info('discriminator loss = %f, generator loss = %f, binary training acc = %f at iter %d epoch %d'
%(nd.mean(errD).asscalar(),
nd.mean(errG).asscalar(), acc, iter, epoch))
iter = iter + 1
btic = time.time()
dali_iter.reset()
name, acc = metric.get()
metric.reset()
def visualize(img_arr):
plt.imshow(((img_arr.asnumpy().transpose(1, 2, 0) + 1.0) * 127.5).astype(np.uint8))
plt.axis('off')
num_image = 8
fig = plt.figure(figsize = (16,8))
for i in range(num_image):
latent_z = mx.nd.random_normal(0, 1, shape=(1, latent_z_size, 1, 1), ctx=ctx)
img = netG(latent_z)
plt.subplot(2,4,i+1)
visualize(img[0])
plt.show()
```
| github_jupyter |
**Chapter 7 – Ensemble Learning and Random Forests**
_This notebook contains all the sample code and solutions to the exercises in chapter 7._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/07_ensemble_learning_and_random_forests.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ensembles"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
# Voting classifiers
```
heads_proba = 0.51
coin_tosses = (np.random.rand(10000, 10) < heads_proba).astype(np.int32)
cumulative_heads_ratio = np.cumsum(coin_tosses, axis=0) / np.arange(1, 10001).reshape(-1, 1)
plt.figure(figsize=(8,3.5))
plt.plot(cumulative_heads_ratio)
plt.plot([0, 10000], [0.51, 0.51], "k--", linewidth=2, label="51%")
plt.plot([0, 10000], [0.5, 0.5], "k-", label="50%")
plt.xlabel("Number of coin tosses")
plt.ylabel("Heads ratio")
plt.legend(loc="lower right")
plt.axis([0, 10000, 0.42, 0.58])
save_fig("law_of_large_numbers_plot")
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
```
**Note**: to be future-proof, we set `solver="lbfgs"`, `n_estimators=100`, and `gamma="scale"` since these will be the default values in upcoming Scikit-Learn versions.
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
log_clf = LogisticRegression(solver="lbfgs", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = SVC(gamma="scale", random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='hard')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
```
**Note**: the results in this notebook may differ slightly from the book, as Scikit-Learn algorithms sometimes get tweaked.
Soft voting:
```
log_clf = LogisticRegression(solver="lbfgs", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = SVC(gamma="scale", probability=True, random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
```
# Bagging ensembles
```
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
max_samples=100, bootstrap=True, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
print(accuracy_score(y_test, y_pred_tree))
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.45, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
fix, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
plt.sca(axes[0])
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.sca(axes[1])
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
plt.ylabel("")
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
```
# Random Forests
```
bag_clf = BaggingClassifier(
DecisionTreeClassifier(splitter="random", max_leaf_nodes=16, random_state=42),
n_estimators=500, max_samples=1.0, bootstrap=True, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.ensemble import RandomForestClassifier
rnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, random_state=42)
rnd_clf.fit(X_train, y_train)
y_pred_rf = rnd_clf.predict(X_test)
np.sum(y_pred == y_pred_rf) / len(y_pred) # almost identical predictions
from sklearn.datasets import load_iris
iris = load_iris()
rnd_clf = RandomForestClassifier(n_estimators=500, random_state=42)
rnd_clf.fit(iris["data"], iris["target"])
for name, score in zip(iris["feature_names"], rnd_clf.feature_importances_):
print(name, score)
rnd_clf.feature_importances_
plt.figure(figsize=(6, 4))
for i in range(15):
tree_clf = DecisionTreeClassifier(max_leaf_nodes=16, random_state=42 + i)
indices_with_replacement = np.random.randint(0, len(X_train), len(X_train))
tree_clf.fit(X[indices_with_replacement], y[indices_with_replacement])
plot_decision_boundary(tree_clf, X, y, axes=[-1.5, 2.45, -1, 1.5], alpha=0.02, contour=False)
plt.show()
```
## Out-of-Bag evaluation
```
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
bootstrap=True, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
bag_clf.oob_decision_function_
from sklearn.metrics import accuracy_score
y_pred = bag_clf.predict(X_test)
accuracy_score(y_test, y_pred)
```
## Feature importance
```
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
mnist.target = mnist.target.astype(np.uint8)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
rnd_clf.fit(mnist["data"], mnist["target"])
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.hot,
interpolation="nearest")
plt.axis("off")
plot_digit(rnd_clf.feature_importances_)
cbar = plt.colorbar(ticks=[rnd_clf.feature_importances_.min(), rnd_clf.feature_importances_.max()])
cbar.ax.set_yticklabels(['Not important', 'Very important'])
save_fig("mnist_feature_importance_plot")
plt.show()
```
# AdaBoost
```
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
m = len(X_train)
fix, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
for subplot, learning_rate in ((0, 1), (1, 0.5)):
sample_weights = np.ones(m) / m
plt.sca(axes[subplot])
for i in range(5):
svm_clf = SVC(kernel="rbf", C=0.2, gamma=0.6, random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights * m)
y_pred = svm_clf.predict(X_train)
r = sample_weights[y_pred != y_train].sum() / sample_weights.sum() # equation 7-1
alpha = learning_rate * np.log((1 - r) / r) # equation 7-2
sample_weights[y_pred != y_train] *= np.exp(alpha) # equation 7-3
sample_weights /= sample_weights.sum() # normalization step
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
if subplot == 0:
plt.text(-0.75, -0.95, "1", fontsize=14)
plt.text(-1.05, -0.95, "2", fontsize=14)
plt.text(1.0, -0.95, "3", fontsize=14)
plt.text(-1.45, -0.5, "4", fontsize=14)
plt.text(1.36, -0.95, "5", fontsize=14)
else:
plt.ylabel("")
save_fig("boosting_plot")
plt.show()
```
# Gradient Boosting
```
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
X_new = np.array([[0.8]])
y_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))
y_pred
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.1, random_state=42)
gbrt_slow.fit(X, y)
fix, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
plt.sca(axes[0])
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.sca(axes[1])
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
plt.xlabel("$x_1$", fontsize=16)
save_fig("gbrt_learning_rate_plot")
plt.show()
```
## Gradient Boosting with Early stopping
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1
gbrt_best = GradientBoostingRegressor(max_depth=2, n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
min_error = np.min(errors)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.ylabel("Error", fontsize=16)
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.xlabel("$x_1$", fontsize=16)
save_fig("early_stopping_gbrt_plot")
plt.show()
gbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)
min_val_error = float("inf")
error_going_up = 0
for n_estimators in range(1, 120):
gbrt.n_estimators = n_estimators
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
if val_error < min_val_error:
min_val_error = val_error
error_going_up = 0
else:
error_going_up += 1
if error_going_up == 5:
break # early stopping
print(gbrt.n_estimators)
print("Minimum validation MSE:", min_val_error)
```
## Using XGBoost
```
try:
import xgboost
except ImportError as ex:
print("Error: the xgboost library is not installed.")
xgboost = None
if xgboost is not None: # not shown in the book
xgb_reg = xgboost.XGBRegressor(random_state=42)
xgb_reg.fit(X_train, y_train)
y_pred = xgb_reg.predict(X_val)
val_error = mean_squared_error(y_val, y_pred) # Not shown
print("Validation MSE:", val_error) # Not shown
if xgboost is not None: # not shown in the book
xgb_reg.fit(X_train, y_train,
eval_set=[(X_val, y_val)], early_stopping_rounds=2)
y_pred = xgb_reg.predict(X_val)
val_error = mean_squared_error(y_val, y_pred) # Not shown
print("Validation MSE:", val_error) # Not shown
%timeit xgboost.XGBRegressor().fit(X_train, y_train) if xgboost is not None else None
%timeit GradientBoostingRegressor().fit(X_train, y_train)
```
# Exercise solutions
## 1. to 7.
See Appendix A.
## 8. Voting Classifier
Exercise: _Load the MNIST data and split it into a training set, a validation set, and a test set (e.g., use 50,000 instances for training, 10,000 for validation, and 10,000 for testing)._
The MNIST dataset was loaded earlier.
```
from sklearn.model_selection import train_test_split
X_train_val, X_test, y_train_val, y_test = train_test_split(
mnist.data, mnist.target, test_size=10000, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=10000, random_state=42)
```
Exercise: _Then train various classifiers, such as a Random Forest classifier, an Extra-Trees classifier, and an SVM._
```
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
random_forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
extra_trees_clf = ExtraTreesClassifier(n_estimators=100, random_state=42)
svm_clf = LinearSVC(max_iter=100, tol=20, random_state=42)
mlp_clf = MLPClassifier(random_state=42)
estimators = [random_forest_clf, extra_trees_clf, svm_clf, mlp_clf]
for estimator in estimators:
print("Training the", estimator)
estimator.fit(X_train, y_train)
[estimator.score(X_val, y_val) for estimator in estimators]
```
The linear SVM is far outperformed by the other classifiers. However, let's keep it for now since it may improve the voting classifier's performance.
Exercise: _Next, try to combine them into an ensemble that outperforms them all on the validation set, using a soft or hard voting classifier._
```
from sklearn.ensemble import VotingClassifier
named_estimators = [
("random_forest_clf", random_forest_clf),
("extra_trees_clf", extra_trees_clf),
("svm_clf", svm_clf),
("mlp_clf", mlp_clf),
]
voting_clf = VotingClassifier(named_estimators)
voting_clf.fit(X_train, y_train)
voting_clf.score(X_val, y_val)
[estimator.score(X_val, y_val) for estimator in voting_clf.estimators_]
```
Let's remove the SVM to see if performance improves. It is possible to remove an estimator by setting it to `None` using `set_params()` like this:
```
voting_clf.set_params(svm_clf=None)
```
This updated the list of estimators:
```
voting_clf.estimators
```
However, it did not update the list of _trained_ estimators:
```
voting_clf.estimators_
```
So we can either fit the `VotingClassifier` again, or just remove the SVM from the list of trained estimators:
```
del voting_clf.estimators_[2]
```
Now let's evaluate the `VotingClassifier` again:
```
voting_clf.score(X_val, y_val)
```
A bit better! The SVM was hurting performance. Now let's try using a soft voting classifier. We do not actually need to retrain the classifier; we can just set `voting` to `"soft"`:
```
voting_clf.voting = "soft"
voting_clf.score(X_val, y_val)
```
Nope, hard voting wins in this case.
_Once you have found one, try it on the test set. How much better does it perform compared to the individual classifiers?_
```
voting_clf.voting = "hard"
voting_clf.score(X_test, y_test)
[estimator.score(X_test, y_test) for estimator in voting_clf.estimators_]
```
The voting classifier only very slightly reduced the error rate of the best model in this case.
## 9. Stacking Ensemble
Exercise: _Run the individual classifiers from the previous exercise to make predictions on the validation set, and create a new training set with the resulting predictions: each training instance is a vector containing the set of predictions from all your classifiers for an image, and the target is the image's class. Train a classifier on this new training set._
```
X_val_predictions = np.empty((len(X_val), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_val_predictions[:, index] = estimator.predict(X_val)
X_val_predictions
rnd_forest_blender = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=42)
rnd_forest_blender.fit(X_val_predictions, y_val)
rnd_forest_blender.oob_score_
```
You could fine-tune this blender or try other types of blenders (e.g., an `MLPClassifier`), then select the best one using cross-validation, as always.
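As a sketch of that selection step (the data below is a synthetic stand-in for `X_val_predictions`, not the MNIST predictions above; names and sizes are illustrative), cross-validating a few candidate blenders could look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(42)
# Synthetic stand-in for X_val_predictions: three classifiers' predicted labels.
fake_predictions = rng.randint(0, 10, size=(300, 3)).astype(np.float32)
# Pretend the first classifier is usually right, so there is signal to learn.
fake_targets = fake_predictions[:, 0].astype(int)

candidate_blenders = {
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=42),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=42),
}
# Score each candidate blender with 3-fold cross-validation and keep the best.
scores = {name: cross_val_score(clf, fake_predictions, fake_targets, cv=3).mean()
          for name, clf in candidate_blenders.items()}
best_name = max(scores, key=scores.get)
print(best_name, scores)
```

On real prediction matrices you would replace the synthetic arrays with `X_val_predictions` and `y_val`.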
Exercise: _Congratulations, you have just trained a blender, and together with the classifiers they form a stacking ensemble! Now let's evaluate the ensemble on the test set. For each image in the test set, make predictions with all your classifiers, then feed the predictions to the blender to get the ensemble's predictions. How does it compare to the voting classifier you trained earlier?_
```
X_test_predictions = np.empty((len(X_test), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_test_predictions[:, index] = estimator.predict(X_test)
y_pred = rnd_forest_blender.predict(X_test_predictions)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
```
This stacking ensemble does not perform as well as the voting classifier we trained earlier; it's not quite as good as the best individual classifier.
```
Copyright 2020 The IREE Authors
Licensed under the Apache License v2.0 with LLVM Exceptions.
See https://llvm.org/LICENSE.txt for license information.
SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
```
# Training and Executing an MNIST Model with IREE
## Overview
This notebook covers installing IREE and using it to train a simple neural network on the MNIST dataset.
## 1. Install and Import IREE
```
%%capture
!python -m pip install iree-compiler-snapshot iree-runtime-snapshot iree-tools-tf-snapshot -f https://github.com/google/iree/releases
from iree import runtime as ireert
from iree.tf.support import module_utils
from iree.compiler import tf as tfc
# (Temporary) workaround for absl flags...
# https://github.com/googlecolab/colabtools/issues/1323#issuecomment-756343620
import sys
from absl import app
sys.argv = sys.argv[:1]
try:
app.run(lambda argv: None)
except SystemExit:
pass
```
## 2. Import TensorFlow and Other Dependencies
```
from matplotlib import pyplot as plt
import numpy as np
import os
import tempfile
import tensorflow as tf
plt.style.use("seaborn-whitegrid")
plt.rcParams["font.family"] = "monospace"
plt.rcParams["figure.figsize"] = [8, 4.5]
plt.rcParams["figure.dpi"] = 150
# Print version information for future notebook users to reference.
print("TensorFlow version: ", tf.__version__)
print("Numpy version: ", np.__version__)
```
## 3. Load the MNIST Dataset
```
# Keras datasets don't provide metadata.
NUM_CLASSES = 10
NUM_ROWS, NUM_COLS = 28, 28
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Reshape into grayscale images:
x_train = np.reshape(x_train, (-1, NUM_ROWS, NUM_COLS, 1))
x_test = np.reshape(x_test, (-1, NUM_ROWS, NUM_COLS, 1))
# Rescale uint8 pixel values into float32 values between 0 and 1:
x_train = x_train.astype(np.float32) / 255
x_test = x_test.astype(np.float32) / 255
# IREE doesn't currently support int8 tensors, so we cast them to int32:
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
print("Sample image from the dataset:")
sample_index = np.random.randint(x_train.shape[0])
plt.figure(figsize=(5, 5))
plt.imshow(x_train[sample_index].reshape(NUM_ROWS, NUM_COLS), cmap="gray")
plt.title(f"Sample #{sample_index}, label: {y_train[sample_index]}")
plt.axis("off")
plt.tight_layout()
```
## 4. Create a Simple DNN
MLIR-HLO (the MLIR dialect we use to convert TensorFlow models into assembly that IREE can compile) does not currently support training with a dynamic number of examples, so we compile the model with a fixed batch size (by specifying the batch size in the `tf.TensorSpec`s).
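One practical consequence of the fixed batch size is that only `N // BATCH_SIZE` full batches can be fed to the compiled model; the training loop further below simply skips the final partial batch. A minimal NumPy sketch of that trimming (array sizes here are illustrative, not the real dataset):

```python
import numpy as np

BATCH_SIZE = 32
# Illustrative stand-in: 1000 MNIST-shaped examples.
x = np.zeros((1000, 28, 28, 1), dtype=np.float32)

num_full_batches = x.shape[0] // BATCH_SIZE  # 31; the last 8 examples are dropped
x_trimmed = x[: num_full_batches * BATCH_SIZE]
# Every batch now matches the shape the compiled model was built for.
batches = x_trimmed.reshape(num_full_batches, BATCH_SIZE, 28, 28, 1)
print(batches.shape)
```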
```
BATCH_SIZE = 32
class TrainableDNN(tf.Module):
def __init__(self):
super().__init__()
# Create a Keras model to train.
inputs = tf.keras.layers.Input((NUM_COLS, NUM_ROWS, 1))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(128)(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.Dense(10)(x)
outputs = tf.keras.layers.Softmax()(x)
self.model = tf.keras.Model(inputs, outputs)
# Create a loss function and optimizer to use during training.
self.loss = tf.keras.losses.SparseCategoricalCrossentropy()
self.optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2)
@tf.function(input_signature=[
tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]) # inputs
])
def predict(self, inputs):
return self.model(inputs, training=False)
# We compile the entire training step by making it a method on the model.
@tf.function(input_signature=[
tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]), # inputs
tf.TensorSpec([BATCH_SIZE], tf.int32) # labels
])
def learn(self, inputs, labels):
# Capture the gradients from forward prop...
with tf.GradientTape() as tape:
probs = self.model(inputs, training=True)
loss = self.loss(labels, probs)
# ...and use them to update the model's weights.
variables = self.model.trainable_variables
gradients = tape.gradient(loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
return loss
```
## 5. Compile the Model with IREE
tf.keras adds a large number of methods to TrainableDNN, and most of them
cannot be compiled with IREE. To get around this we tell IREE exactly which
methods we would like it to compile.
```
exported_names = ["predict", "learn"]
```
Choose one of IREE's three backends to compile to. (*Note: Using Vulkan requires installing additional drivers.*)
```
backend_choice = "llvmaot (CPU)" #@param [ "vmvx (CPU)", "llvmaot (CPU)", "vulkan (GPU/SwiftShader – requires additional drivers) " ]
backend_choice = f"iree_{backend_choice.split(' ')[0]}"
# Compile the TrainableDNN module
compiled_model = module_utils.IreeCompiledModule.create_from_class(
TrainableDNN,
module_utils.BackendInfo(backend_choice),
exported_names=exported_names)
```
## 6. Train the Compiled Model on MNIST
This compiled model is portable, demonstrating that IREE can be used for training on a mobile device. On mobile, IREE has a ~1000 fold binary size advantage over the current TensorFlow solution (which is to use the now-deprecated TF Mobile, as TFLite does not support training at this time).
```
#@title Benchmark inference and training
print("Inference latency:\n ", end="")
%timeit -n 100 compiled_model.predict(x_train[:BATCH_SIZE])
print("Training latency:\n ", end="")
%timeit -n 100 compiled_model.learn(x_train[:BATCH_SIZE], y_train[:BATCH_SIZE])
# Run the core training loop.
losses = []
step = 0
max_steps = x_train.shape[0] // BATCH_SIZE
for batch_start in range(0, x_train.shape[0], BATCH_SIZE):
if batch_start + BATCH_SIZE > x_train.shape[0]:
continue
inputs = x_train[batch_start:batch_start + BATCH_SIZE]
labels = y_train[batch_start:batch_start + BATCH_SIZE]
loss = compiled_model.learn(inputs, labels)
losses.append(loss)
step += 1
print(f"\rStep {step:4d}/{max_steps}: loss = {loss:.4f}", end="")
#@title Plot the training results
import bottleneck as bn
smoothed_losses = bn.move_mean(losses, 32)
x = np.arange(len(losses))
plt.plot(x, smoothed_losses, linewidth=2, label='loss (moving average)')
plt.scatter(x, losses, s=16, alpha=0.2, label='loss (per training step)')
plt.ylim(0)
plt.legend(frameon=True)
plt.xlabel("training step")
plt.ylabel("cross-entropy")
plt.title("training loss");
```
## 7. Evaluate on Heldout Test Examples
```
#@title Evaluate the network on the test data.
accuracies = []
step = 0
max_steps = x_test.shape[0] // BATCH_SIZE
for batch_start in range(0, x_test.shape[0], BATCH_SIZE):
if batch_start + BATCH_SIZE > x_test.shape[0]:
continue
inputs = x_test[batch_start:batch_start + BATCH_SIZE]
labels = y_test[batch_start:batch_start + BATCH_SIZE]
prediction = compiled_model.predict(inputs)
prediction = np.argmax(prediction, -1)
accuracies.append(np.sum(prediction == labels) / BATCH_SIZE)
step += 1
print(f"\rStep {step:4d}/{max_steps}", end="")
print()
accuracy = np.mean(accuracies)
print(f"Test accuracy: {accuracy:.3f}")
#@title Display inference predictions on a random selection of heldout data
rows = 4
columns = 4
images_to_display = rows * columns
assert BATCH_SIZE >= images_to_display
random_index = np.arange(x_test.shape[0])
np.random.shuffle(random_index)
x_test = x_test[random_index]
y_test = y_test[random_index]
predictions = compiled_model.predict(x_test[:BATCH_SIZE])
predictions = np.argmax(predictions, -1)
fig, axs = plt.subplots(rows, columns)
for i, ax in enumerate(np.ndarray.flatten(axs)):
ax.imshow(x_test[i, :, :, 0])
color = "#000000" if predictions[i] == y_test[i] else "#ff7f0e"
ax.set_xlabel(f"prediction={predictions[i]}", color=color)
ax.grid(False)
ax.set_yticks([])
ax.set_xticks([])
fig.tight_layout()
```
# `pretty_midi` tutorial
This tutorial goes over the functionality of [pretty_midi](http://github.com/craffel/pretty_midi), a Python library for creating, manipulating, and extracting information from MIDI files. For more information, check [the docs](http://craffel.github.io/pretty-midi/).
```
import pretty_midi
import numpy as np
# For plotting
import mir_eval.display
import librosa.display
import matplotlib.pyplot as plt
%matplotlib inline
# For putting audio in the notebook
import IPython.display
```
## Building a MIDI file from scratch
To start, we'll build a MIDI sequence from the ground up. More specifically, we'll build up a MIDI "object" using `pretty_midi`'s representation of MIDI (which we can optionally write out to a MIDI file later on). This representation is actually a little different than standard MIDI files (it's intended to be less ambiguous and a little easier to work with), but the two are mostly interchangeable. Relevant differences will be pointed out as we go.
### The `PrettyMIDI` class
The `PrettyMIDI` class is the main container in `pretty_midi`. It stores not only all of the events that constitute the piece, but also all of the timing information, meta-events (like key signature changes), and utility functions for manipulating, writing out, and inferring information about the MIDI data it contains.
```
# Construct a PrettyMIDI object.
# We'll specify that it will have a tempo of 80bpm.
pm = pretty_midi.PrettyMIDI(initial_tempo=80)
```
### The `Instrument` class
One of the most important functions of the `PrettyMIDI` class is to hold a `list` of `Instrument` class instances. Each `Instrument` is able to store different events (for example, notes) which are meant to be played on a given general MIDI instrument (for example, instrument 42, "Cello"). The `Instrument` class also has functions for extracting useful information based solely on the events that occur on that particular instrument.
```
# The instruments list from our PrettyMIDI instance starts empty
print(pm.instruments)
# Let's add a Cello instrument, which has program number 42.
# pretty_midi also keeps track of whether each instrument is a "drum" instrument or not
# because drum/non-drum instruments share program numbers in MIDI.
# You can also optionally give the instrument a name,
# which corresponds to the MIDI "instrument name" meta-event.
inst = pretty_midi.Instrument(program=42, is_drum=False, name='my cello')
pm.instruments.append(inst)
```
### Event containers
`pretty_midi` has individual classes for holding MIDI events: `Note`, `PitchBend`, and `ControlChange`. These classes store information analogous to their corresponding MIDI events. The `Instrument` class has separate `list`s for each of these event types.
#### Note
The `Note` class represents a MIDI note, which has a pitch, a start time, and an end time. Notes have a pitch number from 0 to 127, representing the notes from C-1 to G9. In addition, notes have a velocity (volume) from 1 to 127, from quietest to loudest. The way `pretty_midi` stores notes is slightly different from standard MIDI, in the sense that in MIDI the note-on and note-off events are separate and their correspondences must be guessed from the MIDI stream. `pretty_midi` keeps the start and end times of a note coupled together, so things are less ambiguous. Furthermore, `pretty_midi` stores all times in terms of actual wall clock time from the start of the MIDI sequence, rather than in "ticks" (discussed further below). This is much easier to work with!
```
# Let's add a few notes to our instrument
velocity = 100
for pitch, start, end in zip([60, 62, 64], [0.2, 0.6, 1.0], [1.1, 1.7, 2.3]):
inst.notes.append(pretty_midi.Note(velocity, pitch, start, end))
print(inst.notes)
```
#### Pitch bends
Since MIDI notes are all defined to have a specific integer pitch value, in order to represent arbitrary pitch frequencies we need to use pitch bends. A `PitchBend` class in `pretty_midi` holds a time (in seconds) and a pitch offset. The pitch offset is an integer in the range [-8192, 8191], which in General MIDI spans the range from -2 to +2 semitones. As with `Note`s, the `Instrument` class has a `list` for `PitchBend` class instances.
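The mapping between bend values and semitones is linear: a full-scale bend of ±8192 corresponds to the synthesizer's bend range (±2 semitones in General MIDI). Here is a small sketch of that mapping; note that `pretty_midi` ships its own `pitch_bend_to_semitones`/`semitones_to_pitch_bend` utilities (used later in this notebook), so this is only an illustration of the convention:

```python
def pitch_bend_to_semitones(bend, semitone_range=2.0):
    # A full-scale bend of +/-8192 maps linearly to +/-semitone_range semitones.
    return bend / 8192.0 * semitone_range

def semitones_to_pitch_bend(semitones, semitone_range=2.0):
    return int(round(semitones / semitone_range * 8192))

print(pitch_bend_to_semitones(4096))   # half of full scale -> 1.0 semitone
print(semitones_to_pitch_bend(-1.0))   # -> -4096
```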
```
# We'll just do a 1-semitone pitch ramp up
n_steps = 512
bend_range = 8192 // 2
for time, pitch in zip(np.linspace(1.5, 2.3, n_steps),
                       range(0, bend_range, bend_range // n_steps)):
inst.pitch_bends.append(pretty_midi.PitchBend(pitch, time))
```
#### Control changes
The `Instrument` class also holds control changes. Control changes include things like modulation, reverb, etc., which may or may not be supported by a given General MIDI synthesizer. As usual, they are stored in an `Instrument`'s `control_changes` `list`. We won't be covering them here.
### Putting it all together
To give you a taste of what sorts of analysis you can do (which we'll cover in more detail in the next section), here are some examples using the (simple) MIDI sequence we just constructed.
#### Plotting a piano roll
A great way to visualize MIDI data is via a piano roll, which is a time-frequency matrix where each row is a different MIDI pitch and each column is a different slice in time. `pretty_midi` can produce piano roll matrices for each individual instrument (via `Instrument.get_piano_roll`) or the entire `PrettyMIDI` object (summed across instruments, via `PrettyMIDI.get_piano_roll`). The spacing in time between subsequent columns of the matrix is determined by the `fs` parameter.
```
def plot_piano_roll(pm, start_pitch, end_pitch, fs=100):
# Use librosa's specshow function for displaying the piano roll
librosa.display.specshow(pm.get_piano_roll(fs)[start_pitch:end_pitch],
hop_length=1, sr=fs, x_axis='time', y_axis='cqt_note',
fmin=pretty_midi.note_number_to_hz(start_pitch))
plt.figure(figsize=(8, 4))
plot_piano_roll(pm, 56, 70)
# Note the blurry section between 1.5s and 2.3s - that's the pitch bending up!
```
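To make the matrix layout concrete, here is a hand-rolled sketch of a piano roll built with plain NumPy from a few (pitch, start, end) tuples, using the notes we added earlier; it is a simplified version of what `get_piano_roll` computes (the real function also handles velocities per note and pitch bends):

```python
import numpy as np

fs = 100  # columns per second, as in the fs parameter above
notes = [(60, 0.2, 1.1), (62, 0.6, 1.7), (64, 1.0, 2.3)]  # (pitch, start, end)
velocity = 100

n_cols = int(np.ceil(max(end for _, _, end in notes) * fs))
roll = np.zeros((128, n_cols), dtype=np.int32)  # one row per MIDI pitch
for pitch, start, end in notes:
    # Fill the active time span of each note with its velocity.
    roll[pitch, int(start * fs):int(end * fs)] = velocity
print(roll.shape)
```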
#### Sonification
`pretty_midi` has two main ways to sonify MIDI data: the `synthesize` and `fluidsynth` functions. `synthesize` is a simple and rudimentary method which just synthesizes each note as a sine wave. `fluidsynth` uses the Fluidsynth program along with a SoundFont file (a simple one is installed alongside `pretty_midi`) to create a General MIDI synthesis. Note that you must have the Fluidsynth program installed to use the `fluidsynth` function. Both the `Instrument` and `PrettyMIDI` classes have these methods; the `PrettyMIDI` versions just sum up the syntheses for all of the contained instruments.
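The sine-wave approach of `synthesize` rests on the standard MIDI tuning convention: note 69 is A4 at 440 Hz, and each semitone multiplies the frequency by 2^(1/12). An independent sketch of that convention (not `pretty_midi`'s actual implementation):

```python
import numpy as np

def note_to_hz(note_number):
    # MIDI note 69 is A4 = 440 Hz; each semitone scales frequency by 2**(1/12).
    return 440.0 * 2.0 ** ((note_number - 69) / 12.0)

def synthesize_note(note_number, duration, fs=16000):
    # One pure sine wave at the note's fundamental frequency.
    t = np.arange(int(duration * fs)) / fs
    return np.sin(2 * np.pi * note_to_hz(note_number) * t)

print(note_to_hz(60))  # middle C, ~261.63 Hz
```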
```
# Synthesis frequency
fs = 16000
IPython.display.Audio(pm.synthesize(fs=16000), rate=16000)
# Sounds like sine waves...
IPython.display.Audio(pm.fluidsynth(fs=16000), rate=16000)
# Sounds (kind of) like a cello!
```
#### Writing it out
Finally, we can easily write out what we just made to a standard MIDI file via the `write` function. `pretty_midi` puts each `Instrument` on a separate track in the MIDI file. Note that because `pretty_midi` uses a slightly different representation of MIDI data (for example, representing time as absolute seconds rather than as ticks), the information written out to the file will be slightly different. Otherwise, everything in your `PrettyMIDI` object will be included in the generated MIDI file.
```
pm.write('out.mid')
```
## Parsing a MIDI file
The other intended use of `pretty_midi` is to parse MIDI files, so that they can be manipulated and analyzed. Loading in a MIDI file is simple. `pretty_midi` should be able to handle all valid MIDI files, and will raise an exception if the MIDI data is corrupt.
```
# We'll load in the example.mid file distributed with pretty_midi
pm = pretty_midi.PrettyMIDI('example.mid')
plt.figure(figsize=(12, 4))
plot_piano_roll(pm, 24, 84)
# Let's look at what's in this MIDI file
print('There are {} time signature changes'.format(len(pm.time_signature_changes)))
print('There are {} instruments'.format(len(pm.instruments)))
print('Instrument 3 has {} notes'.format(len(pm.instruments[3].notes)))
print('Instrument 4 has {} pitch bends'.format(len(pm.instruments[4].pitch_bends)))
print('Instrument 5 has {} control changes'.format(len(pm.instruments[5].control_changes)))
```
#### A note on timing information
As discussed above, `pretty_midi` stores the time of different events in absolute seconds. This is different from MIDI, where the timing of events is determined in terms of relative "ticks" from the previous event. The amount of time each tick corresponds to depends on the current tempo and the file's resolution. Naturally, this is a woefully difficult way to deal with timing, which is why `pretty_midi` represents time in terms of absolute seconds. However, we don't want to totally get rid of the metrical grid, so `pretty_midi` retains a mapping between times and ticks which is based on tempo change events.
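At a fixed tempo the conversion is straightforward: a tick lasts `(60 / bpm) / resolution` seconds, where `resolution` is the file's ticks per quarter note. A sketch of that arithmetic (real files complicate this with tempo changes, which is exactly why `pretty_midi` maintains the tick-to-time mapping for you):

```python
def tick_to_seconds(tick, bpm, resolution):
    # resolution = ticks per quarter note; a quarter note lasts 60 / bpm seconds.
    return tick * (60.0 / bpm) / resolution

print(tick_to_seconds(220, bpm=120, resolution=220))  # one quarter note -> 0.5 s
```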
```
# What's the start time of the 10th note on the 3rd instrument?
print(pm.instruments[2].notes[10].start)
# What's that in ticks?
tick = pm.time_to_tick(pm.instruments[2].notes[10].start)
print(tick)
# Note we can also go in the opposite direction
print(pm.tick_to_time(tick))
```
### Modifying the MIDI data
Anything we did above when creating a MIDI file can now be done to the parsed MIDI object. For example, we can add or remove notes from instruments, add or remove instruments from the MIDI sequence, or even modify the attributes of individual events.
```
# Let's shift the entire piece up by 2 semitones.
for instrument in pm.instruments:
# Skip drum instruments - their notes aren't pitched!
if instrument.is_drum:
continue
for note in instrument.notes:
note.pitch += 2
```
#### Adjusting timing
There are two ways to modify the timing of a MIDI sequence in `pretty_midi`. The first way is to directly change the timing attributes of all of the events in the `PrettyMIDI` object (and its `Instrument`s). While simple, the issue with this approach is that the timing of these events will no longer match the metrical information in the MIDI file. The second approach is to use the `adjust_times` function, which effectively takes an original and adjusted temporal grid and correctly performs the warping - ensuring that the timing/metrical information remains correct. This can also be used for cropping out portions of the MIDI file.
```
# Get the length of the MIDI file
length = pm.get_end_time()
# This will effectively slow it down to 110% of its original length
pm.adjust_times([0, length], [0, length*1.1])
# Let's check what time our tick from above got mapped to - should be 1.1x
print(pm.tick_to_time(tick))
```
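Conceptually, `adjust_times` maps event times through a piecewise-linear warp between the original and adjusted grids. A sketch of the same mapping with `np.interp` (the times here are illustrative, not from `example.mid`):

```python
import numpy as np

original_times = np.array([0.0, 1.0, 2.0, 3.0])  # illustrative event times
length = 3.0
# Map the grid [0, length] onto [0, length * 1.1], like adjust_times above.
warped = np.interp(original_times, [0.0, length], [0.0, length * 1.1])
print(warped)
```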
### Analyzing the MIDI data
`pretty_midi` contains extensive functionality for deriving information from a MIDI sequence. Much of this information is not readily accessible from the MIDI file itself, so a primary goal of `pretty_midi` is to take care of all of the parsing and analysis in a correct, efficient, and easy-to-use way.
#### Timing information
Inferring, for example, the beat or downbeat times from a MIDI file requires keeping careful track of tempo change and time signature change events. `pretty_midi` handles this for you, and keeps them correct when you use `adjust_times`.
```
# Plot the tempo changes over time
# Many MIDI files won't have more than one tempo change event,
# but this particular file was transcribed to somewhat closely match the original song.
times, tempo_changes = pm.get_tempo_changes()
plt.plot(times, tempo_changes, '.')
plt.xlabel('Time')
plt.ylabel('Tempo');
# Get beat and downbeat times
beats = pm.get_beats()
downbeats = pm.get_downbeats()
# Plot piano roll
plt.figure(figsize=(12, 4))
plot_piano_roll(pm, 24, 84)
ymin, ymax = plt.ylim()
# Plot beats as grey lines, downbeats as white lines
mir_eval.display.events(beats, base=ymin, height=ymax, color='#AAAAAA')
mir_eval.display.events(downbeats, base=ymin, height=ymax, color='#FFFFFF', lw=2)
# Only display 20 seconds for clarity
plt.xlim(25, 45);
```
#### Harmonic information
Beyond metrical information, `pretty_midi` contains a few utility functions for measuring statistics about the harmonic content of the MIDI sequence. However, it's also designed so that additional analysis is easy.
```
# Plot a pitch class distribution - sort of a proxy for key
plt.bar(np.arange(12), pm.get_pitch_class_histogram());
plt.xticks(np.arange(12), ['C', '', 'D', '', 'E', 'F', '', 'G', '', 'A', '', 'B'])
plt.xlabel('Note')
plt.ylabel('Proportion')
# Let's count the number of transitions from C to D in this song
n_c_to_d = 0
for instrument in pm.instruments:
# Drum instrument notes don't have pitches!
if instrument.is_drum:
continue
for first_note, second_note in zip(instrument.notes[:-1], instrument.notes[1:]):
n_c_to_d += (first_note.pitch % 12 == 0) and (second_note.pitch % 12 == 2)
print('{} C-to-D transitions.'.format(n_c_to_d))
```
## Utility functions
Since the MIDI specification is not a terribly user-friendly format (e.g. instruments are identified by integers with no discernible order), `pretty_midi` provides various functions for converting between MIDI format and human-friendly/readable format.
```
print('Program number 42 is {}'.format(pretty_midi.program_to_instrument_name(42)))
print('... and has instrument class {}'.format(pretty_midi.program_to_instrument_class(42)))
print('Bassoon has program number {}'.format(pretty_midi.instrument_name_to_program('Bassoon')))
print('Splash Cymbal has note number {} on drum instruments'.format(
    pretty_midi.drum_name_to_note_number('Splash Cymbal')))
print('A pitch bend value of 1000 is {:.3f} semitones'.format(
    pretty_midi.pitch_bend_to_semitones(1000)))
print('To pitch bend by -1.3 semitones, use the value {}'.format(
    pretty_midi.semitones_to_pitch_bend(-1.3)))
```
## Additional resources
As mentioned above, [the docs](http://craffel.github.io/pretty-midi/) cover all of the functionality in `pretty_midi`. For additional usage examples, check the [examples directory](https://github.com/craffel/pretty-midi/tree/master/examples). If you encounter an issue or have a feature request, feel free to [create an issue](https://github.com/craffel/pretty-midi/issues/new).
# Multivariate Joint Use Case (Single DataFrame Case)
In this vignette we present a use case of the Multivariate Channel Entropy Triangle: we evaluate the effectiveness of a PCA feature transformation in entropic terms.
### Importing Libraries
We import the package entropytriangle, which will import the modules needed for the evaluation.
```
from entropytriangle import * #importing all modules necessary for the plotting
```
## Download the databases
In this case, the CSV files for the use case are stored locally. Now it's time to load the database to which we are going to apply the feature transformation.
```
#df = pd.read_csv('Arthitris.csv',delimiter=',',index_col='Unnamed: 0').drop(['ID'],axis = 1)
#df = pd.read_csv('Breast_data.csv',delimiter=',',index_col='Unnamed: 0').drop(['Sample code number'],axis = 1).replace('?',np.nan) # in this DB the missing values are represented as '?'
#df = pd.read_csv('Glass.csv',delimiter=',')
#df = pd.read_csv('Ionosphere.csv',delimiter=',')
df = pd.read_csv('Iris.csv',delimiter=',',index_col='Id')
#df = pd.read_csv('Wine.csv',delimiter=',').drop(['Wine'],axis = 1)
df.info(verbose=True)
df.head(5)
df = discretization(df).fillna(0)
```
### Prepare the data for the PCA feature transformation (Features - Classes)
Importing the Sklearn modules for the feature transformation.
```
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
```
Splitting the data for the standardization of the features before the transformation.
```
features = df.columns.drop('Species')
x = df[df.columns.drop('Species')].values
# Separating out the target
y = df.loc[:,['Species']].values
# Standardizing the features
x = StandardScaler().fit_transform(x)
```
We now transform the data and store the results in a list, where position `i` holds the features transformed with `(number of feature columns) - i` principal components (the `Species` class column is excluded from the count).
For example, `li[0]` is the transformation with 4 - 0 = 4 principal components, since Iris has 4 feature columns.
```
li = list()
for i in range(len(df.columns)):
pca = PCA(n_components = (len(df.columns)-1)-i)
principalComponents = pca.fit_transform(x)
columns = list(map(lambda x: "principal component " + str(x), range(len(df.columns)-1-i)))
principalDf = pd.DataFrame(data = principalComponents, columns= columns)
li.append(principalDf)
```
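As a quick sanity check on how much information each added component can carry, the cumulative explained variance of PCA on the standardized Iris features is easy to inspect (a sketch using scikit-learn's bundled Iris dataset rather than the CSV loaded above):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

x = StandardScaler().fit_transform(load_iris().data)
# Cumulative variance retained by the first 1, 2, 3, 4 components.
cumulative = np.cumsum(PCA().fit(x).explained_variance_ratio_)
print(cumulative)
```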
### Channel Multivariate Entropy Triangle
Calculate the entropy DataFrame for each of the DataFrames in the list:
```
edf = list()
for i in range(len(li)-1):
edf.append(jentropies(df,li[i]))
entriangle_list(edf,s_mk=300,pltscale=15)
```
We can see that using three components maximizes the per-feature transmitted information in the case of Iris.
# Reconstructing synonyms - exercise
This notebook is more of a game. We first retrieve synonyms from the [WOLF](http://alpage.inria.fr/~sagot/wolf-en.html) database, keeping only the synonyms made up of a single word. We then take an arbitrary text and split it into sentences. For each sentence, we replace randomly chosen words with their synonyms. Each sentence appears once unchanged and several times with different synonyms. The goal is then to propose a method for reconstructing the synonym database.
```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
```
## Building the synonym database
```
from actuariat_python.data import wolf_xml
wolf_xml()
import os
if not os.path.exists("wolf-1.0b4.xml"):
raise FileNotFoundError("wolf-1.0b4.xml")
if os.stat("wolf-1.0b4.xml").st_size < 3000000:
raise FileNotFoundError("Size of 'wolf-1.0b4.xml' is very small: {0}".format(os.stat("wolf-1.0b4.xml").st_size))
from actuariat_python.data import enumerate_wolf_synonyms
for syn in enumerate_wolf_synonyms("wolf-1.0b4.xml", errors="ignore"):
print(syn)
break
```
We go through the whole database (there are about 120,000 lines) and stop after 10,000 synonyms, because otherwise it takes far too long.
```
allsyn = {}
for line, syn in enumerate(enumerate_wolf_synonyms("wolf-1.0b4.xml")):
if line % 10000 == 0: print("line", line, "allsyn", len(allsyn))
clean = [_.lower() for _ in syn if " " not in _]
if len(clean) > 1:
for word in clean:
if word not in allsyn:
allsyn[word] = set(clean)
continue
else:
for cl in clean:
allsyn[word].add(cl)
if len(allsyn) > 10000:
break
len(allsyn)
```
Let's display the first groups:
```
i = 0
for k, v in allsyn.items():
print(k,v)
i += 1
if i > 10:
break
```
## Generating a set of modified sentences
We use [Zadig](https://fr.wikipedia.org/wiki/Zadig).
```
import urllib.request
with urllib.request.urlopen("http://www.gutenberg.org/cache/epub/4647/pg4647.txt") as u:
content = u.read()
char = content.decode(encoding="utf-8")
```
We split the text into words.
```
import re
reg = re.compile("([- a-zA-Zâàäéèëêîïôöùûü']+)")
phrases = [_.lower() for _ in reg.findall(char)]
for i, phrase in enumerate(phrases):
if i >= 990:
print(phrase)
if i >= 1000:
break
```
We generate the modified sentences:
```
import random
def modification(phrase, allsyn, nmax=10):
mots = phrase.split()
options = []
nb = 1
for mot in mots:
if mot in allsyn:
options.append(list(set([mot] + list(allsyn[mot]))))
else:
options.append([mot])
nb *= len(options[-1])
if nb == 1:
return [phrase]
else:
res = []
for i in range(0, min(nmax, nb//2+1, nb)):
sol = []
for mot in options:
h = random.randint(0, len(mot)-1)
sol.append(mot[h])
res.append(sol)
return res
modification("chatouiller le cérébral", allsyn)
```
We process all the sentences:
```
len(phrases)
with open("zadig_augmente.txt", "w", encoding="utf-8") as f:
total = 0
init = 0
for i, phrase in enumerate(phrases):
augm = modification(phrase, allsyn)
init += 1
for au in augm:
f.write(" ".join(au) + "\n")
total += 1
"total", total, "initial", init
```
## Exercise: recover some of the synonyms from the file just created
The file can be generated with the previous code, or you can use this version: [zadig_augmente.zip](http://www.xavierdupre.fr/enseignement/complements/zadig_augmente.zip).
```
from pyensae.datasource import download_data
download_data("zadig_augmente.zip")
```
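One possible way to attack the exercise (a sketch only, not a reference solution): consecutive lines in the augmented file are variants of the same sentence, so two otherwise-identical lines that differ at exactly one position expose a pair of synonym candidates. The `lines` input below is a hypothetical toy example standing in for `zadig_augmente.txt`.

```python
from itertools import combinations

def synonym_candidates(lines):
    """Collect word pairs that occupy the same position in two
    otherwise-identical sentences (heuristic sketch)."""
    pairs = set()
    for a, b in combinations(lines, 2):
        wa, wb = a.split(), b.split()
        if len(wa) != len(wb):
            continue
        diffs = [(x, y) for x, y in zip(wa, wb) if x != y]
        if len(diffs) == 1:  # exactly one differing slot -> candidate pair
            pairs.add(tuple(sorted(diffs[0])))
    return pairs

# toy input standing in for lines of the augmented file
lines = ["chatouiller le cerveau", "chatouiller le esprit"]
print(synonym_candidates(lines))  # {('cerveau', 'esprit')}
```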
```
# In Python, a comment is made with a hash sign like this
print("Hello Python")
print('Hello Python')
# As you can see, there is no distinction between double and single quotes
num = 1
Num = 10
print(num, Num)
# Shift + Enter: run the cell
# dd: delete the cell
# b: insert a cell below
z = 3 - 4j
print(type(z))
print(z.imag)
print(z.real)
print(z.conjugate())
# / is ordinary division
# // is floor division (discards the remainder, keeps only the quotient)
num1 = 3 // 7
num2 = 3333 // 10
num3 = 3 / 7
num4 = 3333 / 10
print(num1, num2, num3, num4)
testStr = 'test' + ' python'
print(testStr)
str1 = "pointer"
print(str1)
print(str1[0])
print(str1[3])
print(str1[0:1]) # from 0 up to (but not including) 1
print(str1[1:4]) # from 1 up to (but not including) 4
print(str1[:2])  # if the start is omitted, it defaults to 0
print(str1[-2:]) # if the end is omitted, it goes to the end of the string
print(str1[:])   # copies the whole string
print(str1[::2]) # every second character
colors = ['red', 'green', 'blue']
print(colors)
print(type(colors))
colors.append('gold')
print(colors)
colors.insert(1, 'black')
print(colors)
colors.extend(['white', 'gray'])
print(colors)
colors += ['purple']
colors += ['red']
print(colors)
colors = ['red', 'black', 'green', 'blue', 'gold', 'white', 'gray', 'purple', 'red']
print(colors.index('purple'))
print(colors.index('purple', 1))
print(colors.index('purple', 4, 8))
print(colors.count('red'))
print(colors.pop())
print(colors.pop())
print(colors.pop())
print(colors)
# Key: Value
color = {"apple": "red", "banana": "yellow"}
print(color)
print(color["apple"])
color["cherry"] = "red"
print(color)
color["apple"] = "green"
print(color)
for c in color.items():
print("color.items()")
print(c)
for k, v in color.items():
print("color.items() - k, v")
print(k, v)
for k in color.keys():
print("color.keys()")
print(k)
for v in color.values():
print("color.values()")
print(v)
print(type(color.keys()))
print(list(color.keys()))
# How do you write control statements in Python?
# Python does not use curly braces for scoping.
# Scoping is done purely through indentation.
# So you have to get the indentation right.
val = 10
if val > 5:
print("val 은 5 보다 크다!")
money = 10
if money > 100:
item = "apple"
else:
item = "banana"
print(item)
score = int(input("Enter a score: "))
if 90 <= score <= 100:
grade = "A"
elif 80 <= score < 90:
grade = "B"
elif 70 <= score < 80:
grade = "C"
elif 60 <= score < 70:
grade = "D"
else:
grade = "F"
print("Grade: " + grade)
numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for i in numList:
if i > 5:
break
print("Item: {0}".format(i))
numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for i in numList:
if i % 2 == 0:
continue
print("Item: {0}, {1}".format(i, i * 2))
print(list(range(10)))
print(list(range(5, 10)))
print(list(range(10, 0, -1)))
print(list(range(10, 20, 2)))
for i in range(10, 20, 2):
print(i)
strList = ['Apple', 'Orange', 'Banana']
for i in range(len(strList)):
print("Index: {0}, Value: {1}".format(i, strList[i]))
numList = list(range(1, 6))
print([i ** 2 for i in numList])
strList = ("apple", "banana", "strawberry")
print([len(i) for i in strList])
strList = ("apple", "banana", "strawberry", "kiwi", "orange")
print([i for i in strList if len(i) > 5])
numList = [10, 25, 30]
# def defines a function; in Python you can define functions without a class
def get_bigger_than20(i):
return i > 20
iterList = filter(get_bigger_than20, numList)
for i in iterList:
print("Result: {0}".format(i))
iterList = filter(lambda i: i > 20, numList)
for i in iterList:
print("Result: {0}".format(i))
numList = [1, 2, 3]
def add10(i):
return i + 10
for i in map(add10, numList):
print("Result: {0}".format(i))
retList = list(map((lambda i: i + 10), numList))
print(retList)
```
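The `filter`/`map` calls in the last cells can also be written as list comprehensions, which many consider more idiomatic Python. A small self-contained comparison (values chosen to mirror the cells above):

```python
nums = [1, 2, 3]
# map(lambda i: i + 10, nums) as a comprehension
print([i + 10 for i in nums])               # [11, 12, 13]
# filter(lambda i: i > 20, ...) as a comprehension
print([i for i in [10, 25, 30] if i > 20])  # [25, 30]
```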
# 1. Python performance
<br><br><br><br><br>
## Python is now mainstream
Below is an analysis of GitHub repos created by CMS physicists (i.e. "everyone who forked cms-sw/cmssw").
GitHub labels these repos as C/C++, Python, or Jupyter: the Python and Jupyter categories are now the most common.
<img src="img/lhlhc-github-languages.svg" style="width: 800px">
Furthermore, if we search for strings inside these repos, words like "`numpy`" are found in more repos than words like "`TFile`" (proxy for ROOT).
`"uproot"` is also fairly common, but not as much as the likes of NumPy and ROOT.
<img src="img/lhlhc-github-overlay-lin.svg" style="width: 800px">
We also asked questions about this at last year's PyHEP workshop (408 respondents, about 90 in CMS).
You use Python about equally with C++, and primarily for analysis (not just machine learning).
<img src="img/pyhep2020-survey-5.svg" style="width: 80%">
<br><br><br><br><br>
## General facts about Python
1. Python is fun and easy.
2. Python is slow.
But if you're working with large datasets, you need it to be fast.
What can you do?
<br><br><br><br><br>
<div style="font-size: 25px; margin-bottom: 30px;">
Try this one weird trick!
</div>
<div style="font-size: 25px; border: 1px dashed black; padding: 15px; margin-bottom: 30px;">
<div><b>Step 1:</b> isolate the "number crunching" part of your task</div>
<div><b>Step 2:</b> offload that part to compiled code</div>
</div>
This will be the core message of our overview of Uproot and Awkward Array.
<br><br><br><br><br>
## Way-too-simple example
Let's say that you have a lot of $p_x$ and $p_y$ values and you want to compute $p_T$.
```
import random
import math
px = [random.gauss(0, 10) for i in range(100000)]
py = [random.gauss(0, 10) for i in range(100000)]
```
The pure Python way to do this is with a loop:
```
%%timeit
pt = []
for px_i, py_i in zip(px, py):
pt.append(math.sqrt(px_i**2 + py_i**2))
```
But Python is slow: each step in the for loop performs type-checking, boxing/unboxing number classes, chasing references, etc.
Putting it in a list comprehension doesn't help (much), because that's still a Python for loop.
```
%%timeit
pt = [math.sqrt(px_i**2 + py_i**2) for px_i, py_i in zip(px, py)]
```
That's why we use NumPy.
```
import numpy as np
px = np.array(px)
py = np.array(py)
px, py
%%timeit
pt = np.sqrt(px**2 + py**2)
```
But notice that just "using NumPy" isn't what makes it faster: it's the fact that all the data were handled in few Python function calls.
Calling NumPy on individual items in a loop doesn't help. (In fact, it's worse than iteration over builtin Python lists.)
```
%%timeit
pt = [np.sqrt(px_i**2 + py_i**2) for px_i, py_i in zip(px, py)]
```
Rough way to estimate performance of small datasets: count the number of Python steps.
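As a back-of-envelope sketch of that heuristic (the per-element step counts below are an assumption, purely for illustration): the loop performs a handful of interpreted operations per element, while the NumPy version performs a handful of whole-array calls in total.

```python
N = 100_000
loop_steps = N * 4   # per element: unpack, two squarings, one math.sqrt call
numpy_steps = 4      # px**2, py**2, +, np.sqrt: four whole-array operations
print(loop_steps // numpy_steps)  # 100000 -> roughly 100000x fewer interpreter steps
```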
<br><br><br>
Unfortunately, some problems (not this one) are more easily expressed one item at a time (e.g. one event at a time).
You can still write iterative loops if they are in compiled code.
ROOT can compile C++ functions and make them accessible in Python with PyROOT and RDataFrame (example in the next notebook).
```
import ROOT
ROOT.gInterpreter.Declare('''
void compute_pt(int32_t N, double* px, double* py, double* pt) {
for (int32_t i = 0; i < N; i++) {
pt[i] = sqrt(px[i]*px[i] + py[i]* py[i]);
}
}
''')
pt = np.empty_like(px)
ROOT.compute_pt(len(px), px, py, pt)
pt
%%timeit
ROOT.compute_pt(len(px), px, py, pt)
```
ROOT is just-in-time (JIT) compiling the C++ code.
You can also JIT-compile a subset of _Python_ code using Numba.
```
import numba as nb
@nb.jit
def compute_pt(px, py):
pt = np.empty_like(px)
for i, (px_i, py_i) in enumerate(zip(px, py)):
pt[i] = np.sqrt(px_i**2 + py_i**2)
return pt
compute_pt(px, py)
%%timeit
compute_pt(px, py)
```
For a tutorial on Numba, [watch this](https://youtu.be/X_BJrmofRWQ).
<br><br><br><br><br>
## Not-too-simple example
Let's do a more interesting calculation: reproduce this cellular automata ([source](http://www.ericweisstein.com/encyclopedias/life/Puffer.html)):
<img src="img/game-of-life-puffer.gif" style="width: 813px">
It works by applying the following rules to a 2-dimensional grid of boolean-valued cells:
* If the cell is filled and is surrounded by more than 1 and fewer than 4 filled cells, it will remain filled.
* If the cell is empty and is surrounded by 3 filled cells, it will become filled.
* Otherwise, the cell is or becomes empty.
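Read directly, the rules amount to a tiny pure function on one cell (a sketch; the implementations below inline this same logic):

```python
def next_state(cell, num_neighbors):
    # survival: a filled cell with 2 or 3 filled neighbors stays filled
    if cell and 1 < num_neighbors < 4:
        return 1
    # birth: an empty cell with exactly 3 filled neighbors becomes filled
    if not cell and num_neighbors == 3:
        return 1
    # otherwise the cell is or becomes empty
    return 0

print(next_state(1, 2), next_state(0, 3), next_state(1, 4))  # 1 1 0
```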
For the self-propelling "puffer" pattern, the grid must be initially filled like this:
```
WIDTH = 128
HEIGHT = 32
def new_world():
world = [[0 for i in range(WIDTH)] for j in range(HEIGHT)]
for x, y in [
( 4, 125), ( 3, 124), ( 3, 123), ( 3, 122), ( 3, 121), ( 3, 120), ( 3, 119), ( 4, 119), ( 5, 119), ( 6, 120),
(10, 121), (11, 120), (12, 119), (12, 120), (13, 120), (13, 121), (14, 121),
(20, 121), (19, 120), (18, 120), (18, 119), (17, 121), (17, 120), (16, 121),
(26, 125), (27, 124), (27, 123), (27, 122), (27, 121), (27, 120), (27, 119), (26, 119), (25, 119), (24, 120)
]:
world[x][y] = 1
return world
world = new_world()
def show(world):
for row in world:
stars = "".join("*" if cell else " " for cell in row)
print("|" + stars + "|")
show(world)
```
<br><br><br><br><br>
### Try it in Python!
```
def step_python(world):
outworld = []
for i, row in enumerate(world):
outrow = []
for j, cell in enumerate(row):
# count the number of living neighbors
num_neighbors = 0
for di in -1, 0, 1:
for dj in -1, 0, 1:
if (di, dj) != (0, 0):
if world[(i + di) % HEIGHT][(j + dj) % WIDTH]:
num_neighbors += 1
# use that information to decide if the next value of this cell is 0 or 1
if cell and 1 < num_neighbors < 4:
outrow.append(1)
elif not cell and num_neighbors == 3:
outrow.append(1)
else:
outrow.append(0)
outworld.append(outrow)
return outworld
```
Repeatedly evaluate the next cell to animate it.
```
world = step_python(world)
show(world)
```
But the Python implementation is slow.
```
%%timeit
step_python(world)
```
<br><br><br><br><br>
### Try it in C!
```
%%writefile game-of-life.c
#include <stdint.h>
const int32_t WIDTH = 128;
const int32_t HEIGHT = 32;
void step_c(int8_t* inarray, int8_t* outarray) {
for (int32_t i = 0; i < HEIGHT; i++) {
for (int32_t j = 0; j < WIDTH; j++) {
// count the number of living neighbors
int32_t num_neighbors = 0;
for (int32_t di = -1; di <= 1; di++) {
for (int32_t dj = -1; dj <= 1; dj++) {
if (!(di == 0 && dj == 0)) {
if (inarray[((i + di + HEIGHT) % HEIGHT) * WIDTH + ((j + dj + WIDTH) % WIDTH)]) {
num_neighbors++;
}
}
}
}
// use that information to decide if the next value of this cell is 0 or 1
int8_t cell = inarray[i * WIDTH + j];
if (cell && 1 < num_neighbors && num_neighbors < 4) {
outarray[i * WIDTH + j] = 1;
}
else if (!cell && num_neighbors == 3) {
outarray[i * WIDTH + j] = 1;
}
else {
outarray[i * WIDTH + j] = 0;
}
}
}
// copy the outarray buffer to the inarray buffer (so the above can be repeated)
for (int32_t i = 0; i < HEIGHT; i++) {
for (int32_t j = 0; j < WIDTH; j++) {
inarray[i * WIDTH + j] = outarray[i * WIDTH + j];
}
}
}
!gcc -std=c99 game-of-life.c -O3 -shared -o game-of-life.so
```
Here's a different way to access C code in Python (C code, not C++, unless you wrap it in `extern "C"`): ctypes.
```
import ctypes
ArrayType = ctypes.c_int8 * WIDTH * HEIGHT
step_c = ctypes.cdll.LoadLibrary("./game-of-life.so").step_c
step_c.argtypes = [ArrayType, ArrayType]
step_c.restype = None
inarray = ArrayType()
outarray = ArrayType()
for i, row in enumerate(new_world()):
for j, cell in enumerate(row):
inarray[i][j] = cell
step_c(inarray, outarray)
show(outarray)
```
The C code is much faster.
```
%%timeit
step_c(inarray, outarray)
```
<br><br><br><br><br>
### Try it in NumPy!
The NumPy implementation is tricky because we have to mix data from different cells.
* [np.roll](https://numpy.org/doc/stable/reference/generated/numpy.roll.html) rotates an array, which we use to add up the neighbors an array at a time.
* `&` and `|` perform bitwise logic, but be careful! Comparisons like `<`, `==`, `>` have to be parenthesized because of [order of operations](https://docs.python.org/3/reference/expressions.html#operator-precedence).
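To see why the parentheses matter: `&` binds more tightly than `<` and `==`, so an unparenthesized expression turns into a chained comparison, which is ambiguous for arrays (a small demonstration; the array values are arbitrary):

```python
import numpy as np

n = np.array([0, 2, 3, 5])
ok = (n > 1) & (n < 4)      # element-wise mask: [False  True  True False]
print(ok)
try:
    bad = n > 1 & n < 4     # parsed as n > (1 & n) < 4, a chained comparison
except ValueError as err:   # "truth value of an array ... is ambiguous"
    print("raises:", err)
```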
```
import numpy as np
def step_numpy(world):
num_neighbors = np.zeros(world.shape, dtype=int) # initialize neighbors count
num_neighbors += np.roll(np.roll(world, 1, axis=0), 1, axis=1) # add southwest
num_neighbors += np.roll(np.roll(world, 1, axis=0), 0, axis=1) # add south
num_neighbors += np.roll(np.roll(world, 1, axis=0), -1, axis=1) # add southeast
num_neighbors += np.roll(np.roll(world, 0, axis=0), 1, axis=1) # add west
num_neighbors += np.roll(np.roll(world, 0, axis=0), -1, axis=1) # add east
num_neighbors += np.roll(np.roll(world, -1, axis=0), 1, axis=1) # add northwest
num_neighbors += np.roll(np.roll(world, -1, axis=0), 0, axis=1) # add north
num_neighbors += np.roll(np.roll(world, -1, axis=0), -1, axis=1) # add northeast
survivors = ((world == 1) & (num_neighbors > 1) & (num_neighbors < 4)) # old cells that survive
births = ((world == 0) & (num_neighbors == 3)) # new cells that are born
return (births | survivors).astype(world.dtype) # union as booleans
world = np.array(new_world())
world = step_numpy(world)
show(world)
```
NumPy's speed falls _between_ pure Python and C because the C code scans the data in a single pass, while NumPy makes many passes over the data (each `np.roll` creates a temporary array).
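Some of those temporaries can be avoided with in-place accumulation: `np.roll` still allocates a shifted copy, but writing the running sum into a preallocated buffer with `out=` skips one intermediate array per term (a sketch on a tiny array):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
acc = np.zeros_like(a)
# accumulate two shifted copies without an intermediate sum array
np.add(acc, np.roll(a, 1, axis=1), out=acc)
np.add(acc, np.roll(a, -1, axis=1), out=acc)
print(acc)  # [[3 2 1], [9 8 7]]
```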
```
%%timeit
step_numpy(world)
```
<br><br><br><br><br>
### Try it in SciPy!
The many-passes problem can be reduced by finding a function that computes what we want in a single pass.
SciPy has one: counting neighbors is a special case of convolution: [scipy.signal.convolve2d](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve2d.html).
```
import scipy.signal
num_neighbors_convolver = np.array([[1, 1, 1],
[1, 0, 1],
[1, 1, 1]])
def step_scipy(world):
# this is the step that can now be one function call:
num_neighbors = scipy.signal.convolve2d(world, num_neighbors_convolver, mode="same", boundary="wrap")
survivors = ((world == 1) & (num_neighbors > 1) & (num_neighbors < 4)) # old cells that survive
births = ((world == 0) & (num_neighbors == 3)) # new cells that are born
return (births | survivors).astype(world.dtype) # union as booleans
world = np.array(new_world())
world = step_scipy(world)
show(world)
```
This version performs fewer passes and is a little better.
```
%%timeit
step_scipy(world)
```
<br><br><br><br><br>
### Try it in Numba!
JIT-compilation is the ultimate solution to the many-pass problem. Numba lets us write code that looks like our pure Python function.
```
import numba as nb
@nb.jit
def step_numba(world):
outworld = np.empty_like(world)
for i, row in enumerate(world):
for j, cell in enumerate(row):
# count the number of living neighbors
num_neighbors = 0
for di in -1, 0, 1:
for dj in -1, 0, 1:
if (di, dj) != (0, 0):
if world[(i + di) % HEIGHT][(j + dj) % WIDTH]:
num_neighbors += 1
# use that information to decide if the next value of this cell is 0 or 1
if cell and 1 < num_neighbors < 4:
outworld[i, j] = 1
elif not cell and num_neighbors == 3:
outworld[i, j] = 1
else:
outworld[i, j] = 0
return outworld
world = np.array(new_world())
world = step_numba(world)
show(world)
```
It's _almost_ as fast as C code, but easier to integrate into a Python session.
```
%%timeit
step_numba(world)
```
<br><br><br><br><br>
### Now let's watch an animation
We wanted to compute this function quickly so that we can use it to make animations.
```
import matplotlib
import matplotlib.animation
import matplotlib.pyplot as plt
world = np.array(new_world())
fig = plt.figure(figsize=(8.13, 2.97), dpi=125)
plt.imshow(world)
matplotlib.rc("animation", html="jshtml")
fig = plt.figure(figsize=(8.13, 2.97), dpi=125)
plt.figure(1)
graphic = plt.imshow(world)
plt.close(1)
def update(i):
global world, graphic
world = step_numba(world)
graphic.set_array(world)
return [graphic]
matplotlib.animation.FuncAnimation(fig, update, frames=250, interval=50, blit=True)
```
```
! conda install geopandas -qy
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import geopandas as gpd
from shapely.geometry import Point, Polygon
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold
import zipfile
import requests
import os
import shutil
%matplotlib inline
from downloading_funcs import addr_shape, down_extract_zip
from supp_funcs import zoneConcentration, pointInZone
import lnks
#Load the BBL list
BBL12_17CSV = ['https://opendata.arcgis.com/datasets/82ab09c9541b4eb8ba4b537e131998ce_22.csv', 'https://opendata.arcgis.com/datasets/4c4d6b4defdf4561b737a594b6f2b0dd_23.csv', 'https://opendata.arcgis.com/datasets/d7aa6d3a3fdc42c4b354b9e90da443b7_1.csv', 'https://opendata.arcgis.com/datasets/a8434614d90e416b80fbdfe2cb2901d8_2.csv', 'https://opendata.arcgis.com/datasets/714d5f8b06914b8596b34b181439e702_36.csv', 'https://opendata.arcgis.com/datasets/c4368a66ce65455595a211d530facc54_3.csv',]
def data_pipeline(shapetype, bbl_links, supplement=None,
dex=None, ts_lst_range=None):
#A pipeline for group_e dataframe operations
#Test inputs --------------------------------------------------------------
if supplement:
assert isinstance(supplement, list)
assert isinstance(bbl_links, list)
if ts_lst_range:
assert isinstance(ts_lst_range, list)
assert len(ts_lst_range) == 2 #Must be list of format [start-yr, end-yr]
#We'll need our addresspoints and our shapefile
if not dex:
dex = addr_shape(shapetype)
#We need a list of time_unit_of_analysis
if ts_lst_range:
ts_lst = [x+(i/100) for i in range(1,13,1) for x in range(1980, 2025)]
ts_lst = [x for x in ts_lst if
x >= ts_lst_range[0] and x <= ts_lst_range[1]]
ts_lst = sorted(ts_lst)
if not ts_lst_range:
ts_lst = [x+(i/100) for i in range(1,13,1) for x in range(2012, 2017)]
ts_lst = sorted(ts_lst)
#Now we need to stack our BBL data ----------------------------------------
#Begin by forming an empty DF
bbl_df = pd.DataFrame()
for i in bbl_links:
bbl = pd.read_csv(i, encoding='utf-8', low_memory=False)
col_len = len(bbl.columns)
bbl_df = bbl_df.append(bbl)
if len(bbl.columns) != col_len:
print('Column Mismatch!')
del bbl
bbl_df.LICENSE_START_DATE = pd.to_datetime(
bbl_df.LICENSE_START_DATE)
bbl_df.LICENSE_EXPIRATION_DATE = pd.to_datetime(
bbl_df.LICENSE_EXPIRATION_DATE)
bbl_df.LICENSE_ISSUE_DATE = pd.to_datetime(
bbl_df.LICENSE_ISSUE_DATE)
bbl_df.sort_values('LICENSE_START_DATE')
#Set up our time unit of analysis
bbl_df['month'] = 0
bbl_df['endMonth'] = 0
bbl_df['issueMonth'] = 0
bbl_df['month'] = bbl_df['LICENSE_START_DATE'].dt.year + (
bbl_df['LICENSE_START_DATE'].dt.month/100
)
bbl_df['endMonth'] = bbl_df['LICENSE_EXPIRATION_DATE'].dt.year + (
bbl_df['LICENSE_EXPIRATION_DATE'].dt.month/100
)
bbl_df['issueMonth'] = bbl_df['LICENSE_ISSUE_DATE'].dt.year + (
bbl_df['LICENSE_ISSUE_DATE'].dt.month/100
)
bbl_df.endMonth.fillna(max(ts_lst))
bbl_df['endMonth'][bbl_df['endMonth'] > max(ts_lst)] = max(ts_lst)
#Sort on month
bbl_df = bbl_df.dropna(subset=['month'])
bbl_df = bbl_df.set_index(['MARADDRESSREPOSITORYID','month'])
bbl_df = bbl_df.sort_index(ascending=True)
bbl_df.reset_index(inplace=True)
bbl_df = bbl_df[bbl_df['MARADDRESSREPOSITORYID'] >= 0]
bbl_df = bbl_df.dropna(subset=['LICENSESTATUS', 'issueMonth', 'endMonth',
'MARADDRESSREPOSITORYID','month',
'LONGITUDE', 'LATITUDE'
])
#Now that we have the BBL data, let's create our flag and points data -----
#This is the addresspoints, passed from the dex param
addr_df = dex[0]
#Zip the latlongs
addr_df['geometry'] = [
Point(xy) for xy in zip(
addr_df.LONGITUDE.apply(float), addr_df.LATITUDE.apply(float)
)
]
addr_df['Points'] = addr_df['geometry'] #Duplicate, so raw retains points
addr_df['dummy_counter'] = 1 #Always one, always dropped before export
crs='EPSG:4326' #Convenience assignment of crs
#Now we're stacking for each month ----------------------------------------
out_gdf = pd.DataFrame() #Empty storage df
for i in ts_lst[:2]: #iterate through the list of months
#dex[1] is the designated shapefile passed from the dex param,
#and should match the shapetype defined in that param
#Copy of the dex[1] shapefile
shp_gdf = dex[1]
#Active BBL in month i
bbl_df['inRange'] = 0
bbl_df['inRange'][(bbl_df.endMonth > i) & (bbl_df.month <= i)] = 1
#Issued BBL in month i
bbl_df['isuFlag'] = 0
bbl_df['isuFlag'][bbl_df.issueMonth == i] = 1
#Merge BBL and MAR datasets -------------------------------------------
addr = pd.merge(addr_df, bbl_df, how='left',
left_on='ADDRESS_ID', right_on='MARADDRESSREPOSITORYID')
addr = gpd.GeoDataFrame(addr, crs=crs, geometry=addr.geometry)
addr.crs = shp_gdf.crs
raw = gpd.sjoin(shp_gdf, addr, how='left', op='intersects')
#A simple percent of buildings with active flags per shape,
#and call it a 'utilization index'
numer = raw.groupby('NAME').sum()
numer = numer.inRange
denom = raw.groupby('NAME').sum()
denom = denom.dummy_counter
issue = raw.groupby('NAME').sum()
issue = issue.isuFlag
flags = []
utl_inx = pd.DataFrame(numer/denom)
utl_inx.columns = [
'Util_Indx_BBL'
]
flags.append(utl_inx)
#This is number of buildings with an active BBL in month i
bbl_count = pd.DataFrame(numer)
bbl_count.columns = [
'countBBL'
]
flags.append(bbl_count)
#This is number of buildings that were issued a BBL in month i
isu_count = pd.DataFrame(issue)
isu_count.columns = [
'countIssued'
]
flags.append(isu_count)
for flag in flags:
flag.crs = shp_gdf.crs
shp_gdf = shp_gdf.merge(flag,
how="left", left_on='NAME', right_index=True)
shp_gdf['month'] = i
#Head will be the list of retained columns
head = ['NAME', 'Util_Indx_BBL',
'countBBL', 'countIssued',
'month', 'geometry']
shp_gdf = shp_gdf[head]
if supplement: #this is where your code will be fed into the pipeline.
for supp_func in supplement:
if len(supp_func) == 2:
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1])
if len(supp_func) == 3:
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1],
supp_func[2])
if len(supp_func) == 4:
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1],
supp_func[2], supp_func[3])
out_gdf = out_gdf.append(shp_gdf) #This does the stacking
print('Merged month:', i)
del shp_gdf, addr, utl_inx #Save me some memory please!
#Can't have strings in our matrix
out_gdf = pd.get_dummies(out_gdf, columns=['NAME'])
out_gdf = out_gdf.drop('geometry', axis=1)
out_gdf.to_csv('./data/' + shapetype + '_out.csv') #Save
return [bbl_df, addr_df, out_gdf, raw] #Remove this later, for testing now
dex = addr_shape('anc')
def metro_prox(shp_gdf, raw, bufr=None):
#Flag properties within distance "bufr" of metro stations
if not bufr:
bufr = 1/250 #Hard to say what a good buffer is.
assert isinstance(bufr, float) #buffer must be float!
#Frame up the metro buffer shapes
metro = down_extract_zip(
'https://opendata.arcgis.com/datasets/54018b7f06b943f2af278bbe415df1de_52.zip'
)
metro = gpd.read_file(metro, crs=shp_gdf.crs)
metro.geometry = metro.geometry.buffer(bufr)
metro['bymet'] = 1
metro.drop(['NAME'], axis=1, inplace=True)
#Frame up the raw address points data
pointy = raw[['NAME', 'Points', 'dummy_counter']]
pointy = gpd.GeoDataFrame(pointy, crs=metro.crs,
geometry=pointy.Points)
pointy = gpd.sjoin(pointy, metro,
how='left', op='intersects')
denom = pointy.groupby('NAME').sum()
denom = denom.dummy_counter
numer = pointy.groupby('NAME').sum()
numer = numer.bymet
pct_metro_coverage = pd.DataFrame(numer/denom)
pct_metro_coverage.columns = [
'pct_metro_coverage'
]
pct_metro_coverage.fillna(0, inplace=True)
pct_metro_coverage.crs = pointy.crs
shp_gdf = shp_gdf.merge(pct_metro_coverage,
how="left", left_on='NAME', right_index=True)
return shp_gdf
#sets[0] is the address df
#sets[2]: the number of rows equals the number of shapes * the number of months
cz1217 = ['https://opendata.arcgis.com/datasets/9cbe8553d4e2456ab6c140d83c7e83e0_15.csv', 'https://opendata.arcgis.com/datasets/3d49e06d51984fa2b68f21eed21eba1f_14.csv', 'https://opendata.arcgis.com/datasets/54b57e15f6944af8b413a5e4f88b070c_13.csv', 'https://opendata.arcgis.com/datasets/b3283607f9b74457aff420081eec3190_29.csv', 'https://opendata.arcgis.com/datasets/2dc1a7dbb705471eb38af39acfa16238_28.csv', 'https://opendata.arcgis.com/datasets/585c8c3ef58c4f1ab1ddf1c759b3a8bd_39.csv']
constr12 = pd.read_csv('https://opendata.arcgis.com/datasets/9cbe8553d4e2456ab6c140d83c7e83e0_15.csv')
constr13 = pd.read_csv('https://opendata.arcgis.com/datasets/3d49e06d51984fa2b68f21eed21eba1f_14.csv')
constr14 = pd.read_csv('https://opendata.arcgis.com/datasets/54b57e15f6944af8b413a5e4f88b070c_13.csv')
constr15 = pd.read_csv('https://opendata.arcgis.com/datasets/b3283607f9b74457aff420081eec3190_29.csv')
constr16 = pd.read_csv('https://opendata.arcgis.com/datasets/2dc1a7dbb705471eb38af39acfa16238_28.csv')
constr17 = pd.read_csv('https://opendata.arcgis.com/datasets/585c8c3ef58c4f1ab1ddf1c759b3a8bd_39.csv')
constr12_17 = pd.concat([constr12, constr13, constr14, constr15, constr16, constr17], axis=0, ignore_index=True)
constr12_17.head().T
constr12_17.columns
constr12_17.shape
constr12_17['effective_date'] = pd.to_datetime(constr12_17['EFFECTIVEDATE'])
constr12_17['expire_date'] = pd.to_datetime(constr12_17['EXPIRATIONDATE'])
permit_length = (constr12_17.expire_date - constr12_17.effective_date)
print(permit_length)
permits = constr12_17[(constr12_17['STATUS']=='Permit Expired') |
(constr12_17['STATUS']=='Approved (Pending Payment)') |
(constr12_17['STATUS']=='Issued') |
(constr12_17['STATUS']=='Assigned')]
print(permits.STATUS.unique())
permits.sort_values('effective_date')
permits['month'] = 0
permits['month'] = permits['effective_date'].dt.year + (permits['effective_date'].dt.month/100)
permits['endmonth'] = permits['expire_date'].dt.year + (permits['expire_date'].dt.month/100)
permits = permits.set_index(['month'])
permits = permits.sort_index(ascending=True)
permits.reset_index(inplace=True)
permits = permits.dropna(subset=['STATUS', 'endmonth', 'month', 'LONGITUDE', 'LATITUDE'])
! wget https://opendata.arcgis.com/datasets/6969dd63c5cb4d6aa32f15effb8311f3_8.geojson -O census2012
import geopandas as gpd
import os
census = gpd.read_file('census2012')
census.columns
from shapely.geometry import Point
geometry = [Point(xy) for xy in zip(permits.LONGITUDE.apply(float), permits.LATITUDE.apply(float))]
crs = 'EPSG:4326'
points = gpd.GeoDataFrame(permits, crs=crs, geometry=geometry)
fig, ax = plt.subplots()
census.plot(ax=ax, color='red')
points.plot(ax=ax, color='black', marker='.', markersize=5)
ax.set_aspect('equal')
geo_constr = gpd.sjoin(census, points, how='left', op='intersects')
geo_constr.head()
geo_constr.geometry.head()
Contruct2HouseRatio = pd.DataFrame(geo_constr.TRACT.value_counts()*100000/geo_constr.H0010002.sum())
Contruct2HouseRatio.columns = ['Contruct2HouseRatio']
print(Contruct2HouseRatio)
tracts_housepermits = census.merge(Contruct2HouseRatio, how="left", left_on='TRACT', right_index=True)
tracts_housepermits = tracts_housepermits.dropna(subset=['Contruct2HouseRatio'])
tracts_housepermits.head()
Construct = pd.DataFrame(geo_constr.TRACT.value_counts())
Construct.columns = ['Construct']
print(Construct)
tracts_construction = census.merge(Construct, how="left", left_on='TRACT', right_index=True)
tracts_construction.head()
tracts_construction.corr()['Construct'].sort_values()
```
# Self-Driving Car Engineer Nanodegree
## Deep Learning
## Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement, in stages, the functionality required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project.
The [rubric](https://review.udacity.com/#!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.
---
## Imports
```
import pickle
import numpy as np
import matplotlib.pyplot as plt
import random
import cv2 as cv
import tensorflow as tf
import tqdm
import pandas as pd
import os
import glob
from sklearn.utils import shuffle
from tensorflow.contrib.layers import flatten
%matplotlib inline
```
---
## Step 0: Load The Data
```
# Load pickled data
training_file = './train.p'
validation_file= './valid.p'
testing_file = './test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
```
---
## Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
- `'sizes'` is a list containing tuples, (width, height), representing the original width and height of the image.
- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.
### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
```
# Number of training examples
n_train = X_train.shape[0]
# Number of validation examples
n_validation = X_valid.shape[0]
# Number of testing examples.
n_test = X_test.shape[0]
# Shape of a traffic sign image
image_shape = X_train[0].shape
# Number of unique classes/labels in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
```
### Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.
**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
```
### Data exploration visualization
index = random.randint(0, n_train)
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image)
print(y_train[index])
### Plot the count of each sign
uniqueValues, occurCount = np.unique(y_train, return_counts=True)
print("Unique Values : " , uniqueValues)
print("Occurrence Count : ", occurCount)
### Distribution of classes in train set
plt.bar(uniqueValues, occurCount)
plt.title('Classes Distribution Training Set')
plt.xlabel('Classes')
plt.ylabel('Number of samples');
### Distribution of classes in validation set
uniqueValuesValid, occurCountValid = np.unique(y_valid, return_counts=True)
plt.bar(uniqueValuesValid, occurCountValid)
plt.title('Classes Distribution Validation Set')
plt.xlabel('Classes')
plt.ylabel('Number of samples');
### Distribution of classes in test set
uniqueValuesTest, occurCountTest = np.unique(y_test, return_counts=True)
plt.bar(uniqueValuesTest, occurCountTest)
plt.title('Classes Distribution Test Set')
plt.xlabel('Classes')
plt.ylabel('Number of samples');
```
----
## Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
- Neural network architecture (is the network over or underfitting?)
- Experiment with preprocessing techniques (normalization, RGB to grayscale, etc.)
- Number of examples per label (some have more than others).
- Generate fake data.
Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
### Pre-process the Data Set (normalization, grayscale, etc.)
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project.
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
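As a rough sketch, the quick `(pixel - 128)/128` normalization mentioned above might look like this; the toy 1×1×3 "image" is just for illustration:

```python
import numpy as np

def quick_normalize(images):
    """Approximate normalization: maps pixel values [0, 255] to roughly [-1, 1)."""
    return (images.astype(np.float32) - 128.0) / 128.0

batch = np.array([[[0, 128, 255]]], dtype=np.uint8)  # toy 1x1x3 "image"
print(quick_normalize(batch))  # [[[-1.  0.  0.9921875]]]
```

Note the result is only approximately zero-mean; a proper standardization (subtract the dataset mean, divide by its standard deviation) is done later in the preprocessing cell.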
```
### Convert images to YUV space, keep only the Y (luma) channel, and apply CLAHE for contrast
def preprocess(X):
    """Return the CLAHE-equalized Y channel of each image, shaped (N, 32, 32, 1)."""
    out = np.zeros((X.shape[0], X.shape[1], X.shape[2], 1))
    clahe = cv.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
    for i in range(X.shape[0]):
        # The pickled images are RGB, so convert with COLOR_RGB2YUV
        img_yuv = cv.cvtColor(X[i], cv.COLOR_RGB2YUV)
        y_ch,_,_ = cv.split(img_yuv)
        out[i,:,:,0] = clahe.apply(y_ch)
    return out

X_train_proc = preprocess(X_train)
X_valid_proc = preprocess(X_valid)
X_test_proc = preprocess(X_test)
### Augment images
def rotate(img, angle=15):
    """Rotate a (32, 32, 1) image by `angle` degrees about its center."""
    M = cv.getRotationMatrix2D(center=(16, 16), angle=angle, scale=1.0)
    return cv.warpAffine(src=img, M=M, dsize=(32, 32)).reshape(32, 32, 1)

def translate(img, pixels=2):
    """Shift the image by a random offset of up to `pixels` in x and y."""
    tx = np.random.choice(range(-pixels, pixels))
    ty = np.random.choice(range(-pixels, pixels))
    M = np.float32([[1, 0, tx], [0, 1, ty]])
    return cv.warpAffine(src=img, M=M, dsize=(32, 32)).reshape(32, 32, 1)

def change_scale(img, scale=1.1):
    """Zoom in slightly: enlarge the image, then center-crop back to 32x32."""
    resized = cv.resize(img, None, fx=scale, fy=scale)
    h, w = resized.shape[:2]
    top, left = (h - 32) // 2, (w - 32) // 2
    return resized[top:top + 32, left:left + 32].reshape(32, 32, 1)

def augment(image):
    """Apply rotation, scaling and translation to one preprocessed image."""
    tmp_img = image.copy()
    tmp_img = rotate(tmp_img, 15)
    tmp_img = change_scale(tmp_img)
    tmp_img = translate(tmp_img)
    return tmp_img
aug_X_train = X_train_proc.copy()
aug_y_train = y_train.copy()
# Sample indices rather than images so each augmented image keeps its own label
aug_idx = random.sample(range(len(X_train_proc)), 5000)
for idx in tqdm.tqdm(aug_idx):
    tmp_img = augment(X_train_proc[idx]).reshape(1,32,32,1)
    aug_X_train = np.append(aug_X_train, tmp_img, axis=0)
    aug_y_train = np.append(aug_y_train, y_train[idx])
### Normalize the data.
X_train_proc = (X_train_proc - np.mean(X_train_proc))/np.std(X_train_proc)
X_valid_proc = (X_valid_proc - np.mean(X_valid_proc))/np.std(X_valid_proc)
X_test_proc = (X_test_proc - np.mean(X_test_proc))/np.std(X_test_proc)
### Shuffle the (preprocessed) data
X_train_proc, y_train = shuffle(X_train_proc, y_train)
X_valid_proc, y_valid = shuffle(X_valid_proc, y_valid)
X_test_proc, y_test = shuffle(X_test_proc, y_test)
```
### Model Architecture
```
### Define your architecture here.
### Feel free to use as many code cells as needed.
```
### Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. Low accuracy on both the training and validation sets implies underfitting. High accuracy on the training set but low accuracy on the validation set implies overfitting.
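The under/overfitting diagnosis above can be sketched as a small helper. The `gap` and `floor` thresholds here are arbitrary illustrative choices, not part of the project spec:

```python
import numpy as np

def accuracy(y_pred, y_true):
    """Fraction of predictions that match the true labels."""
    return float(np.mean(np.asarray(y_pred) == np.asarray(y_true)))

def diagnose(train_acc, valid_acc, gap=0.05, floor=0.90):
    """Crude fit diagnosis from training and validation accuracy."""
    if train_acc < floor and valid_acc < floor:
        return 'underfitting'   # model is weak everywhere
    if train_acc - valid_acc > gap:
        return 'overfitting'    # model memorizes training data
    return 'ok'

print(diagnose(0.99, 0.91))  # overfitting
print(diagnose(0.70, 0.68))  # underfitting
```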
```
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
```
---
## Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.
### Load and Output the Images
```
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
```
### Predict the Sign Type for Each Image
```
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
```
### Analyze Performance
```
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
```
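The accuracy arithmetic above is simply correct predictions divided by total. A minimal sketch with hypothetical labels and predictions:

```python
import numpy as np

y_true = np.array([14, 1, 13, 25, 3])   # hypothetical true labels for 5 web images
y_pred = np.array([14, 2, 13, 38, 17])  # hypothetical model predictions
acc = float(np.mean(y_pred == y_true))
print(acc)  # 0.4 -> 2 out of 5 correct
```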
### Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:
```
# (5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:
```
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
```
Looking just at the first row, we get `[ 0.34763842, 0.24879643, 0.12789202]`; you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
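For comparison, the same top-k selection can be reproduced in plain NumPy with `argsort`, using the array `a` shown above:

```python
import numpy as np

a = np.array([[0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202],
              [0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337],
              [0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179],
              [0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091],
              [0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])

k = 3
# argsort sorts ascending; take the last k columns and reverse to get descending order
indices = np.argsort(a, axis=1)[:, -k:][:, ::-1]
values = np.take_along_axis(a, indices, axis=1)
print(indices[0])  # [3 0 5]
print(values[0])   # [0.34763842 0.24879643 0.12789202]
```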
```
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
```
### Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
---
## Step 4 (Optional): Visualize the Neural Network's State with Test Images
This section is not required but serves as an additional exercise for understanding the output of a neural network's weights. While neural networks can be great learning devices, they are often referred to as black boxes. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network, you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided below is function code that lets you visualize the output of any TensorFlow weight layer. The inputs to the function are a stimulus image (one used during training or a new one you provide) and the TensorFlow variable that represents the layer's state during training. For instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer, you could pass `conv2` as the `tf_activation` variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
```
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
```
---
# Pairs Trading with Machine Learning
Jonathan Larkin
August, 2017
In developing a Pairs Trading strategy, finding valid, eligible pairs which exhibit unconditional mean-reverting behavior is of critical importance. This notebook walks through an example implementation of finding eligible pairs. We show how popular algorithms from Machine Learning can help us navigate a very high-dimensional search space to find tradeable pairs.
```
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans, DBSCAN
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn import preprocessing
from statsmodels.tsa.stattools import coint
from scipy import stats
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.filters.morningstar import Q500US, Q1500US, Q3000US
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
print "Numpy: %s " % np.__version__
print "Pandas: %s " % pd.__version__
study_date = "2016-12-31"
```
## Define Universe
We start by constraining our search for pairs to a large and liquid single-stock universe.
```
universe = Q1500US()
```
## Choose Data
In addition to pricing, let's use some fundamental and industry classification data. When we look for pairs (or model anything in quantitative finance), it is generally good to have an "economic prior", as this helps mitigate overfitting. I often see Quantopian users create strategies with a fixed set of pairs that they have likely chosen by some fundamental rationale ("KO and PEP should be related because..."). A purely fundamental approach is a fine way to search for pairs, however breadth will likely be low. As discussed in [The Foundation of Algo Success](https://blog.quantopian.com/the-foundation-of-algo-success/), you can maximize Sharpe by having high breadth (high number of bets). With `N` stocks in the universe, there are `N*(N-1)/2` pair-wise relationships. However, if we do a brute-force search over these, we will likely end up with many spurious results. As such, let's narrow down the search space in a reasonable way. In this study, I start with the following priors:
- Stocks that share loadings to common factors (defined below) in the past should be related in the future.
- Stocks of similar market caps should be related in the future.
- We should exclude stocks in the industry group "Conglomerates" (industry code 31055). Morningstar analysts classify stocks into industry groups primarily based on similarity in revenue lines. "Conglomerates" is a catch-all industry. As described in the [Morningstar Global Equity Classification Structure manual](http://corporate.morningstar.com/us/documents/methodologydocuments/methodologypapers/equityclassmethodology.pdf): "If the company has more than three sources of revenue and income and there is no clear dominant revenue and income stream, the company is assigned to the Conglomerates industry." We should not expect these stocks to be good members of any pairs in the future. This turns out to have zero impact on the Q500 and removes only 1 stock from the Q1500, but I left this idea in for didactic purposes.
- Creditworthiness is an important factor in future company performance. It's difficult to find credit spread data and map the reference entity to the appropriate equity security. There is a model, colloquially called the [Merton Model](http://www.investopedia.com/terms/m/mertonmodel.asp), however, which takes a contingent claims approach to modeling the capital structure of the firm. The output is an implied probability of default. Morningstar analysts calculate this for us and the field is called `financial_health_grade`. A full description of this field is in the [help docs](https://www.quantopian.com/help/fundamentals#asset-classification).
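For the Q1500US universe, the `N*(N-1)/2` pair-count formula above works out as follows:

```python
# Number of unordered pair-wise relationships among N stocks
N = 1500  # size of the Q1500US universe
n_pairs = N * (N - 1) // 2
print(n_pairs)  # 1124250 -- over a million candidate pairs before any filtering
```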
```
pipe = Pipeline(
columns= {
'Market Cap': morningstar.valuation.market_cap.latest.quantiles(5),
'Industry': morningstar.asset_classification.morningstar_industry_group_code.latest,
'Financial Health': morningstar.asset_classification.financial_health_grade.latest
},
screen=universe
)
res = run_pipeline(pipe, study_date, study_date)
res.index = res.index.droplevel(0) # drop the single date from the multi-index
print res.shape
res.head()
# remove stocks in Industry "Conglomerates"
res = res[res['Industry']!=31055]
print res.shape
# remove stocks without a Financial Health grade
res = res[res['Financial Health'].notnull()]
print res.shape
# replace the categorical data with numerical scores per the docs
res['Financial Health'] = res['Financial Health'].astype('object')
health_dict = {u'A': 0.1,
u'B': 0.3,
u'C': 0.7,
u'D': 0.9,
u'F': 1.0}
res = res.replace({'Financial Health': health_dict})
res.describe()
```
## Define Horizon
We are going to work with a daily return horizon in this strategy.
```
pricing = get_pricing(
symbols=res.index,
fields='close_price',
start_date=pd.Timestamp(study_date) - pd.DateOffset(months=24),
end_date=pd.Timestamp(study_date)
)
pricing.shape
returns = pricing.pct_change()
returns[symbols(['AAPL'])].plot();
# we can only work with stocks that have the full return series
returns = returns.iloc[1:,:].dropna(axis=1)
print returns.shape
```
## Find Candidate Pairs
Given the pricing data and the fundamental and industry/sector data, we will first classify stocks into clusters and then, within clusters, look for strong mean-reverting pair relationships.
The first hypothesis above is that "Stocks that share loadings to common factors in the past should be related in the future". Common factors are things like sector/industry membership and widely known ranking schemes like momentum and value. We could specify the common factors *a priori* to well known factors, or alternatively, we could let the data speak for itself. In this post we take the latter approach. We use PCA to reduce the dimensionality of the returns data and extract the historical latent common factor loadings for each stock. For a nice visual introduction to what PCA is doing, take a look [here](http://setosa.io/ev/principal-component-analysis/) (thanks to Gus Gordon for pointing out this site).
We will take these features, add in the fundamental features, and then use the `DBSCAN` **unsupervised** [clustering algorithm](http://hdbscan.readthedocs.io/en/latest/comparing_clustering_algorithms.html#dbscan) which is available in [`scikit-learn`](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html). Thanks to Thomas Wiecki for pointing me to this specific clustering technique and helping with implementation. Initially I looked at using `KMeans` but `DBSCAN` has advantages in this use case, specifically
- `DBSCAN` does not cluster *all* stocks; it leaves out stocks which do not neatly fit into a cluster;
- relatedly, you do not need to specify the number of clusters.
The clustering algorithm will give us sensible *candidate* pairs. We will need to do some validation in the next step.
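The noise-handling behavior of `DBSCAN` described above can be seen on a toy 2-D data set; the points and `eps` below are arbitrary illustrative choices:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two tight groups plus one isolated point
X = np.array([[0.0, 0.0], [0.0, 0.1], [0.1, 0.0],
              [5.0, 5.0], [5.0, 5.1], [5.1, 5.0],
              [20.0, 20.0]])
labels = DBSCAN(eps=0.5, min_samples=2).fit(X).labels_
print(labels)  # [ 0  0  0  1  1  1 -1] -- the isolated point is labeled -1 (noise)
```

This is the property we exploit: stocks that do not neatly fit any cluster get label `-1` and are simply excluded from the pair search, with no need to choose a cluster count up front.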
### PCA Decomposition and DBSCAN Clustering
```
N_PRIN_COMPONENTS = 50
pca = PCA(n_components=N_PRIN_COMPONENTS)
pca.fit(returns)
pca.components_.T.shape
```
We have now reduced the data to the first `N_PRIN_COMPONENTS` principal component loadings. Let's also add some fundamental values to make the model more robust.
```
X = np.hstack(
(pca.components_.T,
res['Market Cap'][returns.columns].values[:, np.newaxis],
res['Financial Health'][returns.columns].values[:, np.newaxis])
)
print X.shape
X = preprocessing.StandardScaler().fit_transform(X)
print X.shape
clf = DBSCAN(eps=1.9, min_samples=3)
print clf
clf.fit(X)
labels = clf.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print "\nClusters discovered: %d" % n_clusters_
clustered = clf.labels_
# the initial dimensionality of the search was
ticker_count = len(returns.columns)
print "Total pairs possible in universe: %d " % (ticker_count*(ticker_count-1)/2)
clustered_series = pd.Series(index=returns.columns, data=clustered.flatten())
clustered_series_all = pd.Series(index=returns.columns, data=clustered.flatten())
clustered_series = clustered_series[clustered_series != -1]
CLUSTER_SIZE_LIMIT = 9999
counts = clustered_series.value_counts()
ticker_count_reduced = counts[(counts>1) & (counts<=CLUSTER_SIZE_LIMIT)]
print "Clusters formed: %d" % len(ticker_count_reduced)
print "Pairs to evaluate: %d" % (ticker_count_reduced*(ticker_count_reduced-1)).sum()
```
We have reduced the search space for pairs from over 1 million to approximately 2,000.
### Cluster Visualization
We have found 11 clusters. The data are clustered in 52 dimensions. As an attempt to visualize what has happened in 2d, we can try with [T-SNE](https://distill.pub/2016/misread-tsne/). T-SNE is an algorithm for visualizing very high dimension data in 2d, created in part by Geoff Hinton. We visualize the discovered pairs to help us gain confidence that the `DBSCAN` output is sensible; i.e., we want to see that T-SNE and DBSCAN both find our clusters.
```
X_tsne = TSNE(learning_rate=1000, perplexity=25, random_state=1337).fit_transform(X)
plt.figure(1, facecolor='white')
plt.clf()
plt.axis('off')
plt.scatter(
X_tsne[(labels!=-1), 0],
X_tsne[(labels!=-1), 1],
s=100,
alpha=0.85,
c=labels[labels!=-1],
cmap=cm.Paired
)
plt.scatter(
X_tsne[(clustered_series_all==-1).values, 0],
X_tsne[(clustered_series_all==-1).values, 1],
s=100,
alpha=0.05
)
plt.title('T-SNE of all Stocks with DBSCAN Clusters Noted');
```
We can also see how many stocks we found in each cluster and then visualize the normalized time series of the members of a handful of the smaller clusters.
```
plt.barh(
xrange(len(clustered_series.value_counts())),
clustered_series.value_counts()
)
plt.title('Cluster Member Counts')
plt.xlabel('Stocks in Cluster')
plt.ylabel('Cluster Number');
```
To again visualize if our clustering is doing anything sensible, let's look at a few clusters (for reproducibility, keep all random state and dates the same in this notebook).
```
# get the number of stocks in each cluster
counts = clustered_series.value_counts()
# let's visualize some clusters
cluster_vis_list = list(counts[(counts<20) & (counts>1)].index)[::-1]
# plot a handful of the smallest clusters
for clust in cluster_vis_list[0:min(len(cluster_vis_list), 3)]:
tickers = list(clustered_series[clustered_series==clust].index)
means = np.log(pricing[tickers].mean())
data = np.log(pricing[tickers]).sub(means)
data.plot(title='Stock Time Series for Cluster %d' % clust)
```
We might be interested to see what a cluster looks like for a particular stock. Large bank stocks are subject to similarly strict regulatory oversight and are similarly sensitive to economic conditions and interest rates. We indeed see that our clustering has found a bank-stock cluster.
```
which_cluster = clustered_series.loc[symbols('JPM')]
clustered_series[clustered_series == which_cluster]
tickers = list(clustered_series[clustered_series==which_cluster].index)
means = np.log(pricing[tickers].mean())
data = np.log(pricing[tickers]).sub(means)
data.plot(legend=False, title="Stock Time Series for Cluster %d" % which_cluster);
```
Now that we have sensible clusters of common stocks, we can validate the cointegration relationships.
```
def find_cointegrated_pairs(data, significance=0.05):
# This function is from https://www.quantopian.com/lectures/introduction-to-pairs-trading
n = data.shape[1]
score_matrix = np.zeros((n, n))
pvalue_matrix = np.ones((n, n))
keys = data.keys()
pairs = []
for i in range(n):
for j in range(i+1, n):
S1 = data[keys[i]]
S2 = data[keys[j]]
result = coint(S1, S2)
score = result[0]
pvalue = result[1]
score_matrix[i, j] = score
pvalue_matrix[i, j] = pvalue
if pvalue < significance:
pairs.append((keys[i], keys[j]))
return score_matrix, pvalue_matrix, pairs
cluster_dict = {}
for i, which_clust in enumerate(ticker_count_reduced.index):
tickers = clustered_series[clustered_series == which_clust].index
score_matrix, pvalue_matrix, pairs = find_cointegrated_pairs(
pricing[tickers]
)
cluster_dict[which_clust] = {}
cluster_dict[which_clust]['score_matrix'] = score_matrix
cluster_dict[which_clust]['pvalue_matrix'] = pvalue_matrix
cluster_dict[which_clust]['pairs'] = pairs
pairs = []
for clust in cluster_dict.keys():
pairs.extend(cluster_dict[clust]['pairs'])
pairs
print "We found %d pairs." % len(pairs)
print "In those pairs, there are %d unique tickers." % len(np.unique(pairs))
```
### Pair Visualization
Lastly, for the pairs we found and validated, let's visualize them in 2d space with T-SNE again.
```
stocks = np.unique(pairs)
X_df = pd.DataFrame(index=returns.T.index, data=X)
in_pairs_series = clustered_series.loc[stocks]
stocks = list(np.unique(pairs))
X_pairs = X_df.loc[stocks]
X_tsne = TSNE(learning_rate=50, perplexity=3, random_state=1337).fit_transform(X_pairs)
plt.figure(1, facecolor='white')
plt.clf()
plt.axis('off')
for pair in pairs:
ticker1 = pair[0].symbol
loc1 = X_pairs.index.get_loc(pair[0])
x1, y1 = X_tsne[loc1, :]
ticker2 = pair[1].symbol
loc2 = X_pairs.index.get_loc(pair[1])
x2, y2 = X_tsne[loc2, :]
plt.plot([x1, x2], [y1, y2], 'k-', alpha=0.3, c='gray');
plt.scatter(X_tsne[:, 0], X_tsne[:, 1], s=220, alpha=0.9, c=[in_pairs_series.values], cmap=cm.Paired)
plt.title('T-SNE Visualization of Validated Pairs');
```
## Conclusion and Next Steps
We have found a nice number of pairs to use in a pairs trading strategy. Note that the unique number of stocks is less than the number of pairs. This means that the same stock, e.g., AEP, is in more than one pair. This is fine, but we will need to take some special precautions in the **Portfolio Construction** stage to avoid excessive concentration in any one stock. Happy hunting for pairs!
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
---
# Equilibrium Properties and Partial Ordering (Al-Fe and Al-Ni)
```
# Only needed in a Jupyter Notebook
%matplotlib inline
# Optional plot styling
import matplotlib
matplotlib.style.use('bmh')
import matplotlib.pyplot as plt
from pycalphad import equilibrium
from pycalphad import Database, Model
import pycalphad.variables as v
import numpy as np
```
## Al-Fe (Heat Capacity and Degree of Ordering)
Here we compute equilibrium thermodynamic properties in the Al-Fe system. We know that only B2 and liquid are stable in the temperature range of interest, but we just as easily could have included all the phases in the calculation using `my_phases = list(db.phases.keys())`. Notice that the syntax for specifying a range is `(min, max, step)`. We can also directly specify a list of temperatures using the list syntax, e.g., `[300, 400, 500, 1400]`.
We explicitly indicate that we want to compute equilibrium values of the `heat_capacity` and `degree_of_ordering` properties. These are both defined in the default `Model` class. For a complete list, see the documentation. `equilibrium` will always return the Gibbs energy, chemical potentials, phase fractions and site fractions, regardless of the value of `output`.
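The `(min, max, step)` range condition behaves like a half-open NumPy range; this sketch assumes `np.arange`-style semantics, with the upper bound excluded:

```python
import numpy as np

# Temperatures swept by the condition {v.T: (300, 2000, 50)}
temps = np.arange(300, 2000, 50)
print(temps[0], temps[-1], len(temps))  # 300 1950 34
```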
```
db = Database('alfe_sei.TDB')
my_phases = ['LIQUID', 'B2_BCC']
eq = equilibrium(db, ['AL', 'FE', 'VA'], my_phases, {v.X('AL'): 0.25, v.T: (300, 2000, 50), v.P: 101325},
output=['heat_capacity', 'degree_of_ordering'])
print(eq)
```
We also compute degree of ordering at fixed temperature as a function of composition.
```
eq2 = equilibrium(db, ['AL', 'FE', 'VA'], 'B2_BCC', {v.X('AL'): (0,1,0.01), v.T: 700, v.P: 101325},
output='degree_of_ordering')
print(eq2)
```
### Plots
Next we plot the degree of ordering versus temperature. We can see that the decrease in the degree of ordering is relatively steady and continuous. This is indicative of a second-order transition from partially ordered B2 to disordered bcc (A2).
```
plt.gca().set_title('Al-Fe: Degree of bcc ordering vs T [X(AL)=0.25]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Degree of ordering')
plt.gca().set_ylim((-0.1,1.1))
# Generate a list of all indices where B2 is stable
phase_indices = np.nonzero(eq.Phase.values == 'B2_BCC')
# phase_indices[2] refers to all temperature indices
# We know this because pycalphad always returns indices in order like P, T, X's
plt.plot(np.take(eq['T'].values, phase_indices[2]), eq['degree_of_ordering'].values[phase_indices])
plt.show()
```
For the heat capacity curve shown below we notice a sharp increase in the heat capacity around 750 K. This is indicative of a magnetic phase transition and, indeed, the temperature at the peak of the curve coincides with 75% of 1043 K, the Curie temperature of pure Fe. (Pure bcc Al is paramagnetic so it has an effective Curie temperature of 0 K.)
We also observe a sharp jump in the heat capacity near 1800 K, corresponding to the melting of the bcc phase.
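As a quick numerical check of the Curie-temperature claim, a linear-dilution estimate (our own back-of-the-envelope approximation, not what pycalphad's magnetic model actually computes) lands close to the observed peak:

```python
# Linear-dilution estimate of the effective Curie temperature at X(AL) = 0.25.
# Illustrative only: the CALPHAD magnetic model is more sophisticated than this.
tc_fe = 1043.0        # Curie temperature of pure bcc Fe, in K
x_fe = 1.0 - 0.25     # mole fraction of Fe at X(AL) = 0.25
tc_effective = x_fe * tc_fe
print(tc_effective)   # 782.25 K, consistent with the peak observed near 750 K
```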
```
plt.gca().set_title('Al-Fe: Heat capacity vs T [X(AL)=0.25]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Heat Capacity (J/mol-atom-K)')
# np.squeeze is used to remove all dimensions of size 1
# For a 1-D/"step" calculation, this aligns the temperature and heat capacity arrays
# In 2-D/"map" calculations, we'd have to explicitly select the composition of interest
plt.plot(eq['T'].values, np.squeeze(eq['heat_capacity'].values))
plt.show()
```
To understand more about what's happening around 700 K, we plot the degree of ordering versus composition. Note that this plot includes only the `B2_BCC` phase. We observe disordered bcc (A2) up to roughly 13% Al (and, from the other end of the composition axis, roughly 13% Fe), beyond which the phase begins to order.
```
plt.gca().set_title('Al-Fe: Degree of bcc ordering vs X(AL) [T=700 K]')
plt.gca().set_xlabel('X(AL)')
plt.gca().set_ylabel('Degree of ordering')
# Select all points in the datasets where B2_BCC is stable, dropping the others
eq2_b2_bcc = eq2.where(eq2.Phase == 'B2_BCC', drop=True)
plt.plot(eq2_b2_bcc['X_AL'].values, eq2_b2_bcc['degree_of_ordering'].values.squeeze())
plt.show()
```
## Al-Ni (Degree of Ordering)
```
db_alni = Database('NI_AL_DUPIN_2001.TDB')
phases = ['LIQUID', 'FCC_L12']
eq_alni = equilibrium(db_alni, ['AL', 'NI', 'VA'], phases, {v.X('AL'): 0.10, v.T: (300, 2500, 20), v.P: 101325},
output='degree_of_ordering')
print(eq_alni)
```
### Plots
In the plot below we observe two phases designated `FCC_L12`. This is indicative of a miscibility gap. The ordered gamma-prime phase steadily decreases in amount with increasing temperature until it completely disappears around 750 K, leaving only the disordered gamma phase.
```
from pycalphad.plot.utils import phase_legend
phase_handles, phasemap = phase_legend(phases)
plt.gca().set_title('Al-Ni: Phase fractions vs T [X(AL)=0.1]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Phase Fraction')
plt.gca().set_ylim((0,1.1))
plt.gca().set_xlim((300, 2000))
for name in phases:
phase_indices = np.nonzero(eq_alni.Phase.values == name)
plt.scatter(np.take(eq_alni['T'].values, phase_indices[2]), eq_alni.NP.values[phase_indices], color=phasemap[name])
plt.gca().legend(phase_handles, phases, loc='lower right')
```
In the plot below we see that the degree of ordering does not change at all in each phase. There is a very abrupt disappearance of the completely ordered gamma-prime phase, leaving the completely disordered gamma phase. This is a first-order phase transition.
```
plt.gca().set_title('Al-Ni: Degree of fcc ordering vs T [X(AL)=0.1]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Degree of ordering')
plt.gca().set_ylim((-0.1,1.1))
# Generate a list of all indices where FCC_L12 is stable and ordered
L12_phase_indices = np.nonzero(np.logical_and((eq_alni.Phase.values == 'FCC_L12'),
(eq_alni.degree_of_ordering.values > 0.01)))
# Generate a list of all indices where FCC_L12 is stable and disordered
fcc_phase_indices = np.nonzero(np.logical_and((eq_alni.Phase.values == 'FCC_L12'),
(eq_alni.degree_of_ordering.values <= 0.01)))
# phase_indices[2] refers to all temperature indices
# We know this because pycalphad always returns indices in order like P, T, X's
plt.plot(np.take(eq_alni['T'].values, L12_phase_indices[2]), eq_alni['degree_of_ordering'].values[L12_phase_indices],
         label=r'$\gamma\prime$ (ordered fcc)', color='red')
plt.plot(np.take(eq_alni['T'].values, fcc_phase_indices[2]), eq_alni['degree_of_ordering'].values[fcc_phase_indices],
         label=r'$\gamma$ (disordered fcc)', color='blue')
plt.legend()
plt.show()
```
```
%matplotlib notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import csv
import sklearn.feature_extraction.text
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
import itertools
from sklearn import preprocessing
import re
import string
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from tqdm import tqdm
plt.ioff()
pd.set_option('display.max_columns', None)
```
## Download data from https://www.kaggle.com/mczielinski/bitcoin-historical-data/data
```
data_frame = pd.read_csv('data/coinbaseUSD_1-min_data_2014-12-01_to_2017-10-20.csv.csv')
data_frame.tail()
prices = data_frame['Weighted_Price']
prices = np.array(prices[-500000:])
plt.plot(prices[0:100])
plt.show()
X = []
y = []
context_length = 50
gap = 10
for dt in tqdm(range(context_length+1, len(prices)-gap, gap)):
input_prices = prices[dt-context_length:dt]
input_percent_change = [[1e5*(prices[t]-prices[t-1])/(prices[t-1])] for t in range(dt-context_length, dt)]  # use t, not dt, so the comprehension does not shadow the loop variable
price_now = prices[dt+gap]
percent_change_since_last_knowledge = (input_prices[-1]-price_now)/input_prices[-1]
is_change_positive = [1, 0] if percent_change_since_last_knowledge > 0 else [0, 1]
X.append(input_percent_change)
y.append(is_change_positive)
X = np.array(X)
# X = X.reshape((len(X), 1, 100))
y = np.array(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/4., random_state=1)
print(X_train.shape)
print(y_train.shape)
print(X_train[0:])
plt.plot(X[0:50])
plt.show()
# embedding_vector_length = 64
model = Sequential()
# model.add(Embedding(vocab_size, embedding_vector_length, input_length=max_review_length))
model.add(LSTM(128, input_shape=(context_length, 1)))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=5, batch_size=64, validation_data=(X_test, y_test))
# Final evaluation of the model
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
y_train_pred_class = np.argmax(y_train_pred, axis=1)
y_test_pred_class = np.argmax(y_test_pred, axis=1)
y_train_class = np.argmax(y_train, axis=1)
y_test_class = np.argmax(y_test, axis=1)
from sklearn.metrics import accuracy_score
train_accuracy = accuracy_score(y_train_class, y_train_pred_class)
test_accuracy = accuracy_score(y_test_class, y_test_pred_class)
print("Train Accuracy {}%".format(train_accuracy*100.0))
print("Test Accuracy {}%".format(test_accuracy*100))
cf_matrix = confusion_matrix(y_test_class, y_test_pred_class)
print("Confusion Matrix")
print(cf_matrix)
```
# `Python Programming Practicum`
<br>
## `Lesson 9: Web Development in Python`
<br><br>
### `Roman Ischenko (roman.ischenko@gmail.com)`
#### `Moscow, 2021`
```
import warnings
warnings.filterwarnings('ignore')
```
### `HTTP`
HTTP (HyperText Transfer Protocol) is a protocol for fetching resources. Originally, as the name suggests, it was meant for documents, but today it is used to transfer arbitrary data.
#### Advantages
- Simple and human-readable
- Extensible
- Stateless (each request is handled independently of the others)
#### Extensions
- Caching
- Relaxing the same-origin restriction
- Authentication
- Proxies and tunneling
- Sessions
### `Request structure`
- An HTTP method (GET, POST, OPTIONS, etc.) defining the operation the client wants to perform
- The path to the resource
- The HTTP protocol version
- Headers (optional)
- A body (for some methods, such as POST)
```
GET / HTTP/1.1
Host: ya.ru
User-Agent: Python script
Accept: */*
```
### `Response structure`
- The HTTP protocol version
- An HTTP status code indicating whether the request succeeded, or why it failed
- A status message, a short description of the status code
- HTTP headers
- Optionally, a body containing the transferred resource
```
HTTP/1.1 200 Ok
Cache-Control: no-cache,no-store,max-age=0,must-revalidate
Content-Length: 59978
Content-Type: text/html; charset=UTF-8
Date: Thu, 29 Apr 2021 03:48:39 GMT
Set-Cookie: yp=1622260119.ygu.1; Expires=Sun, 27-Apr-2031 03:48:39 GMT; Domain=.ya.ru; Path=/
```
### `Request types`
- GET
- HEAD
- POST
- PUT
- DELETE
- CONNECT
- OPTIONS
- TRACE
- PATCH
### `Headers`
- Authentication
- Caching
- Client hints
- Conditionals
- Connection management
- Cookies
- Message body information
- Request context
- Response context
- Security
- WebSockets
### `Status codes`
- Informational (100-199)
- Successful (200-299)
- Redirections (300-399)
- Client errors (400-499)
- Server errors (500-599)
```
200 OK
302 Found
400 Bad Request
401 Unauthorized
404 Not Found
500 Internal Server Error
503 Service Unavailable
```
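The five classes map onto the leading digit of the code; a small helper (our own sketch, not part of any library) makes the rule explicit:

```python
def status_class(code: int) -> str:
    """Return the class of an HTTP status code based on its leading digit."""
    classes = {
        1: "Informational",
        2: "Successful",
        3: "Redirection",
        4: "Client error",
        5: "Server error",
    }
    if 100 <= code <= 599:
        return classes[code // 100]
    raise ValueError(f"not an HTTP status code: {code}")

print(status_class(200))  # Successful
print(status_class(404))  # Client error
```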
### `HTTPS`
- HTTPS is not a separate transfer protocol; it is an extension of HTTP with an encryption layer on top
- data sent over plain HTTP is unprotected, while HTTPS keeps it confidential by encrypting it
- HTTP uses port 80, HTTPS uses port 443
How it works:
- Asymmetric encryption is used to establish a connection key
- All further communication is encrypted with that session key
### `Python Web-clients`
The standard library `urllib`
```
import json
import urllib.request
ur = urllib.request.urlopen('https://postman-echo.com/get?foo=bar')
print(ur.code)
content = json.loads(ur.read())
print(json.dumps(content, indent=4, sort_keys=True))
import json
from urllib import request, parse
data = parse.urlencode({ 'foo': 'bar' }).encode()
req = request.Request('https://postman-echo.com/post', method="POST", data=data)
ur = request.urlopen(req)
print(ur.headers)
content = json.loads(ur.read())
print(json.dumps(content, indent=4, sort_keys=True))
```
The `requests` library
Installation:
`pipenv install requests`
A GET request
```
import requests
r = requests.get('https://postman-echo.com/get', params={'foo': 'bar'}, headers={'user-agent': 'Python Script'})
print(r.status_code)
content = json.loads(r.content)
print(json.dumps(content, indent=4, sort_keys=True))
```
A POST request
```
import requests
r = requests.post('https://postman-echo.com/post', json={'foo': 'bar'},headers = {'user-agent': 'Python Script'})
print(r.status_code)
print(r.request.headers)
content = json.loads(r.content)
print(json.dumps(content, indent=4, sort_keys=True))
```
### `Python web-server libs`
Flask is a microframework for building websites in Python.
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello, World!'
if __name__ == '__main__':
app.run()
import requests
r = requests.get('http://127.0.0.1:5000/')
print(r.headers)
print(r.content)
```
If you need to handle additional HTTP methods
```
from flask import Flask, request
app = Flask(__name__)
@app.route('/', methods=['GET', 'POST'])
def hello_world():
print(request.method)
return {'data': 'Hello, World!'}
if __name__ == '__main__':
app.run()
```
The path can contain variables
Syntax: `<converter:variable_name>`
Available converters:
- string
- int
- float
- path
- uuid
```
@app.route('/hello/<string:name>')
def hello_name(name):
return f'Hello {name}!'
import requests
r = requests.get('http://127.0.0.1:5000/hello/John')
print(r.content)
```
Flask's built-in server is meant for development and debugging.
For production use you need a WSGI (Web Server Gateway Interface) server:
- WSGI servers were designed to handle many requests concurrently; frameworks (including Flask) are not meant to serve thousands of requests themselves and do not solve the problem of routing requests from the web server in the best way.
- with WSGI you do not need to worry about how your particular infrastructure uses the WSGI standard.
- WSGI gives you the flexibility to change web-stack components without changing the application that runs on WSGI.
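The WSGI contract itself is small: an application is just a callable taking `environ` and `start_response` and returning an iterable of byte strings. A minimal sketch using only the standard library (independent of Flask; the greeting text is our own):

```python
def application(environ, start_response):
    # environ is a dict of CGI-style request variables;
    # start_response receives the status line and the response headers.
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    body = "Hello from {}".format(environ.get("PATH_INFO", "/"))
    return [body.encode("utf-8")]

# Any WSGI server can host this callable unchanged, e.g.:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, application).serve_forever()
```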
If heavy load is not expected, for `flask` this can be `waitress`.
Installation: `pipenv install waitress`
Usage:
```
from waitress import serve
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello, World!'
if __name__ == '__main__':
# Instead of running Flask's dev server, we start waitress.serve
# app.run()
serve(app, host='0.0.0.0', port='5000')
```
Alternatively, run it from the command line: `waitress-serve --port 5000 '<module name>:<application variable>'`
If our file is called `server.py`, the example can be started with: `waitress-serve --port 5000 'server:app'`
All requests to the web service are processed sequentially. Asynchrony and multithreading can be used, but we know they do not help in every case.
This problem is solved by scaling out to external WSGI servers. Quite a few exist for Python: Bjoern, uWSGI, mod_wsgi, Meinheld, CherryPy, Gunicorn.
Gunicorn is a WSGI server built for UNIX systems. The name is a shortened, combined form of "Green Unicorn" (the project site even shows a green unicorn). Gunicorn was ported from Ruby's "Unicorn" project. It is relatively fast, light on resources, easy to start, and works with a wide range of web frameworks.
<img src="https://cdn-images-1.medium.com/max/1200/1*nFxyDwJ2DEH1G5PMKPMj1g.png"/>
Running our example: `gunicorn --bind 0.0.0.0:5000 --workers 4 'server:app'`
# HEART DISEASE PREDICTION
# OVERVIEW :
## PREDICTING HEART DISEASES BASED ON THE TARGET VARIABLE
# IMPORTING THE DATASET AND REQUIRED LIBRARIES
```
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import rcParams
from matplotlib.cm import rainbow
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
data=pd.read_csv(r'C:\Users\neeraj\OneDrive\Desktop\Predicting-Heart-Disease-master\dataset.csv')
data.head()
```
# FEATURE ENGG :
```
dataset = pd.get_dummies(data, columns = ['sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal'])
data.info()
data.hist()
import seaborn as sns
#get correlations of each features in dataset
corrmat = data.corr()
top_corr_features = corrmat.index
plt.figure(figsize=(20,20))
#plot heat map
g=sns.heatmap(data[top_corr_features].corr(),annot=True,cmap="RdYlGn")
```
# FEATURE SCALING
```
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
standardScaler = StandardScaler()
columns_to_scale = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak']
dataset[columns_to_scale] = standardScaler.fit_transform(dataset[columns_to_scale])
```
# SETTING UP DATA VARIABLES
```
y = dataset['target']
X = dataset.drop(['target'], axis = 1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
```
# RANDOM FOREST :
```
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=10, criterion='entropy', random_state=0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
#
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
(25+37)/(25+8+6+37)
```
# SVM :
```
# Training the Kernel SVM model on the Training set
from sklearn.svm import SVC
classifier = SVC(kernel = 'rbf', random_state = 0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
(26+38)/(26+38+12)
# Training the Naive Bayes model on the Training set
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
(25+39)/(25+39+12)
```
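The bare fractions above are just (TP + TN) / total read off the confusion matrix. A small helper (our own sketch; the example split of the 12 SVM errors into 5 and 7 is hypothetical) makes the arithmetic explicit:

```python
def accuracy_from_cm(cm):
    """Accuracy from a 2x2 confusion matrix given as [[TN, FP], [FN, TP]]."""
    (tn, fp), (fn, tp) = cm
    return (tp + tn) / (tn + fp + fn + tp)

# 26 + 38 correct out of 76, matching (26+38)/(26+38+12) above
print(accuracy_from_cm([[26, 5], [7, 38]]))
```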
# LOGISTIC REGRESSION :
```
# Training the Logistic Regression model on the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
(26+39)/(26+39+11)
```
# XGBOOST CLASSIFIER :
```
# Training XGBoost on the Training set
from xgboost import XGBClassifier
classifier = XGBClassifier()
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
(25+37)/(25+37+14)
# Applying k-Fold Cross Validation
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
print("Accuracy: {:.2f} %".format(accuracies.mean()*100))
print("Standard Deviation: {:.2f} %".format(accuracies.std()*100))
```
# ANN :
```
b = dataset['target'].values
A= dataset.drop(['target'], axis = 1).values
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(A, b, test_size = 0.25, random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
# Fit the scaler on the training split only, then apply the same transform to both
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LeakyReLU,PReLU,ELU
from keras.layers import Dropout
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
# (units/kernel_initializer replace the deprecated output_dim/init arguments)
classifier.add(Dense(units = 6, kernel_initializer = 'he_uniform', activation = 'relu', input_dim = X_train.shape[1]))
# Adding the second hidden layer
classifier.add(Dense(units = 6, kernel_initializer = 'he_uniform', activation = 'relu'))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'glorot_uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'Adamax', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
model_history = classifier.fit(X_train, y_train, validation_split = 0.33, batch_size = 10, epochs = 100)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# Calculate the Accuracy
from sklearn.metrics import accuracy_score
score=accuracy_score(y_pred,y_test)
score
```
## BEST ACCURACY WITH LOGISTIC REGRESSION.
# WHICH IS 85%...!!
```
from google.colab import drive
drive.mount('/content/drive')
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```
# load mosaic data
```
path = '/content/drive/MyDrive/Research/AAAI/dataset1/first_layer_without_entropy/'
data = np.load(path+"mosaic_dataset_1.npy",allow_pickle=True)
mosaic_list_of_images = data[0]["mosaic_list"]
mosaic_label = data[0]["mosaic_label"]
fore_idx = data[0]["fore_idx"]
class MosaicDataset1(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
batch = 250
msd = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
```
# models
```
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,50) #,self.output)
self.linear2 = nn.Linear(50,50)
self.linear3 = nn.Linear(50,self.output)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,50], dtype=torch.float64)
features = torch.zeros([batch,self.K,50],dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
features = features.to("cuda")
for i in range(self.K):
alp,ftrs = self.helper(z[:,i] ) # self.d*i:self.d*i+self.d
x[:,i] = alp[:,0]
features[:,i] = ftrs
x = F.softmax(x,dim=1) # alphas
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],features[:,i]) # self.d*i:self.d*i+self.d
return y , x
def helper(self,x):
x = self.linear1(x)
x1 = torch.tanh(x)  # F.tanh is deprecated in recent PyTorch
x = F.relu(x)
x = F.relu(self.linear2(x))
x = self.linear3(x)
#print(x1.shape)
return x,x1
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,50)
#self.linear2 = nn.Linear(6,12)
self.linear2 = nn.Linear(50,self.output)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
def calculate_attn_loss(dataloader,what,where,criter):
what.eval()
where.eval()
r_loss = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
loss = criter(outputs, labels)
r_loss += loss.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/(i+1), analysis  # i is the zero-based batch index, so i+1 batches were averaged
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
```
# training
```
number_runs = 20
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF"])
full_analysis= []
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Focus_deep(2,1,9,2).double()
torch.manual_seed(n)
what = Classification_deep(50,3).double()
where = where.to("cuda")
what = what.to("cuda")
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.001)#,momentum=0.9)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)#,momentum=0.9)
criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
epochs = 2000
# calculate zeroth epoch loss and FTPT values
running_loss,anlys_data = calculate_attn_loss(train_loader,what,where,criterion)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha = where(inputs)
outputs = what(avg)
loss = criterion(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_where.step()
optimizer_what.step()
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion)
analysis_data.append(anls_data)
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
break
print('Finished Training run ' +str(n))
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1,:4]/30
full_analysis.append((epoch, analysis_data))
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
print(np.mean(np.array(FTPT_analysis),axis=0))
FTPT_analysis
cnt=1
for epoch, analysis_data in full_analysis:
analysis_data = np.array(analysis_data)
# print("="*20+"run ",cnt,"="*20)
plt.figure(figsize=(6,5))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0]/30,label="FTPT")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1]/30,label="FFPT")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2]/30,label="FTPF")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3]/30,label="FFPF")
plt.title("Training trends for run "+str(cnt))
plt.grid()
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.legend()
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("percentage train data", fontsize=14, fontweight = 'bold')
plt.savefig(path + "run"+str(cnt)+".png",bbox_inches="tight")
plt.savefig(path + "run"+str(cnt)+".pdf",bbox_inches="tight")
cnt+=1
FTPT_analysis.to_csv(path+"synthetic_zeroth.csv",index=False)
```
```
import networkx as nx
import numpy as np
import pandas as pd
import random
import scipy.stats
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
from sklearn import svm
from sklearn.metrics import accuracy_score, precision_recall_fscore_support, roc_curve, auc
from sklearn.model_selection import cross_val_predict, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
```
### Convert gml to edgelist.csv. Optional.
```
def gml_to_edgelist(f):
g = nx.read_gml(f)
nx.write_edgelist(g, 'power.csv', delimiter=',')
def load_data(path):
g = nx.read_edgelist(path, delimiter=',')
return g
def generate_non_edge_list(g):
n = len(g.edges()) # Negative edge list size is same as positive list size
non_edges = []
for u in g.nodes():
for v in g.nodes():
if u == v: continue
if g.has_edge(u, v): continue
non_edges.append((u, v))
neg_sample = random.sample(non_edges, n)
return neg_sample
def generate_class_labels(g, edges):
y = []
for edge in edges:
if g.has_edge(edge[0], edge[1]):
y.append(1)
else:
y.append(0)
return y
```
### Implement features
```
def adamic_adar(g, X):
preds = nx.adamic_adar_index(g, X)
lst = []
for u, v, p in preds:
lst.append(p)
max_p = max(lst)
return [x/max_p for x in lst]
def jaccard(g, X):
preds = nx.jaccard_coefficient(g, X)
lst = []
for u, v, p in preds:
lst.append(p)
max_p = max(lst)
return [x/max_p for x in lst]
def common_neighbors(g, X):
lst = []
for x in X:
cn = nx.common_neighbors(g, x[0], x[1])
lst.append(len(list(cn)))
max_p = float(max(lst))
return [x/max_p for x in lst]
```
#### Jaccard
A simple measure of the overlap between two entities' features (here, common neighbours). The Jaccard coefficient is high when the two entities share a large fraction of their features:

score(a, b) = |Γ(a) ∩ Γ(b)| / |Γ(a) ∪ Γ(b)|

#### Adamic-Adar
A refinement of the Jaccard coefficient in which rarer features are weighted more heavily: a common neighbour that few other node pairs share contributes more to the score.
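Both scores can be written directly from neighbour sets; a dependency-free sketch over an adjacency dict (networkx's `jaccard_coefficient` and `adamic_adar_index` compute the same quantities):

```python
from math import log

def jaccard_score(adj, u, v):
    """|N(u) & N(v)| / |N(u) | N(v)| for an adjacency dict of sets."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def adamic_adar_score(adj, u, v):
    """Sum of 1/log(deg(w)) over common neighbours w (assumes deg(w) > 1),
    so rarely-shared, low-degree neighbours are weighted more heavily."""
    return sum(1.0 / log(len(adj[w])) for w in adj[u] & adj[v])

# Toy graph with edges 0-1, 0-2, 1-2, 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(jaccard_score(adj, 0, 1))      # one common neighbour out of three -> 1/3
print(adamic_adar_score(adj, 0, 1))  # single common neighbour of degree 3 -> 1/log(3)
```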
### Plot Curves
```
def plot_roc(fpr, tpr, roc_auc):
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
```
### Load data
```
g = load_data('karate.csv')
```
### List of positive edges
```
EDGES_POSITIVE = list(g.edges())
```
### List of negative edges
```
EDGES_NEGATIVE = generate_non_edge_list(g)
EDGES = EDGES_POSITIVE + EDGES_NEGATIVE
random.shuffle(EDGES)
Y = generate_class_labels(g, EDGES)
feature1 = adamic_adar(g, EDGES)
feature2 = jaccard(g, EDGES)
feature3 = common_neighbors(g, EDGES)
data = [feature1, feature2, feature3, Y]
df = pd.DataFrame(data).transpose()
df.columns = ["Adamic_adar", "Jaccard", "Common neighbor", "Label"]
feature_values = df[["Adamic_adar", "Jaccard", "Common neighbor"]]
print ("Total nodes", len(g.nodes()))
print ("Total edges", len(g.edges()))
# print (df)
```
## Training
### Multilayer perceptron
```
clf = MLPClassifier()
random_search = RandomizedSearchCV(clf, param_distributions={
'learning_rate': ["constant", "invscaling", "adaptive"],
'alpha': [1e-2, 1e-3, 1e-4, 1e-5],
'hidden_layer_sizes': scipy.stats.randint(4, 12),
'solver': ['lbfgs', 'sgd', 'adam'],
'early_stopping': [True, False],
'activation': ["relu", "logistic", "tanh"]})
random_search.fit(feature_values, Y)
```
### Learning Rate
* How much the current situation affect the next step
* Step size
### Momentum
* How much past steps affect the next step
* Used to prevent converging at a local minimum
### Nesterov momentum
* Performs slightly better than regular momentum
* Momentum is calculated for the "lookahead" gradient step instead of the current stale position
http://cs231n.github.io/neural-networks-3/#sgd
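The difference between the three update rules can be seen on a toy 1-D quadratic (plain Python; the step size and momentum values are arbitrary choices of ours):

```python
def minimize(grad, x0, lr=0.1, momentum=0.0, nesterov=False, steps=200):
    """Gradient descent with optional (Nesterov) momentum."""
    x, v = x0, 0.0
    for _ in range(steps):
        # Nesterov evaluates the gradient at the "lookahead" point x + momentum*v
        g = grad(x + momentum * v) if nesterov else grad(x)
        v = momentum * v - lr * g
        x += v
    return x

grad = lambda x: 2 * x  # gradient of f(x) = x**2, minimum at x = 0
for mom, nest in [(0.0, False), (0.9, False), (0.9, True)]:
    print(mom, nest, minimize(grad, x0=5.0, momentum=mom, nesterov=nest))
```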
### Early Stopping
A technique for controlling overfitting in machine learning models, especially neural networks, by stopping training before the weights have converged. Often the training is stopped when performance has stopped improving on a held-out validation set
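The stopping rule can be sketched as a patience counter over a validation-loss curve (our own toy version, not sklearn's exact implementation; the loss values are made up):

```python
def stop_epoch(val_losses, patience=2):
    """Index of the epoch at which training stops: the first epoch after the
    validation loss has failed to improve `patience` times in a row."""
    best, bad = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, bad = loss, 0
        else:
            bad += 1
            if bad >= patience:
                return epoch
    return len(val_losses) - 1  # never triggered: train to the end

# Loss improves, then plateaus and starts to overfit
print(stop_epoch([1.0, 0.7, 0.5, 0.52, 0.55, 0.60]))  # stops at epoch 4
```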
### Activation Functions
#### Sigmoid
* Tends to vanish gradient
* Not good for deep learning
* Range: [0, 1]
#### Tanh
* Has stronger gradients than sigmoid
* Range: [-1, 1]
#### Relu
* Reduces the likelihood of vanishing gradient
* Less calculation load
* Sparse (the fraction of neurons active at the same time is very low)
* Range: [0, inf]
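The three functions and the ranges listed above, written out with their standard definitions:

```python
from math import exp, tanh

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))   # squashes into (0, 1)

def relu(x):
    return max(0.0, x)             # clips negatives to 0; range [0, inf)

# tanh comes straight from math and has range (-1, 1)
for f in (sigmoid, tanh, relu):
    print(f.__name__, [round(f(x), 3) for x in (-2.0, 0.0, 2.0)])
```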
### Validation
```
pred_proba = cross_val_predict(random_search, feature_values, Y, cv=6, method='predict_proba')
pred = cross_val_predict(random_search, feature_values, Y, cv=6)
print ("Accuracy:", accuracy_score(df[["Label"]], pred))
precision, recall, fscore, support = precision_recall_fscore_support(df[["Label"]], pred, average='binary')
print ("Precision:", precision)
print ("Recall:", recall)
print ("f-score:", fscore)
```
### ROC Curve
```
from sklearn.metrics import roc_curve, auc

fpr, tpr, threshold = roc_curve(df[["Label"]], pred_proba[:, 1])
roc_auc = auc(fpr, tpr)
plot_roc(fpr, tpr, roc_auc)
```
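`plot_roc` is presumably a helper defined earlier in the notebook (not shown in this excerpt); a minimal version, assuming matplotlib, might look like:

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend; drop this line inside a notebook
import matplotlib.pyplot as plt

def plot_roc(fpr, tpr, roc_auc):
    """Plot a ROC curve with its AUC against the chance diagonal."""
    plt.figure()
    plt.plot(fpr, tpr, label="ROC curve (AUC = %0.3f)" % roc_auc)
    plt.plot([0, 1], [0, 1], linestyle="--", label="Chance")
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend(loc="lower right")
    plt.show()
```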
### SVM
```
import numpy as np
from sklearn import svm

clf = svm.SVC(probability=True)
random_search = RandomizedSearchCV(clf, param_distributions={
'kernel': ["rbf"],
'gamma': np.logspace(-9, 3, 13),
'C': np.logspace(-2, 10, 13)})
random_search.fit(feature_values, Y)
pred = cross_val_predict(random_search, feature_values, Y, cv=6)
pred_proba = cross_val_predict(random_search, feature_values, Y, cv=6, method='predict_proba')
print ("Accuracy:", accuracy_score(Y, pred))
precision, recall, fscore, support = precision_recall_fscore_support(Y, pred, average='binary')
print ("Precision:", precision)
print ("Recall:", recall)
print ("f-score:", fscore)
fpr, tpr, threshold = roc_curve(df[["Label"]], pred_proba[:, 1])
roc_auc = auc(fpr, tpr)
plot_roc(fpr, tpr, roc_auc)
```
### Random Forest
```
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_jobs=2, random_state=0)
param_dist = {"max_depth": [3, None],
"min_samples_split": [2, 3, 4, 5, 6, 7, 8, 9, 10],
"min_samples_leaf": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
# run randomized search
n_iter_search = 20
print (param_dist)
random_search = RandomizedSearchCV(clf, param_distributions=param_dist,
n_iter=n_iter_search)
random_search.fit(feature_values, Y)
pred = cross_val_predict(random_search, feature_values, Y, cv=6)
pred_proba = cross_val_predict(random_search, feature_values, Y, cv=6, method='predict_proba')
print ("Accuracy:", accuracy_score(Y, pred))
precision, recall, fscore, support = precision_recall_fscore_support(Y, pred, average='binary')
print ("Precision:", precision)
print ("Recall:", recall)
print ("f-score:", fscore)
fpr, tpr, threshold = roc_curve(df[["Label"]], pred_proba[:, 1])
roc_auc = auc(fpr, tpr)
plot_roc(fpr, tpr, roc_auc)
```
```
from __future__ import print_function
from importlib import reload  # 'imp' was deprecated and removed in Python 3.12
```
## UAT for NbAgg backend.
The first cell reloads matplotlib, selects the nbagg backend, and then reloads the backend module, just to ensure we have the latest modifications to the backend code. Note: the underlying JavaScript will not be updated by this process, so to clear everything fully you must clear the output, save, and refresh the browser.
```
import matplotlib
reload(matplotlib)
matplotlib.use('nbagg')
import matplotlib.backends.backend_nbagg
reload(matplotlib.backends.backend_nbagg)
```
### UAT 1 - Simple figure creation using pyplot
Should produce a figure window which is interactive with the pan and zoom buttons. (Do not press the close button, but any others may be used).
```
import matplotlib.backends.backend_webagg_core
reload(matplotlib.backends.backend_webagg_core)
import matplotlib.pyplot as plt
plt.interactive(False)
fig1 = plt.figure()
plt.plot(range(10))
plt.show()
```
### UAT 2 - Creation of another figure, without the need to do plt.figure.
As above, a new figure should be created.
```
plt.plot([3, 2, 1])
plt.show()
```
### UAT 3 - Connection info
The printout should show that there are two figures which have active CommSockets, and no figures pending show.
```
print(matplotlib.backends.backend_nbagg.connection_info())
```
### UAT 4 - Closing figures
Closing a specific figure instance should turn the figure into a plain image - the UI should have been removed. In this case, scroll back to the first figure and assert this is the case.
```
plt.close(fig1)
```
### UAT 5 - No show without plt.show in non-interactive mode
Simply doing a plt.plot should not show a new figure, nor indeed update an existing one (easily verified in UAT 6).
The output should simply be a list of Line2D instances.
```
plt.plot(range(10))
```
### UAT 6 - Connection information
We just created a new figure, but didn't show it. Connection info should no longer list "Figure 1" (as we closed it in UAT 4) and should list figures 2 and 3, with Figure 3 having no connections. There should be 1 figure pending.
```
print(matplotlib.backends.backend_nbagg.connection_info())
```
### UAT 7 - Show of previously created figure
We should be able to show a figure we've previously created. The following should produce two figure windows.
```
plt.show()
plt.figure()
plt.plot(range(5))
plt.show()
```
### UAT 8 - Interactive mode
In interactive mode, creating a line should result in a figure being shown.
```
plt.interactive(True)
plt.figure()
plt.plot([3, 2, 1])
```
Subsequent lines should be added to the existing figure, rather than creating a new one.
```
plt.plot(range(3))
```
Calling connection_info in interactive mode should not show any pending figures.
```
print(matplotlib.backends.backend_nbagg.connection_info())
```
Disable interactive mode again.
```
plt.interactive(False)
```
### UAT 9 - Multiple shows
Unlike most of the other matplotlib backends, we may want to see a figure multiple times (with or without synchronisation between the views, though the former is not yet implemented). Assert that plt.gcf().canvas.manager.reshow() results in another figure window which is synchronised upon pan & zoom.
```
plt.gcf().canvas.manager.reshow()
```
### UAT 10 - Saving notebook
Saving the notebook (with CTRL+S or File->Save) should result in the saved notebook having static versions of the figures embedded within. The image should be the last update from user interaction and interactive plotting. (Check by converting with ``ipython nbconvert <notebook>``.)
### UAT 11 - Creation of a new figure on second show
Create a figure, show it, then create a new axes and show it. The result should be a new figure.
**BUG: Sometimes this doesn't work - not sure why (@pelson).**
```
fig = plt.figure()
plt.axes()
plt.show()
plt.plot([1, 2, 3])
plt.show()
```
### UAT 12 - OO interface
Should produce a new figure and plot it.
```
from matplotlib.backends.backend_nbagg import new_figure_manager, show
manager = new_figure_manager(1000)
fig = manager.canvas.figure
ax = fig.add_subplot(1, 1, 1)
ax.plot([1, 2, 3])
fig.show()
```
## UAT 13 - Animation
The following should generate an animated line:
```
import matplotlib.animation as animation
import numpy as np
fig, ax = plt.subplots()
x = np.arange(0, 2*np.pi, 0.01) # x-array
line, = ax.plot(x, np.sin(x))
def animate(i):
line.set_ydata(np.sin(x+i/10.0)) # update the data
return line,
#Init only required for blitting to give a clean slate.
def init():
line.set_ydata(np.ma.array(x, mask=True))
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(1, 200), init_func=init,
interval=32., blit=True)
plt.show()
```
### UAT 14 - Keyboard shortcuts in IPython after close of figure
After closing the previous figure (with the close button above the figure) the IPython keyboard shortcuts should still function.
### UAT 15 - Figure face colours
The nbagg honours all colours apart from that of the figure.patch. The two plots below should produce a figure with a red background. There should be no yellow figure.
```
import matplotlib
matplotlib.rcParams.update({'figure.facecolor': 'red',
'savefig.facecolor': 'yellow'})
plt.figure()
plt.plot([3, 2, 1])
plt.show()
```
### UAT 16 - Events
Pressing any keyboard key or mouse button (or scrolling) should cycle the line colour while the figure has focus. The figure should have focus by default when it is created and should re-gain it by clicking on the canvas. Clicking anywhere outside of the figure should release focus, but moving the mouse out of the figure should not release focus.
```
import itertools
fig, ax = plt.subplots()
x = np.linspace(0,10,10000)
y = np.sin(x)
ln, = ax.plot(x,y)
evt = []
colors = iter(itertools.cycle(['r', 'g', 'b', 'k', 'c']))
def on_event(event):
if event.name.startswith('key'):
fig.suptitle('%s: %s' % (event.name, event.key))
elif event.name == 'scroll_event':
fig.suptitle('%s: %s' % (event.name, event.step))
else:
fig.suptitle('%s: %s' % (event.name, event.button))
evt.append(event)
ln.set_color(next(colors))
fig.canvas.draw()
fig.canvas.draw_idle()
fig.canvas.mpl_connect('button_press_event', on_event)
fig.canvas.mpl_connect('button_release_event', on_event)
fig.canvas.mpl_connect('scroll_event', on_event)
fig.canvas.mpl_connect('key_press_event', on_event)
fig.canvas.mpl_connect('key_release_event', on_event)
plt.show()
```
### UAT 17 - Timers
Single-shot timers follow a completely different code path in the nbagg backend than regular timers (such as those used in the animation example above.) The next set of tests ensures that both "regular" and "single-shot" timers work properly.
The following should show a simple clock that updates twice a second:
```
import time
fig, ax = plt.subplots()
text = ax.text(0.5, 0.5, '', ha='center')
def update(text):
text.set(text=time.ctime())
text.axes.figure.canvas.draw()
timer = fig.canvas.new_timer(500, [(update, [text], {})])
timer.start()
plt.show()
```
However, the following should only update once and then stop:
```
fig, ax = plt.subplots()
text = ax.text(0.5, 0.5, '', ha='center')
timer = fig.canvas.new_timer(500, [(update, [text], {})])
timer.single_shot = True
timer.start()
plt.show()
```
And the next two examples should never show any visible text at all:
```
fig, ax = plt.subplots()
text = ax.text(0.5, 0.5, '', ha='center')
timer = fig.canvas.new_timer(500, [(update, [text], {})])
timer.start()
timer.stop()
plt.show()
fig, ax = plt.subplots()
text = ax.text(0.5, 0.5, '', ha='center')
timer = fig.canvas.new_timer(500, [(update, [text], {})])
timer.single_shot = True
timer.start()
timer.stop()
plt.show()
```
### UAT 18 - Stopping figure when removed from DOM
When the div that contains the figure is removed from the DOM, the figure should shut down its comm; if the Python-side figure has no more active comms, the figure should be destroyed. Repeatedly running the cell below should always produce the same figure number.
```
fig, ax = plt.subplots()
ax.plot(range(5))
plt.show()
```
Running the cell below will re-show the figure. After this, re-running the cell above should result in a new figure number.
```
fig.canvas.manager.reshow()
```