**Connect with me on LinkedIn:** https://www.linkedin.com/in/dheerajkumar1997/
## One Hot Encoding - variables with many categories
We saw in the previous lecture that if a categorical variable contains many labels, re-encoding it with one hot encoding expands the feature space dramatically.
See below:
```
import pandas as pd
import numpy as np
# let's load the mercedes benz dataset for demonstration, only the categorical variables
data = pd.read_csv('mercedesbenz.csv', usecols=['X1', 'X2', 'X3', 'X4', 'X5', 'X6'])
data.head()
# let's have a look at how many labels each variable has
for col in data.columns:
    print(col, ': ', len(data[col].unique()), ' labels')
# let's examine how many columns we will obtain after one hot encoding these variables
pd.get_dummies(data, drop_first=True).shape
```
We can see that from just 6 initial categorical variables, we end up with 117 new variables.
These numbers are still not huge, and in practice we could work with them relatively easily. However, in business datasets, and in other Kaggle or KDD competition datasets, it is not unusual to find several categorical variables with many labels. If we one hot encode them, we end up with datasets with thousands of columns.
What can we do instead?
In the winning solution of the KDD 2009 cup, "Winning the KDD Cup Orange Challenge with Ensemble Selection" (http://www.mtome.com/Publications/CiML/CiML-v3-book.pdf), the authors limit one hot encoding to the 10 most frequent labels of a variable. That is, they create one binary variable for each of the 10 most frequent labels only. This is equivalent to grouping all the other labels under a new category, which in this case is dropped. Thus, the 10 new dummy variables indicate whether one of the 10 most frequent labels is present (1) or not (0) in a particular observation.
How can we do that in Python?
```
# let's find the top 10 most frequent categories for the variable X2
data.X2.value_counts().sort_values(ascending=False).head(10)
# let's make a list with the most frequent categories of the variable
top_10 = [x for x in data.X2.value_counts().sort_values(ascending=False).head(10).index]
top_10
# and now we make the 10 binary variables
for label in top_10:
    data[label] = np.where(data['X2']==label, 1, 0)
data[['X2']+top_10].head(10)
# get whole set of dummy variables, for all the categorical variables
def one_hot_top_x(df, variable, top_x_labels):
    # create the dummy variables for the most frequent labels;
    # we can vary the number of most frequent labels that we encode
    for label in top_x_labels:
        df[variable+'_'+label] = np.where(df[variable]==label, 1, 0)
# read the data again
data = pd.read_csv('mercedesbenz.csv', usecols=['X1', 'X2', 'X3', 'X4', 'X5', 'X6'])
# encode X2 into the 10 most frequent categories
one_hot_top_x(data, 'X2', top_10)
data.head()
# find the 10 most frequent categories for X1
top_10 = [x for x in data.X1.value_counts().sort_values(ascending=False).head(10).index]
# now create the 10 most frequent dummy variables for X1
one_hot_top_x(data, 'X1', top_10)
data.head()
```
### One Hot encoding of top variables
### Advantages
- Straightforward to implement
- Does not require hours of variable exploration
- Does not expand massively the feature space (number of columns in the dataset)
### Disadvantages
- Does not add any information that may make the variable more predictive
- Does not keep the information of the ignored labels
Because categorical variables often have a few dominant categories while the remaining labels add mostly noise, this simple and straightforward approach can be useful on many occasions.
It is worth noting that the choice of the top 10 categories is arbitrary. You could also choose the top 5, or the top 20.
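A closely related variant is to group all labels outside the top N under a single 'Rare' category before one hot encoding; here is a minimal sketch in pandas (the series values and threshold below are synthetic, not from the Mercedes-Benz dataset):

```python
import pandas as pd

# Synthetic example series (illustrative labels only)
s = pd.Series(['aa', 'aa', 'aa', 'ab', 'ab', 'ac', 'ad'])

top_n = 2
top = s.value_counts().head(top_n).index          # the N most frequent labels
grouped = s.where(s.isin(top), other='Rare')      # collapse all other labels
dummies = pd.get_dummies(grouped)
print(sorted(dummies.columns))                    # ['Rare', 'aa', 'ab']
```

Unlike dropping the infrequent labels entirely, this keeps a single indicator telling the model that the observation carried a rare category.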
This encoding was enough for the team to win the KDD 2009 cup, combined with other powerful feature engineering, which we will cover in the following lectures, that improved performance dramatically.
<a href="https://colab.research.google.com/github/newcooldiscoveries/Algorithms-DNA-Sequencing/blob/master/Class_Worksheet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!wget https://d28rh4a8wq0iu5.cloudfront.net/ads1/data/ERR037900_1.first1000.fastq
def readFastq(filename):
    sequences = []
    qualities = []
    with open(filename) as fh:
        while True:
            fh.readline()                  # skip name line
            seq = fh.readline().rstrip()   # read base sequence
            fh.readline()                  # skip placeholder line
            qual = fh.readline().rstrip()  # read base quality line
            if len(seq) == 0:
                break
            sequences.append(seq)
            qualities.append(qual)
    return sequences, qualities
seqs, quals = readFastq('ERR037900_1.first1000.fastq')
def phred33ToQ(qual):
    # Convert a Phred+33 ASCII quality character to a Q score
    return ord(qual) - 33
def createHist(qualities):
    # Create a histogram of quality scores
    hist = [0]*50
    for qual in qualities:
        for phred in qual:
            q = phred33ToQ(phred)
            hist[q] += 1
    return hist
h = createHist(quals)
print(h)
# Plot the histogram
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(h)), h)
plt.show()
def readQualityProfile(filename, read_idx):
    # Per-position Phred scores for a single read in the file
    sequences, qualities = readFastq(filename)
    return [phred33ToQ(phred) for phred in qualities[read_idx]]

# quality profile of the 5th read from the end of the file
h = readQualityProfile('ERR037900_1.first1000.fastq', -5)
print(h)
import matplotlib.pyplot as plt
plt.bar(range(len(h)), h)
plt.show()
min_h = min(h)
index_min_h = h.index(min_h)
print (index_min_h)
def findGCByPos(reads):
    ''' Find the GC ratio at each position in the read '''
    # Keep track of the number of G/C bases and the total number of bases at each position
    gc = [0] * 100
    totals = [0] * 100
    for read in reads:
        for i in range(len(read)):
            if read[i] == 'C' or read[i] == 'G':
                gc[i] += 1
            totals[i] += 1
    # Divide G/C counts by total counts to get the average at each position
    for i in range(len(gc)):
        if totals[i] > 0:
            gc[i] /= float(totals[i])
    return gc
gc = findGCByPos(seqs)
plt.plot(range(len(gc)), gc)
plt.show()
import collections
count = collections.Counter()
for seq in seqs:
    count.update(seq)
count
print('position of the lowest GC ratio: %d' % gc.index(min(gc)))
def createGCCountHist(seqs):
    # Histogram of the number of G/C bases per read
    hist = [0] * (max(len(s) for s in seqs) + 1)
    for seq in seqs:
        gc_count = sum(1 for base in seq if base in 'GC')
        hist[gc_count] += 1
    return hist

h = createGCCountHist(seqs)
print(h)
# Plot the histogram
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(h)), h)
plt.show()
```
# GSEA analysis on leukemia dataset
```
%load_ext autoreload
%autoreload 2
from gsea import *
import numpy as np
%pylab
%matplotlib inline
```
## Load data
```
genes, D, C = read_expression_file("data/leukemia.txt")
gene_sets, gene_set_names = read_genesets_file("data/pathways.txt", genes)
gene_set_hash = {}
for i in range(len(gene_sets)):
    gene_set_hash[gene_set_names[i][0]] = {'indexes':gene_sets[i], 'desc':gene_set_names[i][1]}
# verify that the dimensions make sense
len(genes),D.shape,len(C)
```
## Enrichment score calculations
We graphically present the calculation of ES.
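Concretely, with genes ranked by correlation $r_j$ with the phenotype, a gene set $S$ of size $N_H$, $N$ genes in total and weighting exponent $p$, the running sums computed below are (following the standard GSEA formulation):

$$P_{\mathrm{hit}}(S,i)=\sum_{g_j \in S,\ j \le i}\frac{|r_j|^p}{N_R},\qquad N_R=\sum_{g_j \in S}|r_j|^p$$

$$P_{\mathrm{miss}}(S,i)=\sum_{g_j \notin S,\ j \le i}\frac{1}{N-N_H}$$

$ES(S)$ is the value of $P_{\mathrm{hit}}-P_{\mathrm{miss}}$ at the index where its deviation from zero is largest.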
```
L,r = rank_genes(D,C)
```
See if the first genes in *L* are indeed correlated with *C*
```
scatter(D[L[1],:],C)
scatter(D[L[-1],:],C)
scatter(D[L[1000],:],C)
```
## Graphical illustration of ES calculations
```
p_exp = 1
def plot_es_calculations(name, L, r):
    S = gene_set_hash[name]['indexes']
    N = len(L)
    S_mask = np.zeros(N)
    S_mask[S] = 1
    # reorder gene set mask
    S_mask = S_mask[L]
    N_R = sum(abs(r*S_mask)**p_exp)
    P_hit = np.cumsum(abs(r*S_mask)**p_exp)/N_R if N_R!=0 else np.zeros_like(S_mask)
    N_H = len(S)
    P_mis = np.cumsum((1-S_mask))/(N-N_H) if N!=N_H else np.zeros_like(S_mask)
    idx = np.argmax(abs(P_hit - P_mis))
    print("ES =", P_hit[idx]-P_mis[idx])
    f, axarr = plt.subplots(3, sharex=True)
    axarr[0].plot(S_mask)
    axarr[0].set_title('gene set %s' % name)
    axarr[1].plot(r)
    axarr[1].set_title('correlation with phenotype')
    axarr[2].plot(P_hit-P_mis)
    axarr[2].set_title('random walk')
L,r = rank_genes(D,C)
plot_es_calculations('CBF_LEUKEMIA_DOWNING_AML', L, r)
```
## Random phenotype labels
Now let's assign phenotype labels randomly. Is the ES much different?
```
N, k = D.shape
pi = np.array([np.random.randint(0,2) for i in range(k)])
L, r = rank_genes(D,pi)
print(pi)
plot_es_calculations('CBF_LEUKEMIA_DOWNING_AML', L, r)
```
## GSEA analysis
```
# use `n_jobs=-1` to use all cores
%time order, NES, p_values = gsea(D, C, gene_sets, n_jobs=-1)
from IPython.display import display, Markdown
s = "| geneset | NES | p-value | number of genes in geneset |\n |-------|---|---|---|\n "
for i in range(len(order)):
    s = s + "| **%s** | %.3f | %.7f | %d |\n" % (gene_set_names[order[i]][0], NES[i], p_values[i], len(gene_sets[order[i]]))
display(Markdown(s))
```
## Multiple hypothesis testing
We present two example gene sets: one with a high *NES* and low *p-value*, and one with a low *NES* and a high *p-value*. We plot histograms of the null distribution of the ES.
```
name = 'DNA_DAMAGE_SIGNALLING'
L,r = rank_genes(D,C)
plot_es_calculations(name, L, r)
n = 1000
S = gene_set_hash[name]['indexes']
L, r = rank_genes(D,C)
ES = enrichment_score(L,r,S)
ES_pi = np.zeros(n)
for i in range(n):
    pi = np.array([np.random.randint(0,2) for i in range(k)])
    L, r = rank_genes(D,pi)
    ES_pi[i] = enrichment_score(L,r,S)
hist(ES_pi,bins=100)
plot([ES,ES],[0,20],'r-',label="ES(S)")
title("Histogram of ES values for random phenotype labels.\nRed line is ES for the selected gene set.")
name = 'tcrPathway'
L,r = rank_genes(D,C)
plot_es_calculations(name, L, r)
n = 1000
S = gene_set_hash[name]['indexes']
L, r = rank_genes(D,C)
ES = enrichment_score(L,r,S)
ES_pi = np.zeros(n)
for i in range(n):
    pi = np.array([np.random.randint(0,2) for i in range(k)])
    L, r = rank_genes(D,pi)
    ES_pi[i] = enrichment_score(L,r,S)
hist(ES_pi,bins=100)
plot([ES,ES],[0,20],'r-',label="ES(S)")
title("Histogram of ES values for random phenotype labels.\nRed line is ES for the selected gene set.")
```
## Performance optimizations
```
%timeit L,R = rank_genes(D,C)
%timeit ES = enrichment_score(L,r,S)
%prun order, NES, p_values = gsea(D, C, gene_sets)
```
# Goal
* Follow-up to `atomIncorp_taxaIncorp` simulation run.
* Investigating factors that influenced accuracy
* e.g., pre-fractionation abundance or G+C of fragments
# Setting parameters
```
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1147/atomIncorp_taxaIncorp/'
frag_info_file = '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags_kde_info.txt'
```
## Init
```
import os
import glob
import itertools
import nestly
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
```
### BD min/max
```
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
print('Min BD: {}'.format(min_BD))
print('Max BD: {}'.format(max_BD))
```
# Reading in all necessary files
# Comm files
```
F = os.path.join(workDir, '*', '*', '*', 'comm.txt')
files = glob.glob(F)
print(len(files))
%%R -i files
df_comm = list()
for (f in files){
    df.tmp = read.delim(f, sep='\t')
    ff = strsplit(f, '/') %>% unlist
    df.tmp$percIncorp = ff[9]
    df.tmp$percTaxa = ff[10]
    df.tmp$sim_rep = ff[11]
    f_name = ff[12]
    df_comm[[f]] = df.tmp
}
df_comm = do.call(rbind, df_comm)
rownames(df_comm) = 1:nrow(df_comm)
df_comm %>% head(n=3)
```
## Classification data
```
F = os.path.join(workDir, '*', '*', '*', '*_data.txt')
files = glob.glob(F)
print(len(files))
%%R -i files
cols = c('library', 'taxon', 'min', 'q25', 'mean', 'median', 'q75', 'max', 'incorp.known', 'incorp.pred')
df_data = list()
for (f in files){
    df.tmp = read.delim(f, sep='\t')
    df.tmp = df.tmp[,cols]
    ff = strsplit(f, '/') %>% unlist
    df.tmp$percIncorp = ff[9]
    df.tmp$percTaxa = ff[10]
    df.tmp$sim_rep = ff[11]
    df.tmp$method = gsub('-cMtx_data.txt', '', ff[12])
    f_name = ff[12]
    df_data[[f]] = df.tmp
}
df_data = do.call(rbind, df_data)
rownames(df_data) = 1:nrow(df_data)
df_data %>% head(n=3)
```
## Fragment GC & length info
```
%%R -i frag_info_file
df_info = read.delim(frag_info_file, sep='\t')
df_info %>% head(n=3)
```
# Formatting table
```
%%R
clsfy = function(guess, known){
    if(is.na(guess) | is.na(known)){
        return(NA)
    }
    if(guess == TRUE){
        if(guess == known){
            return('True positive')
        } else {
            return('False positive')
        }
    } else if(guess == FALSE){
        if(guess == known){
            return('True negative')
        } else {
            return('False negative')
        }
    } else {
        stop('Error: true or false needed')
    }
}
%%R
# comm & classification
join.on = c(
'library' = 'library',
'taxon_name' = 'taxon',
'percIncorp' = 'percIncorp',
'percTaxa' = 'percTaxa',
'sim_rep' = 'sim_rep')
df.j = inner_join(df_comm, df_data, join.on) %>%
filter(library %in% c(2,4,6)) %>%
mutate(cls = mapply(clsfy, incorp.pred, incorp.known))
# frag info
df.j = inner_join(df.j, df_info, c('taxon_name'='taxon_ID'))
df.j %>% head(n=3)
%%R
# renaming method
rename = data.frame(method = c('DESeq2', 'heavy', 'qSIP'),
method_new = c('HR-SIP', 'Heavy-SIP', 'qSIP'))
df.j = inner_join(df.j, rename, c('method'='method')) %>%
select(-method) %>%
rename('method' = method_new)
# reorder
as.Num = function(x) x %>% as.character %>% as.numeric
df.j$percTaxa = reorder(df.j$percTaxa, df.j$percTaxa %>% as.Num)
df.j$percIncorp = reorder(df.j$percIncorp, df.j$percIncorp %>% as.Num)
df.j %>% head(n=3)
```
## accuracy ~ abundance
```
%%R -w 800 -h 600
df.j.f = df.j %>%
filter(KDE_ID == 1,
cls != 'True negative')
ggplot(df.j.f, aes(cls, rel_abund_perc, fill=method)) +
geom_boxplot() +
facet_grid(percTaxa ~ percIncorp) +
scale_y_log10() +
labs(y='Pre-fractionation\nrelative abundance (%)') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=45, hjust=1),
axis.title.x = element_blank()
)
```
## accuracy ~ fragment BD
```
%%R -w 800 -h 600
ggplot(df.j.f, aes(cls, median.y, fill=method)) +
geom_boxplot() +
facet_grid(percTaxa ~ percIncorp) +
labs(y='Median fragment BD (g ml^-1)') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=45, hjust=1),
axis.title.x = element_blank()
)
```
# Get rich or die… coding.
People want to become rich to secure their old age, live a comfortable life and travel a lot. Or to make a fortune and save the world. Whatever the reason for becoming rich, a person has to do what they know how to do best. In this post, I will describe my thoughts on how to create a simple machine learning model for stock price prediction in Python.
DISCLAIMER / LEGAL NOTICE
The data in this article is for your information only. It does not constitute investment advice.
### Post outline
- Defining the scope of our task
- What stock to choose?
- What to include in model as features?
- What to consider before creating predicting model in Python?
- Summary
### Defining the scope of our task
The task is to predict the next trading day's price of a certain stock. Since the price of a stock is a continuous variable, regression analysis will be applied. The stock price data will be retrieved with the **pandas_datareader** package.
```
from pandas_datareader import data
import matplotlib.pyplot as plt
import pandas as pd
```
### Which stock to choose?
When I asked myself this question, I did not have to think long to answer. A stock has to be liquid; that is the first requirement.
Definition from Investopedia:
*A stock's liquidity generally refers to how rapidly shares of a stock can be bought or sold without substantially impacting the stock price. Stocks with low liquidity may be difficult to sell and may cause you to take a bigger loss if you cannot sell the shares when you want to.*
In other words, a liquid stock is easy to buy, easy to sell and, supposedly, easy to predict.
*Liquidity can be calculated as ratio of volume traded multiplied by the closing price divided by the price range from high to low, for the whole trading day, on a logarithmic scale (from https://www.cfainstitute.org).*
To roughly estimate liquidity, one can multiply a stock's volume by its closing price on the last trading day; the higher the number, the more liquid the stock. Note: this assumes there was no abnormal volume increase on the last trading day. One may ask: there are many liquid stocks on the exchange, which should I choose? I simply picked the most liquid one in its sector (Pfizer, ticker PFE, sector: Healthcare).
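A quick sketch of that rough estimate (synthetic numbers and hypothetical tickers, for illustration only):

```python
import pandas as pd

# Last trading day's volume and close for a few hypothetical tickers
last_day = pd.DataFrame(
    {'volume': [25_000_000, 1_200_000, 300_000],
     'close': [45.0, 12.5, 80.0]},
    index=['AAA', 'BBB', 'CCC'])

# Rough liquidity proxy: volume * close (dollar volume traded)
last_day['dollar_volume'] = last_day['volume'] * last_day['close']
print(last_day['dollar_volume'].idxmax())  # AAA is the most liquid here
```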
### What to include in model as features?
The more relevant information, the better. I decided to include the gold price and the S&P 500 index; in some cases they could act as leading indicators, I hope. Adding the day of the week should also be beneficial, since large price changes often happen between Friday and Monday.
```
pfe = data.DataReader('PFE', start='2018', end='2022',
data_source='yahoo')
sp500 = data.DataReader('^GSPC', start='2018', end='2022',
data_source='yahoo')
gold = data.DataReader('GOLD', start='2018', end='2022',
data_source='yahoo')
gold.head()
```
### What to consider before creating predicting model in Python?
The price is a time series, which is not stationary. As I was taught, a predictive model has to be based on a stationary time series. The definition of stationarity from Wikipedia:
*In mathematics and statistics, a stationary process (or a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose unconditional joint probability distribution does not change when shifted in time. Consequently, parameters such as mean and variance also do not change over time.*
Thus, we have to make the time series stationary. A non-stationary time series consists of the following components: level, trend, seasonality and noise. To simplify the task, we will remove only the trend from the stock price and use the Augmented Dickey-Fuller test to verify stationarity.
```
plt.style.use('fivethirtyeight')
```
Let's have a look at the graph
```
#plt.rcParams['figure.figsize'] = (20, 10)
fig, ax1 = plt.subplots(figsize=(20,10))
ax1.set_ylim([5, 60])
ax1.plot(pfe['Adj Close'], color='Red', label = 'PFE')
ax1.plot(gold['Adj Close'], color='Green', label='Gold')
ax2 = ax1.twinx()
ax2.plot(sp500['Adj Close'], label = 'SP500')
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax1.legend(h1+h2, l1+l2, loc=2)
plt.show()
from statsmodels.tsa.stattools import adfuller
```
Defining Augmented Dickey-Fuller test function
```
def ADF_Stationarity_Test(timeseries, significance_level):
    result = adfuller(timeseries, regression='ctt')
    print(f'p-value: {result[1]} and significance level: {significance_level}')
    if result[1] < significance_level:
        print('Timeseries is stationary')
    else:
        print('Timeseries is NOT stationary')

ADF_Stationarity_Test(pfe['Adj Close'], 0.05)
```
As we can see from the output above, the price is NOT a stationary time series.
The function below adds moving averages with different time frames to the dataframe.
```
def adding_MAs(df, col_label, MAs=[5, 10, 15, 20, 30]):
    for period in MAs:
        df['MA' + str(period)] = df[col_label].rolling(period).mean()
    df[col_label + '-MA30'] = df[col_label] - df['MA30']  # This will become our target
    df.dropna(inplace=True)
    return df

def renaming_cols(df, prefix):
    cols = df.columns.tolist()
    cols_new = {}
    for item in cols:
        cols_new[item] = prefix + item
    df.rename(columns=cols_new, inplace=True)
pfe = adding_MAs(pfe, 'Adj Close')
gold = adding_MAs(gold, 'Adj Close')
sp500 = adding_MAs(sp500, 'Adj Close')
pfe['Volume by Adj Close']=pfe['Volume'] * pfe['Adj Close']
pfe.drop(columns=['High', 'Low', 'Open', 'Volume', 'Close', 'Adj Close'], inplace=True)
gold.drop(columns=['High', 'Low', 'Open', 'Volume', 'Close', 'Adj Close'], inplace=True)
sp500.drop(columns=['High', 'Low', 'Open', 'Volume', 'Close', 'Adj Close'], inplace=True)
renaming_cols(pfe, 'pfe_')
renaming_cols(gold, 'gold_')
renaming_cols(sp500, 'sp500_')
ADF_Stationarity_Test (pfe['pfe_Adj Close-MA30'], 0.05)
```
From the output above we see that the 'pfe_Adj Close-MA30' column is a stationary time series, and it will become our target. To recover the price, one just needs to add the last MA30 value to the predicted value.
The stationary target variable is shown in the graph below. By trial and error I found that "Adj Close-MA30" gives the best r2 score.
```
plt.rcParams["figure.figsize"] = (20,10)
plt.plot(pfe['pfe_Adj Close-MA30'])
plt.show()
```
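To make the price-reconstruction step explicit, here is a toy sketch with made-up numbers: the model predicts the deviation from MA30, and the price forecast is that deviation plus the latest MA30 value.

```python
# Hypothetical values for illustration only
predicted_deviation = 1.35   # model output: next day's (Adj Close - MA30)
last_ma30 = 43.20            # most recent 30-day moving average of Adj Close

# Price forecast = predicted deviation + last known MA30
predicted_price = predicted_deviation + last_ma30
print(round(predicted_price, 2))  # 44.55
```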
Now it is time to merge all three dataframes into one.
```
final_df = pfe.merge(gold, on=['Date'])
final_df = final_df.merge(sp500, on=['Date'])
```
Since we are predicting the stock price for the next trading day, we have to shift our target variable by one row.
```
final_df ['pfe_Adj Close-MA30'] = final_df ['pfe_Adj Close-MA30'].shift(-1)
final_df.tail()
final_df.reset_index(inplace=True)
final_df['Weekday'] = final_df['Date'].dt.day_name()
final_df =pd.get_dummies(final_df, columns=['Weekday'])
final_df.drop(columns=['Date'], inplace=True)
final_df.dropna( inplace=True)
```
The final dataframe looks like this.
```
final_df
X = final_df.drop(columns='pfe_Adj Close-MA30')
y = final_df['pfe_Adj Close-MA30']
```
Scaling data.
```
from sklearn.preprocessing import StandardScaler
scaling = StandardScaler()
X=scaling.fit_transform(X)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
regr= SVR()
params ={'kernel':['linear', 'rbf'],
'C':[0.2, 0.5, 1, 10, 100],
'epsilon':[1, 0.1, 0.01, 0.001, 0.0001]
}
gridSr = GridSearchCV(regr, params, n_jobs=5, verbose=4, scoring='r2')
gridSr.fit(X_train, y_train)
y_predict = gridSr.predict(X_test)
from sklearn.metrics import r2_score
r2_score (y_test, y_predict)
gridSr.best_score_
gridSr.best_params_
```
### Summary
With this simple model we managed to achieve an r2 score of 0.8, which is not bad. Further steps would be to deploy the model and create a web API to work with it. One could also experiment with this model and try different parameters, for example shifting the target variable by 5 rows to predict the stock price in 5 days, or using a neural network instead of SVR.
Please refer to my GitHub profile for the complete code for this article.
https://github.com/Ruslion
```
import os
if 'google.colab' in str(get_ipython()):
os.system('pip install torch==1.9.0 fastcore==1.3.20 pytorch_inferno==0.2.2')
```
# A PyTorch Drop-In Implementation of The INFERNO Algorithm
## Giles Strong
## PyHEP 2021, Online 06/07/21
### [GitHub](https://github.com/GilesStrong/pytorch_inferno) [Docs](https://gilesstrong.github.io/pytorch_inferno/)
# Traditional signal-versus-background classifier
- Common approach in HEP searches: train binary classifier for signal and background(s)
- Loss is binary cross-entropy
- Systematic uncertainties (nuisance parameters) ignored during training
- Resulting classification score binned and used as summary statistic for inference of the signal strength (parameter of interest, PoI)
- Only now are systematic uncertainties considered
- If effects are large, BCE training is no longer optimal & performance is sub-optimal
<img src="imgs/bce_result.png" width="1500">
# The INFERNO algorithm
- [de Castro & Dorigo 2018](https://www.sciencedirect.com/science/article/pii/S0010465519301948)
- Systematic uncertainties included during training
- Directly optimise deep neural network (DNN) for statistical inference of the PoI
- Loss is the PoI element of the inverse Hessian of the log-likelihood w.r.t. PoI and nuisance parameters
- Encouraged to have steep gradient w.r.t. PoI and shallow gradient w.r.t. nuisances
- DNN output is a pre-binned summary statistic
- Softmax output, but can hard-assign after training
- "Classes" are unordered and emerge as a function of the training
- Learned binning allows INFERNO to account for nuisances on both input features & normalisation
- Further discussion in blog posts ([part 1 of 5](https://gilesstrong.github.io/website/statistics/hep/inferno/2020/12/04/inferno-1.html))
<img src="imgs/inferno_result.png" width="1500">
# INFERNO implementation
## General implementation
1. Specify how nuisances would modify the DNN input features - this makes the DNN output differentiable w.r.t. the nuisances
1. Pass data through DNN
1. Compute signal & bkg. shapes and normalise by rates
1. Compute profile log-likelihood (NLL) at Asimov count (sig+bkg and no nuisances)
1. Compute hessian of NLL w.r.t nuisances and PoI
1. Loss is the PoI element of the inverted hessian $\nabla^2 NLL ^{-1}_{\mathrm{PoI,PoI}}$
1. Back-propagate loss and update DNN parameters
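Steps 4 to 6 can be sketched numerically in plain NumPy; this is a toy stand-in (invented 3-bin shapes and a single made-up nuisance that rescales the background, with a finite-difference Hessian in place of autograd), not the package's actual implementation:

```python
import numpy as np

def nll(theta):
    """Poisson NLL (constants dropped) for a 3-bin toy counting model."""
    mu, alpha = theta                        # PoI and one nuisance parameter
    s = np.array([5.0, 3.0, 2.0])            # toy signal shape (counts per bin)
    b = np.array([10.0, 10.0, 10.0])         # toy background shape
    exp = mu * s + (1 + 0.1 * alpha) * b     # nuisance rescales the background
    obs = 1.0 * s + b                        # Asimov counts at mu=1, alpha=0
    return np.sum(exp - obs * np.log(exp))

def hessian(f, theta, eps=1e-4):
    """Finite-difference Hessian of f at theta."""
    n = len(theta)
    h = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            def shifted(di, dj):
                t = np.array(theta, dtype=float)
                t[i] += di * eps
                t[j] += dj * eps
                return f(t)
            h[i, j] = (shifted(1, 1) - shifted(1, -1)
                       - shifted(-1, 1) + shifted(-1, -1)) / (4 * eps**2)
    return h

h = hessian(nll, [1.0, 0.0])
loss = np.linalg.inv(h)[0, 0]  # (PoI, PoI) element: squared PoI uncertainty
print(loss)
```

Minimising this quantity pushes the network towards summary statistics whose likelihood is steep in the PoI and insensitive to the nuisance.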
## Difficulties
- Data needs to be modified before forwards pass to include the effect of the nuisances
- May need to reevaluate model response on data without the nuisances to get Asimov predictions
# INFERNO as a drop-in* loss
- Main aim: implement INFERNO such that
- it doesn't require a specific, hard-coded training loop
- it can easily be interchanged with other loss functions, to compare performance
- Implement loss as a *callback* class
- Has access to DNN and data
- Can perform actions at required points during optimisation cycle
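A minimal sketch of that callback pattern in plain Python (names like `MeanPredLoss` and `Wrapper` are invented for illustration and are not the `pytorch_inferno` API):

```python
class Callback:
    """Base callback: gets a reference to the wrapper it is attached to."""
    def set_wrapper(self, wrapper): self.wrapper = wrapper
    def on_batch_begin(self): pass
    def on_forwards_end(self): pass

class MeanPredLoss(Callback):
    """Toy 'loss as a callback': sets wrapper.loss_val after the forward pass."""
    def on_forwards_end(self):
        self.wrapper.loss_val = sum(self.wrapper.y_pred) / len(self.wrapper.y_pred)

class Wrapper:
    def __init__(self, cbs):
        self.cbs = cbs
        for c in cbs: c.set_wrapper(self)
    def fit_batch(self, x):
        for c in self.cbs: c.on_batch_begin()      # callbacks may modify data
        self.y_pred = [v * 2 for v in x]           # stand-in "model"
        for c in self.cbs: c.on_forwards_end()     # callback sets the loss
        return self.loss_val

w = Wrapper([MeanPredLoss()])
print(w.fit_batch([1.0, 2.0, 3.0]))  # 4.0
```

Because the loss lives in a callback rather than in the training loop, swapping it for another loss is just a matter of passing a different callback.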
# The PyTorch INFERNO package
- Provides DNN training wrapper for PyTorch, with extensive callback system
- Basic inference system for NLL minimisation with nuisances
- Abstract class implementation of INFERNO
- User can inherit and fine-tune it for their specific use-case
- Also includes an experimental approximation of INFERNO for use with up/down systematic MC datasets used in HEP
- User doesn't need to analytically describe how the input features depend on the nuisances
- Instead, nuisance effects extracted from interpolation between systematic and nominal datasets
- Package successfully reproduces paper results on toy dataset
# Optimisation loop
`ModelWrapper` class provides training of DNNs (`self.model`).
Callbacks are attributes of the `ModelWrapper`, and each callback has the `ModelWrapper` as an attribute.
The INFERNO callback can therefore alter the data (`self.x`) before the forwards pass, and compute the loss before the backwards pass.
This is the forwards pass for a batch of data.
```python
def _fit_batch(self, x:Tensor, y:Tensor, w:Tensor) -> None:
    self.x,self.y = to_device(x,self.device),to_device(y,self.device)
    for c in self.cbs: c.on_batch_begin()  # <-- INFERNO callback modifies data
    self.y_pred = self.model(self.x)
    if self.state != 'test' and self.loss_func is not None:
        self.loss_val = self.loss_func(self.y_pred, self.y)
    for c in self.cbs: c.on_forwards_end()  # <-- INFERNO callback manually sets the loss value
```
# Data modification
When a new batch of data is loaded, the `AbsInferno` class runs:
```python
def on_batch_begin(self) -> None:
    self.b_mask = self.wrapper.y.squeeze() == 0
    self._aug_data(self.wrapper.x)
```
The user overrides `_aug_data` to provide an appropriate modification of the input features, e.g.:
```python
def _aug_data(self, x:Tensor) -> None:
    x[self.b_mask,0] += self.alpha[self.shape_idxs[0]]         # Nuisance shifts input 0
    x[self.b_mask,2] *= (self.alpha[self.shape_idxs[-1]]+3)/3  # Nuisance rescales input 2
```
Where `self.alpha` is a tensor of zeros i.e. the nuisance parameters at their nominal values.
# Loss computation
After forwards pass, INFERNO computes prediction shapes and reruns DNN predictions on unmodified data.
Only shape nuisances included so far.
`self.get_inv_ikk` computes the loss.
```python
def on_forwards_end(self) -> None:
    # Shapes with derivatives w.r.t. nuisances
    f_s = self.to_shape(self.wrapper.y_pred[~self.b_mask], w_s)
    f_b = self.to_shape(self.wrapper.y_pred[self.b_mask], w_b)
    # Shapes without derivatives w.r.t. nuisances
    f_s_asimov = self.to_shape(self.wrapper.model(self.wrapper.x[~self.b_mask].detach()), w_s)
    f_b_asimov = self.to_shape(self.wrapper.model(self.wrapper.x[self.b_mask].detach()), w_b)
    self.wrapper.loss_val = self.get_inv_ikk(f_s, f_b, f_s_asimov, f_b_asimov)
```
# Loss computation
Expected counts for signal & bkg modified by rate nuisances and PoI.
NLL computed at nominal PoI. Can also constrain NLL using auxiliary measurements.
```python
def get_inv_ikk(self, f_s:Tensor, f_b:Tensor, f_s_asimov:Tensor, f_b_asimov:Tensor) -> Tensor:
    s_exp = self.alpha[self.poi_idx]+self.alpha[self.s_norm_idxs].sum()
    b_exp = self.b_true+self.alpha[self.b_norm_idxs].sum()
    t_exp = (s_exp*f_s)+(b_exp*f_b)
    asimov = (self.mu_true*f_s)+(self.b_true*f_b_asimov)
    nll = -torch.distributions.Poisson(t_exp, False).log_prob(asimov).sum()
    _,h = calc_grad_hesse(nll, self.alpha, create_graph=True)  # Compute Hessian w.r.t. params
    return torch.inverse(h)[self.poi_idx,self.poi_idx]  # Invert and return relevant element
```
# Paper example
Let's see how we'd go about training networks for the toy-dataset of the INFERNO paper: a simple 3-feature classification problem with nuisances affecting the position and scale of the features, and the normalisation of the background.
<img src="imgs/toy_data.png" width="512">
## Import data
The package includes a simple function to generate the toy-dataset from the paper
```
from pytorch_inferno.data import get_paper_data
data, test = get_paper_data(200000, bs=2000, n_test=1000000)
```
## Define DNN
The `VariableSoftmax` is a standard softmax, but with a rescaling of the input activation.
```
from pytorch_inferno.utils import init_net
from pytorch_inferno.model_wrapper import ModelWrapper
from pytorch_inferno.inferno import VariableSoftmax, PaperInferno
from torch import nn, optim
net = nn.Sequential(nn.Linear(3,100), nn.ReLU(),
nn.Linear(100,100),nn.ReLU(),
nn.Linear(100,10), VariableSoftmax(0.1))
init_net(net)
model_inf = ModelWrapper(net)
```
## Train using INFERNO
We set the loss function to `None` and include INFERNO as a callback. `PaperInferno` is already configured to apply the nuisance parameters used in the paper, and we can constrain the nuisances by supplying distributions for them.
```
from pytorch_inferno.callback import LossTracker, SaveBest, EarlyStopping
from fastcore.all import partialler
from torch.distributions import Normal
model_inf.fit(10, data=data, opt=partialler(optim.Adam,lr=1e-3), loss=None,
cbs=[PaperInferno(float_r=True, float_l=True, shape_aux=[Normal(0,0.2), Normal(0,0.5)], nonaux_b_norm=False, b_norm_aux=[Normal(0,100)]),
LossTracker(),SaveBest('weights/best_ie.h5'),EarlyStopping(2)])
```
## Train using BCE
We just need to change the output to a single sigmoid and the loss to BCE:
```
net2 = nn.Sequential(nn.Linear(3,100), nn.ReLU(),
nn.Linear(100,100), nn.ReLU(),
nn.Linear(100,1), nn.Sigmoid())
init_net(net2)
model_bce = ModelWrapper(net2)
model_bce.fit(10, data=data, opt=partialler(optim.Adam,lr=1e-3), loss=nn.BCELoss(),
cbs=[LossTracker(),SaveBest('weights/best_ie.h5'),EarlyStopping(2)])
from pytorch_inferno.inferno import InfernoPred
from pytorch_inferno.inference import bin_preds, get_shape, get_paper_syst_shapes, calc_profile
from pytorch_inferno.plotting import plot_likelihood, plot_preds
from pytorch_inferno.callback import PredHandler
import pandas as pd
import numpy as np
import torch
from torch import Tensor
mu_scan = torch.linspace(20,80,61)
bkg = test.dataset.x[test.dataset.y.squeeze() == 0]
def compute_nll(model:ModelWrapper, is_inferno:bool) -> Tensor:
preds = model._predict_dl(test, pred_cb=InfernoPred() if is_inferno else PredHandler())
df = pd.DataFrame({'pred':preds.squeeze()})
df['gen_target'] = test.dataset.y
df.head()
bins = np.linspace(0,10,11) if is_inferno else np.linspace(0,1,11)
bin_preds(df, bins=bins)
plot_preds(df, bin_edges=bins)
f_s,f_b = get_shape(df,1),get_shape(df,0)
b_shapes = get_paper_syst_shapes(bkg, df, model=model, pred_cb=InfernoPred() if is_inferno else PredHandler(), bins=bins)
nll = calc_profile(f_s_nom=f_s, **b_shapes, n_obs=1050, mu_scan=mu_scan, mu_true=50, n_steps=100, shape_aux=[Normal(0,0.4), Normal(0,0.5)], b_norm_aux=[Normal(0,100)])
return nll
nll_inf = compute_nll(model_inf, True)
nll_bce = compute_nll(model_bce, False)
```
# Comparison
After computing the predictions on the test data and evaluating the profile likelihood (code skipped, but present in notebook), we can compare the sensitivity to the signal.
Both DNNs predict the same signal strength; however, INFERNO is much more precise because it is aware of the systematics that affect the data.
```
plot_likelihood({'BCE':nll_bce, 'INFERNO':nll_inf}, mu_scan)
```
# Summary
- PyTorch INFERNO provides a starting point for applying INFERNO to your own work
- is a pip installable package (`pip install pytorch_inferno`)
- Due to the task-specific nature of the concrete implementations, users will always have to inherit and build their own classes
- Not covered here is an approximation of INFERNO that could generalise more easily to working directly on systematic datasets
- The code also demonstrates how INFERNO can potentially be "dropped in" to other frameworks with callback systems
- Other implementations:
- Tensorflow 1, Pablo de Castro: https://github.com/pablodecm/paper-inferno
- Tensorflow 2, Lukas Layer: https://github.com/llayer/inferno
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sqlalchemy import func
from sqlalchemy import distinct
from sqlalchemy import desc
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
inspector = inspect(engine)
columns = inspector.get_columns('Measurement')
for column in columns:
print(column["name"], column["type"])
columns = inspector.get_columns('Station')
for column in columns:
print(column["name"], column["type"])
```
# Exploratory Climate Analysis
```
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
last_year = dt.datetime(2017, 8, 23) - dt.timedelta(days=365)
print(last_year)
# Perform a query to retrieve the dates and precipitation scores
percep_out = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= last_year).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
# Save the query results as a Pandas DataFrame and set the index to the date column
percep_out_df = pd.DataFrame(percep_out, columns=["Date", "Precipitation"]).set_index("Date")
# Sort the dataframe by date
percep_out_df = percep_out_df.sort_values("Date")
# Use Pandas plotting with Matplotlib to plot the precipitation data
percep_out_df.plot(rot=90)
```

```
# Use Pandas to calculate the summary statistics for the precipitation data
percep_out_df.describe()
```

```
# Design a query to show how many stations are available in this dataset?
station_count = session.query(func.count(distinct(Station.station))).all()
station_count
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
station_active = session.query(Measurement.station, func.count(Measurement.station).label('Total')).\
group_by(Measurement.station).order_by(desc('Total')).all()
station_active
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
station_temps = session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00519281').all()
station_temps
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
temp_observ = session.query(Measurement.tobs).filter(Measurement.station == 'USC00519281').filter(Measurement.date > last_year).all()
highest_tobs = [result[0] for result in temp_observ]
highest_tobs
plt.ylabel("Frequency")
plt.hist(highest_tobs, bins=12)
plt.savefig('Tobs_hist.png')
plt.show()
```

```
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
trip = calc_temps('2017-05-23', '2017-05-27')
trip[0]
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
```
## Optional Challenge Assignment
```
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
```
```
import numpy as np
np.empty(5)
np.zeros((10, 7))
np.ones((3,3,3))
np.eye(3)
np.full((3, 5), 3.14)
np.arange(0, 21, 7)
np.linspace(0, 1, 5)
np.random.randint(0, 10, (3, 3))
[x for x in range(1, 301, 7) if x%10 == 7 or x%10 == 1]
my_secret = [x for x in range(1, 301, 7) if x%10 == 7 or x%10 == 1]
np.array([my_secret, [x/2 for x in my_secret], [x-100 for x in my_secret]])
np.ones((5, 5))
np.array([i for i in range(11) if i%2])
first_line = [x*y for x in range(2, 100, 6) for y in range (7, 1, -2)]
second_line = [x ** 0.5 for x in range(1000, 1101, 2)]
third_line = [x**2 for x in range(51)]
big_secret = np.array([first_line, second_line, third_line, second_line, first_line])
# What is the sum of the elements in the last column of the array? Round the answer to two decimal places:
big_secret[:, -1].sum()
# Take the first 5 elements of each row of big_secret. What is the sum of the main diagonal
# of the resulting matrix? Round the answer to two decimal places:
i = 0
elements = []
for each in big_secret[:, :5]:
elements.append(each[i])
i += 1
sum(elements)
i = 0
elements_diagonal = []
for elem in big_secret[:, -5:]:
elements_diagonal.append(elem[i])
i += 1
np.prod(elements_diagonal)
my_array = np.random.randint(1, 100, (4, 6))
my_array
my_slice = my_array[1:3, 2:4]
my_slice
my_slice[:] = 0
my_array
big_secret.shape[1]
for i in range(big_secret.shape[0]):
for j in range(big_secret.shape[1]):
if i%2==0 and j%2==0:
big_secret[i, j] = -1
elif i%2!=0 and j%2!=0:
big_secret[i, j] = 1
big_secret
i = 0
elements = []
for each in big_secret[:, :5]:
elements.append(each[i])
i += 1
np.sum(elements)
elements
i = 0
elements_diagonal = []
for elem in big_secret[:, -5:]:
elements_diagonal.append(elem[i])
i += 1
np.prod(elements_diagonal)
# transposing an array
my_array = np.array([[1,2,3,4,5], [6,7,8,9,10]])
my_array.T
my_array = np.random.randint(0, 10, 20)
my_array.reshape((4,5))
my_array.reshape((5,4))
my_array = np.array([[1,2,3], [11,22,33], [111,222,333]])
my_array.flatten()
my_array = np.random.randint(0, 10, (3, 4))
my_array
my_array[my_array < 5]
mask = np.array([1, 0, 1, 0], dtype=bool)
my_array[:, mask]
my_array = np.random.randint(0, 10, (4, 6))
my_array
np.sort(my_array, axis=1)
np.sort(my_array, axis=0)
first = [x**(1/2) for x in range(100)]
second = [x**(1/3) for x in range(100, 200)]
third = [x/y for x in range(200,300,2) for y in [3,5]]
great_secret = np.array([first, second, third]).T
great_secret.shape
np.cos(great_secret[0, :]).sum()
great_secret[great_secret > 50].sum()
great_secret.flatten()[150]
great_secret.sort(axis=0)
great_secret[-1, :].sum()
students = np.array([1,135,34,4,2,160,43,5,3,163,40,4.3,4,147,44,5,5,138,41,4.7,6,149,54,3.9,7,136,39,4.2,8,154,48,4.9,9,137,35,3.7,10,165,60,4.6])
students = students.reshape((10, 4))
students
students[:, -1].mean()
np.median(students[:, -1])
np.mean(students[:, 2]) - np.median(students[:, 2])
np.corrcoef(students[:, 1], students[:, 2])
np.corrcoef(students[:, 1], students[:, -1])
np.corrcoef(students[:, 2], students[:, -1])
np.std(students[:, 1])
np.std(students[:, -1])
np.var(students[:, 2]), np.std(students[:, 2])
my_array = np.array([[1,2,3,4,5],
[6,7,8,9,10],
[11,12,13,14,15],
[16,17,18,19,20],
[21,22,23,24,25]])
my_array[1:4, 1:4]
my_sin = np.sin(my_array)
my_sin[1:4,1:4] = 1
my_sin.sum()
my_sin[:, :4].reshape((10, 2))[:, 0].sum()
bigdata = np.array([x**2 for x in range(100, 1000) if x%2!=0])
bigdata
np.median(bigdata)
np.std(bigdata)
np.corrcoef(bigdata[0::2], bigdata[1::2])
```
```
# Dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
import os
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Dropout
# Initialize all known random number generators / Setting all known random seeds
my_code = "Margaryan"
seed_limit = 2 ** 32
my_seed = int.from_bytes(my_code.encode(), "little") % seed_limit
os.environ['PYTHONHASHSEED']=str(my_seed)
random.seed(my_seed)
np.random.seed(my_seed)
tf.compat.v1.set_random_seed(my_seed)
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
# Read the data from file
train_data = pd.read_csv("../datasets/iris_train.csv")
train_data.head()
# Determine the size of the validation set
val_size = round(0.2*len(train_data))
print(val_size)
# Create the training and validation sets
random_state = my_seed
train, val = train_test_split(train_data, test_size=val_size, random_state=random_state)
print(len(train), len(val))
# Scale the values in the numeric columns to the interval [0, 1].
# Fit the scaler using the training set only.
num_columns = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
ord_columns = ['species']
ct = ColumnTransformer(transformers=[('numerical', MinMaxScaler(), num_columns)], remainder='passthrough')
ct.fit(train)
# Transform the values and convert the result to a DataFrame
sc_train = pd.DataFrame(ct.transform(train))
sc_val = pd.DataFrame(ct.transform(val))
# Set the column names
column_names = num_columns + ord_columns
sc_train.columns = column_names
sc_val.columns = column_names
sc_train
# Select the required features
x_train = sc_train[num_columns]
x_val = sc_val[num_columns]
y_train = (sc_train[ord_columns].values).flatten()
y_val = (sc_val[ord_columns].values).flatten()
# Create a simple logistic regression model
model = LogisticRegression()
# Train the model
model.fit(x_train, y_train)
# Evaluate the trained model on the validation set
pred_val = model.predict(x_val)
f1 = f1_score(y_val, pred_val, average='weighted')
print(f1)
test = pd.read_csv("../datasets/iris_test.csv")
test['species'] = ''
test.head()
sc_test = pd.DataFrame(ct.transform(test))
sc_test.columns = column_names
x_test = sc_test[num_columns]
test['species'] = model.predict(x_test)
test.head()
test.to_csv('margaryan.csv', index=False)
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Automatic differentiation and gradient tapes
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/tutorials/customization/autodiff"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/it/tutorials/customization/autodiff.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/it/tutorials/customization/autodiff.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/it/tutorials/customization/autodiff.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: The TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are an accurate and up-to-date reflection of the [official English documentation](https://www.tensorflow.org/?hl=en).
If you have suggestions for improving this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository.
To volunteer to write or review community translations, contact the [docs@tensorflow.org mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs).
In the previous tutorial we introduced `Tensors` and their operations. In this tutorial we will cover [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), a key technique for optimizing machine learning models.
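To build some intuition for what "automatic" means here, the following is a minimal forward-mode sketch using dual numbers, which propagate exact derivatives alongside values (illustrative only; `tf.GradientTape` uses reverse-mode accumulation instead):

```python
class Dual:
    """Minimal dual number: a value and its derivative, propagated exactly."""

    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

x = Dual(3.0, 1.0)  # seed the derivative: dx/dx = 1
y = x * x           # y = x^2 -> y.der = 2x = 6
z = y * y           # z = x^4 -> z.der = 4x^3 = 108
```

These are exactly the values (6.0 and 108.0) that the persistent-tape example later in this tutorial recovers with TensorFlow.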
## Setup
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
```
## Gradient tapes
TensorFlow provides the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API for automatic differentiation, which computes the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a `tf.GradientTape` onto a "tape". TensorFlow then uses that tape and the gradients associated with each recorded operation to compute the gradients of the "recorded" computation using [reverse accumulation](https://en.wikipedia.org/wiki/Automatic_differentiation).
For example:
```
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Derivative of z with respect to the original input tensor x
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
```
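We can sanity-check that value without any framework: here `z = (sum(x))**2` over a 2x2 matrix of ones, so analytically `dz/dx[i][j] = 2 * sum(x) = 8`. A plain finite-difference approximation agrees:

```python
def z(x):
    # mirrors the taped computation: y = reduce_sum(x); z = y * y
    s = sum(sum(row) for row in x)
    return s * s

eps = 1e-6
base = [[1.0, 1.0], [1.0, 1.0]]
bumped = [[1.0 + eps, 1.0], [1.0, 1.0]]  # nudge x[0][0]
grad_00 = (z(bumped) - z(base)) / eps    # approximates dz/dx[0][0]
```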
You can also request gradients of the output with respect to intermediate values computed inside a "recorded" `tf.GradientTape` context.
```
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Use the tape to compute the derivative of z with respect to the
# intermediate value y.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
```
By default, the resources held by a GradientTape are released as soon as the GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a `persistent` gradient tape. This allows multiple calls to the `gradient()` method; resources are released when the tape object is garbage collected. For example:
```
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Drop the reference to the tape
```
### Recording control flow
Because tapes record operations as they are executed, Python control flow (using `if` and `while`, for example) is handled naturally:
```
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
```
### Higher-order gradients
Operations inside the `GradientTape` context manager are recorded for automatic differentiation. If gradients are computed within that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients. For example:
```
x = tf.Variable(1.0) # Create a Tensorflow variable initialized to 1.0
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# Compute the gradient inside the 't' context manager
# which means the gradient computation is differentiable as well.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
```
## Next steps
In this tutorial we covered gradient computation in TensorFlow. With this we have enough of the primitives required to build and train neural networks.
# 15-Minute Realized Variance Notebook
This notebook analyzes the best sampling frequency for computing the 15-minute realized variance by creating a variance signature plot.
```
# Required libraries
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
from pathlib import Path
import sys
import os
import pandas as pd
import numpy as np
from itertools import chain
import matplotlib.pyplot as plt
import datetime
import zipfile
from timeit import default_timer as timer
import sqlalchemy as db
import matplotlib.pylab as pylab
# Paths
sys.path.append(os.path.join(Path(os.getcwd()).parent))
data_path = os.path.join(os.path.join(Path(os.getcwd()).parent), 'data')
data_per_day_path = os.path.join(os.path.join(Path(os.getcwd()).parent), 'data','data_per_day')
results_path = os.path.join(os.path.join(Path(os.getcwd()).parent), 'results')
# create connection to sqlite database
db_path = os.path.join(data_path, 'database.db')
db_engine = db.create_engine('sqlite:///' + db_path)
params = {
'axes.labelsize': 'x-large',
'axes.titlesize':'x-large',
'xtick.labelsize':'x-large',
'ytick.labelsize':'x-large'}
pylab.rcParams.update(params)
# get the data folders file now
data_folders = [f for f in os.listdir(data_per_day_path) if not os.path.isfile(os.path.join(data_per_day_path, f))]
data_folders = [file for file in data_folders if '.' not in file]
data_folders = [os.path.join(data_per_day_path, x) for x in data_folders]
# get the csv file now
data_folder = data_folders[1]
table_name = data_folder[-3:]
csv_files = [f for f in os.listdir(data_folder) if os.path.isfile(os.path.join(data_folder, f))]
csv_files = [file for file in csv_files if '.csv' in file and '201912' in file]
csv_files = np.sort([os.path.join(data_folder, x) for x in csv_files])
def compute_second_data(csv_file):
data_df = pd.read_csv(csv_file)
data_df.DT = pd.to_datetime(data_df.DT)
data_df.sort_values(by=['DT'], inplace=True)
data_df.index = data_df.DT
data_df.drop(columns=['DT'],inplace=True)
data_df = data_df.between_time('9:30', '16:00')
data_df.reset_index(drop=False, inplace=True)
# non zero quotes
data_df = data_df.loc[(data_df.BID>0) & (data_df.BIDSIZ>0) & (data_df.ASK>0) & (data_df.ASKSIZ>0)]
# autoselect exchange
data_df['total_size'] = data_df.BIDSIZ + data_df.ASKSIZ
#data_df = data_df.loc[data_df.EX == data_df.groupby(['EX']).sum().total_size.idxmax()]
# delete negative spreads
data_df = data_df.loc[data_df.ASK > data_df.BID]
# mergeQuotesSameTimestamp
ex = data_df.EX.values[0]
sym_root = data_df.SYM_ROOT.values[0]
data_df.drop(columns=['SYM_SUFFIX', 'total_size'], inplace=True)
data_df = data_df.groupby(['DT']).median()
data_df['EX'] = ex
data_df['SYM_ROOT'] = sym_root
data_df.reset_index(drop=False, inplace=True)
# remove entries with spread > 50 * daily median spread
data_df['SPREAD'] = data_df.ASK - data_df.BID
data_df = data_df.loc[data_df['SPREAD'] < 50 * data_df['SPREAD'].median()]
# remove outliers using the centered rolling window approach
def compute_diff(x):
return x.values[window] - np.median(np.delete(x.values,window))
window = 25
data_df.sort_values(by=['DT'], inplace=True)
data_df['SPREAD_DIFF'] = data_df.SPREAD.rolling(2*window+1, min_periods=2*window+1, center=True).apply(compute_diff)
data_df = data_df.loc[(data_df['SPREAD_DIFF'] < 10 * data_df['SPREAD_DIFF'].mean()) | (data_df['SPREAD_DIFF'].isna())]
data_df = data_df.reset_index(drop=True)
# resample data to 15 seconds level
data_df.set_index(['DT'], inplace=True)
data_df["MID"] = data_df.apply(lambda x: (x.ASK * x.ASKSIZ + x.BID * x.BIDSIZ) / (x.ASKSIZ + x.BIDSIZ), axis=1)
data_df = data_df[['MID', 'SYM_ROOT']]
df_resampled = data_df.resample('1s').ffill()
df_resampled = df_resampled.append(pd.DataFrame(data_df[-1:].values,
index=[df_resampled.index[-1] + datetime.timedelta(seconds=1)],columns=data_df.columns)) # get last observation that is not added by ffill
# set new index and forward fill the price data
first_date = datetime.datetime(year=2019,month=12,day=int(csv_file[-6:-4]),hour=9,minute=45,second=0)
df_resampled = df_resampled.iloc[1:,:] # observation at 9:30 is going to be NA
new_index = pd.date_range(start=first_date, periods=22501, freq='1s') # index from 9:45 until 16:00
df_resampled = df_resampled.reindex(new_index, method='ffill')
df_resampled.reset_index(drop=False, inplace=True)
df_resampled.rename(columns={'index': 'DT'}, inplace = True)
return df_resampled
%%time
from joblib import Parallel, delayed
df_data_all_days_SPY = Parallel(n_jobs=14)(delayed(compute_second_data)(i) for i in csv_files)
%%time
from joblib import Parallel, delayed
df_data_all_days_EEM = Parallel(n_jobs=14)(delayed(compute_second_data)(i) for i in csv_files)
%%time
from joblib import Parallel, delayed
df_data_all_days_EZU = Parallel(n_jobs=14)(delayed(compute_second_data)(i) for i in csv_files)
```
# Analysis of the best sampling frequency for the 15-minute realized variance
The result indicates that 1-minute sampling is more than enough.
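For reference, the quantity being compared is the realized variance: the sum of squared log returns within each 15-minute window. On a toy price series the core computation reduces to:

```python
import math

def realized_variance(prices):
    # Sum of squared log returns over the window -- the quantity that
    # compute_rv aggregates per 15-minute interval.
    log_rets = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return sum(r * r for r in log_rets)

rv = realized_variance([100.0, 101.0, 100.5, 102.0])
# a flat price series has zero realized variance
```

Sampling more finely adds terms to the sum; the signature plot below shows how the average realized variance changes with the sampling frequency.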
```
def compute_rv(df, sampling):
df.index = df.DT
df_resampled = df.resample(sampling).ffill()
df_resampled['RET'] = df_resampled.MID.pct_change().apply(np.vectorize(lambda x: np.log(1+x)))
df_resampled = df_resampled.iloc[1:,:] # first return is NA
df_resampled.reset_index(drop=True, inplace=True)
df_resampled['RET2'] = df_resampled['RET'].apply(lambda x: x ** 2)
df_resampled.iloc[-1,0] = df_resampled.iloc[-1,0] - datetime.timedelta(seconds=1)
df_resampled.index = df_resampled.DT
df_resampled = df_resampled.resample('15min').sum()
df_resampled.reset_index(drop=False, inplace=True)
df_resampled.DT = df_resampled.DT + datetime.timedelta(minutes=15)
return list(df_resampled['RET2'].values)
samplings = ['1s', '2s', '5s', '10s', '20s', '30s', '40s', '50s', '1min','3min', '5min']
rv_plot = []
for sampling in samplings:
rv_sample = []
for df in df_data_all_days_SPY:
rv_sample +=compute_rv(df, sampling)
rv_plot.append(np.mean(rv_sample))
fig,ax = plt.subplots(1,1,figsize=(20,15))
plt.plot(samplings, rv_plot)
plt.savefig(os.path.join(results_path, 'rv_15_signature_plot.png'), dpi=400, facecolor='aliceblue',edgecolor='k',bbox_inches='tight')
plt.show()
df_test = pd.DataFrame(columns=['varEEM', 'varSPY', 'varEZU', 'cov(EEM,SPY)', 'cov(EEM, EZU)', 'cov(SPY, EZU)'])
for day in range(len(df_data_all_days_SPY)):
df_SPY = df_data_all_days_SPY[day]
df_SPY.index = df_SPY.DT
df_SPY = df_SPY.resample('1min').ffill()
df_SPY['RET'] = df_SPY.MID.pct_change().apply(np.vectorize(lambda x: np.log(1+x)))
df_SPY = df_SPY[1:]
df_EEM = df_data_all_days_EEM[day]
df_EEM.index = df_EEM.DT
df_EEM = df_EEM.resample('1min').ffill()
df_EEM['RET'] = df_EEM.MID.pct_change().apply(np.vectorize(lambda x: np.log(1+x)))
df_EEM = df_EEM[1:]
df_EZU = df_data_all_days_EZU[day]
df_EZU.index = df_EZU.DT
df_EZU = df_EZU.resample('1min').ffill()
df_EZU['RET'] = df_EZU.MID.pct_change().apply(np.vectorize(lambda x: np.log(1+x)))
df_EZU = df_EZU[1:]
master_df = pd.DataFrame(index = df_SPY.index, columns=['varEEM', 'varSPY', 'varEZU', 'cov(EEM,SPY)', 'cov(EEM, EZU)', 'cov(SPY, EZU)'])
master_df['varEEM'] = df_EEM.RET.apply(lambda x: x**2)
master_df['varSPY'] = df_SPY.RET.apply(lambda x: x**2)
master_df['varEZU'] = df_EZU.RET.apply(lambda x: x**2)
master_df['cov(EEM,SPY)'] = np.multiply(df_EEM.RET.values, df_SPY.RET.values)
master_df['cov(EEM, EZU)'] = np.multiply(df_EEM.RET.values, df_EZU.RET.values)
master_df['cov(SPY, EZU)'] = np.multiply(df_SPY.RET.values, df_EZU.RET.values)
master_df.reset_index(drop=False, inplace=True)
master_df.iloc[-1,0] = master_df.iloc[-1,0] - datetime.timedelta(seconds=1)
master_df.index = master_df.DT
master_df = master_df.resample('15min').sum()
master_df.reset_index(drop=False, inplace=True)
master_df.DT = master_df.DT + datetime.timedelta(minutes=15)
df_test = pd.concat([df_test, master_df])
df_test.to_excel(os.path.join(data_path, 'RV15min.xlsx'))
```
# SageMaker PySpark PCA and K-Means Clustering MNIST Example
1. [Introduction](#Introduction)
2. [Setup](#Setup)
3. [Loading the Data](#Loading-the-Data)
4. [Create a pipeline with PCA and K-Means on SageMaker](#Create-a-pipeline-with-PCA-and-K-Means-on-SageMaker)
5. [Inference](#Inference)
6. [Clean-up](#Clean-up)
7. [More on SageMaker Spark](#More-on-SageMaker-Spark)
## Introduction
This notebook will show how to cluster handwritten digits through the SageMaker PySpark library.
We will manipulate data through Spark using a SparkSession, and then use the SageMaker Spark library to interact with SageMaker for training and inference.
We will create a pipeline consisting of a first step to reduce the dimensionality using SageMaker's PCA algorithm, followed by the final K-Means clustering step on SageMaker.
You can visit SageMaker Spark's GitHub repository at https://github.com/aws/sagemaker-spark to learn more about SageMaker Spark.
This notebook was created and tested on an ml.m4.xlarge notebook instance.
## Setup
First, we import the necessary modules and create the `SparkSession` with the SageMaker-Spark dependencies attached.
```
import os
import boto3
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
import sagemaker
from sagemaker import get_execution_role
import sagemaker_pyspark
role = get_execution_role()
# Configure Spark to use the SageMaker Spark dependency jars
jars = sagemaker_pyspark.classpath_jars()
classpath = ":".join(sagemaker_pyspark.classpath_jars())
# See the SageMaker Spark Github to learn how to connect to EMR from a notebook instance
spark = (
SparkSession.builder.config("spark.driver.extraClassPath", classpath)
.master("local[*]")
.getOrCreate()
)
spark
```
## Loading the Data
Now, we load the MNIST dataset into a Spark DataFrame. The dataset is available in LibSVM format at
`s3://sagemaker-sample-data-[region]/spark/mnist/`
where `[region]` is replaced with a supported AWS region, such as us-east-1.
In order to train and make inferences our input DataFrame must have a column of Doubles (named "label" by default) and a column of Vectors of Doubles (named "features" by default).
Spark's LibSVM DataFrameReader loads a DataFrame already suitable for training and inference.
Here, we load into a DataFrame in the SparkSession running on the local Notebook Instance, but you can connect your Notebook Instance to a remote Spark cluster for heavier workloads. Starting from EMR 5.11.0, SageMaker Spark is pre-installed on EMR Spark clusters. For more on connecting your SageMaker Notebook Instance to a remote EMR cluster, please see [this blog post](https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/).
```
import boto3
cn_regions = ["cn-north-1", "cn-northwest-1"]
region = boto3.Session().region_name
endpoint_domain = "com.cn" if region in cn_regions else "com"
spark._jsc.hadoopConfiguration().set(
"fs.s3a.endpoint", "s3.{}.amazonaws.{}".format(region, endpoint_domain)
)
trainingData = (
spark.read.format("libsvm")
.option("numFeatures", "784")
.load("s3a://sagemaker-sample-data-{}/spark/mnist/train/".format(region))
)
testData = (
spark.read.format("libsvm")
.option("numFeatures", "784")
.load("s3a://sagemaker-sample-data-{}/spark/mnist/test/".format(region))
)
trainingData.show()
```
MNIST images are 28x28, resulting in 784 pixels. The dataset consists of images of digits going from 0 to 9, representing 10 classes.
In each row:
* The `label` column identifies the image's label. For example, if the image of the handwritten number is the digit 5, the label value is 5.
* The `features` column stores a vector (`org.apache.spark.ml.linalg.Vector`) of `Double` values. The length of the vector is 784, as each image consists of 784 pixels. Those pixels are the features we will use.
As we are interested in clustering the images of digits, the number of pixels represents the feature vector, while the number of classes represents the number of clusters we want to find.
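As a quick aside, the relationship between a 28x28 image and its 784-element feature vector can be illustrated with plain NumPy, independently of Spark:

```python
import numpy as np

# A dummy 28x28 "image"; real MNIST pixels would be grayscale intensities.
image = np.arange(28 * 28).reshape(28, 28)

# Row-major flattening yields the 784-dimensional vector that each row
# of the `features` column stores.
features = image.reshape(-1)
```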
### Create a pipeline with PCA and K-Means on SageMaker
To perform the clustering task, we will first run PCA on our feature vector, reducing it to 50 features. Then, we can apply K-Means to the PCA output for the final clustering. We will create a **Pipeline** consisting of 2 stages: the PCA stage, and the K-Means stage.
In the following example, we run the pipeline fully on SageMaker infrastructure, making use of both `PCASageMakerEstimator` and `KMeansSageMakerEstimator`. The PCA training and inference step will run on SageMaker, and then we can train and infer using Amazon SageMaker's K-Means on the output column from PCA:
```
from pyspark.ml import Pipeline
from sagemaker_pyspark.algorithms import PCASageMakerEstimator, KMeansSageMakerEstimator
from sagemaker_pyspark import RandomNamePolicyFactory, IAMRole, EndpointCreationPolicy
from sagemaker_pyspark.transformation.serializers import ProtobufRequestRowSerializer
# ML pipeline with 2 stages: PCA and K-Means
# 1st stage: PCA on SageMaker
pcaSageMakerEstimator = PCASageMakerEstimator(
sagemakerRole=IAMRole(role),
trainingInstanceType="ml.m4.xlarge",
trainingInstanceCount=1,
endpointInstanceType="ml.t2.large",
endpointInitialInstanceCount=1,
namePolicyFactory=RandomNamePolicyFactory("sparksm-3p-"),
)
# Set parameters for PCA (number of features in input and the number of principal components to find)
pcaSageMakerEstimator.setFeatureDim(784)
pcaSageMakerEstimator.setNumComponents(50)
# 2nd stage: K-Means on SageMaker
kMeansSageMakerEstimator = KMeansSageMakerEstimator(
sagemakerRole=IAMRole(role),
trainingSparkDataFormatOptions={
"featuresColumnName": "projection"
}, # Default output column generated by PCASageMakerEstimator
requestRowSerializer=ProtobufRequestRowSerializer(
featuresColumnName="projection"
), # Default output column generated by PCASageMakerEstimator
trainingInstanceType="ml.m4.xlarge",
trainingInstanceCount=1,
endpointInstanceType="ml.t2.large",
endpointInitialInstanceCount=1,
namePolicyFactory=RandomNamePolicyFactory("sparksm-3k-"),
endpointCreationPolicy=EndpointCreationPolicy.CREATE_ON_TRANSFORM,
)
# Set parameters for K-Means
kMeansSageMakerEstimator.setFeatureDim(50)
kMeansSageMakerEstimator.setK(10)
# Define the stages of the Pipeline in order
pipelineSM = Pipeline(stages=[pcaSageMakerEstimator, kMeansSageMakerEstimator])
```
Now that we've defined the `Pipeline`, we can call fit on the training data. Please note the below code will take several minutes to run and create all the resources needed for this pipeline.
```
# Train
pipelineModelSM = pipelineSM.fit(trainingData)
```
In this case, calling `fit` on the `Pipeline` creates 2 jobs and models:
1. A job using the PCA algorithm which will create a PCA model
2. A job using the K-Means algorithm which will create a K-Means model
Because the stages were defined in the pipeline, the pipeline is responsible for feeding the raw data to the PCA job, and then feeding the results of the PCA job to the K-Means job.
Please note that the endpoint serving the PCA model is created when calling `fit`, as the endpoint is needed to generate the input that trains the K-Means algorithm and thus launch its job. In this setting, only the K-Means endpoint will be created when calling `transform`, as specified by the `endpointCreationPolicy` given to the `KMeansSageMakerEstimator`, in order to reduce the waiting time when calling `fit`.
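The stage-to-stage data flow can be sketched with a toy pipeline. The minimal classes below are purely illustrative (not the SageMaker or Spark API): each stage's `fit` consumes the previous stage's transformed output, just as the K-Means job consumes the PCA job's projections.

```python
class ToyStage:
    """A stage that learns nothing and applies a fixed function."""
    def __init__(self, fn):
        self.fn = fn

    def fit(self, data):
        # A real estimator (e.g. PCA) would learn parameters here.
        return self

    def transform(self, data):
        return [self.fn(x) for x in data]


class ToyPipeline:
    def __init__(self, stages):
        self.stages = stages

    def fit_transform(self, data):
        # Fit each stage on the output of the previous one,
        # mirroring how the PCA output feeds the K-Means job.
        for stage in self.stages:
            data = stage.fit(data).transform(data)
        return data


pipe = ToyPipeline([ToyStage(lambda x: x * 2),   # stand-in for "PCA"
                    ToyStage(lambda x: x + 1)])  # stand-in for "K-Means"
result = pipe.fit_transform([1, 2, 3])
# result == [3, 5, 7]
```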
## Inference
When calling the transform method on the `PipelineModel` object, both the PCA and K-Means SageMaker endpoints are contacted sequentially. We can see this in the below architecture diagram.

Please note the below code will take several minutes to run and create the final K-Means endpoint needed for this pipeline.
```
transformedData = pipelineModelSM.transform(testData)
transformedData.show()
```
How well did the pipeline perform? Let us display the digits from each of the clusters and manually inspect the results:
```
from pyspark.sql.types import DoubleType
import matplotlib.pyplot as plt
import numpy as np
import string
# Helper function to display a digit
def showDigit(img, caption="", xlabel="", subplot=None):
if subplot == None:
_, (subplot) = plt.subplots(1, 1)
imgr = img.reshape((28, 28))
subplot.axes.get_xaxis().set_ticks([])
subplot.axes.get_yaxis().set_ticks([])
plt.title(caption)
plt.xlabel(xlabel)
subplot.imshow(imgr, cmap="gray")
def displayClusters(data):
images = np.array(data.select("features").cache().take(250))
clusters = data.select("closest_cluster").cache().take(250)
for cluster in range(10):
print("\n\n\nCluster {}:".format(string.ascii_uppercase[cluster]))
digits = [img for l, img in zip(clusters, images) if int(l.closest_cluster) == cluster]
height = ((len(digits) - 1) // 5) + 1
width = 5
plt.rcParams["figure.figsize"] = (width, height)
_, subplots = plt.subplots(height, width)
subplots = np.ndarray.flatten(subplots)
for subplot, image in zip(subplots, digits):
showDigit(image, subplot=subplot)
for subplot in subplots[len(digits) :]:
subplot.axis("off")
plt.show()
displayClusters(transformedData)
```
## Clean-up
Since we don't need to make any more inferences, we now delete the resources (endpoints, models, configurations, etc.):
```
# Delete the resources
from sagemaker_pyspark import SageMakerResourceCleanup
from sagemaker_pyspark import SageMakerModel
def cleanUp(model):
resource_cleanup = SageMakerResourceCleanup(model.sagemakerClient)
resource_cleanup.deleteResources(model.getCreatedResources())
# Delete the SageMakerModel in pipeline
for m in pipelineModelSM.stages:
if isinstance(m, SageMakerModel):
cleanUp(m)
```
## More on SageMaker Spark
The SageMaker Spark Github repository has more about SageMaker Spark, including how to use SageMaker Spark using the Scala SDK: https://github.com/aws/sagemaker-spark
# Custom Distributions
You might want to model input uncertainty with a distribution not currently available in Golem. In this case you can create your own class implementing that distribution.
Here, we will reimplement a uniform distribution as a toy example.
```
from golem import *
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import matplotlib
%matplotlib inline
import seaborn as sns
sns.set(context='talk', style='ticks')
```
To create your own distribution class to be used in Golem, you need to create a class that:
(1) Inherits from the ``BaseDist`` class;
(2) Implements a ``cdf`` method that returns the cumulative density for the distribution of interest. The ``cdf`` method needs to take two arguments, ``x`` and ``loc``: ``loc`` is the location of the distribution (e.g. the mean for a Gaussian), and ``x`` is the point at which the CDF is evaluated.
In addition, even though this is not required for the code to run, the ``__init__`` method should allow the user to define the scale of the distribution. In the example below, we let the user define the range of the uniform; for a Gaussian distribution this would be the standard deviation, and so on.
```
# Here is a custom, user-implemented, uniform distribution class
class MyDistribution(BaseDist):
def __init__(self, urange):
self.urange = urange
def cdf(self, x, loc):
"""Cumulative density function.
Parameters
----------
x : float
The point where to evaluate the cdf.
loc : float
The location of the Uniform distribution.
Returns
-------
cdf : float
Cumulative density evaluated at ``x``.
"""
a = loc - 0.5 * self.urange
b = loc + 0.5 * self.urange
# calc cdf
if x < a:
return 0.
elif x > b:
return 1.
else:
return (x - a) / (b - a)
```
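Before using it, we can sanity-check the CDF logic with a few hand-computed values. The function below re-implements the same math as a standalone function, so the snippet does not depend on Golem's ``BaseDist``:

```python
def uniform_cdf(x, loc, urange):
    """Same CDF logic as ``MyDistribution.cdf``, as a plain function."""
    a = loc - 0.5 * urange
    b = loc + 0.5 * urange
    if x < a:
        return 0.0
    elif x > b:
        return 1.0
    return (x - a) / (b - a)


# Uniform centered at 0.5 with range 0.2, i.e. support [0.4, 0.6]:
assert uniform_cdf(0.3, loc=0.5, urange=0.2) == 0.0             # below the support
assert abs(uniform_cdf(0.5, loc=0.5, urange=0.2) - 0.5) < 1e-9  # midpoint
assert uniform_cdf(0.7, loc=0.5, urange=0.2) == 1.0             # above the support
```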
To demonstrate how this can be used, we use a simple objective function and we will compute its robust counterpart using the ``Uniform`` class available in Golem as well as the above, user-defined equivalent ``MyDistribution``.
```
# a sample 1d objective function
def objective(x):
def sigmoid(x, l, k, x0):
return l / (1 + np.exp(-k*(x-x0)))
sigs = [sigmoid(x, 1, 100, 0.1),
sigmoid(x, -1, 100, 0.2),
sigmoid(x, 0.7, 80, 0.5),
sigmoid(x, -0.7, 80, 0.9)
]
return np.sum(sigs, axis=0)
```
First, using the ``Golem.Uniform`` class...
```
# take 1000 samples in x
x = np.linspace(0, 1, 1000)
# compute objective
y = objective(x)
# compute robust objective with Golem
golem = Golem(goal='max', random_state=42, nproc=1)
golem.fit(X=x.reshape(-1,1), y=y)
# use the Golem.Uniform class here
dists = [Uniform(0.2)]
y_robust = golem.predict(X=x.reshape(-1,1), distributions=dists)
# plot results
plt.plot(x, y, linewidth=5, label='Objective')
plt.plot(x, y_robust, linewidth=5, label='Robust Objective')
_ = plt.legend(loc='lower center', ncol=2, bbox_to_anchor=(0.5 ,1.), frameon=False)
_ = plt.xlabel('$x$')
_ = plt.ylabel('$f(x)$')
```
...then with our new custom ``MyDistribution`` class:
```
# use MyDistribution for the prediction/convolution
dists = [MyDistribution(0.2)]
y_robust = golem.predict(X=x.reshape(-1,1), distributions=dists)
# plot the results
plt.plot(x, y, linewidth=5, label='Objective')
plt.plot(x, y_robust, linewidth=5, label='Robust Objective')
_ = plt.legend(loc='lower center', ncol=2, bbox_to_anchor=(0.5 ,1.), frameon=False)
_ = plt.xlabel('$x$')
_ = plt.ylabel('$f(x)$')
```
As you can see, the result above (orange line) obtained with the user-defined uniform is, as expected, the same as that obtained with ``Golem.Uniform``.
However, note that while ``Golem.Uniform`` processed the 1000 samples in less than 10 ms, ``MyDistribution`` took almost 300 ms (~30 times slower). This is because the ``cdf`` method is called many times (about 1 million times in this example), and ``Golem.Uniform`` is implemented in Cython rather than Python. Therefore, if the execution time of Golem's ``predict`` method with your custom distribution is too slow, you should consider a Cython implementation.
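If maintaining a Cython extension is more than you need, a vectorized NumPy ``cdf`` can be a good middle ground. Whether Golem passes arrays to ``cdf`` depends on its internals, so treat this as a general pattern rather than a guaranteed drop-in replacement:

```python
import numpy as np

def uniform_cdf_vectorized(x, loc, urange):
    """Uniform CDF evaluated element-wise on an array of points."""
    a = loc - 0.5 * urange
    b = loc + 0.5 * urange
    # np.clip maps values below the support to 0 and above it to 1
    # in one vectorized pass, avoiding a Python-level branch per point.
    return np.clip((np.asarray(x) - a) / (b - a), 0.0, 1.0)

xs = np.array([0.3, 0.5, 0.7])
cdfs = uniform_cdf_vectorized(xs, loc=0.5, urange=0.2)
```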
```
from azureml.core import Workspace, Experiment
ws = Workspace.get(name="quick-starts-ws-141247", resource_group="aml-quickstarts-141247", subscription_id="f5091c60-1c3c-430f-8d81-d802f6bf2414")
exp = Experiment(workspace=ws, name="udacity-project")
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
run = exp.start_logging()
from azureml.core.compute import ComputeTarget, AmlCompute
# TODO: Create compute cluster
# Use vm_size = "Standard_D2_V2" in your provisioning configuration.
# max_nodes should be no greater than 4.
from azureml.exceptions import ComputeTargetException
cpu_cluster_name = "project1-cluster"
try:
cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print("Cluster exists")
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D2_V2", max_nodes=4)
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
cpu_cluster.wait_for_completion(show_output=True)
from azureml.widgets import RunDetails
from azureml.train.sklearn import SKLearn
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.policy import BanditPolicy
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.runconfig import HyperDriveConfig
from azureml.train.hyperdrive.parameter_expressions import uniform
import os
# Specify parameter sampler
ps = RandomParameterSampling({'--C': uniform(0.1, 2), '--max_iter': uniform(500, 5000)})
# Specify a Policy
policy = BanditPolicy(slack_factor=0.1, evaluation_interval=1, delay_evaluation=5)
if "training" not in os.listdir():
os.mkdir("./training")
# Create a SKLearn estimator for use with train.py
est = SKLearn('.', compute_target=cpu_cluster, entry_script='train.py')
# Create a HyperDriveConfig using the estimator, hyperparameter sampler, and policy.
hyperdrive_config = HyperDriveConfig(hyperparameter_sampling=ps, policy=policy, estimator=est,
primary_metric_name="Accuracy", primary_metric_goal=PrimaryMetricGoal.MAXIMIZE, max_total_runs=15)
# Submit your hyperdrive run to the experiment and show run details with the widget.
hper_run = exp.submit(config=hyperdrive_config)
from azureml.widgets import RunDetails
RunDetails(hper_run).show()
import joblib
# Get your best run and save the model from that run.
best_run = hper_run.get_best_run_by_primary_metric()
mod = best_run.register_model(model_name='bank-marketing-log-reg', model_path="/outputs/bank-marketing-log-reg.joblib",
tags={"type": "logistic regression", "dataset":"bank marketing"})
best_run.download_file("/outputs/bank-marketing-log-reg.joblib", "outputs/best-hyperdrive-mod.joblib")
from azureml.data.dataset_factory import TabularDatasetFactory
# Create TabularDataset using TabularDatasetFactory
# Data is available at:
# "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv"
data = TabularDatasetFactory().from_delimited_files('https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv')
from train import clean_data
from sklearn.model_selection import train_test_split
import pandas as pd
# Use the clean_data function to clean your data.
print('cleaning')
x, y = clean_data(data)
x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.3, random_state=666)
print('train data to csv')
train = pd.concat([x_train, y_train], axis=1)
train.to_csv("training/train_banking.csv", index=False)
print('upload to datastore')
ds = ws.get_default_datastore()
ds.upload(src_dir="./training", target_path="bankmarketing", overwrite=True, show_progress=True)
print('get dataset reference')
train_data = TabularDatasetFactory().from_delimited_files(path=ds.path('bankmarketing/train_banking.csv'))
from azureml.train.automl import AutoMLConfig
# Set parameters for AutoMLConfig
# NOTE: DO NOT CHANGE THE experiment_timeout_minutes PARAMETER OR YOUR INSTANCE WILL TIME OUT.
# If you wish to run the experiment longer, you will need to run this notebook in your own
# Azure tenant, which will incur personal costs.
automl_config = AutoMLConfig(
experiment_timeout_minutes=30,
task="classification",
primary_metric='accuracy',
training_data=train_data,
label_column_name="y",
n_cross_validations=5,
compute_target=cpu_cluster,
blocked_models=['XGBoostClassifier'])
# Submit your automl run
automl_exp = Experiment(ws, name="automl-run")
auto_run = automl_exp.submit(automl_config, show_output=False)
RunDetails(auto_run).show()
# Retrieve and save your best automl model.
best_auto_run, fitted_model = auto_run.get_output()
best_auto_run.register_model(model_name="bank_marketing-automml", model_path="./outputs/")
#Delete cluster
cpu_cluster.delete()
```
# Q Learner with Dyna
In this notebook a Q learner with Dyna will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact, it determines the desired fraction of shares in the total portfolio value).
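The core update such an agent performs can be sketched with a few lines of tabular Q-learning plus Dyna-style planning. This is a generic sketch under stated assumptions (a binary action space and illustrative hyperparameters), not the actual `recommender.agent.Agent` implementation:

```python
import random

def q_update(Q, model, s, a, r, s_next, alpha=0.1, gamma=0.9, dyna_iters=20):
    """One real Q-learning step followed by Dyna planning replays."""
    # Direct update from the real transition.
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
        r + gamma * max(Q.get((s_next, b), 0.0) for b in (0, 1)) - Q.get((s, a), 0.0)
    )
    # Remember the transition, then replay random remembered transitions
    # (the "hallucinated" experience that makes Dyna sample-efficient).
    model[(s, a)] = (r, s_next)
    for _ in range(dyna_iters):
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        Q[(ps, pa)] = Q.get((ps, pa), 0.0) + alpha * (
            pr + gamma * max(Q.get((ps_next, b), 0.0) for b in (0, 1)) - Q.get((ps, pa), 0.0)
        )
    return Q

Q, model = {}, {}
q_update(Q, model, s=0, a=1, r=1.0, s_next=1)
```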
```
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = 252*2 + 28
STARTING_DAYS_AHEAD = 20
POSSIBLE_FRACTIONS = [0.0, 1.0]
DYNA = 20
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=DYNA,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
```
## Let's show the symbol's data, to see how good the recommender has to be.
```
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
```
## Let's run the trained agent, with the test set
### First, a non-learning test: this scenario would be worse than what is possible (in fact, the Q learner can learn from past samples in the test set without compromising causality).
```
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
```
### And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
```
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
```
## What are the metrics for "holding the position"?
```
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
```
## Conclusion:
```
import pickle
with open('../../data/simple_q_learner_fast_learner_full_training.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
```
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
img = cv2.imread("../imori.jpg")
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
def bgr_to_gray(img):
b = img[:, :, 0].copy()
g = img[:, :, 1].copy()
r = img[:, :, 2].copy()
output_image = (0.2126*r + 0.7152*g + 0.0722*b).astype("uint8")
return output_image
def otsu_binarization(img_gray):
# Otsu's binarization algorithm
max_variance, threshold = -1, -1
for v in range(0, 256):
c0 = np.count_nonzero(img_gray < v)
c1 = img_gray.shape[0] * img_gray.shape[1] - c0
v0 = img_gray[np.where(img_gray < v)]
v1 = img_gray[np.where(img_gray >= v)]
m0 = np.mean(v0) if len(v0) > 0 else 0
m1 = np.mean(v1) if len(v1) > 0 else 0
variance = c0 * c1 * ((m0-m1)**2)
if variance > max_variance:
max_variance = variance
threshold = v
print("image shape =", img_gray.shape)
print("optimal threshold =", threshold)
img_binary = np.where(img_gray < threshold, 0, 255)
return img_binary
# Erosion
def morphology_erode(img, iteration=1):
H, W = img.shape
input_img = np.pad(img, (1, 1), "edge")
K = np.array(( (0, 1, 0), (1, 1, 1), (0, 1, 0) ), dtype=int)
for t in range(iteration):
output_img = np.ones((H+2, W+2)) * 255
for i in range(1, H+1):
for j in range(1, W+1):
if np.sum(K * input_img[i-1:i+2, j-1:j+2]) < 255*5:
output_img[i, j] = 0
input_img = output_img.copy()
output_img = input_img[1:1+H, 1:1+W]
return output_img
# Dilation
def morphology_dilate(img, iteration=1):
H, W = img.shape
input_img = np.pad(img, (1, 1), "edge")
K = np.array(( (0, 1, 0), (1, 1, 1), (0, 1, 0) ), dtype=int)
for t in range(iteration):
output_img = np.zeros((H+2, W+2))
for i in range(1, H+1):
for j in range(1, W+1):
if np.sum(K * input_img[i-1:i+2, j-1:j+2]) >= 255:
output_img[i, j] = 255
input_img = output_img.copy()
output_img = input_img[1:1+H, 1:1+W]
return output_img
def morphology_gradient(img_gray):
img_otsu = otsu_binarization(img_gray)
img_erode = morphology_erode(img_otsu)
img_dilate = morphology_dilate(img_otsu)
img_out = np.abs(img_erode - img_dilate).astype("uint8")
return img_out
img_gray = bgr_to_gray(img)
img_grad = morphology_gradient(img_gray)
plt.imshow(img_grad, cmap="gray", vmin=0, vmax=255)
plt.show()
```
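The hand-rolled erosion above can be cross-checked on a tiny binary image. The sketch below re-implements one erosion step with the same 3x3 cross structuring element in plain NumPy (for production use, OpenCV's `cv2.erode`, `cv2.dilate`, and `cv2.morphologyEx(..., cv2.MORPH_GRADIENT)` are fast equivalents):

```python
import numpy as np

def erode_once(img):
    """One binary erosion with a 3x3 cross structuring element (values 0/255)."""
    H, W = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            window = padded[i:i + 3, j:j + 3]
            # The pixel survives only if the whole cross neighbourhood is on.
            cross = [window[0, 1], window[1, 0], window[1, 1], window[1, 2], window[2, 1]]
            out[i, j] = 255 if all(v == 255 for v in cross) else 0
    return out

img = np.zeros((5, 5), dtype=int)
img[1:4, 1:4] = 255   # a 3x3 white block in the middle
eroded = erode_once(img)
# Only the centre pixel of the 3x3 block survives erosion.
```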
# QAOA Problems
The shallowest depth version of the QAOA consists of the application of two unitary operators: the problem unitary and the driver unitary. The first of these depends on the parameter $\gamma$ and applies a phase to pairs of bits according to the problem-specific cost operator $C$:
$$
U_C \! \left(\gamma \right) = e^{-i \gamma C } = \prod_{j < k} e^{-i \gamma w_{jk} Z_j Z_k}
$$
whereas the driver unitary depends on the parameter $\beta$, is problem-independent, and serves to drive transitions between bitstrings within the superposition state:
$$
\newcommand{\gammavector}{\boldsymbol{\gamma}}
\newcommand{\betavector}{\boldsymbol{\beta}}
U_B \! \left(\beta \right) = e^{-i \beta B} = \prod_j e^{- i \beta X_j},
\quad \qquad
B = \sum_j X_j
$$
where $X_j$ is the Pauli $X$ operator on qubit $j$. These operators can be implemented by sequentially evolving under each term of the product; specifically the problem unitary is applied with a sequence of two-body interactions while the driver unitary is a single qubit rotation on each qubit. For higher-depth versions of the algorithm the two unitaries are sequentially re-applied each with their own $\beta$ or $\gamma$. The number of applications of the pair of unitaries is represented by the hyperparameter $p$ with parameters $\gammavector = (\gamma_1, \dots, \gamma_p)$ and $\betavector = (\beta_1, \dots, \beta_p)$. For $n$ qubits, we prepare the parameterized state
$$
\newcommand{\bra}[1]{\langle #1|}
\newcommand{\ket}[1]{|#1\rangle}
| \gammavector , \betavector \rangle = U_B(\beta_p) U_C(\gamma_p ) \cdots U_B(\beta_1) U_C(\gamma_1 ) \ket{+}^{\otimes n},
$$
where $\ket{+}^{\otimes n}$ is the symmetric superposition of computational basis states.

The optimization problems we study in this work are defined through a cost function with a corresponding quantum operator $C$ given by
$$
C = \sum_{j < k} w_{jk} Z_j Z_k
$$
where $Z_j$ denotes the Pauli $Z$ operator on qubit $j$, and the $w_{jk}$ correspond to scalar weights with values $\{0, \pm1\}$. Because these clauses act on at most two qubits, we are able to associate a graph with a given problem instance, with weighted edges given by the $w_{jk}$ adjacency matrix.
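To make the construction concrete, the $p=1$ state $| \gamma, \beta \rangle$ for a toy two-qubit instance with a single weight $w_{01} = +1$ can be built directly with NumPy. This is a pedagogical sketch, not the `recirq` implementation used below; it exploits the fact that $C$ is diagonal in the $Z$ basis and that $e^{-i\beta B}$ factorizes into single-qubit rotations $e^{-i\beta X} = \cos\beta \, I - i \sin\beta \, X$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

gamma, beta = 0.3, 0.7

# Cost operator C = w_01 Z_0 Z_1 with w_01 = +1; diagonal in the Z basis,
# so exp(-i*gamma*C) is just a diagonal matrix of phases.
C = np.kron(Z, Z)
U_C = np.diag(np.exp(-1j * gamma * np.diag(C)))

# Driver: exp(-i*beta*B) = exp(-i*beta*X) ⊗ exp(-i*beta*X),
# since the single-qubit terms of B commute.
Rx = np.cos(beta) * I2 - 1j * np.sin(beta) * X
U_B = np.kron(Rx, Rx)

plus = np.full(4, 0.5, dtype=complex)   # |+>^{⊗2}
state = U_B @ U_C @ plus                # |gamma, beta>

energy = np.real(state.conj() @ (C @ state))   # <gamma,beta| C |gamma,beta>
```

Varying `gamma` and `beta` to minimize `energy` is, in miniature, the classical outer loop of the QAOA.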
```
import networkx as nx
import numpy as np
import scipy.optimize
import cirq
import recirq
%matplotlib inline
from matplotlib import pyplot as plt
# theme colors
QBLUE = '#1967d2'
QRED = '#ea4335ff'
QGOLD = '#fbbc05ff'
```
## Hardware Grid
First, we study problem graphs that match the connectivity of our hardware, which we term "Hardware Grid problems". Although results show that problems on such graphs are efficient to solve on average, we study them because they do not require routing. This family of problems is composed of random instances generated by sampling $w_{ij}$ to be $\pm 1$ for edges in the device topology or a subgraph thereof.
```
from recirq.qaoa.problems import get_all_hardware_grid_problems
import cirq.contrib.routing as ccr
hg_problems = get_all_hardware_grid_problems(
device_graph=ccr.gridqubits_to_graph_device(recirq.get_device_obj_by_name('Sycamore23').qubits),
central_qubit=cirq.GridQubit(6,3),
n_instances=10,
rs=np.random.RandomState(5)
)
instance_i = 0
n_qubits = 23
problem = hg_problems[n_qubits, instance_i]
fig, ax = plt.subplots(figsize=(6,5))
pos = {i: coord for i, coord in enumerate(problem.coordinates)}
nx.draw_networkx(problem.graph, pos=pos, with_labels=False, node_color=QBLUE)
if True: # toggle edge labels
edge_labels = {(i1, i2): f"{weight:+d}"
for i1, i2, weight in problem.graph.edges.data('weight')}
nx.draw_networkx_edge_labels(problem.graph, pos=pos, edge_labels=edge_labels)
ax.axis('off')
fig.tight_layout()
```
## Sherrington-Kirkpatrick model
Next, we study instances of the Sherrington-Kirkpatrick (SK) model, defined on the complete graph with $w_{ij}$ randomly chosen to be $\pm 1$. This is a canonical example of a frustrated spin glass and is most penalized by routing, which can be performed optimally using the linear swap networks at the cost of a linear increase in circuit depth.
```
from recirq.qaoa.problems import get_all_sk_problems
n_qubits = 17
all_sk_problems = get_all_sk_problems(max_n_qubits=17, n_instances=10, rs=np.random.RandomState(5))
sk_problem = all_sk_problems[n_qubits, instance_i]
fig, ax = plt.subplots(figsize=(6,5))
pos = nx.circular_layout(sk_problem.graph)
nx.draw_networkx(sk_problem.graph, pos=pos, with_labels=False, node_color=QRED)
if False: # toggle edge labels
edge_labels = {(i1, i2): f"{weight:+d}"
for i1, i2, weight in sk_problem.graph.edges.data('weight')}
nx.draw_networkx_edge_labels(sk_problem.graph, pos=pos, edge_labels=edge_labels)
ax.axis('off')
fig.tight_layout()
```
## 3-regular MaxCut
Finally, we study instances of the MaxCut problem on 3-regular graphs. This is a prototypical discrete optimization problem with a low, fixed node degree but a high dimension which cannot be trivially mapped to a planar architecture. It more closely matches problems of industrial interest. For these problems, we use an automated routing algorithm to heuristically insert SWAP operations.
```
from recirq.qaoa.problems import get_all_3_regular_problems
n_qubits = 22
instance_i = 0
threereg_problems = get_all_3_regular_problems(max_n_qubits=22, n_instances=10, rs=np.random.RandomState(5))
threereg_problem = threereg_problems[n_qubits, instance_i]
fig, ax = plt.subplots(figsize=(6,5))
pos = nx.spring_layout(threereg_problem.graph, seed=11)
nx.draw_networkx(threereg_problem.graph, pos=pos, with_labels=False, node_color=QGOLD)
if False: # toggle edge labels
edge_labels = {(i1, i2): f"{weight:+d}"
for i1, i2, weight in threereg_problem.graph.edges.data('weight')}
nx.draw_networkx_edge_labels(threereg_problem.graph, pos=pos, edge_labels=edge_labels)
ax.axis('off')
fig.tight_layout()
```
<a href="https://colab.research.google.com/github/AI4Finance-LLC/FinRL-Library/blob/master/FinRL_ensemble_stock_trading_ICAIF_2020.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Deep Reinforcement Learning for Stock Trading from Scratch: Multiple Stock Trading Using Ensemble Strategy
Tutorials to use OpenAI DRL to trade multiple stocks using ensemble strategy in one Jupyter Notebook | Presented at ICAIF 2020
* This notebook is the reimplementation of our paper: Deep Reinforcement Learning for Automated Stock Trading: An Ensemble Strategy, using FinRL.
* Check out medium blog for detailed explanations: https://medium.com/@ai4finance/deep-reinforcement-learning-for-automated-stock-trading-f1dad0126a02
* Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues
* **Pytorch Version**
# Content
* [1. Problem Definition](#0)
* [2. Getting Started - Load Python packages](#1)
* [2.1. Install Packages](#1.1)
* [2.2. Check Additional Packages](#1.2)
* [2.3. Import Packages](#1.3)
* [2.4. Create Folders](#1.4)
* [3. Download Data](#2)
* [4. Preprocess Data](#3)
* [4.1. Technical Indicators](#3.1)
* [4.2. Perform Feature Engineering](#3.2)
* [5.Build Environment](#4)
* [5.1. Training & Trade Data Split](#4.1)
* [5.2. User-defined Environment](#4.2)
* [5.3. Initialize Environment](#4.3)
* [6.Implement DRL Algorithms](#5)
* [7.Backtesting Performance](#6)
* [7.1. BackTestStats](#6.1)
* [7.2. BackTestPlot](#6.2)
* [7.3. Baseline Stats](#6.3)
* [7.3. Compare to Stock Market Index](#6.4)
<a id='0'></a>
# Part 1. Problem Definition
This problem is to design an automated trading solution for multiple stock trading. We model the stock trading process as a Markov Decision Process (MDP), and formulate our trading goal as a maximization problem.
The algorithm is trained using Deep Reinforcement Learning (DRL) algorithms and the components of the reinforcement learning environment are:
* Action: The action space describes the allowed actions that the agent interacts with the
environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent
selling, holding, and buying one stock. Also, an action can be carried out on multiple shares. We use
an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy
10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or −10, respectively
* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. It is the change of the portfolio value when action a is taken at state s, arriving at the new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio
values at states s′ and s, respectively
* State: The state space describes the observations that the agent receives from the environment. Just as a human trader analyzes various information before executing a trade, our trading agent observes many different features to learn better in an interactive environment.
* Environment: Dow 30 constituents
The data we will be using for this case study is obtained from the Yahoo Finance API. The data contains Open-High-Low-Close prices and volume.
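The reward definition r(s, a, s′) = v′ − v can be illustrated with a small portfolio-value computation (the holdings, prices, and cash below are purely illustrative):

```python
import numpy as np

# Hypothetical holdings (shares), prices before and after one step, and cash.
shares = np.array([10, 5])              # e.g. two stocks in the portfolio
prices_before = np.array([100.0, 40.0])
prices_after = np.array([102.0, 39.0])
cash = 1_000.0

v = cash + shares @ prices_before       # portfolio value at state s
v_next = cash + shares @ prices_after   # portfolio value at state s'

reward = v_next - v                     # r(s, a, s') = v' - v
# reward == 10*(102-100) + 5*(39-40) == 20 - 5 == 15.0
```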
<a id='1'></a>
# Part 2. Getting Started- Load Python Packages
<a id='1.1'></a>
## 2.1. Install all the packages through FinRL library
```
# ## install finrl library
# !pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
```
<a id='1.2'></a>
## 2.2. Check if the additional packages needed are present; if not, install them.
* Yahoo Finance API
* pandas
* numpy
* matplotlib
* stockstats
* OpenAI gym
* stable-baselines
* tensorflow
* pyfolio
<a id='1.3'></a>
## 2.3. Import Packages
```
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# matplotlib.use('Agg')
import datetime
%matplotlib inline
from finrl.config import config
from finrl.marketdata.yahoodownloader import YahooDownloader
from finrl.preprocessing.preprocessors import FeatureEngineer
from finrl.preprocessing.data import data_split
from finrl.env.env_stocktrading import StockTradingEnv
from finrl.model.models import DRLAgent,DRLEnsembleAgent
from finrl.trade.backtest import get_baseline, backtest_stats, backtest_plot
from pprint import pprint
import sys
sys.path.append("../FinRL-Library")
import itertools
```
<a id='1.4'></a>
## 2.4. Create Folders
```
import os
if not os.path.exists("./" + config.DATA_SAVE_DIR):
os.makedirs("./" + config.DATA_SAVE_DIR)
if not os.path.exists("./" + config.TRAINED_MODEL_DIR):
os.makedirs("./" + config.TRAINED_MODEL_DIR)
if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR):
os.makedirs("./" + config.TENSORBOARD_LOG_DIR)
if not os.path.exists("./" + config.RESULTS_DIR):
os.makedirs("./" + config.RESULTS_DIR)
```
<a id='2'></a>
# Part 3. Download Data
Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.
* FinRL uses a class **YahooDownloader** to fetch data from Yahoo Finance API
* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).
-----
class YahooDownloader:
Provides methods for retrieving daily stock data from
Yahoo Finance API
Attributes
----------
start_date : str
start date of the data (modified from config.py)
end_date : str
end date of the data (modified from config.py)
ticker_list : list
a list of stock tickers (modified from config.py)
Methods
-------
fetch_data()
Fetches data from yahoo API
```
# from config.py start_date is a string
config.START_DATE
print(config.DOW_30_TICKER)
%load_ext autoreload
%autoreload 2
# from finrl.config.config import MISSING3
# df = YahooDownloader(start_date = '2006-01-01',
# end_date = '2021-06-11',
# ticker_list = config.DOW_30_TICKER).fetch_data()
# df.append(df2).append(df3).append(df4).append(df5)
import pickle
# df.drop('Unnamed: 0')
# with open('dji_df_2004-2021.pkl', 'rb') as f:
# df = pickle.load(f)
df = pd.read_csv('/home/roman/Work/trading-bot/notebooks/dji_prices_2020_04_09.csv')
df
df = df.drop(['Unnamed: 0'], axis=1)
df = df.sort_values(['date', 'tic'])
df.head()
df.tail()
df.shape
df.tic.unique()
# import pickle
# with open('dji_df_2004-2021.pkl', 'wb') as f:
# pickle.dump(df, f)
```
# Part 4: Preprocess Data
Data preprocessing is a crucial step for training a high quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.
* Add technical indicators. In practical trading, various information needs to be taken into account, for example the historical stock prices, current holding shares, technical indicators, etc. In this article, we demonstrate two trend-following technical indicators: MACD and RSI.
* Add turbulence index. Risk-aversion reflects whether an investor will choose to preserve the capital. It also influences one's trading strategy when facing different market volatility level. To control the risk in a worst-case scenario, such as financial crisis of 2007–2008, FinRL employs the financial turbulence index that measures extreme asset price fluctuation.
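FinRL's `FeatureEngineer` computes the turbulence index internally (enabled below via `use_turbulence=True`). The underlying idea can be sketched as a Mahalanobis distance of today's cross-asset returns from their historical distribution; the returns data here are randomly generated stand-ins:

```python
import numpy as np

# Simplified sketch of a turbulence index: how far today's cross-asset
# returns sit from the historical return distribution.
rng = np.random.default_rng(0)
hist_returns = rng.normal(0, 0.01, size=(250, 5))   # ~one year of daily returns, five assets
today = np.array([0.03, -0.04, 0.05, -0.03, 0.04])  # an unusually volatile day

mu = hist_returns.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(hist_returns, rowvar=False))
diff = today - mu
turbulence = diff @ cov_inv @ diff  # large values flag extreme price moves
print(turbulence)
```

A day whose returns match the historical mean scores near zero, while crisis-like days produce large values that the environment can use to de-risk.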
```
import pickle
with open('/home/roman/Work/trading-bot/notebooks/2004-2021_imputed_sentiment_final_df.pkl', 'rb') as f:
daily_sentiment_df = pickle.load(f)
with open('doc2vec_2004_2021_expanded_world_df.pkl', 'rb') as f:
doc2vec_2004_2021_expanded_world_df = pickle.load(f)
daily_sentiment_df
fe = FeatureEngineer(
use_technical_indicator=True,
tech_indicator_list = config.TECHNICAL_INDICATORS_LIST,
use_turbulence=True)
processed = fe.preprocess_data(df)
list(processed.columns)
list_ticker = processed["tic"].unique().tolist()
list_date = list(pd.date_range(processed['date'].min(),processed['date'].max()).astype(str))
combination = list(itertools.product(list_date,list_ticker))
processed_full = pd.DataFrame(combination,columns=["date","tic"]).merge(processed,on=["date","tic"],how="left")
processed_full = processed_full[processed_full['date'].isin(processed['date'])]
processed_full = processed_full.sort_values(['date','tic'])
processed_full = processed_full.fillna(0)
processed_full.sample(5)
processed_full['date'][0]
```
<a id='4'></a>
# Part 5. Design Environment
Considering the stochastic and interactive nature of automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price changes, taking an action, and calculating the reward, so that the agent adjusts its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes rewards as time proceeds.
Our trading environments, based on OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.
The action space describes the allowed actions that the agent interacts with the environment. Normally, action a includes three actions: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. Also, an action can be carried upon multiple shares. We use an action space {-k,…,-1, 0, 1, …, k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or -10, respectively. The continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric.
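The mapping from a normalized continuous action back to a share count can be sketched as follows (a simplified illustration using the `hmax` cap that appears in `env_kwargs` below, not the environment's actual code):

```python
# Sketch: convert a policy output in [-1, 1] to a signed share count,
# where hmax is the maximum number of shares per trade.
hmax = 100

def to_shares(action: float) -> int:
    # e.g. 0.1 -> buy 10 shares, -1.0 -> sell 100 shares
    return int(action * hmax)

print(to_shares(0.1), to_shares(-1.0), to_shares(0.0))  # 10 -100 0
```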
```
config.TECHNICAL_INDICATORS_LIST
stock_dimension = len(processed_full.tic.unique())
state_space = 1 + 2*stock_dimension + len(config.TECHNICAL_INDICATORS_LIST)*stock_dimension + config.NUMBER_OF_DAILY_FEATURES
print(f"Stock Dimension: {stock_dimension}, User Features: {config.NUMBER_OF_USER_FEATURES}, State Space: {state_space}")
env_kwargs = {
"hmax": 100,
"initial_amount": 50_000_000/100, #Since in Indonesia the minimum number of shares per trx is 100, then we scaled the initial amount by dividing it with 100
"buy_cost_pct": 0.0019, #IPOT has 0.19% buy cost
"sell_cost_pct": 0.0029, #IPOT has 0.29% sell cost
"state_space": state_space,
"stock_dim": stock_dimension,
"tech_indicator_list": config.TECHNICAL_INDICATORS_LIST,
"action_space": stock_dimension,
"reward_scaling": 1e-4,
"print_verbosity":5
}
```
<a id='5'></a>
# Part 6: Implement DRL Algorithms
* The implementation of the DRL algorithms are based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with a major structural refactoring, and code cleanups.
* FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG,
Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to
design their own DRL algorithms by adapting these DRL algorithms.
* In this notebook, we are training and validating 3 agents (A2C, PPO, DDPG) using Rolling-window Ensemble Method ([reference code](https://github.com/AI4Finance-LLC/Deep-Reinforcement-Learning-for-Automated-Stock-Trading-Ensemble-Strategy-ICAIF-2020/blob/80415db8fa7b2179df6bd7e81ce4fe8dbf913806/model/models.py#L92))
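The rolling-window mechanics can be sketched as follows (a simplified illustration, not `DRLEnsembleAgent`'s actual code): with `rebalance_window = validation_window = 63` trading days, the agent repeatedly retrains on an expanding window, validates each of the three agents on the next 63 days to pick the best one, then trades with it for the following 63 days.

```python
# Simplified sketch of rolling-window retrain/validate/trade scheduling.
# n_days and the window size are illustrative.
def rolling_windows(n_days, train_end, window=63):
    """Yield (train, validation, trade) index ranges over the dataset."""
    start = train_end
    while start + 2 * window <= n_days:
        yield (0, start), (start, start + window), (start + window, start + 2 * window)
        start += window  # roll forward: the old validation/trade data joins training

for train, val, trade in rolling_windows(n_days=400, train_end=200):
    print(train, val, trade)
```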
```
%load_ext autoreload
%autoreload 2
rebalance_window = 63 # rebalance_window is the number of days to retrain the model
validation_window = 63 # validation_window is the number of days to do validation and trading (e.g. if validation_window=63, then both validation and trading period will be 63 days)
train_start = '2006-01-03'
train_end = '2016-01-01'
val_test_start = '2016-01-01'
val_test_end = '2021-06-11'
ensemble_agent = DRLEnsembleAgent(df=processed_full,
train_period=(train_start,train_end),
val_test_period=(val_test_start,val_test_end),
rebalance_window=rebalance_window,
validation_window=validation_window,
daily_features=daily_sentiment_df,
**env_kwargs)
A2C_model_kwargs = {
'n_steps': 5,
'ent_coef': 0.01,
'learning_rate': 0.0005
}
PPO_model_kwargs = {
"ent_coef":0.01,
"n_steps": 2048,
"learning_rate": 0.00025,
"batch_size": 128
}
DDPG_model_kwargs = {
"action_noise":"ornstein_uhlenbeck",
"buffer_size": 50_000,
"learning_rate": 0.000005,
"batch_size": 128
}
timesteps_dict = {'a2c' : 100_000,
'ppo' : 100_000,
'ddpg' : 50_000
}
import time
start = time.time()
# print([self.state[0]])
# print(self.data.close.values.tolist())
# print(list(self.state[(self.stock_dim+1):(self.stock_dim*2+1)]))
# print(sum([self.data[tech].values.tolist() for tech in self.tech_indicator_list ], []) )
# user_features_columns = self.data.columns[-config.NUMBER_OF_USER_FEATURES:]
# print(self.data[user_features_columns].values[0])
df_summary = ensemble_agent.run_ensemble_strategy(A2C_model_kwargs, PPO_model_kwargs, DDPG_model_kwargs, timesteps_dict)
time_elapsed = time.time()-start
print(time_elapsed)
df_summary
del fe
import dill
dill.dump_session('17_year_guardian_sentiment_fixed_fixed_dow.db')
processed_full
import pickle
with open('sentiment_1.pkl', 'wb') as f:
pickle.dump(processed_full, f)
import pickle
with open('17_year_guardian_sentiment_fixed_fixed_dow_df_summary.pkl', 'wb') as f:
pickle.dump(df_summary, f)
with open('17_year_guardian_sentiment_fixed_fixed_dow_processed_full.pkl', 'wb') as f:
pickle.dump(processed_full, f)
with open('17_year_guardian_sentiment_fixed_fixed_dow_ensemble_agent.pkl', 'wb') as f:
pickle.dump(ensemble_agent, f)
```
[View in Colaboratory](https://colab.research.google.com/github/ArunkumarRamanan/Google-Machine-Learning/blob/master/overfit_and_underfit.ipynb)
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Explore overfitting and underfitting
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/keras/overfit_and_underfit"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/tutorials/keras/overfit_and_underfit.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/models/blob/master/samples/core/tutorials/keras/overfit_and_underfit.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).
In both of the previous examples—classifying movie reviews, and predicting housing prices—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.
In other words, our model would *overfit* to the training data. Learning how to deal with overfitting is important. Although it's often possible to achieve high accuracy on the *training set*, what we really want is to develop models that generalize well to *testing data* (or data they haven't seen before).
The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data.
If you train for too long though, the model will start to overfit and learn patterns from the training data that don't generalize to the test data. We need to strike a balance. Understanding how to train for an appropriate number of epochs as we'll explore below is a useful skill.
To prevent overfitting, the best solution is to use more training data. A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.
In this notebook, we'll explore two common regularization techniques—weight regularization and dropout—and use them to improve our IMDB movie review classification notebook.
```
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
```
## Download the IMDB dataset
Rather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set. It will be used to demonstrate when overfitting occurs, and how to fight it.
Multi-hot-encoding our lists means turning them into vectors of 0s and 1s. Concretely, this would mean for instance turning the sequence `[3, 5]` into a 10,000-dimensional vector that would be all-zeros except for indices 3 and 5, which would be ones.
```
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set specific indices of results[i] to 1s
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
```
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so it is expected that there are more 1-values near index zero, as we can see in this plot:
```
plt.plot(train_data[0])
```
## Demonstrate overfitting
The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters will have more "memorization capacity" and therefore will be able to easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, but this would be useless when making predictions on previously unseen data.
Always keep this in mind: deep learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.
On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting to the training data. There is a balance between "too much capacity" and "not enough capacity".
Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.
To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss. Let's try this on our movie review classification network.
We'll create a simple model using only ```Dense``` layers, then a smaller version, and compare them.
### Create a baseline model
```
baseline_model = keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
```
### Create a smaller model
Let's create a model with fewer hidden units to compare against the baseline model that we just created:
```
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
```
And train the model using the same data:
```
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
```
### Create a bigger model
As an exercise, you can create an even larger model, and see how quickly it begins overfitting. Next, let's add to this benchmark a network that has much more capacity, far more than the problem would warrant:
```
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
```
And, again, train the model using the same data:
```
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
```
### Plot the training and validation loss
<!--TODO(markdaoust): This should be a one-liner with tensorboard -->
The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here, the smaller network begins overfitting later than the baseline model (after 6 epochs rather than 4) and its performance degrades much more slowly once it starts overfitting.
```
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
```
Notice that the larger network begins overfitting almost right away, after just one epoch, and overfits much more severely. The more capacity the network has, the quicker it will be able to model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss).
## Strategies
### Add weight regularization
You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters altogether, as we saw in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:
* L1 regularization, where the cost added is proportional to the absolute value of the weights coefficients (i.e. to what is called the "L1 norm" of the weights).
* L2 regularization, where the cost added is proportional to the square of the value of the weights coefficients (i.e. to what is called the "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the exact same as L2 regularization.
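As a quick numeric illustration of the two penalties (a sketch with a made-up weight vector, not Keras internals):

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])  # hypothetical layer weights
reg = 0.001                     # regularization factor, as in l2(0.001) below

l1_penalty = reg * np.sum(np.abs(w))     # L1: proportional to |w|
l2_penalty = reg * np.sum(np.square(w))  # L2: proportional to w^2
print(l1_penalty, l2_penalty)
```

Note how L2 punishes the single large weight (2.0) far more heavily than the small ones, which is what pushes the network toward many small weights rather than a few large ones.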
In `tf.keras`, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.
```
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
```
```l2(0.001)``` means that every coefficient in the weight matrix of the layer will add ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because this penalty is only added at training time, the loss for this network will be much higher at training than at test time.
Here's the impact of our L2 regularization penalty:
```
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
```
As you can see, the L2 regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters.
### Add dropout
Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. Let's say a given layer would normally have returned a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of the features that are being zeroed-out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, so as to balance for the fact that more units are active than at training time.
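The training-time masking described above can be sketched in a few lines of NumPy (a hand-rolled illustration, not what Keras does internally):

```python
import numpy as np

# Sketch of dropout applied to the example activations from the text:
# zero out each unit independently with probability `rate` during training.
rng = np.random.default_rng(42)
activations = np.array([0.2, 0.5, 1.3, 0.8, 1.1])
rate = 0.5

mask = rng.random(activations.shape) >= rate  # keep each unit with probability 1 - rate
dropped = activations * mask
print(dropped)  # some entries zeroed at random, the rest unchanged
```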
In tf.keras you can introduce dropout in a network via the Dropout layer, which gets applied to the output of the layer right before it.
Let's add two Dropout layers in our IMDB network to see how well they do at reducing overfitting:
```
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
```
Adding dropout is a clear improvement over the baseline model.
To recap: here are the most common ways to prevent overfitting in neural networks:
* Get more training data.
* Reduce the capacity of the network.
* Add weight regularization.
* Add dropout.
And two important approaches not covered in this guide are data-augmentation and batch normalization.
# Introduction to Programming with Python
---
## What is Python and why would I use it?
Python is a programming language.
A programming language is a way of writing commands so that an interpreter or compiler can turn them into machine instructions.
We like using Python in Software Carpentry Workshops for lots of reasons
- Widely used in science
- It's easy to read and write
- Huge supporting community - lots of ways to learn and get help
- Jupyter Notebooks like this one. Not many languages have this kind of tool (the name comes from Julia, Python, and R).
Even if you aren't using Python in your work, you can use Python to learn the fundamentals of programming that will apply across languages
### Characters
Python uses certain characters as part of its syntax. Here is what they are called:
* `[` : left `square bracket`
* `]` : right `square bracket`
* `(` : left `paren` (parentheses)
* `)` : right `paren`
* `{` : left `curly brace`
* `}` : right `curly brace`
* `<` : left `angle bracket`
* `>` : right `angle bracket`
* `-` : `dash` (not hyphen. Minus only when used in an equation or formula)
* `"` : `double quote`
* `'` : `single quote` (apostrophe)
# What are the fundamentals?
## VARIABLES
* We store values inside variables.
* We can refer to variables in other parts of our programs.
* In Python, the variable is created when a value is assigned to it.
* Values are assigned to variable names using the equals sign (=).
* A variable can hold two types of things. Basic data types and objects(ways to structure data and code).
* In Python, all variables are objects.
Some data types you will find in almost every language include:
- Strings (characters, words, sentences or paragraphs): 'a' 'b' 'c' 'abc' '0' '3' ';' '?'
- Integers (whole numbers): 1 2 3 100 10000 -100
- Floating point or Float (decimals): 10.0 56.9 -3.765
- Booleans: True, False
Here, Python assigns an age to a variable `age` and a name in quotation marks to a variable `first_name`.
```
age = 42
first_name = "Ahmed"
```
#### Of Note:
Variable names:
* Cannot start with a digit
* Cannot contain spaces, quotation marks, or other punctuation
You can display what is inside `age` by using the print command
`print()`
with the value placed inside the parenthesis
```
print(age)
```
---
## EXERCISE:
1. Create two new variables called age and first_name with your own age and name
1. Print each variable out to display its value
```
age = 22
name1 = 'Amy'
print(name1)
```
You can also combine values in a single print command by separating them with commas
```
# Insert your variable values into the print statement below
print(name1, 'is', age, 'years old')
```
* `print` automatically puts a single space between items to separate them.
* And wraps around to a new line at the end.
### Using Python built-in type() function
If you are not sure of what your variables' types are, you can call a python function called type() in the same manner as you used print() function.
Python is an object-oriented language, so any defined variable has a type. Default common types are str, int, float, list, and tuple. We will cover list and tuple later
```
print(type(age))
print(type(first_name))
```
### STRING TYPE
One or more characters strung together and enclosed in quotes (single or double): "Hello World!"
```
greeting = "Hello World!"
print ("The greeting is:", greeting)
greeting = 'Hello World!'
print ('The greeting is:', greeting)
```
#### Need to use single quotes in your string?
Use double quotes to make your string.
```
greeting = "Hello 'World'!"
print ("The greeting is:", greeting)
```
#### Need to use both?
```
greeting1 = "'Hello'"
greeting2 = '"World"!'
print ("The greeting is:", greeting1, greeting2)
```
#### Concatenation
```
bear = "wild"
down = "cats"
print (bear+down)
```
---
## EtherPad
Why isn't `greeting` enclosed in quotes in the statements above?
Post your answers to the EtherPad, or vote for existing answers
---
#### Use an index to get a single character from a string.
* The characters (individual letters, numbers, and so on) in a string are ordered.
* For example, the string ‘AB’ is not the same as ‘BA’. Because of this ordering, we can treat the string as a list of characters.
* Each position in the string (first, second, etc.) is given a number. This number is called an index or sometimes a subscript.
* Indices are numbered from 0.
* Use the position’s index in square brackets to get the character at that position.
```
# String : H e l i u m
# Index Location: 0 1 2 3 4 5
# (the string 'helium' has h at index 0 and i at index 3)
atom_name = 'helium'
print(atom_name[0], atom_name[3])
```
### NUMERIC TYPES
* Numbers are stored as numbers (no quotes) and are either integers (whole) or real numbers (decimal).
* In programming, numbers with decimal precision are called floating-point, or float.
* Floats use more processing than integers so use them wisely!
* Floats and ints come in various sizes but Python switches between them transparently.
```
my_integer = 10
my_float = 10.99998
my_value = my_float
print("My numeric value:", my_value)
print("Type:", type(my_value))
```
### BOOLEAN TYPE
* Boolean values are binary, meaning they can only be either true or false.
* In python True and False (no quotes) are boolean values
```
is_true = True
is_false = False
print("My true boolean variable:", is_true)
```
---
## EtherPad
What data type is `'1024'`?
<ol style="list-style-type:lower-alpha">
<li>String</li>
<li>Int</li>
<li>Float</li>
<li>Boolean</li>
</ol>
Post your answers to the EtherPad, or vote for existing answers
---
## Variables can be used in calculations.
* We can use variables in calculations just as if they were values.
* Remember, we assigned 42 to `age` a few lines ago.
```
age = age + 3
print('Age in three years:', age)
```
* This now sets our age value to 45. We can also add strings together. When you add strings it's called "concatenating"
```
name = "Sonoran"
full_name = name + " Desert"
print(full_name)
```
* Notice how I included a space in the quotes before "Desert". If we hadn't, we would have had "SonoranDesert"
* Can we subtract, multiply, or divide strings?
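Let's try it: multiplying a string by an integer repeats it, but subtracting or dividing strings raises a TypeError:

```python
name = "Sonoran"
print(name * 2)   # prints: SonoranSonoran
# Subtraction and division are not defined for strings:
# name - "S"   # TypeError: unsupported operand type(s) for -: 'str' and 'str'
```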
```
#Create a new variable called last_name with your own last name.
#Create a second new variable called full_name that is a combination of your first and last name
last_name = ' Chai'
full_name = name + last_name
print(full_name)
```
## DATA STRUCTURES
Python has many objects that can be used to structure data including:
- Lists
- Tuples
- Sets
- Dictionaries
### LISTS
Lists are collections of values held together in brackets:
```
list_of_characters = ['a', 'b', 'c']
print (list_of_characters)
# Create a new list called list_of_numbers with four numbers in it
list_of_numbers = [1,2,3,4]
print(list_of_numbers)
```
* Just like strings, we can access any value in the list by its position in the list.
* **IMPORTANT:** Indexes start at 0
~~~
list: ['a', 'b', 'c', 'd']
index location: 0 1 2 3
~~~
```
# Print out the second value in the list list_of_numbers
print(list_of_numbers[1])
```
Once you have created a list you can add more items to it with the append method
```
list_of_numbers.append(5)
print(list_of_numbers)
```
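Lists have other methods for changing their contents too, for example `remove()` and `insert()`:

```python
list_of_numbers = [1, 2, 3, 4, 5]
list_of_numbers.remove(3)      # remove the first occurrence of the value 3
list_of_numbers.insert(0, 0)   # insert the value 0 at position 0
print(list_of_numbers)         # prints: [0, 1, 2, 4, 5]
```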
#### Aside: Sizes of data structures
To determine how large (how many values/entries/elements/etc.) any Python data structure has, use the `len()` function
```
len(list_of_numbers)
```
Note that you cannot compute the length of a numeric variable:
```
len(age)
```
This will give an error: `TypeError: object of type 'int' has no len()`
However, `len()` can compute the lengths of strings
```
print(len('this is a sentence'))
# You can also get the lengths of strings in a list
list_of_strings = ["Python is Awesome!", "Look! I'm programming.", "E = mc^2"]
# This will get the length of "Look! I'm programming."
print(len(list_of_strings[1]))
print(list_of_strings[0],list_of_strings[1])
```
### TUPLES
Tuples are like lists, but `cannot be changed (immutable)`.
Tuples can be used to represent any collection of data. They work well for things like coordinates.
```
tuple_of_x_y_coordinates = (3, 4)
print (tuple_of_x_y_coordinates)
```
Tuples can have any number of values
```
coordinates = (1, 7, 38, 9, 0)
print (type(coordinates))
icecream_flavors = ("strawberry", "vanilla", "chocolate")
print (icecream_flavors)
```
... and any types of values.
Once created, you `cannot add more items to a tuple` (but you can add items to a list). If we try to append, like we did with lists, we get an error
```
icecream_flavors.append('bubblegum')
```
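If you need a "changed" tuple, you have to build a new one, either by concatenation or by going through a list:

```python
icecream_flavors = ("strawberry", "vanilla", "chocolate")
# Concatenation builds a NEW tuple (note the comma that makes a 1-item tuple)
more_flavors = icecream_flavors + ("bubblegum",)
print(more_flavors)
# Or convert to a list, change it, and convert back
as_list = list(icecream_flavors)
as_list.append("bubblegum")
print(tuple(as_list))
```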
### THE DIFFERENCE BETWEEN TUPLES AND LISTS
Lists are good for manipulating data sets: it is easy to add, remove, and sort items. Tuples are simpler and cheaper to store: because a tuple's size is fixed, Python never needs to reserve extra room for it to grow.

For both types, Python can jump straight to any item because its location can be calculated:
(address) = (size of each element) × (index of the item) + (starting address)
This is how zero indexing works: the computer does the calculation and jumps directly to the address.
Now let's say you wanted to remove the third item. A tuple cannot be modified, so "removing" an item means building a new tuple and copying the remaining values over, and Python makes you do this manually. Removing the third item from a list is as simple as calling a method on the list object, and Python shifts the later items for you.
### SETS
Sets are similar to lists and tuples, but can only contain unique values and are held in braces
For example, a list can contain the same value multiple times
```
# In the gapminder data that we will use, we will have data entries for the continents
# of each country in the dataset
my_list = ['Africa', 'Europe', 'North America', 'Africa', 'Europe', 'North America']
print("my_list is", my_list)
# A set would only allow for unique values to be held
my_set = {'Africa', 'Europe', 'North America', 'Africa', 'Europe', 'North America'}
print("my_set is", my_set)
my_set2 = set(my_list)
print(my_set2)
```
Just like lists, you can add items to a set using the `add()` method
```
my_set.add('Asia')
# Now let's try to add one that is already in the set:
my_set.add('Europe')
print(my_set)
```
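Sets are especially fast at membership tests with the `in` operator (which also works on lists and strings):

```python
my_set = {'Africa', 'Europe', 'North America'}
print('Africa' in my_set)   # prints: True
print('Asia' in my_set)     # prints: False
```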
### DICTIONARIES
* Dictionaries are collections of things that you can lookup like in a real dictionary:
* Dictionaries are organized into key and value pairs separated by commas (like lists) and surrounded by braces.
* E.g. {key1: value1, key2: value2}
* We call each association a "key-value pair".
```
dictionary_of_definitions = {"aardvark" : "The aardvark is a medium-sized, burrowing, nocturnal mammal native to Africa.",
"boat" : "A boat is a thing that floats on water"}
```
We can find the definition of aardvark by giving the dictionary the "key" to the definition we want in brackets.
In this case the key is the word we want to lookup
```
print ("The definition of aardvark is:", dictionary_of_definitions["aardvark"])
# Print out the definition of a boat
print("The definition of a boat is:", dictionary_of_definitions['boat'])
```
Just like lists and sets, you can add to dictionaries by doing the following:
```
dictionary_of_definitions['ocean'] = "An ocean is a very large expanse of sea, in particular each of the main areas into which the sea is divided geographically."
print(dictionary_of_definitions)
```
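Looking up a key that doesn't exist with brackets raises a KeyError; the `get()` method and the `in` operator let you check safely:

```python
dictionary_of_definitions = {"boat": "A boat is a thing that floats on water"}
print(dictionary_of_definitions.get("submarine"))             # prints: None
print(dictionary_of_definitions.get("submarine", "unknown"))  # prints: unknown
print("boat" in dictionary_of_definitions)                    # prints: True
```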
---
## EtherPad
Which one of these is not a valid entry in a dictionary?
1. `"key"`: `"value"`
2. `"GCBHSA"`: `"ldksghdklfghfdlgkfdhgfldkghfgfhd"`
3. `"900"` : `"key"` : `"value"`
4. `Books` : `10000`
Post your answer to the EtherPad, or vote for an existing answer
```
## EXERCISE:
# 1. Create a dictionary called `zoo` with at least three animal types with a different count for each animal
# 2. `print` out the count of the second animal in your dictionary
zoo = {'panda': '1', 'tiger': '2', 'elephant': '5'}
print(zoo)
print('the amount of tiger is:', zoo['tiger'])
print()
```
## Statements
OK great. Now what can we do with all of this?
We can plug everything together with a bit of logic and python language and make a program that can do things like:
* process data
* parse files
* analyze data
What kind of logic are we talking about?
We are talking about a "logical structure": the program starts at the top (first line) and reads down the page in order.
In Python, logical structures are composed of statements. Statements control the flow of your script. There are two main types:
* conditionals (if/elif/else)
* loops (for, while)
### Conditionals
Conditionals are how we make a decision in the program.
In python, conditional statements are called if/else statements.
* If statements use boolean values to control flow.
* E.g. If something is True, do this. Else, do this
```
it_is_daytime = False # this is the variable that holds the current condition of it_is_daytime which is True or False
if it_is_daytime:
print ("Have a nice day.")
else:
print ("Have a nice night.")
# before running this cell
# what will happen if we change it_is_daytime to True?
# what will happen if we change it_is_daytime to False?
```
* Often if/else statements use a comparison between two values to determine True or False
* These comparisons use "comparison operators" such as ==, >, and <.
* \>= and <= can be used if you need the comparison to be inclusive.
* **NOTE**: Two equal signs is used to compare values, while one equals sign is used to assign a value
* E.g.
1 > 2 is False<br/>
2 > 2 is False<br/>
2 >= 2 is True<br/>
'abc' == 'abc' is True
```
user_name = "Marnee"
if user_name == "Marnee":
print ("Marnee likes to program in Python.")
else:
print ("We do not know who you are.")
print('print this no matter what:)')
```
* What if a condition has more than two choices? Does it have to use a boolean?
* Python if statements let you do that with `elif`
* `elif` stands for "else if"
```
user_name = 'Amy'
if user_name == "Marnee":
print ("Marnee likes to program in Python.")
elif user_name == "Ben":
print ("Ben likes maps.")
elif user_name == "Brian":
print ("Brian likes plant genomes")
else:
print ("We do not know who you are")
# for each possibility of user_name we have an if or else-if statement to check the value of the name
# and print a message accordingly.
#What does the following statement print?
my_num = 17
my_num = 8 + my_num
new_num = my_num / 2
if new_num >= 30:
print("Greater than thirty")
elif my_num == 25:
print("Equals 25")
elif new_num <= 30:
print("Less than thirty")
else:
print("Unknown")
```
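Conditions can also be combined with `and` and `or`:

```python
age = 25
has_id = True
if age >= 21 and has_id:
    print("Entry allowed")   # prints, because both conditions are True
else:
    print("Entry denied")
```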
---
## EXERCISE:
* 1. Check to see if you have more than three entries in the `zoo` dictionary you created earlier. If you do, print "more than three". If you don't, print "less than three"
---
```
entries_number = len(zoo)
if entries_number > 3:
print('more than three')
elif entries_number == 3:
print('exactly three')
else:
print('less than three')
```
### Loops
Loops tell a program to do the same thing over and over again until a certain condition is met.
In Python, the two main loop types are for loops and while loops.
#### For Loops
We can loop over collections of things like lists or dictionaries or we can create a looping structure.
```
# LOOPING over a collection
# LIST
# If I want to print a list of fruits, I could write out each print statement like this:
#print("apple")
#print("banana")
#print("mango")
# or I could create a list of fruit
# loop over the list
# and print each item in the list
list_of_fruit = ["apple", "banana", "mango"]
# this is how we write the loop
# "fruit" here is a variable that will hold each item in the list, the fruit, as we loop
# over the items in the list
#print (">>looping>>")
for something in list_of_fruit:
print (something)
### LOOPING a set number of times
# We can do this with range
# range automatically creates a list of numbers in a range
# here we have a list of 10 numbers starting with 0 and increasing by one until we have 10 numbers
# What will be printed
for x in range(0,10):
print (x+1)
# LOOPING over a collection
# DICTIONARY
# We can do the same thing with a dictionary and each association in the dictionary
fruit_price = {"apple" : 0.10, "banana" : 0.50, "mango" : 0.75}
for a, b in fruit_price.items():
print ("%s price is %s" % (a, b))
```
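If you need both the position and the value while looping over a list, `enumerate()` gives you both at once:

```python
list_of_fruit = ["apple", "banana", "mango"]
for index, fruit in enumerate(list_of_fruit):
    print(index, fruit)   # prints: 0 apple, then 1 banana, then 2 mango
```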
---
## EXERCISE:
1\. For each entry in your `zoo` dictionary, print that entry/key
```
for animal, count in zoo.items():
print('amount of %s is %s' % (animal, count))
for animal in zoo.keys():
print('there are', animal+'s', 'in the zoo')
for counts in zoo.values():
print(counts)
```
2\. For each entry in your zoo dictionary, print that value
---
#### While Loops
Similar to if statements, while loops use a boolean test to either continue looping or break out of the loop.
```
# While Loops
my_num = 10
while my_num > 0:
print("My number", my_num)
my_num = my_num - 1
```
NOTE: While loops can be dangerous, because if you forget to include an operation that modifies the variable being tested (above, we're subtracting 1 at the end of each loop), the loop will run forever and your script will never finish.
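One way to guard against a runaway loop is to exit explicitly with `break`:

```python
count = 0
while True:           # this condition alone would loop forever
    count = count + 1
    if count >= 3:
        break         # exit the loop explicitly
print("Looped", count, "times")   # prints: Looped 3 times
```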
That's it. With just these data types, structures, and logic, you can build a program.
Let's do that next with functions.
# -- COMMIT YOUR WORK TO GITHUB --
# Key Points
* Python is an open-source programming language that can be used to do science!
* We store information in variables
* There are a variety of data types and objects for storing data
* You can do math on numeric variables, you can concatenate strings
* There are different Python default data structures including: lists, tuples, sets and dictionaries
* Programming uses conditional statements for flow control such as: if/else, for loops and while loops
# Predictions with Faster RCNN
We need to install coco tools.
```
#!pip install pycocotools-windows
import pycocotools
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image
from PIL import ImageDraw
import torch
import torch.utils.data
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
# engine, utils, and transforms are helper modules from the
# torchvision detection reference scripts (pytorch/vision references/detection)
from engine import train_one_epoch, evaluate
import utils
import transforms as T
# Define current path
current_path = os.getcwd()
def parse_one_annot(path_to_data_file, filename):
data = pd.read_csv(path_to_data_file)
boxes_array = data[data["filename"] == filename][["xmin", "ymin",
"xmax", "ymax"]].values
return boxes_array
class DressDataset(torch.utils.data.Dataset):
def __init__(self, root, data_file, transforms=None):
self.root = root
self.transforms = transforms
self.imgs = sorted(os.listdir(os.path.join(root, "images")))
self.path_to_data_file = data_file
def __getitem__(self, idx):
# load images and bounding boxes
img_path = os.path.join(self.root, "images", self.imgs[idx])
img = Image.open(img_path).convert("RGB")
box_list = parse_one_annot(self.path_to_data_file,self.imgs[idx])
boxes = torch.as_tensor(box_list, dtype=torch.float32)
num_objs = len(box_list)
# there is only one class
labels = torch.ones((num_objs,), dtype=torch.int64)
image_id = torch.tensor([idx])
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:,0])
# suppose all instances are not crowd
iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
target = {}
target["boxes"] = boxes
target["labels"] = labels
target["image_id"] = image_id
target["area"] = area
target["iscrowd"] = iscrowd
if self.transforms is not None:
img, target = self.transforms(img, target)
return img, target
def __len__(self):
return len(self.imgs)
def get_model(num_classes):
# load an object detection model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
# get the number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features,num_classes)
return model
def get_transform(train):
transforms = []
# converts the image, a PIL image, into a PyTorch Tensor
transforms.append(T.ToTensor())
if train:
# during training, randomly flip the training images
# and ground-truth for data augmentation
transforms.append(T.RandomHorizontalFlip(0.5))
return T.Compose(transforms)
dataset = DressDataset(root= current_path + "/dress_dataset",data_file= current_path+ "/dress_dataset/labels/dress_labels.csv",transforms = get_transform(train=False))
data_loader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, num_workers=4,collate_fn=utils.collate_fn)
loaded_model = get_model(num_classes = 2)
loaded_model.load_state_dict(torch.load(current_path + "/model_saved",map_location=torch.device('cpu')))
def predict_box(idx):
img, _ = dataset[idx]
label_boxes = np.array(dataset[idx][1]["boxes"])
#put the model in evaluation mode
loaded_model.eval()
with torch.no_grad():
prediction = loaded_model([img])
image = Image.fromarray(img.mul(255).permute(1, 2,0).byte().numpy())
draw = ImageDraw.Draw(image)
# draw groundtruth
for elem in range(len(label_boxes)):
draw.rectangle([(label_boxes[elem][0], label_boxes[elem][1]),(label_boxes[elem][2], label_boxes[elem][3])],outline ="green", width =3)
for element in range(len(prediction[0]["boxes"])):
boxes = prediction[0]["boxes"][element].cpu().numpy()
score = np.round(prediction[0]["scores"][element].cpu().numpy(),decimals= 4)
if score > 0.9:
draw.rectangle([(boxes[0], boxes[1]), (boxes[2], boxes[3])],outline ="red", width =3)
draw.text((boxes[0], boxes[1]), text = str(score))
if (score > 0.8 and score <= 0.9) :
draw.rectangle([(boxes[0], boxes[1]), (boxes[2], boxes[3])],outline ="purple", width =3)
draw.text((boxes[0], boxes[1]), text = str(score))
return(image)
idxs = [569,583]
for i in range(0,len(idxs)):
plt.figure(i+1,figsize = (5, 5))
plt.subplot(1, 1, 1)
plt.axis('off')
plt.imshow(predict_box(idxs[i]))
plt.show()
```
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/official/automl/automl-text-classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/official/automl/automl-text-classification.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/ai-platform-samples/raw/master/ai-platform-unified/notebooks/official/automl/automl-text-classification.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
# Vertex AI: Create, train, and deploy an AutoML text classification model
## Overview
This notebook walks you through the major phases of building and using a text classification model on [Vertex AI](https://cloud.google.com/vertex-ai/docs/). In this notebook, you use the "Happy Moments" sample dataset to train a model. The resulting model classifies happy moments into categories that reflect the causes of happiness.
### Objective
In this notebook, you learn how to:
* Set up your development environment
* Create a dataset and import data
* Train an AutoML model
* Get and review evaluations for the model
* Deploy a model to an endpoint
* Get online predictions
* Get batch predictions
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI Training and Serving
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
## Before you begin
**Note:** This notebook does not require a GPU runtime.
### Set up your local development environment
**If you are using Colab or Google Cloud Notebooks**, your environment already meets
all the requirements to run this notebook. You can skip this step.
**Otherwise**, make sure your environment meets this notebook's requirements.
You need the following:
* The Google Cloud SDK
* Git
* Python 3
* virtualenv
* Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to [Setting up a Python development
environment](https://cloud.google.com/python/setup) and the [Jupyter
installation guide](https://jupyter.org/install) provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)
1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)
1. [Install
virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)
and create a virtual environment that uses Python 3. Activate the virtual environment.
1. To install Jupyter, run `pip install jupyter` on the
command-line in a terminal shell.
1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
1. Open this notebook in the Jupyter Notebook Dashboard.
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
1. [Enable the Vertex AI, Cloud Storage, and Compute Engine APIs](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component,storage-component.googleapis.com).
1. Follow the "**Configuring your project**" instructions from the Vertex Pipelines documentation.
1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
1. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
### Install additional packages
This notebook uses the Python SDK for Vertex AI, which is contained in the `python-aiplatform` package. You must first install the package into your development environment.
```
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform google-cloud-storage jsonlines
```
### Set your project ID
Finally, you must initialize the client library before you can send requests to the Vertex AI service. With the Python SDK, you initialize the client library as shown in the following cell. This tutorial also uses the Cloud Storage Python library for accessing batch prediction results.
Be sure to provide the ID for your Google Cloud project in the `PROJECT_ID` variable. This notebook uses the `us-central1` region, although you can change it to another region.
**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
```
import os
from datetime import datetime
import jsonlines
from google.cloud import aiplatform, storage
from google.protobuf import json_format
PROJECT_ID = "[your-project-id]"
REGION = "us-central1"
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
aiplatform.init(project=PROJECT_ID, location=REGION)
```
## Create a dataset and import your data
The notebook uses the 'Happy Moments' dataset for demonstration purposes. You can change it to another text classification dataset that [conforms to the data preparation requirements](https://cloud.google.com/vertex-ai/docs/datasets/prepare-text#classification).
Using the Python SDK, you can create a dataset and import the dataset in one call to `TextDataset.create()`, as shown in the following cell.
Creating and importing data is a long-running operation. This next step can take a while. The sample waits for the operation to complete, outputting statements as the operation progresses. The statements contain the full name of the dataset that you will use in the following section.
**Note**: You can close the notebook while you wait for this operation to complete.
```
# Use a timestamp to ensure unique resources
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
src_uris = "gs://cloud-ml-data/NL-classification/happiness.csv"
display_name = f"e2e-text-dataset-{TIMESTAMP}"
ds = aiplatform.TextDataset.create(
display_name=display_name,
gcs_source=src_uris,
import_schema_uri=aiplatform.schema.dataset.ioformat.text.single_label_classification,
sync=True,
)
```
## Train your text classification model
Once your dataset has finished importing data, you are ready to train your model. To do this, you first need the full resource name of your dataset, where the full name has the format `projects/[YOUR_PROJECT]/locations/us-central1/datasets/[YOUR_DATASET_ID]`. If you don't have the resource name handy, you can list all of the datasets in your project using `TextDataset.list()`.
As shown in the following code block, you can pass in the display name of your dataset in the call to `list()` to filter the results.
```
datasets = aiplatform.TextDataset.list(filter=f'display_name="{display_name}"')
print(datasets)
```
When you create a new model, you need a reference to the `TextDataset` object that corresponds to your dataset. You can use the `ds` variable you created previously when you created the dataset or you can also list all of your datasets to get a reference to your dataset. Each item returned from `TextDataset.list()` is an instance of `TextDataset`.
The following code block shows how to instantiate a `TextDataset` object using a dataset ID. Note that this code is intentionally verbose for demonstration purposes.
```
# Get the dataset ID if it's not available
dataset_id = "[your-dataset-id]"
if dataset_id == "[your-dataset-id]":
# Use the reference to the new dataset captured when we created it
dataset_id = ds.resource_name.split("/")[-1]
print(f"Dataset ID: {dataset_id}")
text_dataset = aiplatform.TextDataset(dataset_id)
```
Now you can begin training your model. Training the model is a two part process:
1. **Define the training job.** You must provide a display name and the type of training you want when you define the training job.
2. **Run the training job.** When you run the training job, you need to supply a reference to the dataset to use for training. At this step, you can also configure the data split percentages.
You do not need to specify [data splits](https://cloud.google.com/vertex-ai/docs/general/ml-use). The training job defaults to an 80% training / 10% validation / 10% testing split if you don't provide these values.
To train your model, you call `AutoMLTextTrainingJob.run()` as shown in the following snippets. The method returns a reference to your new `Model` object.
As with importing data into the dataset, training your model can take a substantial amount of time. The client library prints out operation status messages while the training pipeline operation processes. You must wait for the training process to complete before you can get the resource name and ID of your new model, which is required for model evaluation and model deployment.
**Note**: You can close the notebook while you wait for the operation to complete.
```
# Define the training job
training_job_display_name = f"e2e-text-training-job-{TIMESTAMP}"
job = aiplatform.AutoMLTextTrainingJob(
display_name=training_job_display_name,
prediction_type="classification",
multi_label=False,
)
model_display_name = f"e2e-text-classification-model-{TIMESTAMP}"
# Run the training job
model = job.run(
dataset=text_dataset,
model_display_name=model_display_name,
training_fraction_split=0.7,
validation_fraction_split=0.2,
test_fraction_split=0.1,
sync=True,
)
```
## Get and review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the `model` variable you created when you trained the model, or you can list all of the models in your project. When listing your models, you can provide filter criteria to narrow down your search.
```
models = aiplatform.Model.list(filter=f'display_name="{model_display_name}"')
print(models)
```
Using the model name (in the format `projects/[PROJECT_NAME]/locations/us-central1/models/[MODEL_ID]`), you can get its model evaluations. To get model evaluations, you must use the underlying service client.
Building a service client requires that you provide the name of the regionalized hostname used for your model. In this tutorial, the hostname is `us-central1-aiplatform.googleapis.com` because the model was created in the `us-central1` location.
```
# Get the ID of the model
model_name = "[your-model-resource-name]"
if model_name == "[your-model-resource-name]":
# Use the `resource_name` of the Model instance you created previously
model_name = model.resource_name
print(f"Model name: {model_name}")
# Get a reference to the Model Service client
client_options = {"api_endpoint": "us-central1-aiplatform.googleapis.com"}
model_service_client = aiplatform.gapic.ModelServiceClient(
client_options=client_options
)
```
Before you can view the model evaluation you must first list all of the evaluations for that model. Each model can have multiple evaluations, although a new model is likely to only have one.
```
model_evaluations = model_service_client.list_model_evaluations(parent=model_name)
model_evaluation = list(model_evaluations)[0]
```
Now that you have the model evaluation, you can look at your model's scores. If you have questions about what the scores mean, review the [public documentation](https://cloud.google.com/vertex-ai/docs/training/evaluating-automl-models#text).
The results returned from the service are formatted as [`google.protobuf.Value`](https://googleapis.dev/python/protobuf/latest/google/protobuf/struct_pb2.html) objects. You can transform the return object as a `dict` for easier reading and parsing.
```
model_eval_dict = json_format.MessageToDict(model_evaluation._pb)
metrics = model_eval_dict["metrics"]
confidence_metrics = metrics["confidenceMetrics"]
print(f'Area under precision-recall curve (AuPRC): {metrics["auPrc"]}')
for confidence_scores in confidence_metrics:
metrics = confidence_scores.keys()
print("\n")
for metric in metrics:
print(f"\t{metric}: {confidence_scores[metric]}")
```
## Deploy your text classification model
Once your model has completed training, you must deploy it to an _endpoint_ to get online predictions from it. When you deploy the model to an endpoint, a copy of the model is made on the endpoint with a new resource name and display name.
You can deploy multiple models to the same endpoint and split traffic between the various models assigned to the endpoint. However, you must deploy one model at a time to the endpoint. To change the traffic split percentages, you must assign new values on your second (and subsequent) models each time you deploy a new model.
The following code block demonstrates how to deploy a model. The code snippet relies on the Python SDK to create a new endpoint for deployment. The call to `model.deploy()` returns a reference to an `Endpoint` object; you need this reference for online predictions in the next section.
```
deployed_model_display_name = f"e2e-deployed-text-classification-model-{TIMESTAMP}"
endpoint = model.deploy(
deployed_model_display_name=deployed_model_display_name, sync=True
)
```
In case you didn't record the name of the new endpoint, you can get a list of all your endpoints as you did before with datasets and models. For each endpoint, you can list the models deployed to that endpoint. To get a reference to the model that you just deployed, you can check the `display_name` of each model deployed to the endpoint against the model you're looking for.
```
endpoints = aiplatform.Endpoint.list()
endpoint_with_deployed_model = []
for endpoint_ in endpoints:
for model in endpoint_.list_models():
if model.display_name.find(deployed_model_display_name) == 0:
endpoint_with_deployed_model.append(endpoint_)
print(endpoint_with_deployed_model)
```
## Get online predictions from your model
Now that you have your endpoint's resource name, you can get online predictions from the text classification model. To get the online prediction, you send a prediction request to your endpoint.
```
endpoint_name = "[your-endpoint-name]"
if endpoint_name == "[your-endpoint-name]":
endpoint_name = endpoint.resource_name
print(f"Endpoint name: {endpoint_name}")
endpoint = aiplatform.Endpoint(endpoint_name)
content = "I got a high score on my math final!"
response = endpoint.predict(instances=[{"content": content}])
for prediction_ in response.predictions:
ids = prediction_["ids"]
display_names = prediction_["displayNames"]
confidence_scores = prediction_["confidences"]
for count, id in enumerate(ids):
print(f"Prediction ID: {id}")
print(f"Prediction display name: {display_names[count]}")
print(f"Prediction confidence score: {confidence_scores[count]}")
```
## Get batch predictions from your model
You can get batch predictions from a text classification model without deploying it. You must first format all of your prediction instances (the prediction input) as JSONL, store the JSONL file in a Google Cloud Storage bucket, and provide another Cloud Storage bucket to hold your prediction output.
To start, you must first create your predictions input file in JSONL format. Each line in the JSONL document needs to be formatted like so:
```
{ "content": "gs://sourcebucket/datasets/texts/source_text.txt", "mimeType": "text/plain"}
```
The `content` field in the JSON structure must be a Google Cloud Storage URI to another document that contains the text input for prediction.
[See the documentation for more information.](https://cloud.google.com/ai-platform-unified/docs/predictions/batch-predictions#text)
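Note that each line of the input file must be valid JSON. A Python `dict`'s `str()` form uses single quotes and is not valid JSON, so it is safer to serialize each record with `json.dumps`. A minimal sketch (the bucket paths are placeholders):

```python
import json

# Hypothetical instance URIs; replace with paths in your own bucket.
instance_uris = [
    "gs://my-bucket/input_0.txt",
    "gs://my-bucket/input_1.txt",
]

# One JSON object per line, as required by the JSONL format.
jsonl_lines = [
    json.dumps({"content": uri, "mimeType": "text/plain"})
    for uri in instance_uris
]
jsonl_body = "\n".join(jsonl_lines)
print(jsonl_body)
```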
```
instances = [
"We hiked through the woods and up the hill to the ice caves",
"My kitten is so cute",
]
input_file_name = "batch-prediction-input.jsonl"
```
For batch prediction, you must supply the following:
+ All of your prediction instances as individual TXT files on Google Cloud Storage
+ A JSONL file that lists the URIs of all your prediction instances
+ A Google Cloud Storage bucket to hold the output from batch prediction
For this tutorial, the following cells create a new Storage bucket, upload individual prediction instances as text files to the bucket, and then create the JSONL file with the URIs of your prediction instances.
```
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
BUCKET_NAME = "[your-bucket-name]"
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = f"automl-text-notebook-{TIMESTAMP}"
BUCKET_URI = f"gs://{BUCKET_NAME}"
! gsutil mb -l $REGION $BUCKET_URI
# Instantiate the Storage client and create the new bucket
storage_client = storage.Client()  # avoid shadowing the `storage` module
bucket = storage_client.bucket(BUCKET_NAME)
# Iterate over the prediction instances, creating a new TXT file
# for each.
input_file_data = []
for count, instance in enumerate(instances):
instance_name = f"input_{count}.txt"
instance_file_uri = f"{BUCKET_URI}/{instance_name}"
# Add the data to store in the JSONL input file.
tmp_data = {"content": instance_file_uri, "mimeType": "text/plain"}
input_file_data.append(tmp_data)
# Create the new instance file
blob = bucket.blob(instance_name)
blob.upload_from_string(instance)
import json
# str(dict) would produce single-quoted, invalid JSON; serialize with json.dumps
input_str = "\n".join([json.dumps(d) for d in input_file_data])
file_blob = bucket.blob(f"{input_file_name}")
file_blob.upload_from_string(input_str)
```
Now that you have the bucket with the prediction instances ready, you can send a batch prediction request to Vertex AI. When you send a request to the service, you must provide the URI of your JSONL file and your output bucket, including the `gs://` protocols.
With the Python SDK, you can create a batch prediction job by calling `Model.batch_predict()`.
```
job_display_name = "e2e-text-classification-batch-prediction-job"
model = aiplatform.Model(model_name=model_name)
batch_prediction_job = model.batch_predict(
job_display_name=job_display_name,
gcs_source=f"{BUCKET_URI}/{input_file_name}",
gcs_destination_prefix=f"{BUCKET_URI}/output",
sync=True,
)
batch_prediction_job_name = batch_prediction_job.resource_name
```
Once the batch prediction job completes, the Python SDK prints out the resource name of the batch prediction job in the format `projects/[PROJECT_ID]/locations/[LOCATION]/batchPredictionJobs/[BATCH_PREDICTION_JOB_ID]`. You can query the Vertex AI service for the status of the batch prediction job using its ID.
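Since the job ID is the final path segment of that resource name, a tiny helper (illustrative only, not part of the SDK) can extract it:

```python
def batch_job_id(resource_name: str) -> str:
    """Return the trailing [BATCH_PREDICTION_JOB_ID] segment of a full resource name."""
    return resource_name.rsplit("/", 1)[-1]

# Hypothetical resource name in the documented format.
name = "projects/my-project/locations/us-central1/batchPredictionJobs/1234567890"
print(batch_job_id(name))  # -> 1234567890
```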
The following code snippet demonstrates how to create an instance of the `BatchPredictionJob` class to review its status. Note that you need the full resource name printed out from the Python SDK for this snippet.
```
from google.cloud.aiplatform import jobs
batch_job = jobs.BatchPredictionJob(batch_prediction_job_name)
print(f"Batch prediction job state: {str(batch_job.state)}")
```
After the batch job has completed, you can view the results of the job in your output Storage bucket. You might want to first list all of the files in your output bucket to find the URI of the output file.
```
BUCKET_OUTPUT = f"{BUCKET_URI}/output"
! gsutil ls -a $BUCKET_OUTPUT
```
The output from the batch prediction job should be contained in a folder (or _prefix_) that includes the name of the batch prediction job plus a time stamp for when it was created.
For example, if your batch prediction job name is `my-job` and your bucket name is `my-bucket`, the URI of the folder containing your output might look like the following:
```
gs://my-bucket/output/prediction-my-job-2021-06-04T19:54:25.889262Z/
```
To read the batch prediction results, you must download the files locally and open them. The next cell copies all of the files in `BUCKET_OUTPUT` into a local folder.
```
RESULTS_DIRECTORY = "prediction_results"
RESULTS_DIRECTORY_FULL = f"{RESULTS_DIRECTORY}/output"
# Create missing directories
os.makedirs(RESULTS_DIRECTORY, exist_ok=True)
# Get the Cloud Storage paths for each result
! gsutil -m cp -r $BUCKET_OUTPUT $RESULTS_DIRECTORY
# Get most recently modified directory
latest_directory = max(
[
os.path.join(RESULTS_DIRECTORY_FULL, d)
for d in os.listdir(RESULTS_DIRECTORY_FULL)
],
key=os.path.getmtime,
)
print(f"Local results folder: {latest_directory}")
```
With all of the results files downloaded locally, you can open them and read the results. In this tutorial, you use the [`jsonlines`](https://jsonlines.readthedocs.io/en/latest/) library to read the output results.
The following cell opens up the JSONL output file and then prints the predictions for each instance.
```
# Get downloaded results in directory
results_files = []
for dirpath, subdirs, files in os.walk(latest_directory):
for file in files:
if file.find("predictions") >= 0:
results_files.append(os.path.join(dirpath, file))
# Consolidate all the results into a list
results = []
for results_file in results_files:
# Open each result
with jsonlines.open(results_file) as reader:
for result in reader.iter(type=dict, skip_invalid=True):
instance = result["instance"]
prediction = result["prediction"]
print(f"\ninstance: {instance['content']}")
for key, output in prediction.items():
print(f"\n{key}: {output}")
```
## Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
* Dataset
* Training job
* Model
* Endpoint
* Batch prediction
* Batch prediction bucket
```
if os.getenv("IS_TESTING"):  # os.getenv returns a string, so test truthiness rather than `is True`
! gsutil rm -r $BUCKET_URI
batch_job.delete()
# `force` parameter ensures that models are undeployed before deletion
endpoint.delete(force=True)
model.delete()
text_dataset.delete()
# Training job
job.delete()
```
## Next Steps
After completing this tutorial, see the following documentation pages to learn more about Vertex AI:
* [Preparing text training data](https://cloud.google.com/vertex-ai/docs/datasets/prepare-text)
* [Training an AutoML model using the API](https://cloud.google.com/vertex-ai/docs/training/automl-api#text)
* [Evaluating AutoML models](https://cloud.google.com/vertex-ai/docs/training/evaluating-automl-models#text)
* [Deploying a model using the Vertex AI API](https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api#aiplatform_create_endpoint_sample-python)
* [Getting online predictions from AutoML models](https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api#aiplatform_create_endpoint_sample-python)
* [Getting batch predictions](https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions#text)
```
import matplotlib.pyplot as plt
import numpy as np
from random import sample, seed
from astropy.io import ascii, fits
import astropy.units as u
from glob import glob
from usefulFuncs import ifind_tpfs
from LCFeatureExtraction import error_estimate, amplitude_estimate
from lightkurve import MPLSTYLE
from lightkurve import open as open_lc
# from k2spin import prot
```
## Light Curve Quality Indicator: Sigma estimate
### Intro
In this notebook I originally wanted to test how my amplitude estimate compares to the noise estimate as an indicator of light-curve quality. However, I am afraid my amplitude estimate is not as sophisticated as my error estimate and is sensitive to systematic trends. Therefore I will only proceed to assess whether my error estimate is a good quality indicator for my light curves. I will focus on light curves that have some sort of periodic signal (regardless of trends).
### Target Selection
In order to make this whole process more efficient I am only selecting light curves that have a clear periodic signal.
These light curves were handpicked and put into three groups: **low**, **high**, and **mixed** frequency oscillations, where mixed means there is more than one dominant signal in the light curve. For each group I handpicked a "large" sample by skimming through the light-curve collection (from this cluster); they are all in the folder `LightCurvesPlots`.
For the test in this notebook I simply combined all these handpicked light curves and grabbed a random sample of `N_targets` light curves.
### Noise Calculation
The noise calculation is better discussed in the LCFeatureExploration notebook.
It is simply the median absolute error between the light curve and a Gaussian-smoothed version of it.
It is worth noting that the error estimates are target-specific; they cannot be compared across targets. However, they do not need to be in order to automate the aperture-method selection on a per-target basis.
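As a rough numpy sketch of that idea (my own simplification, not the actual `error_estimate` implementation), the noise can be estimated as the median absolute deviation between the flux and a Gaussian-smoothed copy of it:

```python
import numpy as np

def noise_estimate(flux, kernel_width=5.0):
    """Median absolute deviation between flux and a Gaussian-smoothed version of it."""
    # Build a normalized Gaussian kernel spanning +/- 4 sigma (in samples).
    half = int(4 * kernel_width)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / kernel_width) ** 2)
    kernel /= kernel.sum()
    smoothed = np.convolve(flux, kernel, mode="same")
    return np.median(np.abs(flux - smoothed))

# A slow sinusoid plus white noise: the estimate tracks the noise level,
# not the signal amplitude.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
flux = 1.0 + 0.01 * np.sin(2 * np.pi * t) + rng.normal(0, 1e-3, t.size)
print(noise_estimate(flux))
```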
### Assessing Light Curve Quality
In order to visually assess the quality of the light curve I am creating a grid of plots to look at all aperture methods at once, for each target.
Every light curve is normalized, sigma clipped, and plotted along with the error estimate. To assess whether the metric was successful I assess each target independently, and look at the overall success across all targets. For each target I first find the light curve with the lowest noise value and compare it to the other two light curves. If the selected light curve has a visually detectable periodic signal and seems to have the fewest artifacts, then I claim it as successful. Such artifacts include trends and noise that may impact the power spectral density.
### Discussion
Overall I think the noise metric is a decent quality indicator. It seems to also be sensitive to trends, or maybe it is just that those light curves with trends also have larger noise. As expected, the noise metric is not useful for determining the strength of the signal. In other words, I can't use it to determine which light curve has the larger amplitudes. This is clear for targets 93549226 & 93014257: while the light curves with the lowest noise are good, there are other light curves that can yield a higher power for the intrinsic signal. Using the metric can still help us increase our confidence in our detected period estimates.
```
# Parameters
outlier_sigma = 3.0
N_targets = 15
seed(12345)
# Get quality mask
with fits.open('DataInput/ClusterQuality/Sector8_Sample.fits.gz') as quality_sample:
# For Chelsea's quality flags:
# 0 means good, 1 means bad
quality_flags = quality_sample[1].data['quality']
good_mask = ~quality_flags.astype(bool)
colors = ['k' if flag else 'red' for flag in good_mask]
# Get filepaths to LightCurve FITS
src_lcfs = 'LightCurvesFITS/*.fits'
fits_paths = glob(src_lcfs)
# Pool of hand-picked targets with at least one light curve whose frequency appears to be high/low or mixed
high_IDs = ['92580385', '92581404', '93265597', '93022865', '93551193', '93630147', '93831298', '93912230', '144996772', '144997798']
mix_IDs = ['93550865', '93270923', '94185042', '144759493']
low_IDs = ['92475268', '92583560', '93013696', '93014257', '93016484', '93016484', '93269120', '93549309', '94107183', '14475228']
all_freq_ids = high_IDs + mix_IDs + low_IDs
sample_freq_ids = sample(all_freq_ids, N_targets)
```
### Plot Grid Description
Here is the plot grid for assessment. The light curves to be assessed are selected above, and in order to use different light curves one must change the random seed. The grid has a size of N_targets by 3 (aperture-methods). The order of the columns starting from the left: Handpicked aperture, Percentile Ap, Threshold Ap.
```
fig, axes = plt.subplots(N_targets, 3, figsize=(18, 2.5*N_targets // 1), sharey='row', sharex='all')
titles = 'Pipeline Ap', 'Percentile Ap', 'Threshold Ap'
for i, ticid in enumerate(sample_freq_ids):
# Get the light curves associated with this TICID
# By sorting them, I get: Handpicked Ap, Percentile Ap, Threshold Ap
lc_paths = sorted([fits_path for fits_path in fits_paths if ticid in fits_path])
for j, lc_path in enumerate(lc_paths):
# Import Light Curve, Sigma Clip, and Scatter
lc = open_lc(lc_path).get_lightcurve('FLUX')
if lc._flux_unit.is_equivalent(u.dimensionless_unscaled):
clipped_lc, clipped_mask = lc.remove_outliers(sigma=outlier_sigma, return_mask=True)
else:
clipped_lc, clipped_mask = lc.normalize().remove_outliers(sigma=outlier_sigma, return_mask=True)
if axes[i, j].is_first_col():
clipped_lc.scatter(ax=axes[i, j], c=[c for c, b in [*zip(colors, ~clipped_mask)] if b], show_colorbar=False)
else:
clipped_lc.scatter(ax=axes[i, j], c=[c for c, b in [*zip(colors, ~clipped_mask)] if b], show_colorbar=False, label='')
# Estimate Features
sigma = error_estimate(clipped_lc, quality=good_mask[~clipped_mask])
# amplitude = amplitude_estimate(clipped_lc, quality=good_mask[~clipped_mask]) / sigma
# Workout plot details
with plt.style.context(MPLSTYLE):
# axes[i, j].set_title(f'{titles[j]}\nSNR: {amplitude:.2f} Noise: {sigma*1e6:.2f}')
axes[i, j].set_title(f'{titles[j]}\nNoise: {sigma*1e6:.2f} ppm')
if not axes[i, j].is_first_col():
axes[i, j].set_ylabel('')
if not axes[i, j].is_last_row():
axes[i, j].set_xlabel('')
# if clipped_lc.flux.max() > 1.1:
axes[i, j].set_ylim(0.95, 1.05)
# Tighten plot to look better
plt.tight_layout()
```
<p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
<small>
<br>November 24, 2017: seminar time
<br>November 11-23, 2017: minor reviews
<br>November 9, 2017: refutation methods
<br>November 8, 2017: sets
<br>November 7, 2017: init
</small>
</div>
</p>
```
from IPython.display import Markdown, Image, Latex
from collections import defaultdict
from muk.core import *
from muk.ext import *
from mclock import *
from sympy import IndexedBase, symbols, latex, init_printing, Eq, Matrix, binomial
init_printing()
toc = ["", "exams & courses & conferences", "what I've done", "what I'm working on"]#, "thesis arguments"]
toc_iter = iter(toc[1:])
def reference_to_this_talk():
src = '<a href="{href}">{href}</a>'
return Markdown(src.format(href=r'http://massimo-nocentini.github.io/PhD/second-year-summary/talk.html'))
def table_of_contents():
src = r'# TOC'
return Markdown('\n- '.join(toc))
def greetings(smiley=True):
return Markdown("<h1>{greet} {smile}</h1>".format(
greet="Thanks for coming ", smile=":)" if smiley else ""))
def next_topic():
return Markdown("# {topic}".format(topic=next(toc_iter)))
__AUTHOR__ = ("Massimo Nocentini",
"massimo.nocentini@unifi.it",
"https://github.com/massimo-nocentini/")
__ACKNOWLEDGEMENT__ = {"Beatrice Donati", "Marco Maggesi", }
__ABSTRACT__ = '''
The relational language __microkanren__ is presented as a goal-based,
Pythonic implementation with a *fair, complete* search strategy.
'''
__SELF__ = r'http://massimo-nocentini.github.io/PhD/mkpy/talk.html'
```
# raw outline
- programming *abstractions* and *paradigms*: *logic* to the rescue
- *microkanren* relational language
- examples of *puzzles* and *combinatorics*
## it comes down to *sets*, eventually
- our main track concerns generation of __sets__, possibly *inductively* defined
- how programming languages allow us to do that?
- is it easy in every paradigm?
- what about imperative? functional? ...*relational*?
- by Church-Turing thesis, it should be possible even in *assembly* code
- what if some kind of *elegance* is required?
- don't you code not-so-easy to grasp implementation, do you?
# *primes* numbers
generate them using the [sieve of Eratosthenes][sieve]:
>iteratively mark as composite the multiples of each prime, starting with the first prime number, 2. The multiples of a given prime are generated as a sequence of numbers starting from that prime, with constant difference between them that is equal to that prime.
[sieve]:https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
## imperative style
```
def filter_primes(primes):
if not primes: return [] # base case for recursion
prime, *nats = primes # get the next prime
for i in range(len(nats)): # for every other number
if nats[i] % prime == 0: # check if it is composite
nats[i] = None # if yes, cancel it
return [prime] + filter_primes([n for n in nats if n]) # recur
assert (filter_primes(range(2,52)) ==
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47])
```
## functional style
```
def filter_primes(primes):
if not primes: return [] # base case for recursion
prime, *nats = primes # get the next prime
multiples = {number for number in nats
if number % prime == 0} # enum multiples of prime
return [prime] + filter_primes(sorted(set(nats) - multiples)) # recur
assert (filter_primes(range(2,52)) ==
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47])
```
## declarative style
```haskell
> primes = filter_prime [2..] -- infinite list!
where filter_prime (p:xs) =
p : filter_prime [x | x <- xs, x `mod` p /= 0] -- lazily
> take 15 primes == [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47]
True
```
# *inductively-defined* sets
## a curious numbers machine
Let $▢, \triangle$ be natural numbers in machine
$$
\mathcal{C} = \left \lbrace{ \over 2▢ \stackrel{\circ}{\rightarrow} ▢} , {▢ \stackrel{\circ}{\rightarrow} \triangle \over 3▢ \stackrel{\circ}{\rightarrow} \triangle 2 \triangle} \right \rbrace
$$
Questions:
- enumerate $ \alpha \stackrel{\circ}{\rightarrow} \beta $, namely the set of numbers that the machine can produce
- does exist a number $\alpha$ such that $ \alpha \stackrel{\circ}{\rightarrow} \alpha $?
- does exist a number $\beta$ such that $ \beta \stackrel{\circ}{\rightarrow} \alpha\beta $, for every number $\alpha$?
*what paradigm do you feel comfortable with in order to answer these requests?*<br>
*are you able to generalize the algorithm to handle an arbitrary, inductively defined machine*?
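Before going relational, here is how the machine's two rules read in plain functional Python, with numbers represented as digit strings (my sketch, not Smullyan's notation):

```python
def machine(n: str):
    """Output of the curious numbers machine on digit-string n, or None if no rule applies."""
    if n.startswith("2") and len(n) > 1:       # rule 1: 2X -> X
        return n[1:]
    if n.startswith("3") and len(n) > 1:       # rule 2: if X -> Y, then 3X -> Y2Y
        y = machine(n[1:])
        return None if y is None else y + "2" + y
    return None

print(machine("2345"))   # rule 1: -> 345
print(machine("325"))    # 25 -> 5, so 325 -> 525
print(machine("323"))    # -> 323
```

Note that `machine("323") == "323"`, which answers the fixed-point question affirmatively.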
---
this is part of a puzzle in the chapter *The mystery of the Montecarlo lock* <br>
of the book [*The Lady or the Tiger*][book] by logician Raymond Smullyan.
[book]:https://www.amazon.com/Lady-Tiger-Other-Puzzles-Recreational/dp/048647027X
## context-free grammars
let $\mathcal{D}$ be the set of [*Dyck paths*][dyck] and let $\leadsto$ be a *CFG* defined as follows
$$
\left\lbrace\begin{array}{l}
\leadsto = \varepsilon \\
\leadsto = \diagup \leadsto \diagdown \leadsto \\
\end{array}\right\rbrace
$$
Questions:
- enumerate $\mathcal{D}$ using $\leadsto$
- what values for α and β work for the following path to be in $\mathcal{D}$?
α
/ β
/ \
---
this CFG has a nice combinatorial interpretation; actually, any other CFG would work
[dyck]:http://mathworld.wolfram.com/DyckPath.html
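For comparison, a direct (non-relational) enumeration of this grammar in Python, writing the steps as brackets, generates all Dyck words of semilength $n$; their counts are the Catalan numbers:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def dyck(n):
    """All Dyck words with n opening brackets, via the rule  w = '(' u ')' v."""
    if n == 0:
        return ("",)
    return tuple("(" + u + ")" + v
                 for k in range(n)          # k pairs nested inside the first '(' ... ')'
                 for u in dyck(k)
                 for v in dyck(n - 1 - k))

print([len(dyck(n)) for n in range(6)])  # -> [1, 1, 2, 5, 14, 42], the Catalan numbers
```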
## recurrence relations
```
P = symbols(r'\mathcal{P}_{6}') # the *Pascal matrix*
Eq(P, Matrix(6, 6, binomial), evaluate=False)
```
$d_{nk}$ denotes the coefficient at row $n$ and column $k$ satisfying
$$d_{n, k} = d_{n-1, k-1} + d_{n-1, k}$$
Questions:
- are you able to unfold the recurrence at different depths for the generic coeff?
- can you generalize for an arbitrary recurrence?
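The recurrence can be checked directly against the closed form $\binom{n}{k}$ in a few lines of Python:

```python
from math import comb

def pascal(rows):
    """Build Pascal's triangle row by row using d[n][k] = d[n-1][k-1] + d[n-1][k]."""
    d = [[1]]
    for n in range(1, rows):
        prev = d[-1]
        d.append([1] + [prev[k - 1] + prev[k] for k in range(1, n)] + [1])
    return d

triangle = pascal(6)
assert all(triangle[n][k] == comb(n, k)
           for n in range(6) for k in range(n + 1))
print(triangle[5])  # -> [1, 5, 10, 10, 5, 1]
```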
# a bit of theory: *resolutions methods*
## by refutation
Let $\alpha$ be a *CNF* sentence and $M(\alpha)$ the set of models that satisfy it;<br>
a model is a set of assignments that make $\alpha$ true.
$\alpha$ is *valid* if it is true in *all* models;
otherwise, it is *satisfiable* if it is true in *some* model.
*Logical reasoning* boils down to *entailment* relation $\models$ between sentences
$$
M(\alpha) \subseteq M(\beta) \leftrightarrow \alpha \models \beta \leftrightarrow
(\alpha \Rightarrow \beta) \text{ is valid } \leftrightarrow \neg(\neg\alpha \vee \beta) \text{ is unsatisfiable}
$$
the *resolution rule* is a *complete* inference algorithm, let $l_{q}, m_{w} \in\lbrace 0,1\rbrace$ in
$$
{
l_{0}\vee \ldots \vee l_{i} \vee \ldots \vee l_{j-1} \quad m_{0}\vee \ldots\vee m_{o}\vee\ldots\vee m_{k-1} \quad l_{i} = \neg m_{o}
\over
l_{0}\vee \ldots \vee l_{i-1}\vee l_{i+1} \vee \ldots\vee l_{j-1} \vee
m_{0}\vee \ldots \vee m_{o-1}\vee m_{o+1}\vee\ldots\vee m_{k-1}
}
$$
finally, if two clauses resolve to yield the empty clause $\varepsilon$ then $\alpha\models\beta$ holds
algorithm [DPLL][dpll] is a recursive, depth-first enumeration of models using resolution<br>
paired with *early termination*, *pure symbol* and *unit clause* heuristics to speed up.
[dpll]:https://en.wikipedia.org/wiki/DPLL_algorithm
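The enumeration that DPLL performs can be sketched compactly; here is a toy version (unit propagation plus splitting, without the pure-symbol heuristic), where clauses are sets of signed integers and `-x` negates `x`:

```python
def dpll(clauses, assignment=None):
    """Return a set of literals satisfying all clauses, or None if unsatisfiable."""
    assignment = set() if assignment is None else assignment
    clauses = [frozenset(c) for c in clauses]
    changed = True
    while changed:                       # unit propagation
        changed = False
        for clause in clauses:
            live = [l for l in clause if -l not in assignment]
            if not live:
                return None              # clause falsified: backtrack
            if any(l in assignment for l in live):
                continue                 # clause already satisfied
            if len(live) == 1:           # unit clause forces its literal
                assignment.add(live[0])
                changed = True
    for clause in clauses:               # split on an unassigned literal
        for l in clause:
            if l not in assignment and -l not in assignment:
                return (dpll(clauses, assignment | {l})
                        or dpll(clauses, assignment | {-l}))
    return assignment                    # every clause satisfied

print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))  # a model, e.g. {1, 2, 3}
print(dpll([{1}, {-1}]))                 # -> None (unsatisfiable)
```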
## by unification
it is a process of solving *equations between symbolic expressions*;<br>
a *solution* is denoted as a *substitution* $\theta$, namely a mapping assigning<br>
a symbolic value to each variable of the problem's expressions
*finite terms*: $$\lbrace cons(x,cons(x,nil)) = cons(2,y)\rbrace \theta \leftrightarrow \theta = \lbrace x \mapsto 2, y \mapsto cons(2,nil) \rbrace$$
*infinite terms*: $$ \lbrace y = cons(2,y) \rbrace \theta \leftrightarrow \theta = \lbrace y \mapsto cons(2,cons(2,cons(2,...))) \rbrace$$
let $G$ be a set of equations, unification proceeds according to the following rules:
- *delete*: $$G \cup \lbrace t = t \rbrace \rightarrow G$$
- *decompose*: $$G \cup \lbrace f(s_{0}, \ldots, s_{k}) = f(t_{0}, \ldots, t_{k})\rbrace \rightarrow G \cup \lbrace s_{0}=t_{0},\ldots, s_{k}=t_{k} \rbrace$$
- *conflict*: if $f\neq g \vee k\neq m$ then $$G \cup \lbrace f(s_{0}, \ldots, s_{k}) = g(t_{0}, \ldots, t_{m})\rbrace \rightarrow \,\perp$$
- *eliminate*: if $x \not\in vars(t)$ and $x \in vars(G)$ then $$G \cup \lbrace x = t\rbrace \rightarrow G\lbrace x \mapsto t\rbrace \cup \left\lbrace x \triangleq t\right\rbrace $$
- *occur check*: if $x \in vars(f(s_{0},\ldots,s_{k}))$ then $$G \cup \lbrace x = f(s_{0}, \ldots, s_{k})\rbrace \rightarrow \,\perp$$
without the *occur check*, generating a substitution $\theta$ is a *recursively enumerable* problem
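The rules above translate almost mechanically into code. In this toy sketch (mine, not a library's) terms are tuples `(functor, arg, …)`, any string is a variable, `()` stands for `nil`, and a substitution is a dict:

```python
def walk(t, s):
    """Resolve a variable through substitution s until it is no longer bound."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(x, t, s):
    """Occur check: does variable x appear inside term t (under s)?"""
    t = walk(t, s)
    if t == x:
        return True
    return isinstance(t, tuple) and any(occurs(x, a, s) for a in t[1:])

def unify(u, v, s=None):
    """Most general unifier extending s, or None on conflict / occur check."""
    s = {} if s is None else s
    u, v = walk(u, s), walk(v, s)
    if u == v:                                         # delete
        return s
    if isinstance(u, str):                             # eliminate (with occur check)
        return None if occurs(u, v, s) else {**s, u: v}
    if isinstance(v, str):
        return None if occurs(v, u, s) else {**s, v: u}
    if (isinstance(u, tuple) and isinstance(v, tuple)  # decompose
            and u and v and len(u) == len(v) and u[0] == v[0]):
        for a, b in zip(u[1:], v[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None                                        # conflict

# {cons(x, cons(x, nil)) = cons(2, y)}: x -> 2, y -> cons(x, nil), i.e. cons(2, nil)
print(unify(('cons', 'x', ('cons', 'x', ())), ('cons', 2, 'y')))
# {y = cons(2, y)} is rejected by the occur check
print(unify('y', ('cons', 2, 'y')))  # -> None
```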
# microkanren
meet _microKanren_
- a DSL for relational programming, in Scheme
- purely functional of [_miniKanren_][mk]
- *explicit streams* of satisfying states, _goal-based_ approach
- _unification_ instead of _SLD-NF resolution_
- complete, _unfair_ search strategy
my contribution
- _Pythonic_ [implementation][mkpy]: functional at the core, objective at the interface
- generators subsume _countably_-satisfiable relations; complete, _fair_ [search][dovetail]
- _The Reasoned Schemer_ fully tested via [Travis CI][travis]; moreover, [read the docs][rtfd]
- case studies: Smullyan puzzles and combinatorics
- tweaking HOL Light for _certified deductions_, [wip][klight]
[mk]:http://minikanren.org/
[travis]:https://travis-ci.org/massimo-nocentini/microkanrenpy
[rtfd]:http://microkanrenpy.readthedocs.io/en/latest/
[klight]:https://github.com/massimo-nocentini/kanren-light
[mkpy]:https://github.com/massimo-nocentini/microkanrenpy
[dovetail]:http://microkanrenpy.readthedocs.io/en/latest/under_the_hood.html#muk.core.mplus
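The fair search boils down to dovetailing: instead of exhausting the first stream of states before touching the second, yield from each live stream in turn, so an infinite stream cannot starve the others. In plain Python generators the principle looks like this (a sketch of the idea, not the actual `muk.core.mplus`):

```python
from itertools import count, islice

def interleave(*streams):
    """Yield one element from each live stream in round-robin order."""
    streams = list(streams)
    while streams:
        stream = streams.pop(0)
        try:
            yield next(stream)
        except StopIteration:
            continue                 # exhausted stream is dropped
        streams.append(stream)       # re-enqueue: this is the fairness

evens = (2 * n for n in count())
fives = iter(lambda: 5, None)        # an infinite stream of 5s
print(list(islice(interleave(evens, fives), 8)))  # -> [0, 5, 2, 5, 4, 5, 6, 5]
```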
```
rvar(0) # a logic variable
run(succeed) # a goal that always succeeds
run(fail) # a goal that always fails
run(fresh(lambda q: succeed)) # a free variable without association
run(unify(3, 3)) # unification of ground values
run(fresh(lambda q: unify(3, q))) # unification of a variable
run(fresh(lambda q: unify([[2, 3], 1, 2, 3], [q, 1] + q))) # list unification
run(fresh(lambda q, z: unify(q, z) & unify(z, 3))) # co-sharing
run(fresh(lambda q: unify(q, False) | unify(q, True))) # disjunction
run(fresh(lambda q:
fresh(lambda q: unify(q, False)) &
unify(q, True))) # conjunction
def father(p, s):
return conde([unify(p, 'paul'), unify(s, 'jason')],
[unify(p, 'john'), unify(s, 'henry')],
[unify(p, 'jason'), unify(s, 'tom')],
[unify(p, 'peter'), unify(s, 'brian')],
[unify(p, 'tom'), unify(s, 'peter')])
def grand_father(g, s):
return fresh(lambda p: father(g, p) & father(p, s))
run(fresh(lambda rel, p, s: grand_father(p, s) & unify([p, s], rel)))
```
# a curious numbers machine
Recap, let $▢, \triangle$ be natural numbers in machine
$$
\mathcal{C} = \left \lbrace{ \over 2▢ \stackrel{\circ}{\rightarrow} ▢} , {▢ \stackrel{\circ}{\rightarrow} \triangle \over 3▢ \stackrel{\circ}{\rightarrow} \triangle 2 \triangle} \right \rbrace
$$
```python
def machine(*, rules): # an abstract machine
def M(a, b):
return condi(*[[r(a, b, machine=M), succeed] for r in rules])
return M
def associateo(g, g2g): # a number ctor
return appendo(g, [2]+g, g2g)
def mcculloch_first_ruleo(a, b, *, machine): # machine's first rule
return unify([2]+b, a)
def mcculloch_second_ruleo(a, g2g, *, machine): # machine's second rule
return fresh(lambda n, g: unify([3]+n, a)
& associateo(g, g2g)
& machine(n, g))
mccullocho = machine(rules=[ mcculloch_second_ruleo, mcculloch_first_ruleo, ])
```
## about the first rule
```
run(mccullocho([2,3,4,5], [3,4,5])) # check
run(fresh(lambda α, β: mccullocho([4]+β, α))) # attempt to break rules
run(fresh(lambda α: mccullocho([2,3,4,5], α))) # computing forwards
run(fresh(lambda α: mccullocho(α, [3,4,5]))) # computing backwards
```
## about the second rule
```
run(mccullocho([3,2,5], [5,2,5]))
run(fresh(lambda α: mccullocho([3,3,2,5], α)), n=1)
run(fresh(lambda β: mccullocho(β, [5,2,5,2,5,2,5])))
```
## answers
```
run(fresh(lambda p, α, β: mccullocho(α, β) & unify([α, β], p)), n=10) # enum
run(fresh(lambda α: mccullocho(α, α)), n=1) # fixpoint
run(fresh(lambda out, γ, αγ:
mcculloch_lawo(γ, αγ)
& unify([γ, αγ], out)), n=5) # McCulloch's law
```
# Dyck paths' CFG
```
def dycko(α):
return conde([nullo(α), succeed],
else_clause=fresh(lambda β, γ:
appendo(['(']+β, [')']+γ, α) @
(dycko(β) @ dycko(γ))))
paths = run(fresh(lambda α: dycko(α)), n=80)
D = defaultdict(list)
for α in map(lambda α: ''.join(α), paths):
D[len(α)//2].append(α)
from collections import namedtuple
dyck = namedtuple('dyck', ['paths', 'count'])
[dyck(paths, len(paths)) for i in range(5) for paths in [D[i]]]
```
# Pascal triangle
```
P = IndexedBase('P')
n, m = symbols('n m')
def pascalo(depth, r, c, α):
if not depth: return unify([P[r,c]], α)
return fresh(lambda β, γ: (pascalo(depth-1, r-1, c-1, β) @
pascalo(depth-1, r-1, c, γ) @
appendo(β, γ, α)))
unfoldings = {d:sum(addends)
for d in range(6)
for addends in run(fresh(lambda α: pascalo(d, n, m, α)))}
Matrix(5, 1, lambda i, j: unfoldings[i+1])
import this # Easter egg!!
greetings(smiley=True)
def fives(x):
return unify(5, x) | fives(x)
try:
run(fresh(lambda x: fives(x)))
except RecursionError:
pass
def fives(x):
return unify(5, x) | fresh(lambda y: fives(y))
run(fresh(fives), n=5)
g = fresh(lambda x: fives(x))
states = g(emptystate())
[next(states) for i in range(5)]
def fives(x):
return unify(5, x) | fresh(lambda: fives(x))
run(fresh(fives), n=5)
g = fresh(lambda x: fives(x))
states = g(emptystate())
[next(states) for i in range(5)]
def nats(x, n=0):
return unify(n, x) | fresh(lambda: nats(x, n+1))
run(fresh(lambda x: nats(x)), n=10)
def nullo(l):
return unify([], l)
def appendo(r, s, out):
def A(r, out):
return conde([nullo(r), unify(s, out)],
else_clause=fresh(lambda a, d, res:
unify([a]+d, r) &
unify([a]+res, out) &
fresh(lambda: A(d, res))))
return A(r, out)
run(fresh(lambda l, q: appendo([1,2,3]+q, [4,5,6], l)), n=5)
run(fresh(lambda r, x, y:
appendo(x, y, ['cake', 'with', 'ice', 'd', 't']) &
unify([x, y], r)))
```
---
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
# Landscape Analysis
## Load in Raw Data
Go through each record, load in supporting objects, flatten everything into records, and put into a massive dataframe.
```
import recirq
import cirq
import numpy as np
import pandas as pd
from datetime import datetime
from recirq.qaoa.experiments.p1_landscape_tasks import \
DEFAULT_BASE_DIR, DEFAULT_PROBLEM_GENERATION_BASE_DIR, DEFAULT_PRECOMPUTATION_BASE_DIR, \
ReadoutCalibrationTask
records = []
ro_records = []
for record in recirq.iterload_records(dataset_id="2020-03-tutorial", base_dir=DEFAULT_BASE_DIR):
record['timestamp'] = datetime.fromisoformat(record['timestamp'])
dc_task = record['task']
if isinstance(dc_task, ReadoutCalibrationTask):
ro_records.append(record)
continue
pgen_task = dc_task.generation_task
problem = recirq.load(pgen_task, base_dir=DEFAULT_PROBLEM_GENERATION_BASE_DIR)['problem']
record['problem'] = problem.graph
record['problem_type'] = problem.__class__.__name__
record['bitstrings'] = record['bitstrings'].bits
recirq.flatten_dataclass_into_record(record, 'task')
recirq.flatten_dataclass_into_record(record, 'generation_task')
records.append(record)
# Associate each data collection task with its nearest readout calibration
for record in sorted(records, key=lambda x: x['timestamp']):
record['ro'] = min(ro_records, key=lambda x: abs((x['timestamp']-record['timestamp']).total_seconds()))
df_raw = pd.DataFrame(records)
df_raw.head()
```
## Narrow down to Relevant Data
Drop unnecessary metadata and use bitstrings to compute the expected value of the energy. In general, it's better to save the raw data and lots of metadata so we can use it if it becomes necessary in the future.
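Concretely, for an Ising-type objective C = sum over edges (i, j) of w_ij * z_i * z_j, each bitstring's energy is a sum over the problem graph's edges with spins z = 1 - 2b in {+1, -1}. A stripped-down version of what `hamiltonian_objectives` does (ignoring the qubit permutation and readout correction it also handles) might look like:

```python
import numpy as np

def bitstring_energies(bitstrings, edges):
    """Ising energy sum of w * z_i * z_j over weighted edges, per row of 0/1 bitstrings."""
    z = 1 - 2 * np.asarray(bitstrings)          # map bit 0/1 -> spin +1/-1
    return sum(w * z[:, i] * z[:, j] for i, j, w in edges)

edges = [(0, 1, 1.0), (1, 2, 1.0)]              # toy 3-qubit problem
bits = np.array([[0, 0, 0],
                 [0, 1, 0],
                 [1, 0, 1]])
print(bitstring_energies(bits, edges))          # -> [ 2. -2. -2.]
```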
```
from recirq.qaoa.simulation import hamiltonian_objectives
def compute_energies(row):
permutation = []
qubit_map = {}
for i, q in enumerate(row['qubits']):
fi = row['final_qubits'].index(q)
permutation.append(fi)
qubit_map[i] = q
return hamiltonian_objectives(row['bitstrings'],
row['problem'],
permutation,
row['ro']['calibration'],
qubit_map)
# Start cleaning up the raw data
df = df_raw.copy()
df = df.drop(['line_placement_strategy',
'generation_task.dataset_id',
'generation_task.device_name'], axis=1)
# Compute energies
df['energies'] = df.apply(compute_energies, axis=1)
df = df.drop(['bitstrings', 'problem', 'ro', 'qubits', 'final_qubits'], axis=1)
df['energy'] = df.apply(lambda row: np.mean(row['energies']), axis=1)
# We won't do anything with raw energies right now
df = df.drop('energies', axis=1)
# Do timing somewhere else
df = df.drop([col for col in df.columns if col.endswith('_time')], axis=1)
df
```
## Compute theoretical landscape
Use a simulator to compute the noiseless landscape. This can get quite expensive, so it would be better practice to factor this out into Tasks in their own right: https://github.com/quantumlib/ReCirq/issues/21
```
def get_problem_graph(problem_type,
n=None,
instance_i=0):
if n is None:
if problem_type == 'HardwareGridProblem':
n = 4
elif problem_type == 'SKProblem':
n = 3
elif problem_type == 'ThreeRegularProblem':
n = 4
else:
raise ValueError(repr(problem_type))
r = df_raw[
(df_raw['problem_type']==problem_type)&
(df_raw['n_qubits']==n)&
(df_raw['instance_i']==instance_i)
]['problem']
return r.iloc[0]
from recirq.qaoa.simulation import exact_qaoa_values_on_grid, lowest_and_highest_energy
import itertools
def compute_exact_values(problem_type, x_grid_num=23, y_grid_num=21):
exact = exact_qaoa_values_on_grid(
graph=get_problem_graph(problem_type),
num_processors=12,
x_grid_num=x_grid_num,
y_grid_num=y_grid_num,
).T.reshape(-1)
exact_gammas = np.linspace(0, np.pi/2, x_grid_num)
exact_betas = np.linspace(-np.pi/4, np.pi/4, y_grid_num)
exact_points = np.asarray(list(itertools.product(exact_gammas, exact_betas)))
min_c, max_c = lowest_and_highest_energy(get_problem_graph(problem_type))
return exact_points, exact, min_c, max_c
EXACT_VALS_CACHE = {k: compute_exact_values(k)
for k in ['HardwareGridProblem', 'SKProblem', 'ThreeRegularProblem']}
```
## Plot
```
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.set_style('ticks')
plt.rc('axes', labelsize=16, titlesize=16)
plt.rc('xtick', labelsize=14)
plt.rc('ytick', labelsize=14)
plt.rc('legend', fontsize=14, title_fontsize=16)
# Note: I ran into https://github.com/matplotlib/matplotlib/issues/15410
# if I imported matplotlib before using multiprocessing in `exact_qaoa_values_on_grid`, YMMV.
import scipy.interpolate
def plot_landscape(problem_type, res=200, method='nearest', cmap='RdBu'):
dfb = df
dfb = dfb[dfb['problem_type'] == problem_type]
xx, yy = np.meshgrid(np.linspace(0, np.pi/2, res), np.linspace(-np.pi/4, np.pi/4, res))
exact_points, exact, min_c, max_c = EXACT_VALS_CACHE[problem_type]
zz = scipy.interpolate.griddata(
points=dfb[['gamma', 'beta']].values,
values=dfb['energy'].values / min_c,
xi=(xx, yy),
method=method,
)
fig, (axl, axr) = plt.subplots(1, 2, figsize=(5*2, 5), sharey=True)
norm = plt.Normalize(max_c/min_c, min_c/min_c)
extent=(0, 4, -2, 2)
axl.imshow(zz, extent=extent, origin='lower', cmap=cmap, norm=norm, interpolation='none')
axl.set_xlabel(r'$\gamma\ /\ (\pi/8)$')
axl.set_ylabel(r'$\beta\ /\ (\pi/8)$')
axl.set_title('Experiment')
zz_exact = scipy.interpolate.griddata(
points=exact_points,
values=(exact/min_c),
xi=(xx, yy),
method=method,
)
g = axr.imshow(zz_exact, extent=extent, origin='lower', cmap=cmap, norm=norm, interpolation='none')
axr.set_xlabel(r'$\gamma\ /\ (\pi/8)$')
axr.set_title('Theory')
fig.colorbar(g, ax=[axl, axr], shrink=0.8)
```
### Hardware Grid
```
plot_landscape('HardwareGridProblem')
```
### SK Model
```
plot_landscape('SKProblem')
```
### 3-Regular MaxCut
```
plot_landscape('ThreeRegularProblem')
```
*Sebastian Raschka*
last modified: 04/03/2014
<hr>
I am really looking forward to your comments and suggestions to improve and extend this tutorial! Just send me a quick note
via Twitter: [@rasbt](https://twitter.com/rasbt)
or Email: [bluewoodtree@gmail.com](mailto:bluewoodtree@gmail.com)
<hr>
### Problem Category
- Statistical Pattern Recognition
- Supervised Learning
- Parametric Learning
- Bayes Decision Theory
- Multivariate data (2-dimensional)
- 2-class problem
- equal variances
- equal prior probabilities
- Gaussian model (2 parameters)
- no conditional risk (zero-one loss function)
<hr>
<p><a name="sections"></a>
<br></p>
# Sections
<p>• <a href="#given">Given information</a><br>
• <a href="#deriving_db">Deriving the decision boundary</a><br>
• <a href="#classify_rand">Classifying some random example data</a><br>
• <a href="#chern_err">Calculating the Chernoff theoretical bounds for P(error)</a><br>
• <a href="#emp_err">Calculating the empirical error rate</a><br>
<hr>
<p><a name="given"></a>
<br></p>
## Given information:
[<a href="#sections">back to top</a>] <br>
<br>
#### Model: continuous multivariate normal (Gaussian) model for the class-conditional densities
$p(\vec{x} | \omega_j) \sim N(\vec{\mu}_j, \Sigma)$
$p(\vec{x} | \omega_j) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp{ \bigg[-\frac{1}{2} (\vec{x}-\vec{\mu}_j)^t \Sigma^{-1}(\vec{x}-\vec{\mu}_j) \bigg] }$
#### Prior probabilities:
$P(\omega_1) = P(\omega_2) = 0.5$
The samples are of 2-dimensional feature vectors:
$\vec{x} = \bigg[
\begin{array}{c}
x_1 \\
x_2 \\
\end{array} \bigg]$
#### Means of the sample distributions for 2-dimensional features:
$\vec{\mu}_{\,1} = \bigg[
\begin{array}{c}
0 \\
0 \\
\end{array} \bigg]$,
$\; \vec{\mu}_{\,2} = \bigg[
\begin{array}{c}
1 \\
1 \\
\end{array} \bigg]$
#### Covariance matrices for the statistically independent and identically distributed (i.i.d.) features:
$\Sigma_i = \bigg[
\begin{array}{cc}
\sigma_{11}^2 & \sigma_{12}^2\\
\sigma_{21}^2 & \sigma_{22}^2 \\
\end{array} \bigg], \;
\Sigma_1 = \Sigma_2 = I = \bigg[
\begin{array}{cc}
1 & 0\\
0 & 1 \\
\end{array} \bigg], \;$
#### Class-conditional probabilities:
$p(\vec{x}\;|\;\omega_1) \sim N \bigg( \vec{\mu_1} = \; \bigg[
\begin{array}{c}
0 \\
0 \\
\end{array} \bigg], \Sigma = I \bigg)$
$p(\vec{x}\;|\;\omega_2) \sim N \bigg( \vec{\mu_2} = \; \bigg[
\begin{array}{c}
1 \\
1 \\
\end{array} \bigg], \Sigma = I \bigg)$
<p><a name="deriving_db"></a>
<br></p>
## Deriving the decision boundary
[<a href="#sections">back to top</a>] <br>
### Bayes' Rule:
$P(\omega_j|x) = \frac{p(x|\omega_j) * P(\omega_j)}{p(x)}$
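To make Bayes' rule concrete, here is a quick numeric sanity check (not part of the original derivation) that evaluates the posteriors for a single test point under the two class-conditional densities given above, with equal priors and $\Sigma = I$:

```python
import numpy as np

def gaussian_pdf(x, mu):
    # Bivariate normal density with identity covariance matrix
    d = x - mu
    return np.exp(-0.5 * d @ d) / (2 * np.pi)

x = np.array([0.2, 0.1])          # a test point near mu_1
priors = {1: 0.5, 2: 0.5}
likelihoods = {1: gaussian_pdf(x, np.array([0., 0.])),
               2: gaussian_pdf(x, np.array([1., 1.]))}
evidence = sum(likelihoods[j] * priors[j] for j in (1, 2))
posteriors = {j: likelihoods[j] * priors[j] / evidence for j in (1, 2)}
print(posteriors)  # posteriors sum to 1; class 1 wins near the origin
```

Since the priors are equal, comparing posteriors reduces to comparing likelihoods, which is exactly what the discriminant functions exploit.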
### Discriminant Functions:
The goal is to maximize the discriminant function, which we define as the posterior probability here to perform a **minimum-error classification** (Bayes classifier).
$g_1(\vec{x}) = P(\omega_1 | \; \vec{x}), \quad g_2(\vec{x}) = P(\omega_2 | \; \vec{x})$
$\Rightarrow g_1(\vec{x}) = P(\vec{x}|\;\omega_1) \;\cdot\; P(\omega_1) \quad | \; ln \\
\quad g_2(\vec{x}) = P(\vec{x}|\;\omega_2) \;\cdot\; P(\omega_2) \quad | \; ln$
<br>
We can drop the prior probabilities (since we have equal priors in this case):
$\Rightarrow g_1(\vec{x}) = ln(P(\vec{x}|\;\omega_1))\\
\quad g_2(\vec{x}) = ln(P(\vec{x}|\;\omega_2))$
$\Rightarrow g_1(\vec{x}) = -\frac{1}{2\sigma^2} \bigg[\; \vec{x}^{\,t}\vec{x} - 2 \vec{\mu}_1^{\,t} \vec{x} + \vec{\mu}_1^{\,t}\vec{\mu}_1 \bigg] \\
= - \frac{1}{2} \bigg[ \vec{x}^{\,t} \vec{x} - 2 \; [0 \;\; 0] \;\; \vec{x} + [0 \;\; 0] \;\; \bigg[
\begin{array}{c}
0 \\
0 \\
\end{array} \bigg] \bigg] \\
= -\frac{1}{2} \vec{x}^{\,t} \vec{x}$
$\Rightarrow g_2(\vec{x}) = -\frac{1}{2\sigma^2} \bigg[\; \vec{x}^{\,t}\vec{x} - 2 \vec{\mu}_2^{\,t} \vec{x} + \vec{\mu}_2^{\,t}\vec{\mu}_2 \bigg] \\
= - \frac{1}{2} \bigg[ \vec{x}^{\,t} \vec{x} - 2\; [1 \;\; 1] \;\; \vec{x} + [1 \;\; 1] \;\; \bigg[
\begin{array}{c}
1 \\
1 \\
\end{array} \bigg] \bigg] \\
= -\frac{1}{2} \; \bigg[ \; \vec{x}^{\,t} \vec{x} - 2\; [1 \;\; 1] \;\; \vec{x} + 2\; \bigg] \;$
### Decision Boundary
$g_1(\vec{x}) = g_2(\vec{x})$
$\Rightarrow -\frac{1}{2} \vec{x}^{\,t} \vec{x} = -\frac{1}{2} \; \bigg[ \; \vec{x}^{\,t} \vec{x} - 2\; [1 \;\; 1] \;\; \vec{x} + 2\; \bigg] \;$
$\Rightarrow -2[1\;\; 1] \vec{x} + 2 = 0$
$\Rightarrow [-2\;\; -2] \;\;\vec{x} + 2 = 0$
$\Rightarrow -2x_1 - 2x_2 + 2 = 0$
$\Rightarrow -x_1 - x_2 + 1 = 0$
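As a quick symbolic check (not part of the original derivation), SymPy recovers the same boundary from $g_1(\vec{x}) = g_2(\vec{x})$:

```python
import sympy as sp

x1, x2 = sp.symbols('x_1 x_2')
x = sp.Matrix([x1, x2])
mu1 = sp.Matrix([0, 0])
mu2 = sp.Matrix([1, 1])

# Discriminant functions for Sigma = I (constant terms shared by both
# classes are dropped, as in the derivation above)
g1 = -sp.Rational(1, 2) * ((x - mu1).T * (x - mu1))[0]
g2 = -sp.Rational(1, 2) * ((x - mu2).T * (x - mu2))[0]

# Setting g1 = g2 yields the decision boundary -x_1 - x_2 + 1 = 0
boundary = sp.simplify(g1 - g2)
assert sp.expand(boundary - (1 - x1 - x2)) == 0
```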
<p><a name="classify_rand"></a>
<br></p>
## Classifying some random example data
[<a href="#sections">back to top</a>] <br>
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
def decision_boundary(x_1):
""" Calculates the x_2 value for plotting the decision boundary."""
return -x_1 + 1
# Generate 100 random patterns for class1
mu_vec1 = np.array([0,0])
cov_mat1 = np.array([[1,0],[0,1]])
x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)
mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector
# Generate 100 random patterns for class2
mu_vec2 = np.array([1,1])
cov_mat2 = np.array([[1,0],[0,1]])
x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100)
mu_vec2 = mu_vec2.reshape(1,2).T # to 1-col vector
# Scatter plot
f, ax = plt.subplots(figsize=(7, 7))
ax.scatter(x1_samples[:,0], x1_samples[:,1], marker='o', color='green', s=40, alpha=0.5)
ax.scatter(x2_samples[:,0], x2_samples[:,1], marker='^', color='blue', s=40, alpha=0.5)
plt.legend(['Class1 (w1)', 'Class2 (w2)'], loc='upper right')
plt.title('Densities of 2 classes with 100 bivariate random patterns each')
plt.ylabel('x2')
plt.xlabel('x1')
ftext = 'p(x|w1) ~ N(mu1=(0,0)^t, cov1=I)\np(x|w2) ~ N(mu2=(1,1)^t, cov2=I)'
plt.figtext(.15,.8, ftext, fontsize=11, ha='left')
plt.ylim([-3,4])
plt.xlim([-3,4])
# Plot decision boundary
x_1 = np.arange(-5, 5, 0.1)
bound = decision_boundary(x_1)
plt.annotate('R1', xy=(-2, 2), xytext=(-2, 2), size=20)
plt.annotate('R2', xy=(2.5, 2.5), xytext=(2.5, 2.5), size=20)
plt.plot(x_1, bound, color='r', alpha=0.8, linestyle=':', linewidth=3)
plt.show()
```
<p><a name="chern_err"></a>
<br></p>
## Calculating the Chernoff theoretical bounds for P(error)
[<a href="#sections">back to top</a>] <br>
$P(error) \le p^{\beta}(\omega_1) \; p^{1-\beta}(\omega_2) \; e^{-(\beta(1-\beta))}$
$\Rightarrow 0.5^\beta \cdot 0.5^{(1-\beta)} \; e^{-(\beta(1-\beta))}$
$\Rightarrow 0.5 \cdot e^{-\beta(1-\beta)}$
$\Rightarrow P(error) \le 0.5 \; e^{-\beta(1-\beta)} \quad \text{for} \quad 0 \le \beta \le 1$
### Plotting the Chernoff Bound for $0 \le \beta \le 1$
```
def chernoff_bound(beta):
return 0.5 * np.exp(-beta * (1-beta))
betas = np.arange(0, 1, 0.01)
c_bound = chernoff_bound(betas)
plt.plot(betas, c_bound)
plt.title('Chernoff Bound')
plt.ylabel('P(error)')
plt.xlabel('parameter beta')
plt.show()
```
#### Finding the global minimum:
```
from scipy.optimize import minimize
x0 = [0.39] # initial guess (here: guessed based on the plot)
res = minimize(chernoff_bound, x0, method='Nelder-Mead')
print(res)
```
<p><a name="emp_err"></a>
<br></p>
## Calculating the empirical error rate
[<a href="#sections">back to top</a>] <br>
```
def decision_rule(x_vec):
""" Returns value for the decision rule of 2-d row vectors """
x_1 = x_vec[0]
x_2 = x_vec[1]
return -x_1 - x_2 + 1
w1_as_w2, w2_as_w1 = 0, 0
for x in x1_samples:
if decision_rule(x) < 0:
w1_as_w2 += 1
for x in x2_samples:
if decision_rule(x) > 0:
w2_as_w1 += 1
emp_err = (w1_as_w2 + w2_as_w1) / float(len(x1_samples) + len(x2_samples))
print('Empirical Error: {}%'.format(emp_err * 100))
```
# BSSN Quantities
## Author: Zach Etienne
### Formatting improvements courtesy Brandon Clark
## This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)).
**Notebook Status:** <font color='orange'><b> Self-Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**
[comment]: <> (Introduction: TODO)
### A Note on Notation:
As is standard in NRPy+,
* Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.
* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.
As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook).
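A minimal sketch of this offset convention (hypothetical names, not from the NRPy+ source):

```python
# Greek index mu = 0,1,2,3 labels components of a four-vector;
# Latin index i = 0,1,2 labels components of a three-vector.
four_vector = ['t', 'x', 'y', 'z']

# The zeroth component of a three-dimensional quantity is the first
# *spatial* direction, so Latin index i maps to Greek index i + 1:
three_vector = [four_vector[i + 1] for i in range(3)]
print(three_vector)  # ['x', 'y', 'z']
```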
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows
1. [Step 1](#initializenrpy): Initialize needed Python/NRPy+ modules
1. [Step 2](#declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions
1. [Step 3](#rescaling_tensors) Rescaling tensors to avoid coordinate singularities
1. [Step 3.a](#bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions
1. [Step 4](#bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$
    1. [Step 4.a](#bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma}^{ij}$
1. [Step 4.b](#bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$
1. [Step 5](#detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives
1. [Step 6](#abar_quantities): **`AbarUU_AbarUD_trAbar_AbarDD_dD()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$
1. [Step 7](#rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities
1. [Step 7.a](#rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term
1. [Step 7.b](#rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term
1. [Step 7.c](#rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms
1. [Step 7.d](#summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$
1. [Step 8](#beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$
1. [Step 9](#phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$
1. [Step 9.a](#phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable `cf` (e.g., `cf`$=W=e^{-4\phi}$)
1. [Step 9.b](#phi_covariant_derivs): Partial and covariant derivatives of $\phi$
1. [Step 10](#code_validation): Code Validation against `BSSN.BSSN_quantities` NRPy+ module
1. [Step 11](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
```
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
import sys
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
```
<a id='declare_bssn_gfs'></a>
# Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](#toc)\]
$$\label{declare_bssn_gfs}$$
```
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
```
<a id='rescaling_tensors'></a>
# Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](#toc)\]
$$\label{rescaling_tensors}$$
While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) are properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities.
The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled version of these quantities that are well behaved (so that, e.g., they can be finite differenced).
For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by
\begin{align}
\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} +
\frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} +
\frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\
&= -\frac{y}{\sqrt{x^2+y^2}} \bar{\Lambda}^{x} +
\frac{x}{\sqrt{x^2+y^2}} \bar{\Lambda}^{y} \\
&= -\frac{y}{r \sin\theta} \bar{\Lambda}^{x} +
\frac{x}{r \sin\theta} \bar{\Lambda}^{y}.
\end{align}
Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ due to the $\frac{1}{r\sin\theta}$ that appear in the Jacobian transformation.
This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.
However, notice that if we define $\lambda^{\phi}$ such that
$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$
then $\lambda^{\phi}$ will be smooth as well.
Avoiding such singularities can be generalized to other coordinate systems, so long as $\lambda^i$ is defined as:
$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$
where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. In the code we refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\], after the Hebrew letters that bear some resemblance to $\mathcal{V}$ and $\mathcal{B}$.
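As a small SymPy sketch of this rescaling (the spherical scale factors $(1, r, r\sin\theta)$ are assumed here for illustration, rather than taken from `reference_metric.py`):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
scalefactor = [sp.sympify(1), r, r * sp.sin(th)]  # spherical: (1, r, r sin(theta))

# A vector that is smooth in the Cartesian basis acquires a 1/(r sin(theta))
# divergence in its phi component after the basis change; lam stands in for
# the smooth part:
lam = sp.Function('lam')(r, th)
LambdabarU_phi = lam / scalefactor[2]        # diverges where r*sin(theta) = 0

# Rescaling by the scale factor recovers the smooth quantity lambda^phi:
lambdaU_phi = sp.simplify(scalefactor[2] * LambdabarU_phi)
assert lambdaU_phi == lam
```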
Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via
\begin{align}
\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\
\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},
\end{align}
where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant). To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* `ReU[i]` and `ReDD[i][j]`, such that
\begin{align}
\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\
\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.
\end{align}
Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices :
\begin{align}
\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\
\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,
\end{align}
where no sums are implied by the repeated indices.
Further, since the scale factors are *time independent*,
\begin{align}
\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\
\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\
&= \partial_t \varepsilon_{ij} \\
&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.
\end{align}
Thus instead of taking space or time derivatives of BSSN quantities
$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$
across coordinate singularities, we instead factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables
$$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$
and *exact* expressions for (spatial) derivatives of scale factors. Note that `cf` is the chosen conformal factor (supported choices for `cf` are discussed in [Step 9.a](#phi_ito_cf)).
As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription:
\begin{align}
\bar{\Lambda}^{i}_{\, ,\, j} &= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \partial_j \left(\text{ReU[i]}\right) + \frac{\partial_j \lambda^i}{\text{ReU[i]}} \\
&= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \text{ReUdD[i][j]} + \frac{\partial_j \lambda^i}{\text{ReU[i]}}.
\end{align}
Here, the derivative `ReUdD[i][j]` **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis).
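A quick SymPy sanity check (symbols assumed for illustration; the real `ReU` and `ReUdD` arrays come from `reference_metric.py`): differentiating $\bar{\Lambda}^{\phi} = \lambda^{\phi}\,\text{ReU}[\phi]$ in spherical coordinates splits into a smooth-derivative term and an exact scale-factor term:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
lam = sp.Function('lam')(r, th)     # smooth rescaled quantity lambda^phi
ReU = 1 / (r * sp.sin(th))          # ReU[phi] = 1/scalefactor[phi], exact

LambdabarU = lam * ReU              # Lambdabar^phi = lambda^phi * ReU[phi]

# Direct derivative vs. the two-term expansion: the ReU derivative (the
# analogue of ReUdD) is computed exactly, while diff(lam, th) is the smooth
# piece one would approximate with finite differences.
lhs = sp.diff(LambdabarU, th)
rhs = sp.diff(lam, th) * ReU + lam * sp.diff(ReU, th)
assert sp.simplify(lhs - rhs) == 0
```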
<a id='bssn_basic_tensors'></a>
## Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](#toc)\]
$$\label{bssn_basic_tensors}$$
The `BSSN_vars__tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:
\begin{align}
\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\
\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\
\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\
\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\
B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}
\end{align}
Rescaling vectors and tensors are built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub function [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py).
Here we implement `BSSN_vars__tensors()`:
```
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
```
<a id='bssn_barred_metric__inverse_and_derivs'></a>
# Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](#toc)\]
$$\label{bssn_barred_metric__inverse_and_derivs}$$
<a id='bssn_barred_metric__inverse'></a>
## Step 4.a: Inverse conformal 3-metric: $\bar{\gamma}^{ij}$ \[Back to [top](#toc)\]
$$\label{bssn_barred_metric__inverse}$$
Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
```
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
```
<a id='bssn_barred_metric__derivs'></a>
## Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](#toc)\]
$$\label{bssn_barred_metric__derivs}$$
In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy).
For example, $\bar{\gamma}_{ij,k}$ is given by:
\begin{align}
\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\
&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\
&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\
&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},
\end{align}
where `ReDDdD[i][j][k]` is computed within `rfm.reference_metric()`.
```
# Step 4.b.i gammabarDDdD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
```
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by
\begin{align}
\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\
&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}
\end{align}
```
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
```
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:
$$
\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)
$$
```
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
```
<a id='detgammabar_and_derivs'></a>
# Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](#toc)\]
$$\label{detgammabar_and_derivs}$$
As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.
As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*
In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, `detgbarOverdetghat`, which defines an alternative expression in its place.
$\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}$=`detgbarOverdetghat`$\ne 1$ is not yet implemented. However, we can define `detgammabar` and its derivatives in terms of a generic `detgbarOverdetghat` and $\det \hat{\gamma}_{ij}$ and their derivatives:
\begin{align}
\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\
\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\
\end{align}
(See also: [properties of the determinant](https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant).)
```
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
sys.exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
#           \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
```
<a id='abar_quantities'></a>
# Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](#toc)\]
$$\label{abar_quantities}$$
$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:
$$
\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.
$$
$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:
$$
\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.
$$
The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:
$$
\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.
$$
Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace due to numerical error, even if the initial data impose tracelessness. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive this trace back to zero.
In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by
$$
\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},
$$
so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by
$$
\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.
$$
```
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
# Step 6.a.iv: Compute Abar_{ij,k}
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
```
<a id='rbar'></a>
# Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](#toc)\]
$$\label{rbar}$$
Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:
\begin{align}
\bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\
& + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .
\end{align}
Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first:
<a id='rbar_part1'></a>
## Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](#toc)\]
$$\label{rbar_part1}$$
First note that the covariant derivative of a metric with respect to its own Levi-Civita connection vanishes:
$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$
so
$$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$
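The metric-compatibility property used here can be checked symbolically. The following SymPy sketch (the coordinates and symbol names are illustrative, not part of NRPy+) verifies $\hat{D}_{c} \hat{\gamma}_{ab} = 0$ for the flat reference metric in 2D polar coordinates:

```python
import sympy as sp

r, th = sp.symbols('r th', positive=True)
coords = [r, th]
# Flat 2D metric in polar coordinates: ds^2 = dr^2 + r^2 dth^2
g = sp.Matrix([[1, 0], [0, r**2]])
ginv = g.inv()

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gamma = [[[sum(sp.Rational(1, 2) * ginv[a, d] *
               (sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b]) - sp.diff(g[b, c], coords[d]))
               for d in range(2))
           for c in range(2)] for b in range(2)] for a in range(2)]

# Covariant derivative of the (0,2) metric: D_c g_{ab} = g_{ab,c} - Gamma^d_{ac} g_{db} - Gamma^d_{bc} g_{ad}
for a in range(2):
    for b in range(2):
        for c in range(2):
            Dg = sp.diff(g[a, b], coords[c]) \
                 - sum(Gamma[d][a][c] * g[d, b] for d in range(2)) \
                 - sum(Gamma[d][b][c] * g[a, d] for d in range(2))
            assert sp.simplify(Dg) == 0
print("Metric compatibility D_c g_ab = 0 verified in polar coordinates.")
```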
Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):
\begin{align}
{(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {}
&\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\
&+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\
&-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.
\end{align}
Therefore,
$$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$
Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))
\begin{align}
\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\
&= \partial_k \hat{D}_{l} \varepsilon_{i j}
- \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
- \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
- \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),
\end{align}
where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:
\begin{align}
\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\
&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.
\end{align}
In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:
\begin{align}
\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\
&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},
\end{align}
and
\begin{align}
\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\
&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.
\end{align}
```
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
```
We next compute three quantities derived above:
* `gammabarDD_dHatD[i][j][l]` = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,
* `gammabarDD_dHatD_dD[i][j][l][k]` = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and
* `gammabarDD_dHatDD[i][j][l][k]` = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
```
# Step 7.a.iv: gammabarDD_dHatD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = gammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = gammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
```
<a id='rbar_part2'></a>
## Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](#toc)\]
$$\label{rbar_part2}$$
By definition, the index symmetrization operation is given by:
$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$
and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$), so the only nontrivial part of computing this term is evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.
The covariant derivative is with respect to the hatted metric (i.e. the reference metric), so
$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$
except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via
$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$
Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes
$$
\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},
$$
and the NRPy+ code for this expression is written
```
# Step 7.b: Second term of RhatDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
```
<a id='rbar_part3'></a>
## Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](#toc)\]
$$\label{rbar_part3}$$
Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:
$$
\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).
$$
* `DGammaUDD[k][i][j]`$= \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^k_{ij} - \hat{\Gamma}^k_{ij}$, and
* `DGammaU[k]`$= \Delta^k$ is the contraction $\bar{\gamma}^{ij} \Delta^{k}_{ij}$.
Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](#bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s `reference_metric()` function:
```
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
```
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
```
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
```
<a id='summing_rbar_terms'></a>
## Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](#toc)\]
$$\label{summing_rbar_terms}$$
We have now constructed all of the terms going into $\bar{R}_{ij}$:
\begin{align}
\bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\
& + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .
\end{align}
```
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
```
<a id='beta_derivs'></a>
# Step 8: **`betaU_derivs()`**: The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](#toc)\]
$$\label{beta_derivs}$$
This step, which documents the function `betaU_derivs()` inside the [BSSN.BSSN_unrescaled_and_barred_vars](../edit/BSSN/BSSN_unrescaled_and_barred_vars) module, defines three quantities:
[comment]: <> (Fix Link Above: TODO)
* `betaU_dD[i][j]`$=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$
* `betaU_dupD[i][j]`: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.
* `betaU_dDD[i][j][k]`$=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
```
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD = ixp.declarerank3("vetU_dDD","sym12") # Needed for \beta^i_{,j}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
```
<a id='phi_and_derivs'></a>
# Step 9: **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_i \phi$, and $\bar{D}_i\bar{D}_j \phi$, all written in terms of BSSN gridfunctions like `cf` \[Back to [top](#toc)\]
$$\label{phi_and_derivs}$$
<a id='phi_ito_cf'></a>
## Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](#toc)\]
$$\label{phi_ito_cf}$$
When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes, for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives of `cf`$=\chi$, numerical errors will be far smaller near the puncture.
The BSSN modules in NRPy+ support three options for the conformal factor variable `cf`:
1. `cf`$=\phi$,
1. `cf`$=\chi=e^{-4\phi}$, and
1. `cf`$=W = e^{-2\phi}$.
Since the BSSN equations are written in terms of $\phi$ (actually, only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen `cf`.
First, we define the base variables needed within the BSSN equations:
```
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
```
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of `cf`.
For `cf`$=\phi$, this is trivial:
```
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
```
For `cf`$=W=e^{-2\phi}$, we have
* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$
* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$
* $e^{-4\phi} = \text{cf}^2$
***Exercise to student: Prove the above relations***
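As a quick symbolic cross-check of these relations (not a substitute for working the exercise), the following SymPy sketch verifies all three; the coordinate symbols `x`, `y` are illustrative:

```python
import sympy as sp

phi = sp.Function('phi')
x, y = sp.symbols('x y')
cf = sp.exp(-2 * phi(x, y))  # cf = W = e^{-2 phi}

# phi_{,i} = -cf_{,i} / (2 cf)
assert sp.simplify(-sp.diff(cf, x) / (2 * cf) - sp.diff(phi(x, y), x)) == 0

# phi_{,ij} = (-cf_{,ij} + cf_{,i} cf_{,j}/cf) / (2 cf)
lhs = (-sp.diff(cf, x, y) + sp.diff(cf, x) * sp.diff(cf, y) / cf) / (2 * cf)
assert sp.simplify(lhs - sp.diff(phi(x, y), x, y)) == 0

# e^{-4 phi} = cf^2
assert sp.simplify(cf**2 - sp.exp(-4 * phi(x, y))) == 0
print("All W = e^{-2 phi} relations verified.")
```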
```
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
```
For `cf`$=\chi=e^{-4\phi}$, we have
* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$
* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$
* $e^{-4\phi} = \text{cf}$
***Exercise to student: Prove the above relations***
```
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if not (cf_choice == "phi" or cf_choice == "W" or cf_choice == "chi"):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
sys.exit(1)
```
<a id='phi_covariant_derivs'></a>
## Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](#toc)\]
$$\label{phi_covariant_derivs}$$
Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.
Thus the second covariant derivative is given by
\begin{align}
\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\
&= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.
\end{align}
```
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
```
<a id='code_validation'></a>
# Step 10: Code validation against `BSSN.BSSN_quantities` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation}$$
As a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between
1. this tutorial and
2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.
By default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen.
```
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    global all_passed  # record failures in the module-level flag
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2==None:
return basename+"["+str(idx1)+"]"
if idx3==None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
```
<a id='latex_pdf_output'></a>
# Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-BSSN_quantities.ipynb
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
# Pushes
A notebook that computes the MAU rate (MAU / number of installs).
## Input Parameters
+ DATE: the end date of the aggregation period
+ DEBUG: True only when run manually
+ FREQUENCY: execution frequency
+ BIGQUERY_PROJECT_ID: BigQuery project name
+ BIGQUERY_DATASET: BigQuery dataset
+ PACKAGE_NAME: package name in BigQuery
+ OUTPUT_BIGQUERY_PROJECT_ID: project name of the output BigQuery project
+ IS_LATEST: True to target the latest date; False to specify an arbitrary date
# Output Range
+ daily
Aggregates the day before DATE.
e.g. if DATE="2021-02-02", the aggregation covers "2021-02-01".
+ weekly
Aggregates the 7 days ending the day before DATE.
e.g. if DATE="2021-02-22", the aggregation covers "2021-02-15" through "2021-02-21".
+ monthly
Aggregates the month ending the day before DATE.
e.g. if DATE="2021-02-01", the aggregation covers "2021-01-01" through "2021-01-31".
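The ranges above can be sketched as follows; this is an illustrative helper, not part of the pipeline, and the function name `aggregation_range` is hypothetical:

```python
from datetime import datetime, timedelta

def aggregation_range(date_str, frequency):
    """Return the (start, end) dates aggregated for a given DATE and FREQUENCY.

    The end of the range is always the day before DATE.
    """
    end = datetime.strptime(date_str, "%Y-%m-%d") - timedelta(days=1)
    if frequency == "daily":
        start = end
    elif frequency == "weekly":
        start = end - timedelta(days=6)
    elif frequency == "monthly":
        start = datetime(end.year, end.month, 1)
    else:
        raise ValueError(f"unknown frequency: {frequency}")
    return start.strftime("%Y-%m-%d"), end.strftime("%Y-%m-%d")

print(aggregation_range("2021-02-02", "daily"))    # ('2021-02-01', '2021-02-01')
print(aggregation_range("2021-02-22", "weekly"))   # ('2021-02-15', '2021-02-21')
print(aggregation_range("2021-02-01", "monthly"))  # ('2021-01-01', '2021-01-31')
```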
## Output Data
+ MAU
+ Mdownload
+ Mpercent
# Parameters
```
DATE = "2021-08-17" # @param {type: "date"}
DEBUG = True # @param {type: "boolean"} Set to True only for manual runs; Cloud Functions passes False.
FREQUENCY = "daily" # @param {type: "string"}
BIGQUERY_PROJECT_ID = "komtar-monet-prd" # @param {type: "string"}
BIGQUERY_DATASET = "analytics_167835138.events_*" # @param {type: "string"}
PACKAGE_NAME = "jp.co.fuller.snowpeak" # @param {type: "string"}
OUTPUT_BIGQUERY_PROJECT_ID = "fl-komtar-analytics-dashboard" # @param {type: "string"}
IS_LATEST = False # @param {type:"boolean"}
```
# Constants
```
METRICS_NAME = "pushes"
```
# Version
```
VERSION = "1"
```
# Authorize
```
if DEBUG:
from google.colab import auth
auth.authenticate_user()
```
# Imports
```
from datetime import datetime, timedelta
import pandas as pd
from pytz import timezone
```
# Get Input Datasets
## Compute the aggregation range
```
if IS_LATEST:
date = (datetime.now(timezone("Asia/Tokyo"))-timedelta(days=1))
else:
date = datetime.strptime(DATE,"%Y-%m-%d") - timedelta(days=1)
end = date
start = datetime(end.year,end.month,1)
start_date = start.strftime("%Y%m%d")
end_date = end.strftime("%Y%m%d")
start_date, end_date
base_start = start-timedelta(days=1)
base_end = end+timedelta(days=1)
```
## Query
### Fetch the number of push notifications
```
query = f"""
SELECT *
FROM `{PACKAGE_NAME.replace(".","_")}_downloads.{FREQUENCY}_events_*`
WHERE _table_suffix BETWEEN "{start_date}" AND "{end_date}"
"""
df_download = pd.read_gbq(query,project_id=OUTPUT_BIGQUERY_PROJECT_ID, dialect='standard')
df_download
query = f"""
SELECT *
FROM `{PACKAGE_NAME.replace(".","_")}_active_users.monthly_events_*`
WHERE _table_suffix BETWEEN "{start_date}" AND "{end_date}"
"""
df_mau = pd.read_gbq(query,project_id=OUTPUT_BIGQUERY_PROJECT_ID, dialect='standard')
df_mau
potential = df_mau["total_users"]/df_download["installs"].sum()
potential
df_output = df_mau.copy()  # assumption: df_mau is the intended source (its columns match table_schema below)
df_output.insert(0, "date", start.strftime(format="%Y-%m-%d"))
df_output["date"] = pd.to_datetime(df_output["date"]).dt.date
df_output.to_gbq(f"""{PACKAGE_NAME.replace(".","_")}_{METRICS_NAME.replace("-","_")}.{FREQUENCY}_events_{start.strftime(format="%Y-%m-%d").replace("-","")}""",
if_exists="replace",
table_schema=[{'name': 'date','type': 'DATE'},
{'name': 'android_active_users','type': 'INT64'},
{'name': 'ios_active_users','type': 'INT64'},
{'name': 'total_users','type': 'INT64'}],
project_id=OUTPUT_BIGQUERY_PROJECT_ID)
df_output
```
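The queries above read BigQuery sharded tables through a wildcard suffix (`events_*`) and bound the shard range with `_table_suffix`. The table path itself is derived from the notebook parameters; a sketch of that string construction (no BigQuery call is made, and `FREQUENCY` is assumed to be `"daily"` here since it is defined in a cell outside this excerpt):

```python
PACKAGE_NAME = "jp.co.fuller.snowpeak"
FREQUENCY = "daily"  # assumed value; FREQUENCY is defined elsewhere in the notebook
start_date, end_date = "20210501", "20210514"

# Dots in the Android package name become underscores in the dataset name.
table = f'{PACKAGE_NAME.replace(".", "_")}_downloads.{FREQUENCY}_events_*'
query = f"""
SELECT *
FROM `{table}`
WHERE _table_suffix BETWEEN "{start_date}" AND "{end_date}"
"""
print(table)  # jp_co_fuller_snowpeak_downloads.daily_events_*
```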
```
from gym.envs.mujoco import walker2d_v3
# import gym.envs.mujoco.walker2d_v3
import rlkit.torch.pytorch_util as ptu
from rlkit.data_management.env_replay_buffer import EnvReplayBuffer
from rlkit.envs.wrappers import NormalizedBoxEnv
from rlkit.launchers.launcher_util import setup_logger
from rlkit.samplers.data_collector import MdpPathCollector
from rlkit.torch.sac.policies import TanhGaussianPolicy, MakeDeterministic
from rlkit.torch.sac.sac import SACTrainer
from rlkit.torch.networks import FlattenMlp
from rlkit.torch.torch_rl_algorithm import TorchBatchRLAlgorithm
import abc
from collections import OrderedDict
import gtimer as gt
import torch
import os
import copy
from rlkit.core import logger, eval_util
from rlkit.data_management.replay_buffer import ReplayBuffer
from rlkit.samplers.data_collector import DataCollector
variant = dict(
algorithm="SAC",
version="normal",
layer_size=256,
replay_buffer_size=int(1E6),
algorithm_kwargs=dict(
num_epochs=1,
num_eval_steps_per_epoch=5000,
num_trains_per_train_loop=1000,
num_expl_steps_per_train_loop=1000,
min_num_steps_before_training=1000,
max_path_length=1000,
batch_size=256,
),
trainer_kwargs=dict(
discount=0.99,
soft_target_tau=5e-3,
target_update_period=1,
policy_lr=3E-4,
qf_lr=3E-4,
reward_scale=1,
use_automatic_entropy_tuning=True,
),
)
setup_logger('experiment-6', variant=variant)
ptu.set_gpu_mode(True, 2)
expl_env = walker2d_v3.Walker2dEnv()
eval_env = walker2d_v3.Walker2dEnv()
expl_env = NormalizedBoxEnv(expl_env)
eval_env = NormalizedBoxEnv(eval_env)
obs_dim = expl_env.observation_space.low.size
action_dim = eval_env.action_space.low.size
# print(obs_dim, action_dim)
# ckpt_path = 'ckpt.pkl'
# checkpoint = torch.load(ckpt_path)
# model.load_state_dict(checkpoint['model_state_dict'])
# optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
# epoch = checkpoint['epoch']
# loss = checkpoint['loss']
# ckpt_path = 'ckpt.pkl'
# checkpoint = torch.load(ckpt_path)
M = variant['layer_size']
qf1 = FlattenMlp(
input_size=obs_dim + action_dim,
output_size=1,
hidden_sizes=[M, M],
)
qf2 = FlattenMlp(
input_size=obs_dim + action_dim,
output_size=1,
hidden_sizes=[M, M],
)
target_qf1 = FlattenMlp(
input_size=obs_dim + action_dim,
output_size=1,
hidden_sizes=[M, M],
)
target_qf2 = FlattenMlp(
input_size=obs_dim + action_dim,
output_size=1,
hidden_sizes=[M, M],
)
policy = TanhGaussianPolicy(
obs_dim=obs_dim,
action_dim=action_dim,
hidden_sizes=[M, M],
)
eval_policy = MakeDeterministic(policy)
eval_path_collector = MdpPathCollector(
eval_env,
eval_policy,
)
expl_path_collector = MdpPathCollector(
expl_env,
policy,
)
replay_buffer = EnvReplayBuffer(
variant['replay_buffer_size'],
expl_env,
)
trainer = SACTrainer(
env=eval_env,
policy=policy,
qf1=qf1,
qf2=qf2,
target_qf1=target_qf1,
target_qf2=target_qf2,
**variant['trainer_kwargs']
)
# # run this whenever rlkit is changed
# import rlkit.torch.pytorch_util as ptu
# from rlkit.data_management.env_replay_buffer import EnvReplayBuffer
# from rlkit.envs.wrappers import NormalizedBoxEnv
# from rlkit.launchers.launcher_util import setup_logger
# from rlkit.samplers.data_collector import MdpPathCollector
# from rlkit.torch.sac.policies import TanhGaussianPolicy, MakeDeterministic
# from rlkit.torch.sac.sac import SACTrainer
# from rlkit.torch.networks import FlattenMlp
# from rlkit.torch.torch_rl_algorithm import TorchBatchRLAlgorithm
# import importlib
# import rlkit
# importlib.reload(rlkit)
# from rlkit.torch.torch_rl_algorithm import TorchBatchRLAlgorithm
# from rlkit.core import logger, eval_util
# expl_paths = algorithm.expl_data_collector.get_epoch_paths()
# d = eval_util.get_generic_path_information(expl_paths)
# d['Rewards Mean']
import abc
from collections import OrderedDict
import gtimer as gt
import torch
import os
import copy
from rlkit.core import logger, eval_util
from rlkit.data_management.replay_buffer import ReplayBuffer
from rlkit.samplers.data_collector import DataCollector
def _get_epoch_timings():
times_itrs = gt.get_times().stamps.itrs
times = OrderedDict()
epoch_time = 0
for key in sorted(times_itrs):
time = times_itrs[key][-1]
epoch_time += time
times['time/{} (s)'.format(key)] = time
times['time/epoch (s)'] = epoch_time
times['time/total (s)'] = gt.get_times().total
return times
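# _get_epoch_timings above collapses gtimer's per-iteration stamps into one
# row of per-key durations plus their sum. The same aggregation, isolated as
# a pure function so it can be exercised without gtimer (`times_itrs` stands
# in for gt.get_times().stamps.itrs, a dict mapping stamp name -> list of
# interval durations; this is an illustrative helper, not part of rlkit):
def _epoch_timings_from_stamps(times_itrs):
    # Take the most recent duration for each stamp and accumulate the total.
    times = OrderedDict()
    epoch_time = 0
    for key in sorted(times_itrs):
        t = times_itrs[key][-1]
        epoch_time += t
        times['time/{} (s)'.format(key)] = t
    times['time/epoch (s)'] = epoch_time
    return times
# _epoch_timings_from_stamps({'training': [1.0, 2.0], 'saving': [0.5, 0.25]})
# -> time/saving (s)=0.25, time/training (s)=2.0, time/epoch (s)=2.25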
class BaseRLAlgorithm2(object, metaclass=abc.ABCMeta):
def __init__(
self,
trainer,
exploration_env,
evaluation_env,
exploration_data_collector: DataCollector,
evaluation_data_collector: DataCollector,
replay_buffer: ReplayBuffer,
initial_epoch
):
self.trainer = trainer
self.expl_env = exploration_env
self.eval_env = evaluation_env
self.expl_data_collector = exploration_data_collector
self.eval_data_collector = evaluation_data_collector
self.replay_buffer = replay_buffer
self._start_epoch = initial_epoch
self.post_epoch_funcs = []
def train(self, initial_epoch=0, epochs=None, dir_=None, exp_no=None):
self._train(initial_epoch, epochs, dir_, exp_no)
    def _train(self, initial_epoch=0, epochs=None, dir_=None, exp_no=None):
        """
        Train the model. Subclasses must override this; the signature matches
        the arguments forwarded by train() above.
        """
        raise NotImplementedError('_train must be implemented by the inheriting class')
def get_cur_best_metric_val(self):
cur_best_metric_val = None
if os.path.exists('results/walker/cur_best_avg_rewards.pkl'):
cur_best_metric_val = copy.deepcopy(torch.load('results/walker/cur_best_avg_rewards.pkl')['cur_best_metric_val'])
else:
cur_best_metric_val = -1* float('inf')
return cur_best_metric_val
def _end_epoch(self, epoch):
print('in _end_epoch, epoch is: {}'.format(epoch))
snapshot = self._get_snapshot()
logger.save_itr_params(epoch, snapshot)
# trainer_obj = self.trainer
# ckpt_path='ckpt.pkl'
# logger.save_ckpt(epoch, trainer_obj, ckpt_path)
# gt.stamp('saving')
if epoch%10==0:
self.save_snapshot_2(epoch)
eval_paths = self.eval_data_collector.get_epoch_paths()
d = eval_util.get_generic_path_information(eval_paths)
# print(d.keys())
metric_val = d['Returns Mean']
cur_best_metric_val = self.get_cur_best_metric_val()
self.save_snapshot_2_best_only(metric_val=metric_val, cur_best_metric_val=cur_best_metric_val, min_or_max='max', epoch=epoch)
self._log_stats(epoch)
self.expl_data_collector.end_epoch(epoch)
self.eval_data_collector.end_epoch(epoch)
self.replay_buffer.end_epoch(epoch)
self.trainer.end_epoch(epoch)
for post_epoch_func in self.post_epoch_funcs:
post_epoch_func(self, epoch)
def save_snapshot_2(self, epoch):
print('Saving snapshot 2')
self_copy = copy.deepcopy(self)
torch.save(copy.deepcopy({'algorithm':self_copy, 'epoch':epoch}), 'results/walker/ckpt.pkl')
    def get_snapshot_2(self):
        print('in get_snapshot_2')
        ckpt = torch.load('results/walker/ckpt.pkl')
        # NOTE: rebinding `self` only changes this local name; it does not
        # replace the instance the caller holds. The module-level
        # get_snapshot_3, which returns the restored algorithm, avoids this.
        self = copy.deepcopy(ckpt['algorithm'])
        epoch = ckpt['epoch']
        return epoch
    def get_snapshot_best(self):
        print('in get_snapshot_best')
        ckpt = torch.load('results/walker/ckpt-best.pkl')
        # NOTE: as in get_snapshot_2, rebinding `self` has no effect on the
        # caller's instance; only the epoch is actually returned.
        self = copy.deepcopy(ckpt['algorithm'])
        epoch = ckpt['epoch']
        return epoch
def save_snapshot_2_best_only(self, metric_val, cur_best_metric_val, min_or_max='min', epoch=0):
if min_or_max == 'min' and metric_val < cur_best_metric_val \
or min_or_max == 'max' and metric_val > cur_best_metric_val:
print('Saving snapshot best')
print(metric_val)
print(cur_best_metric_val)
self_copy = copy.deepcopy(self)
torch.save({'algorithm':self_copy, 'epoch':epoch}, 'results/walker/ckpt-best.pkl')
cur_best_metric_val = metric_val
cur_best_metric_val_copy = copy.deepcopy(cur_best_metric_val)
torch.save({'cur_best_metric_val':cur_best_metric_val_copy}, 'results/walker/cur_best_avg_rewards.pkl')
# def _resume_training(self):
def _get_snapshot(self):
snapshot = {}
for k, v in self.trainer.get_snapshot().items():
snapshot['trainer/' + k] = v
for k, v in self.expl_data_collector.get_snapshot().items():
snapshot['exploration/' + k] = v
for k, v in self.eval_data_collector.get_snapshot().items():
snapshot['evaluation/' + k] = v
for k, v in self.replay_buffer.get_snapshot().items():
snapshot['replay_buffer/' + k] = v
return snapshot
def _log_stats(self, epoch):
logger.log("Epoch {} finished".format(epoch), with_timestamp=True)
"""
Replay Buffer
"""
logger.record_dict(
self.replay_buffer.get_diagnostics(),
prefix='replay_buffer/'
)
"""
Trainer
"""
logger.record_dict(self.trainer.get_diagnostics(), prefix='trainer/')
"""
Exploration
"""
logger.record_dict(
self.expl_data_collector.get_diagnostics(),
prefix='exploration/'
)
expl_paths = self.expl_data_collector.get_epoch_paths()
if hasattr(self.expl_env, 'get_diagnostics'):
logger.record_dict(
self.expl_env.get_diagnostics(expl_paths),
prefix='exploration/',
)
logger.record_dict(
eval_util.get_generic_path_information(expl_paths),
prefix="exploration/",
)
"""
Evaluation
"""
logger.record_dict(
self.eval_data_collector.get_diagnostics(),
prefix='evaluation/',
)
eval_paths = self.eval_data_collector.get_epoch_paths()
if hasattr(self.eval_env, 'get_diagnostics'):
logger.record_dict(
self.eval_env.get_diagnostics(eval_paths),
prefix='evaluation/',
)
logger.record_dict(
eval_util.get_generic_path_information(eval_paths),
prefix="evaluation/",
)
"""
Misc
"""
# gt.stamp('logging')
logger.record_dict(_get_epoch_timings())
logger.record_tabular('Epoch', epoch)
logger.dump_tabular(with_prefix=False, with_timestamp=False, file_name='logs/walker/log1.txt', file_name2='logs/walker/log2')
@abc.abstractmethod
def training_mode(self, mode):
"""
Set training mode to `mode`.
:param mode: If True, training will happen (e.g. set the dropout
probabilities to not all ones).
"""
pass
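# save_snapshot_2_best_only above writes a checkpoint only when the metric
# improves in the requested direction. Its decision rule, isolated as a pure
# helper for clarity (illustrative only, not part of rlkit):
def _is_improvement(metric_val, cur_best_metric_val, min_or_max='min'):
    # 'min': lower is better (e.g. a loss); 'max': higher is better (e.g. a return).
    return (min_or_max == 'min' and metric_val < cur_best_metric_val
            or min_or_max == 'max' and metric_val > cur_best_metric_val)
# _is_improvement(900.0, 850.0, 'max') -> True
# _is_improvement(900.0, 850.0, 'min') -> False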
import abc
import torch
import gtimer as gt
from rlkit.core.rl_algorithm import BaseRLAlgorithm
from rlkit.data_management.replay_buffer import ReplayBuffer
from rlkit.samplers.data_collector import PathCollector
class BatchRLAlgorithm2(BaseRLAlgorithm2, metaclass=abc.ABCMeta):
def __init__(
self,
trainer,
exploration_env,
evaluation_env,
exploration_data_collector: PathCollector,
evaluation_data_collector: PathCollector,
replay_buffer: ReplayBuffer,
batch_size,
max_path_length,
num_epochs,
num_eval_steps_per_epoch,
num_expl_steps_per_train_loop,
num_trains_per_train_loop,
num_train_loops_per_epoch=1,
min_num_steps_before_training=0,
initial_epoch=0
):
super().__init__(
trainer,
exploration_env,
evaluation_env,
exploration_data_collector,
evaluation_data_collector,
replay_buffer,
initial_epoch
)
self.batch_size = batch_size
self.max_path_length = max_path_length
self.num_epochs = num_epochs
self.num_eval_steps_per_epoch = num_eval_steps_per_epoch
self.num_trains_per_train_loop = num_trains_per_train_loop
self.num_train_loops_per_epoch = num_train_loops_per_epoch
self.num_expl_steps_per_train_loop = num_expl_steps_per_train_loop
self.min_num_steps_before_training = min_num_steps_before_training
# def store_everything(self, ckpt_path):
# d = {
# }
# torch.save(d, ckpt_path)
def _train(self, initial_epoch=0, epochs=None, dir_=None, exp_no=None):
self._start_epoch = initial_epoch
if epochs is not None:
self.num_epochs = epochs
# print('\n\n\n\nn\n\n\n\nin _train #########')
if self.min_num_steps_before_training > 0:
init_expl_paths = self.expl_data_collector.collect_new_paths(
self.max_path_length,
self.min_num_steps_before_training,
discard_incomplete_paths=False,
)
self.replay_buffer.add_paths(init_expl_paths)
self.expl_data_collector.end_epoch(-1)
# for epoch in gt.timed_for(
# range(self._start_epoch, self.num_epochs),
# save_itrs=True,
# ):
for epoch in range(self._start_epoch, self.num_epochs):
self.eval_data_collector.collect_new_paths(
self.max_path_length,
self.num_eval_steps_per_epoch,
discard_incomplete_paths=True,
)
# # gt.stamp('evaluation sampling')
for _ in range(self.num_train_loops_per_epoch):
new_expl_paths = self.expl_data_collector.collect_new_paths(
self.max_path_length,
self.num_expl_steps_per_train_loop,
discard_incomplete_paths=False,
)
# # gt.stamp('exploration sampling', unique=False)
self.replay_buffer.add_paths(new_expl_paths)
# # gt.stamp('data storing', unique=False)
self.training_mode(True)
for _ in range(self.num_trains_per_train_loop):
train_data = self.replay_buffer.random_batch(
self.batch_size)
self.trainer.train(train_data)
# # gt.stamp('training', unique=False)
self.training_mode(False)
self._end_epoch(epoch)
import abc
from collections import OrderedDict
from typing import Iterable
from torch import nn as nn
from rlkit.core.batch_rl_algorithm import BatchRLAlgorithm
from rlkit.core.online_rl_algorithm import OnlineRLAlgorithm
from rlkit.core.trainer import Trainer
from rlkit.torch.core import np_to_pytorch_batch
class TorchOnlineRLAlgorithm(OnlineRLAlgorithm):
def to(self, device):
for net in self.trainer.networks:
net.to(device)
def training_mode(self, mode):
for net in self.trainer.networks:
net.train(mode)
class TorchBatchRLAlgorithm2(BatchRLAlgorithm2):
def to(self, device):
for net in self.trainer.networks:
net.to(device)
def training_mode(self, mode):
for net in self.trainer.networks:
net.train(mode)
class TorchTrainer(Trainer, metaclass=abc.ABCMeta):
def __init__(self):
self._num_train_steps = 0
def train(self, np_batch):
self._num_train_steps += 1
batch = np_to_pytorch_batch(np_batch)
self.train_from_torch(batch)
def get_diagnostics(self):
return OrderedDict([
('num train calls', self._num_train_steps),
])
@abc.abstractmethod
def train_from_torch(self, batch):
pass
@property
@abc.abstractmethod
def networks(self) -> Iterable[nn.Module]:
pass
def get_snapshot_3():
print('in get_snapshot_3')
ckpt = {}
ckpt = torch.load('results/walker/ckpt.pkl')
# self = copy.deepcopy(ckpt['algorithm'])
epoch = ckpt['epoch']
return epoch, copy.deepcopy(ckpt['algorithm'])
# algorithm = TorchBatchRLAlgorithm2(
# trainer=trainer,
# exploration_env=expl_env,
# evaluation_env=eval_env,
# exploration_data_collector=expl_path_collector,
# evaluation_data_collector=eval_path_collector,
# replay_buffer=replay_buffer,
# **variant['algorithm_kwargs']
# )
# algorithm.get_snapshot_2()
# type(algorithm)
resume = True
resume_from_best = False
algorithm = None
if not resume:
algorithm = TorchBatchRLAlgorithm2(
trainer=trainer,
exploration_env=expl_env,
evaluation_env=eval_env,
exploration_data_collector=expl_path_collector,
evaluation_data_collector=eval_path_collector,
replay_buffer=replay_buffer,
**variant['algorithm_kwargs']
)
algorithm.to(ptu.device)
algorithm.train(initial_epoch=0, epochs=1000)
else:
# algorithm = TorchBatchRLAlgorithm()
# algorithm = TorchBatchRLAlgorithm2(
# trainer=trainer,
# exploration_env=expl_env,
# evaluation_env=eval_env,
# exploration_data_collector=expl_path_collector,
# evaluation_data_collector=eval_path_collector,
# replay_buffer=replay_buffer,
# **variant['algorithm_kwargs']
# )
if not resume_from_best:
initial_epoch, algorithm = get_snapshot_3()
initial_epoch+=1
    else:
        # get_snapshot_best is an instance method that returns only the epoch,
        # so it cannot be called here as a free function; load the best
        # checkpoint directly instead.
        ckpt = torch.load('results/walker/ckpt-best.pkl')
        initial_epoch, algorithm = ckpt['epoch'], copy.deepcopy(ckpt['algorithm'])
        initial_epoch += 1
algorithm.to(ptu.device)
algorithm.train(initial_epoch=initial_epoch, epochs = 1000)
# algorithm = TorchBatchRLAlgorithm2(
# trainer=trainer,
# exploration_env=expl_env,
# evaluation_env=eval_env,
# exploration_data_collector=expl_path_collector,
# evaluation_data_collector=eval_path_collector,
# replay_buffer=replay_buffer,
# **variant['algorithm_kwargs']
# )
# algorithm_ = copy.deepcopy(algorithm)
# epoch, algorithm = get_snapshot_3()
# algorithm.trainer.qf1.fcs[0].weight.data
# algorithm_.trainer.qf1.fcs[0].weight.data
# !rm logs/walker/log2
# !>logs/walker/log1.txt
# !rm results/walker/*.pkl
# (A verbatim copy of one epoch's tabular diagnostics -- replay buffer size,
# QF/policy losses, exploration/evaluation path statistics, and timings --
# was pasted here as a commented-out list `l`.)
# print(ll[0]+'\t'+ll[1])
# print(ll[1])
# import rlkit
# d1 = {}
# i1 = 0
# for line in rlkit.core.tabulate.tabulate(l).split('\n'):
# print(line)
# print(i1)
# print(l[i1][0])
# print(l[i1][1])
# d1[l[i1][0]] = l[i1][1]; i1+=1
# # self.log(line, *args, **kwargs, file_name=file_name)
# d1
# print(len(l))
# !ls logs
# !rm *.pkl logs/*
# !ls | grep pkl
# !cat logs/2020-04-13\ 18\:51\:02.545425.txt
# f = open('temp3', 'a')
# f.write('Hi')
# f.flush()
# import os
# os.getpid()
# !cat temp3
# !rm temp*
# ckpt = torch.load('ckpt.pkl')
# device = torch.device("cuda:0")
# algorithm = ckpt['algorithm']
# initial_epoch
import copy
import torch
dir_ = 'results/walker/'
f1 = dir_ + 'ckpt.pkl'
f2 = dir_ + 'ckpt-best.pkl'
x1 = torch.load(f1)
x2 = torch.load(f2)
x1
x2
x1['algorithm'].trainer.qf1.fc1 == x2['algorithm'].trainer.qf1.fc1
fc1 = x1['algorithm'].trainer.qf1.fcs[0]
fc2 = x2['algorithm'].trainer.qf1.fcs[0]
fc1.weight.data
fc2.weight.data
import torch
# algorithm.train(initial_epoch=initial_epoch, epochs = 2502)
type_ = 'ant'
a = None
with open('results/'+type_+'/tmp3/ckpt.pkl', 'rb') as f:
    a = torch.load(f)
b = None
with open('results/'+type_+'/ckpt.pkl', 'rb') as f:
    b = torch.load(f)
a['algorithm'].trainer.qf1.fcs[0].weight.data
a['epoch']
aa = a['algorithm']
aa2 = copy.deepcopy(aa)
bb = b['algorithm']
# aa.trainer.use_automatic_entropy_tuning
def get_snapshot_3():
    print('in get_snapshot_3')
    ckpt = torch.load('results/walker/ckpt.pkl')
    # self = copy.deepcopy(ckpt['algorithm'])
    epoch = ckpt['epoch']
    return epoch, copy.deepcopy(ckpt['algorithm'])

e, aa3 = get_snapshot_3()
# e
bb == aa
bb.trainer.use_automatic_entropy_tuning
aa.trainer.qf1.fcs[0].weight.data == aa2.trainer.qf1.fcs[0].weight.data
aa3.trainer.qf1.fcs[0].weight.data
aa.trainer.qf1.fcs[0].weight.data
bb.trainer.qf1.fcs[0].weight.data
aa2.trainer.qf1.fcs[0].weight.data
```
| github_jupyter |
# Explore the Gensim implementation
> Mikolov, T., Grave, E., Bojanowski, P., Puhrsch, C., & Joulin, A. (2017). Advances in pre-training distributed word representations. arXiv preprint arXiv:1712.09405.
```
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
import warnings
warnings.filterwarnings("ignore")
from gensim.models import Word2Vec, KeyedVectors
from gensim.test.utils import datapath
wv = KeyedVectors.load_word2vec_format(
    datapath("/Users/flint/Data/word2vec/GoogleNews-vectors-negative300.bin"),
    binary=True)
```
## Similarity
```
pairs = [
    ('car', 'minivan'),   # a minivan is a kind of car
    ('car', 'bicycle'),   # still a wheeled vehicle
    ('car', 'airplane'),  # ok, no wheels, but still a vehicle
    ('car', 'cereal'),    # ... and so on
    ('car', 'communism'),
]
for w1, w2 in pairs:
    print('%r\t%r\t%.2f' % (w1, w2, wv.similarity(w1, w2)))

for x, y in wv.most_similar('car'):
    print(x, y)
vectors = []
for word in ['car', 'minivan', 'bicycle', 'airplane']:
    vectors.append(wv.get_vector(word))
V = np.array(vectors)
v = V.mean(axis=0)
v = v - wv.get_vector('car')
wv.similar_by_vector(v)
```
## Analogy
FRANCE : PARIS = ITALY : ?
PARIS - FRANCE + ITALY
```
wv.most_similar(positive=['King', 'woman'], negative=['man'])
```
## Not matching
```
wv.doesnt_match("school professor apple student".split())
```
## Mean
```
vp = wv['school']
vr = wv['professor']
vx = wv['student']
m = (vp + vr + vx) / 3
wv.similar_by_vector(m)
pairs = [
    ('lecturer', 'school'),
    ('lecturer', 'professor'),
    ('lecturer', 'student'),
    ('lecturer', 'teacher'),
]
for w1, w2 in pairs:
    print('%r\t%r\t%.2f' % (w1, w2, wv.similarity(w1, w2)))
```
## Context
```
wv.most_similar('buy')
wv.similarity('buy', 'money')
```
## Train a custom model
```
import gensim.models
sentences = _ # assume there's one document per line, tokens separated by whitespace
model = gensim.models.Word2Vec(sentences=sentences)
```
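The `sentences = _` placeholder above stands for a list of tokenized documents. Assuming one document per line with whitespace-separated tokens (as the comment says), it could be built like this (the inline string stands in for reading a corpus file):

```python
# Build the `sentences` structure Word2Vec expects: one token list per document.
raw = "the quick brown fox\njumps over the lazy dog\n"  # stand-in for a corpus file's contents
sentences = [line.split() for line in raw.splitlines() if line.strip()]
print(sentences[0])  # ['the', 'quick', 'brown', 'fox']
```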
## Exercise: train a model from wordnet
```
from nltk.corpus import wordnet as wn
import nltk
words = ['cat', 'dog', 'bird', 'fish']
h = lambda s: s.hypernyms()
p = lambda s: s.hyponyms()
def get_pseudo_sentences(word, context=3):
    sentences = []
    for s in wn.synsets(word):
        for lemma in s.lemmas():
            sentences.append([lemma.name(), s.name()])
        for i, j in enumerate(s.closure(h)):
            sentences.append([s.name(), j.name()])
            for lemma in j.lemmas():
                sentences.append([lemma.name(), j.name()])
            if i == context:
                break
        for i, j in enumerate(s.closure(p)):
            sentences.append([j.name(), s.name()])
            for lemma in j.lemmas():
                sentences.append([lemma.name(), j.name()])
            if i == context:
                break
    return sentences
sentences = []
for w in words:
    sentences += get_pseudo_sentences(w)
model = Word2Vec(sentences=sentences, vector_size=100, window=5, min_count=1, workers=4)
model.wv.most_similar('fish')
```
## Update an existing model
```
import pymongo
import nltk
from string import punctuation
import copy
MO = Word2Vec.load('/Users/flint/Playground/MeaningSpread/w2v-global.model')
MO.wv.most_similar('fear')
db = pymongo.MongoClient()['twitter']['tweets']
tweets = list(db.find())
corpus = dict([(tweet['id'], tweet['text']) for tweet in tweets])
nltk_tokenize = lambda text: [x.lower() for x in nltk.word_tokenize(text) if x not in punctuation]
data = [nltk_tokenize(y) for x, y in corpus.items()]
M1 = copy.deepcopy(MO)
M1.train(data, total_examples=MO.corpus_count, epochs=MO.epochs)
M1.wv.most_similar('pandemic')
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#read the file
df = pd.read_json (r'VEEV.json')
#print the head
df
df['t'] = pd.to_datetime(df['t'], unit='s')
df = df.rename(columns={'c': 'Close', 'h': 'High', 'l':'Low', 'o': 'Open', 's': 'Status', 't': 'Date', 'v': 'Volume'})
df.head()
```
# Linear Regression
```
#read the file
df = pd.read_json (r'VEEV.json')
#print the head
df
df['t'] = pd.to_datetime(df['t'], unit='s')
X = df[['t']]
y = df['c'].values.reshape(-1, 1)
print(X.shape, y.shape)
data = X.copy()
data_binary_encoded = pd.get_dummies(data)
data_binary_encoded.head()
from sklearn.model_selection import train_test_split
X = pd.get_dummies(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
X_train.head()
from sklearn.preprocessing import StandardScaler
X_scaler = StandardScaler().fit(X_train)
y_scaler = StandardScaler().fit(y_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
y_train_scaled = y_scaler.transform(y_train)
y_test_scaled = y_scaler.transform(y_test)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train_scaled, y_train_scaled)
plt.scatter(model.predict(X_train_scaled), model.predict(X_train_scaled) - y_train_scaled, c="blue", label="Training Data")
plt.scatter(model.predict(X_test_scaled), model.predict(X_test_scaled) - y_test_scaled, c="orange", label="Testing Data")
plt.legend()
plt.hlines(y=0, xmin=y_test_scaled.min(), xmax=y_test_scaled.max())
plt.title("Residual Plot")
plt.show()
predictions = model.predict(X_test_scaled)
predictions
y_test_scaled
new_data['mon_fri'] = 0
for i in range(0, len(new_data)):
    if (new_data['Dayofweek'][i] == 0 or new_data['Dayofweek'][i] == 4):
        new_data['mon_fri'][i] = 1
    else:
        new_data['mon_fri'][i] = 0
#split into train and validation
train = new_data[:987]
valid = new_data[987:]
x_train = train.drop('Close', axis=1)
y_train = train['Close']
x_valid = valid.drop('Close', axis=1)
y_valid = valid['Close']
#implement linear regression
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x_train,y_train)
#make predictions and find the rmse
preds = model.predict(x_valid)
rms = np.sqrt(np.mean(np.power((np.array(y_valid) - np.array(preds)), 2)))
rms
from sklearn.metrics import mean_squared_error
predictions = model.predict(x_valid)
MSE = mean_squared_error(y_valid, predictions)
r2 = model.score(x_valid, y_valid)
print(f"MSE: {MSE}, R2: {r2}")
#plot
valid['Predictions'] = 0
valid['Predictions'] = preds
valid.index = new_data[987:].index
train.index = new_data[:987].index
plt.plot(train['Close'])
plt.plot(valid[['Close', 'Predictions']])
```
# K-Nearest Neighbours
```
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
#read the file
df = pd.read_json (r'VEEV.json')
#print the head
df
df['t'] = pd.to_datetime(df['t'], unit='s')
df = df.rename(columns={'c': 'Close', 'h': 'High', 'l':'Low', 'o': 'Open', 's': 'Status', 't': 'Date', 'v': 'Volume'})
df.head()
# Predictor variables
df['Open-Close'] = df.Open - df.Close
df['High-Low'] = df.High - df.Low
df = df.dropna()
X = df[['Open-Close', 'High-Low']]
X.head()
# Target variable
Y= np.where(df['Close'].shift(-1)>df['Close'],1,-1)
# Splitting the dataset
split_percentage = 0.7
split = int(split_percentage*len(df))
X_train = X[:split]
Y_train = Y[:split]
X_test = X[split:]
Y_test = Y[split:]
train_scores = []
test_scores = []
for k in range(1, 50, 2):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, Y_train)
    train_score = knn.score(X_train, Y_train)
    test_score = knn.score(X_test, Y_test)
    train_scores.append(train_score)
    test_scores.append(test_score)
    print(f"k: {k}, Train/Test Score: {train_score:.3f}/{test_score:.3f}")
plt.plot(range(1, 50, 2), train_scores, marker='o')
plt.plot(range(1, 50, 2), test_scores, marker="x")
plt.xlabel("k neighbors")
plt.ylabel("Testing accuracy Score")
plt.show()
# Instantiate KNN learning model (k=17)
knn = KNeighborsClassifier(n_neighbors=17)
# fit the model
knn.fit(X_train, Y_train)
# Accuracy Score
accuracy_train = accuracy_score(Y_train, knn.predict(X_train))
accuracy_test = accuracy_score(Y_test, knn.predict(X_test))
print ('Train_data Accuracy: %.2f' %accuracy_train)
print ('Test_data Accuracy: %.2f' %accuracy_test)
len(Y_train)
pred = knn.predict(X_test)
pred
pd.DataFrame({"Prediction": pred, 'Actual': Y_test})
# Predicted Signal
df['Predicted_Signal'] = knn.predict(X)
# SPY Cumulative Returns
df['SPY_returns'] = np.log(df['Close']/df['Close'].shift(1))
Cumulative_SPY_returns = df[split:]['SPY_returns'].cumsum()*100
# Cumulative Strategy Returns
df['Strategy_returns'] = df['SPY_returns'] * df['Predicted_Signal'].shift(1)
Cumulative_Strategy_returns = df[split:]['Strategy_returns'].cumsum()*100
# Plot the results to visualize the performance
plt.figure(figsize=(10,5))
plt.plot(Cumulative_SPY_returns, color='r',label = 'SPY Returns')
plt.plot(Cumulative_Strategy_returns, color='g', label = 'Strategy Returns')
plt.legend()
plt.show()
df
```
What is Sharpe Ratio?
Sharpe ratio is a measure for calculating risk-adjusted return. It is the ratio of the excess expected return of investment (over risk-free rate) per unit of volatility or standard deviation.
Let us look at the formula for the Sharpe ratio, which will make things much clearer. The Sharpe ratio is calculated as follows:
Sharpe Ratio = (Rx – Rf) / StdDev(x)
Where,
x is the investment
Rx is the average rate of return of x
Rf is the risk-free rate of return
StdDev(x) is the standard deviation of Rx
Once you see the formula, you will understand why we deduct the risk-free rate of return: it helps us figure out whether the strategy makes sense at all. If the numerator turned out negative, wouldn't it be better to invest in a government bond that guarantees you a risk-free rate of return? Some of you will recognise this as the risk-adjusted return.
In the denominator, we have the standard deviation of the investment's returns. It captures the volatility, and hence the risk, associated with the investment.
Thus, the Sharpe ratio helps us identify which strategy gives better returns relative to its volatility. That is all there is to the Sharpe ratio calculation.
Let’s take an example now to see how the Sharpe ratio calculation helps us.
You have devised a strategy and created a portfolio of different stocks. After backtesting, you observe that this portfolio, let’s call it Portfolio A, will give a return of 11%. However, you are concerned with the volatility at 8%.
Now, you change certain parameters and pick different financial instruments to create another portfolio, Portfolio B. This portfolio gives an expected return of 8%, but the volatility now drops to 4%.
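The comparison above can be sketched in a few lines of Python (the 3% risk-free rate is an assumed value, used only for illustration):

```python
# Sharpe ratio = (Rx - Rf) / StdDev(x)
def sharpe_ratio(expected_return, risk_free_rate, volatility):
    return (expected_return - risk_free_rate) / volatility

rf = 0.03  # assumed risk-free rate

sharpe_a = sharpe_ratio(0.11, rf, 0.08)  # Portfolio A: 11% return, 8% volatility
sharpe_b = sharpe_ratio(0.08, rf, 0.04)  # Portfolio B: 8% return, 4% volatility
print(round(sharpe_a, 2), round(sharpe_b, 2))  # 1.0 1.25
```

Even though Portfolio B has the lower expected return, its higher Sharpe ratio means it delivers the better risk-adjusted performance.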
```
# Calculate Sharpe ratio
Std = Cumulative_Strategy_returns.std()
Sharpe = (Cumulative_Strategy_returns-Cumulative_SPY_returns)/Std
Sharpe = Sharpe.mean()
print ('Sharpe ratio: %.2f'%Sharpe )
```
We tested many values of k; the lowest Sharpe ratio was obtained for k = 17.
# Auto ARIMA
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import lag_plot
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error
#read the file
df = pd.read_json (r'VEEV.json')
#print the head
df
plt.figure(figsize=(10,10))
lag_plot(df['c'], lag=5)
plt.title('VEEV Autocorrelation plot')
size = len(df)
train_data, test_data = df[0:int(len(df)*0.8)], df[int(len(df)*0.8):]
plt.figure(figsize=(12,7))
plt.title('VEEV Prices')
plt.xlabel('Dates')
plt.ylabel('Prices')
plt.plot(df['c'], 'blue', label='Training Data')
plt.plot(test_data['c'], 'green', label='Testing Data')
plt.xticks(np.arange(0,size, 300), df['t'][0:size:300])
plt.legend()
def smape_kun(y_true, y_pred):
    return np.mean(np.abs(y_pred - y_true) * 200 / (np.abs(y_pred) + np.abs(y_true)))
train_ar = train_data['c'].values
test_ar = test_data['c'].values
history = [x for x in train_ar]
print(type(history))
predictions = list()
for t in range(len(test_ar)):
    model = ARIMA(history, order=(5, 1, 0))
    model_fit = model.fit(disp=0)
    output = model_fit.forecast()
    yhat = output[0]
    predictions.append(yhat)
    obs = test_ar[t]
    history.append(obs)
error = mean_squared_error(test_ar, predictions)
print('Testing Mean Squared Error: %.3f' % error)
error2 = smape_kun(test_ar, predictions)
print('Symmetric mean absolute percentage error: %.3f' % error2)
pd.DataFrame({"Prediction": predictions, 'Actual': test_ar})
plt.figure(figsize=(12,7))
plt.plot(df['c'], color='blue', label='Training Data')
plt.plot(test_data.index, predictions, color='green', marker='o', linestyle='dashed',
         label='Predicted Price')
plt.plot(test_data.index, test_data['c'], color='red', label='Actual Price')
plt.title('VEEV Prices Prediction')
plt.xlabel('Dates')
plt.ylabel('Prices')
plt.xticks(np.arange(0,size, 1300), df['t'][0:size:1300])
plt.legend()
```
# Prophet
```
#read the file
df = pd.read_json (r'VEEV.json')
#print the head
df
df['t'] = pd.to_datetime(df['t'], unit='s')
df = df.rename(columns={'c': 'Close', 'h': 'High', 'l':'Low', 'o': 'Open', 's': 'Status', 't': 'Date', 'v': 'Volume'})
df.head()
size = len(df)
#importing prophet
from fbprophet import Prophet
#creating dataframe
new_data = pd.DataFrame(index=range(0, len(df)), columns=['Date', 'Close'])
for i in range(0, len(df)):
    new_data['Date'][i] = df['Date'][i]
    new_data['Close'][i] = df['Close'][i]
new_data['Date'] = pd.to_datetime(new_data.Date,format='%Y-%m-%d')
new_data.index = new_data['Date']
#preparing data
new_data.rename(columns={'Close': 'y', 'Date': 'ds'}, inplace=True)
new_data[:size]
#train and validation
train = new_data[:size]
valid = new_data[size:]
len(valid)
#fit the model
model = Prophet()
model.fit(train)
#predictions
close_prices = model.make_future_dataframe(periods=size)
forecast = model.predict(close_prices)
close_prices.tail(2)
#rmse
forecast_valid = forecast['yhat'][size:]
print(forecast_valid.shape, valid['y'].shape)
rms=np.sqrt(np.mean(np.power((np.array(new_data['y'])-np.array(forecast_valid)),2)))
rms
new_data['yhat'] = forecast['yhat']
#plot
new_data['Predictions'] = 0
new_data['Predictions'] = forecast_valid.values
plt.plot(train['y'])
plt.plot(new_data[['y', 'Predictions']])
pd.DataFrame({"Prediction": new_data['Predictions'], "Actual": new_data['y']})
```
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Gradient Boosted Trees: Model understanding
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/estimators/boosted_trees_model_understanding"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/estimators/boosted_trees_model_understanding.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/tree/master/site/en/tutorials/estimators/boosted_trees_model_understanding.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
For an end-to-end walkthrough of training a Gradient Boosting model check out the [boosted trees tutorial](https://www.tensorflow.org/tutorials/boosted_trees). In this tutorial you will:
* Learn how to interpret a Boosted Tree model both *locally* and *globally*
* Gain intuition for how a Boosted Trees model fits a dataset
## How to interpret Boosted Trees models both locally and globally
Local interpretability refers to an understanding of a model’s predictions at the individual example level, while global interpretability refers to an understanding of the model as a whole. Such techniques can help machine learning (ML) practitioners detect bias and bugs during the model development stage.
For local interpretability, you will learn how to create and visualize per-instance contributions. To distinguish this from feature importances, we refer to these values as directional feature contributions (DFCs).
For global interpretability you will retrieve and visualize gain-based feature importances, [permutation feature importances](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) and also show aggregated DFCs.
## Load the titanic dataset
You will be using the titanic dataset, where the (rather morbid) goal is to predict passenger survival, given characteristics such as gender, age, class, etc.
```
!pip install tf-nightly # Requires tf 1.13
from __future__ import absolute_import, division, print_function
import numpy as np
import pandas as pd
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
tf.set_random_seed(123)
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tfbt/titanic_train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tfbt/titanic_eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
```
For a description of the features, please review the prior tutorial.
## Create feature columns, input_fn, and the train the estimator
### Preprocess the data
Create the feature columns, using the original numeric columns as is and one-hot-encoding categorical variables.
```
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
    return fc.indicator_column(
        fc.categorical_column_with_vocabulary_list(feature_name, vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
    # Need to one-hot encode categorical features.
    vocabulary = dftrain[feature_name].unique()
    feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
    feature_columns.append(fc.numeric_column(feature_name, dtype=tf.float32))
```
### Build the input pipeline
Create the input functions using the `from_tensor_slices` method in the [`tf.data`](https://www.tensorflow.org/api_docs/python/tf/data) API to read in data directly from Pandas.
```
# Use entire batch since this is such a small dataset.
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
    def input_fn():
        dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
        if shuffle:
            dataset = dataset.shuffle(NUM_EXAMPLES)
        # For training, cycle through the dataset as many times as needed (n_epochs=None).
        dataset = (dataset
                   .repeat(n_epochs)
                   .batch(NUM_EXAMPLES))
        return dataset
    return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
```
### Train the model
```
params = {
    'n_trees': 50,
    'max_depth': 3,
    'n_batches_per_layer': 1,
    # You must enable center_bias = True to get DFCs. This will force the model to
    # make an initial prediction before using any features (e.g. use the mean of
    # the training labels for regression or log odds for classification when
    # using cross entropy loss).
    'center_bias': True
}
est = tf.estimator.BoostedTreesClassifier(feature_columns, **params)
est.train(train_input_fn, max_steps=100)
results = est.evaluate(eval_input_fn)
pd.Series(results).to_frame()
```
For performance reasons, when your data fits in memory, we recommend using the `boosted_trees_classifier_train_in_memory` function. However, if training time is not a concern, or if you have a very large dataset and want to do distributed training, use the `tf.estimator.BoostedTrees` API shown above.
When using this method, you should not batch your input data, as the method operates on the entire dataset.
```
in_memory_params = dict(params)
del in_memory_params['n_batches_per_layer']
# In-memory input_fn does not use batching.
def make_inmemory_train_input_fn(X, y):
    def input_fn():
        return dict(X), y
    return input_fn
train_input_fn = make_inmemory_train_input_fn(dftrain, y_train)
# Train the model.
est = tf.contrib.estimator.boosted_trees_classifier_train_in_memory(
    train_input_fn,
    feature_columns,
    **in_memory_params)
print(est.evaluate(eval_input_fn))
```
## Model interpretation and plotting
```
import matplotlib.pyplot as plt
import seaborn as sns
sns_colors = sns.color_palette('colorblind')
```
## Local interpretability
Next you will output the directional feature contributions (DFCs) to explain individual predictions using the approach outlined in [Palczewska et al](https://arxiv.org/pdf/1312.1121.pdf) and by Saabas in [Interpreting Random Forests](http://blog.datadive.net/interpreting-random-forests/) (this method is also available in scikit-learn for Random Forests in the [`treeinterpreter`](https://github.com/andosa/treeinterpreter) package). The DFCs are generated with:
`pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))`
(Note: The method is named experimental as we may modify the API before dropping the experimental prefix.)
```
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))
# Create DFC Pandas dataframe.
labels = y_eval.values
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
df_dfc.describe().T
```
A nice property of DFCs is that the sum of the contributions + the bias is equal to the prediction for a given example.
```
# Sum of DFCs + bias == probability.
bias = pred_dicts[0]['bias']
dfc_prob = df_dfc.sum(axis=1) + bias
np.testing.assert_almost_equal(dfc_prob.values, probs.values)
```
Plot DFCs for an individual passenger.
```
# Plot results.
ID = 182
example = df_dfc.iloc[ID] # Choose ith example from evaluation set.
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index
ax = example[sorted_ix].plot(kind='barh', color=sns_colors[3])
ax.grid(False, axis='y')
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability');
```
The larger magnitude contributions have a larger impact on the model's prediction. Negative contributions indicate the feature value for this given example reduced the model's prediction, while positive values contribute an increase in the prediction.
### Improved plotting
Let's make the plot nice by color coding based on the contributions' directionality and add the feature values on figure.
```
# Boilerplate code for plotting :)
def _get_color(value):
    """To make positive DFCs plot green, negative DFCs plot red."""
    green, red = sns.color_palette()[2:4]
    if value >= 0:
        return green
    return red

def _add_feature_values(feature_values, ax):
    """Display feature's values on left of plot."""
    x_coord = ax.get_xlim()[0]
    OFFSET = 0.15
    for y_coord, (feat_name, feat_val) in enumerate(feature_values.items()):
        t = plt.text(x_coord, y_coord - OFFSET, '{}'.format(feat_val), size=12)
        t.set_bbox(dict(facecolor='white', alpha=0.5))
    from matplotlib.font_manager import FontProperties
    font = FontProperties()
    font.set_weight('bold')
    t = plt.text(x_coord, y_coord + 1 - OFFSET, 'feature\nvalue',
                 fontproperties=font, size=12)
def plot_example(example):
    TOP_N = 8  # View top 8 features.
    sorted_ix = example.abs().sort_values()[-TOP_N:].index  # Sort by magnitude.
    example = example[sorted_ix]
    colors = example.map(_get_color).tolist()
    ax = example.to_frame().plot(kind='barh',
                                 color=[colors],
                                 legend=None,
                                 alpha=0.75,
                                 figsize=(10, 6))
    ax.grid(False, axis='y')
    ax.set_yticklabels(ax.get_yticklabels(), size=14)
    # Add feature values.
    _add_feature_values(dfeval.iloc[ID][sorted_ix], ax)
    return ax
```
Plot example.
```
example = df_dfc.iloc[ID] # Choose IDth example from evaluation set.
ax = plot_example(example)
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability', size=14);
```
You can also plot the example's DFCs and compare them with the entire distribution using a violin plot.
```
# Boilerplate plotting code.
def dist_violin_plot(df_dfc, ID):
    # Initialize plot.
    fig, ax = plt.subplots(1, 1, figsize=(10, 6))
    # Create example dataframe.
    TOP_N = 8  # View top 8 features.
    example = df_dfc.iloc[ID]
    ix = example.abs().sort_values()[-TOP_N:].index
    example = example[ix]
    example_df = example.to_frame(name='dfc')
    # Add contributions of entire distribution.
    parts = ax.violinplot([df_dfc[w] for w in ix],
                          vert=False,
                          showextrema=False,
                          widths=0.7,
                          positions=np.arange(len(ix)))
    face_color = sns_colors[0]
    alpha = 0.15
    for pc in parts['bodies']:
        pc.set_facecolor(face_color)
        pc.set_alpha(alpha)
    # Add feature values.
    _add_feature_values(dfeval.iloc[ID][ix], ax)
    # Add local contributions.
    ax.scatter(example,
               np.arange(example.shape[0]),
               color=sns.color_palette()[2],
               s=100,
               marker="s",
               label='contributions for example')
    # Legend.
    # Proxy plot, to show violinplot dist on legend.
    ax.plot([0, 0], [1, 1], label='eval set contributions\ndistributions',
            color=face_color, alpha=alpha, linewidth=10)
    legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large',
                       frameon=True)
    legend.get_frame().set_facecolor('white')
    # Format plot.
    ax.set_yticks(np.arange(example.shape[0]))
    ax.set_yticklabels(example.index)
    ax.grid(False, axis='y')
    ax.set_xlabel('Contribution to predicted probability', size=14)
```
Plot this example.
```
dist_violin_plot(df_dfc, ID)
plt.title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]));
```
Finally, third-party tools, such as [LIME](https://github.com/marcotcr/lime) and [shap](https://github.com/slundberg/shap), can also help understand individual predictions for a model.
## Global feature importances
Additionally, you might want to understand the model as a whole, rather than studying individual predictions. Below, you will compute and use:
1. Gain-based feature importances using `est.experimental_feature_importances`
2. Permutation importances
3. Aggregate DFCs using `est.experimental_predict_with_explanations`
Gain-based feature importances measure the loss change when splitting on a particular feature, while permutation feature importances are computed by evaluating model performance on the evaluation set by shuffling each feature one-by-one and attributing the change in model performance to the shuffled feature.
In general, permutation feature importances are preferred to gain-based feature importances, though both methods can be unreliable in situations where potential predictor variables vary in their scale of measurement or their number of categories, and when features are correlated ([source](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-9-307)). Check out [this article](http://explained.ai/rf-importance/index.html) for an in-depth overview and a great discussion on different feature importance types.
### 1. Gain-based feature importances
Gain-based feature importances are built into the TensorFlow Boosted Trees estimators using `est.experimental_feature_importances`.
```
importances = est.experimental_feature_importances(normalize=True)
df_imp = pd.Series(importances)
# Visualize importances.
N = 8
ax = (df_imp.iloc[0:N][::-1]
      .plot(kind='barh',
            color=sns_colors[0],
            title='Gain feature importances',
            figsize=(10, 6)))
ax.grid(False, axis='y')
```
### 2. Average absolute DFCs
You can also average the absolute values of DFCs to understand impact at a global level.
```
# Plot.
dfc_mean = df_dfc.abs().mean()
N = 8
sorted_ix = dfc_mean.abs().sort_values()[-N:].index  # Average and sort by absolute value.
ax = dfc_mean[sorted_ix].plot(kind='barh',
                              color=sns_colors[1],
                              title='Mean |directional feature contributions|',
                              figsize=(10, 6))
ax.grid(False, axis='y')
```
You can also see how DFCs vary as a feature value varies.
```
FEATURE = 'fare'
feature = pd.Series(df_dfc[FEATURE].values, index=dfeval[FEATURE].values).sort_index()
ax = sns.regplot(x=feature.index.values, y=feature.values, lowess=True);  # keyword args required by newer seaborn
ax.set_ylabel('contribution')
ax.set_xlabel(FEATURE);
ax.set_xlim(0, 100);
```
### 3. Permutation feature importance
```
def permutation_importances(est, X_eval, y_eval, metric, features):
    """Column by column, shuffle values and observe effect on eval set.

    source: http://explained.ai/rf-importance/index.html
    A similar approach can be done during training. See "Drop-column importance"
    in the above article."""
    baseline = metric(est, X_eval, y_eval)
    imp = []
    for col in features:
        save = X_eval[col].copy()
        X_eval[col] = np.random.permutation(X_eval[col])
        m = metric(est, X_eval, y_eval)
        X_eval[col] = save
        imp.append(baseline - m)
    return np.array(imp)
def accuracy_metric(est, X, y):
    """TensorFlow estimator accuracy."""
    eval_input_fn = make_input_fn(X,
                                  y=y,
                                  shuffle=False,
                                  n_epochs=1)
    return est.evaluate(input_fn=eval_input_fn)['accuracy']
features = CATEGORICAL_COLUMNS + NUMERIC_COLUMNS
importances = permutation_importances(est, dfeval, y_eval, accuracy_metric,
                                      features)
df_imp = pd.Series(importances, index=features)
sorted_ix = df_imp.abs().sort_values().index
ax = df_imp[sorted_ix][-5:].plot(kind='barh', color=sns_colors[2], figsize=(10, 6))
ax.grid(False, axis='y')
ax.set_title('Permutation feature importance');
```
# Visualizing model fitting
Let's first simulate training data using the following formula:
$z = x e^{-x^2 - y^2}$
Where $z$ is the dependent variable you are trying to predict and $x$ and $y$ are the features.
```
from numpy.random import uniform, seed
from matplotlib.mlab import griddata  # note: removed in Matplotlib 3.1; requires an older Matplotlib
# Create fake data
seed(0)
npts = 5000
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
# Prep data for training.
df = pd.DataFrame({'x': x, 'y': y, 'z': z})
xi = np.linspace(-2.0, 2.0, 200)
yi = np.linspace(-2.1, 2.1, 210)
xi, yi = np.meshgrid(xi, yi)

df_predict = pd.DataFrame({
    'x': xi.flatten(),
    'y': yi.flatten(),
})
predict_shape = xi.shape

def plot_contour(x, y, z, **kwargs):
    # Grid the data.
    plt.figure(figsize=(10, 8))
    # Contour the gridded data, plotting dots at the nonuniform data points.
    CS = plt.contour(x, y, z, 15, linewidths=0.5, colors='k')
    CS = plt.contourf(x, y, z, 15,
                      vmax=abs(zi).max(), vmin=-abs(zi).max(), cmap='RdBu_r')
    plt.colorbar()  # Draw colorbar.
    # Plot data points.
    plt.xlim(-2, 2)
    plt.ylim(-2, 2)
```
You can visualize the function. Redder colors correspond to larger function values.
```
zi = griddata(x, y, z, xi, yi, interp='linear')
plot_contour(xi, yi, zi)
plt.scatter(df.x, df.y, marker='.')
plt.title('Contour on training data');
fc = [tf.feature_column.numeric_column('x'),
tf.feature_column.numeric_column('y')]
def predict(est):
    """Predictions from a given estimator."""
    predict_input_fn = lambda: tf.data.Dataset.from_tensors(dict(df_predict))
    preds = np.array([p['predictions'][0] for p in est.predict(predict_input_fn)])
    return preds.reshape(predict_shape)
```
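Note: `matplotlib.mlab.griddata` was removed in Matplotlib 3.1, so the cell above only runs on older Matplotlib versions. A sketch of an equivalent interpolation with `scipy.interpolate.griddata` (assumes SciPy is installed; variable names mirror the cell above):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 5000)
y = rng.uniform(-2, 2, 5000)
z = x * np.exp(-x**2 - y**2)

xi = np.linspace(-2.0, 2.0, 200)
yi = np.linspace(-2.1, 2.1, 210)
Xi, Yi = np.meshgrid(xi, yi)

# Interpolate the scattered (x, y, z) samples onto the regular grid.
Zi = griddata((x, y), z, (Xi, Yi), method='linear')
print(Zi.shape)  # (210, 200)
```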
First let's try to fit a linear model to the data.
```
train_input_fn = make_input_fn(df, df.z)
est = tf.estimator.LinearRegressor(fc)
est.train(train_input_fn, max_steps=500);
plot_contour(xi, yi, predict(est))
```
It's not a very good fit. Next let's try to fit a GBDT model to it and try to understand how the model fits the function.
```
def create_bt_est(n_trees):
    return tf.estimator.BoostedTreesRegressor(fc,
                                              n_batches_per_layer=1,
                                              n_trees=n_trees)

N_TREES = [1, 2, 3, 4, 10, 20, 50, 100]
for n in N_TREES:
    est = create_bt_est(n)
    est.train(train_input_fn, max_steps=500)
    plot_contour(xi, yi, predict(est))
    plt.text(-1.8, 2.1, '# trees: {}'.format(n), color='w', backgroundcolor='black', size=20);
```
As you increase the number of trees, the model's predictions better approximate the underlying function.
## Conclusion
In this tutorial you learned how to interpret Boosted Trees models using directional feature contributions and feature importance techniques. These techniques provide insight into how the features impact a model's predictions. Finally, you also gained intuition for how a Boosted Trees model fits a complex function by viewing the decision surface for several models.
##### Copyright 2021 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Parametrized Quantum Circuits for Reinforcement Learning
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/quantum_reinforcement_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/quantum_reinforcement_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/quantum_reinforcement_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/quantum_reinforcement_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Quantum computers have been shown to provide computational advantages in certain problem areas. The field of quantum reinforcement learning (QRL) aims to harness this boost by designing RL agents that rely on quantum models of computation.
In this tutorial, you will implement two reinforcement learning algorithms based on parametrized/variational quantum circuits (PQCs or VQCs), namely a policy-gradient and a deep Q-learning implementation. These algorithms were introduced by [[1] Jerbi et al.](https://arxiv.org/abs/2103.05577) and [[2] Skolik et al.](https://arxiv.org/abs/2103.15084), respectively.
You will implement a PQC with data re-uploading in TFQ, and use it as:
1. an RL policy trained with a policy-gradient method,
2. a Q-function approximator trained with deep Q-learning,
each solving [CartPole-v1](http://gym.openai.com/envs/CartPole-v1/), a benchmarking task from OpenAI Gym. Note that, as showcased in [[1]](https://arxiv.org/abs/2103.05577) and [[2]](https://arxiv.org/abs/2103.15084), these agents can also be used to solve other task environments from OpenAI Gym, such as [FrozenLake-v0](http://gym.openai.com/envs/FrozenLake-v0/), [MountainCar-v0](http://gym.openai.com/envs/MountainCar-v0/) or [Acrobot-v1](http://gym.openai.com/envs/Acrobot-v1/).
Features of this implementation:
- you will learn how to use a `tfq.layers.ControlledPQC` to implement a PQC with data re-uploading, appearing in many applications of QML. This implementation also naturally allows using trainable scaling parameters at the input of the PQC, to increase its expressivity,
- you will learn how to implement observables with trainable weights at the output of a PQC, to allow a flexible range of output values,
- you will learn how a `tf.keras.Model` can be trained with non-trivial ML loss functions, i.e., that are not compatible with `model.compile` and `model.fit`, using a `tf.GradientTape`.
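As a minimal sketch of that last point (assuming only that TensorFlow is installed), `tf.GradientTape` records computations so that the gradient of an arbitrary, hand-written loss can be taken afterwards:

```python
import tensorflow as tf

w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    loss = w ** 2  # any custom loss, not tied to model.compile / model.fit
grad = tape.gradient(loss, w)  # d(w^2)/dw = 2w
print(grad.numpy())  # 6.0
```

The RL training loops below use exactly this pattern, just with a PQC-based model and a policy-gradient or Q-learning loss.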
## Setup
Install TensorFlow:
```
!pip install tensorflow==2.4.1
```
Install TensorFlow Quantum:
```
!pip install tensorflow-quantum
```
Install Gym:
```
!pip install gym==0.18.0
```
Now import TensorFlow and the module dependencies:
```
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
import tensorflow as tf
import tensorflow_quantum as tfq
import gym, cirq, sympy
import numpy as np
from functools import reduce
from collections import deque, defaultdict
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
tf.get_logger().setLevel('ERROR')
```
## 1. Build a PQC with data re-uploading
At the core of both RL algorithms you are implementing is a PQC that takes as input the agent's state $s$ in the environment (i.e., a numpy array) and outputs a vector of expectation values. These expectation values are then post-processed, either to produce an agent's policy $\pi(a|s)$ or approximate Q-values $Q(s,a)$. In this way, the PQCs are playing an analog role to that of deep neural networks in modern deep RL algorithms.
A popular way to encode an input vector in a PQC is through the use of single-qubit rotations, where rotation angles are controlled by the components of this input vector. In order to get a [highly-expressive model](https://arxiv.org/abs/2008.08605), these single-qubit encodings are not performed only once in the PQC, but in several "[re-uploadings](https://quantum-journal.org/papers/q-2020-02-06-226/)", interlayed with variational gates. The layout of such a PQC is depicted below:
<img src="./images/pqc_re-uploading.png" width="700">
As discussed in [[1]](https://arxiv.org/abs/2103.05577) and [[2]](https://arxiv.org/abs/2103.15084), a way to further enhance the expressivity and trainability of data re-uploading PQCs is to use trainable input-scaling parameters $\boldsymbol{\lambda}$ for each encoding gate of the PQC, and trainable observable weights $\boldsymbol{w}$ at its output.
### 1.1 Cirq circuit for ControlledPQC
The first step is to implement in Cirq the quantum circuit to be used as the PQC. For this, start by defining basic unitaries to be applied in the circuits, namely an arbitrary single-qubit rotation and an entangling layer of CZ gates:
```
def one_qubit_rotation(qubit, symbols):
    """
    Returns Cirq gates that apply a rotation of the bloch sphere about the X,
    Y and Z axis, specified by the values in `symbols`.
    """
    return [cirq.rx(symbols[0])(qubit),
            cirq.ry(symbols[1])(qubit),
            cirq.rz(symbols[2])(qubit)]

def entangling_layer(qubits):
    """
    Returns a layer of CZ entangling gates on `qubits` (arranged in a circular topology).
    """
    cz_ops = [cirq.CZ(q0, q1) for q0, q1 in zip(qubits, qubits[1:])]
    cz_ops += ([cirq.CZ(qubits[0], qubits[-1])] if len(qubits) != 2 else [])
    return cz_ops
```
Now, use these functions to generate the Cirq circuit:
```
def generate_circuit(qubits, n_layers):
    """Prepares a data re-uploading circuit on `qubits` with `n_layers` layers."""
    # Number of qubits
    n_qubits = len(qubits)

    # Sympy symbols for variational angles
    params = sympy.symbols(f'theta(0:{3*(n_layers+1)*n_qubits})')
    params = np.asarray(params).reshape((n_layers + 1, n_qubits, 3))

    # Sympy symbols for encoding angles
    inputs = sympy.symbols(f'x(0:{n_layers})'+f'_(0:{n_qubits})')
    inputs = np.asarray(inputs).reshape((n_layers, n_qubits))

    # Define circuit
    circuit = cirq.Circuit()
    for l in range(n_layers):
        # Variational layer
        circuit += cirq.Circuit(one_qubit_rotation(q, params[l, i]) for i, q in enumerate(qubits))
        circuit += entangling_layer(qubits)
        # Encoding layer
        circuit += cirq.Circuit(cirq.rx(inputs[l, i])(q) for i, q in enumerate(qubits))

    # Last variational layer
    circuit += cirq.Circuit(one_qubit_rotation(q, params[n_layers, i]) for i, q in enumerate(qubits))

    return circuit, list(params.flat), list(inputs.flat)
```
Check that this produces a circuit that alternates between variational and encoding layers.
```
n_qubits, n_layers = 3, 1
qubits = cirq.GridQubit.rect(1, n_qubits)
circuit, _, _ = generate_circuit(qubits, n_layers)
SVGCircuit(circuit)
```
### 1.2 ReUploadingPQC layer using ControlledPQC
To construct the re-uploading PQC from the figure above, you can create a custom Keras layer. This layer will manage the trainable parameters (variational angles $\boldsymbol{\theta}$ and input-scaling parameters $\boldsymbol{\lambda}$) and resolve the input values (input state $s$) into the appropriate symbols in the circuit.
```
class ReUploadingPQC(tf.keras.layers.Layer):
    """
    Performs the transformation (s_1, ..., s_d) -> (theta_1, ..., theta_N, lmbd[1][1]s_1, ..., lmbd[1][M]s_1,
        ......., lmbd[d][1]s_d, ..., lmbd[d][M]s_d) for d=input_dim, N=theta_dim and M=n_layers.
    An activation function from tf.keras.activations, specified by `activation` ('linear' by default) is
    then applied to all lmbd[i][j]s_i.
    All angles are finally permuted to follow the alphabetical order of their symbol names, as processed
    by the ControlledPQC.
    """

    def __init__(self, qubits, n_layers, observables, activation="linear", name="re-uploading_PQC"):
        super(ReUploadingPQC, self).__init__(name=name)
        self.n_layers = n_layers
        self.n_qubits = len(qubits)

        circuit, theta_symbols, input_symbols = generate_circuit(qubits, n_layers)

        theta_init = tf.random_uniform_initializer(minval=0.0, maxval=np.pi)
        self.theta = tf.Variable(
            initial_value=theta_init(shape=(1, len(theta_symbols)), dtype="float32"),
            trainable=True, name="thetas"
        )

        lmbd_init = tf.ones(shape=(self.n_qubits * self.n_layers,))
        self.lmbd = tf.Variable(
            initial_value=lmbd_init, dtype="float32", trainable=True, name="lambdas"
        )

        # Define explicit symbol order.
        symbols = [str(symb) for symb in theta_symbols + input_symbols]
        self.indices = tf.constant([symbols.index(a) for a in sorted(symbols)])

        self.activation = activation
        self.empty_circuit = tfq.convert_to_tensor([cirq.Circuit()])
        self.computation_layer = tfq.layers.ControlledPQC(circuit, observables)

    def call(self, inputs):
        # inputs[0] = encoding data for the state.
        batch_dim = tf.gather(tf.shape(inputs[0]), 0)
        tiled_up_circuits = tf.repeat(self.empty_circuit, repeats=batch_dim)
        tiled_up_thetas = tf.tile(self.theta, multiples=[batch_dim, 1])
        tiled_up_inputs = tf.tile(inputs[0], multiples=[1, self.n_layers])
        scaled_inputs = tf.einsum("i,ji->ji", self.lmbd, tiled_up_inputs)
        squashed_inputs = tf.keras.layers.Activation(self.activation)(scaled_inputs)

        joined_vars = tf.concat([tiled_up_thetas, squashed_inputs], axis=1)
        joined_vars = tf.gather(joined_vars, self.indices, axis=1)

        return self.computation_layer([tiled_up_circuits, joined_vars])
```
## 2. Policy-gradient RL with PQC policies
In this section, you will implement the policy-gradient algorithm presented in <a href="https://arxiv.org/abs/2103.05577" class="external">[1]</a>. For this, you will start by constructing, out of the PQC that was just defined, the `softmax-VQC` policy (where VQC stands for variational quantum circuit):
$$ \pi_\theta(a|s) = \frac{e^{\beta \langle O_a \rangle_{s,\theta}}}{\sum_{a'} e^{\beta \langle O_{a'} \rangle_{s,\theta}}} $$
where $\langle O_a \rangle_{s,\theta}$ are expectation values of observables $O_a$ (one per action) measured at the output of the PQC, and $\beta$ is a tunable inverse-temperature parameter.
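As a small numeric sketch of this formula (the expectation values below are made-up numbers, not outputs of a PQC), the softmax turns one expectation value per action into a probability distribution:

```python
import numpy as np

def softmax_vqc(expectations, beta=1.0):
    """pi(a|s) = exp(beta * <O_a>) / sum_a' exp(beta * <O_a'>)."""
    logits = beta * np.asarray(expectations, dtype=float)
    logits -= logits.max()  # subtract the max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Hypothetical expectation values <O_a> for two actions.
print(softmax_vqc([0.3, -0.3], beta=1.0))
```

A larger $\beta$ makes the policy greedier, while $\beta \to 0$ makes it uniform over actions.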
You can adopt the same observables used in <a href="https://arxiv.org/abs/2103.05577" class="external">[1]</a> for CartPole, namely a global $Z_0Z_1Z_2Z_3$ Pauli product acting on all qubits, weighted by an action-specific weight for each action. To implement the weighting of the Pauli product, you can use an extra `tf.keras.layers.Layer` that stores the action-specific weights and applies them multiplicatively on the expectation value $\langle Z_0Z_1Z_2Z_3 \rangle_{s,\theta}$.
```
class Alternating(tf.keras.layers.Layer):
    def __init__(self, output_dim):
        super(Alternating, self).__init__()
        self.w = tf.Variable(
            initial_value=tf.constant([[(-1.)**i for i in range(output_dim)]]), dtype="float32",
            trainable=True, name="obs-weights")

    def call(self, inputs):
        return tf.matmul(inputs, self.w)
```
Prepare the definition of your PQC:
```
n_qubits = 4 # Dimension of the state vectors in CartPole
n_layers = 5 # Number of layers in the PQC
n_actions = 2 # Number of actions in CartPole
qubits = cirq.GridQubit.rect(1, n_qubits)
```
and its observables:
```
ops = [cirq.Z(q) for q in qubits]
observables = [reduce((lambda x, y: x * y), ops)] # Z_0*Z_1*Z_2*Z_3
```
With this, define a `tf.keras.Model` that applies, sequentially, the `ReUploadingPQC` layer previously defined, followed by a post-processing layer that computes the weighted observables using `Alternating`, which are then fed into a `tf.keras.layers.Softmax` layer that outputs the `softmax-VQC` policy of the agent.
```
def generate_model_policy(qubits, n_layers, n_actions, beta, observables):
    """Generates a Keras model for a data re-uploading PQC policy."""
    input_tensor = tf.keras.Input(shape=(len(qubits), ), dtype=tf.dtypes.float32, name='input')
    re_uploading_pqc = ReUploadingPQC(qubits, n_layers, observables)([input_tensor])
    process = tf.keras.Sequential([
        Alternating(n_actions),
        tf.keras.layers.Lambda(lambda x: x * beta),
        tf.keras.layers.Softmax()
    ], name="observables-policy")
    policy = process(re_uploading_pqc)
    model = tf.keras.Model(inputs=[input_tensor], outputs=policy)
    return model

model = generate_model_policy(qubits, n_layers, n_actions, 1.0, observables)
tf.keras.utils.plot_model(model, show_shapes=True, dpi=70)
```
You can now train the PQC policy on CartPole-v1, using, e.g., the basic `REINFORCE` algorithm (see Alg. 1 in <a href="https://arxiv.org/abs/2103.05577" class="external">[1]</a>). Pay attention to the following points:
1. Because the scaling parameters, variational angles and observable weights are trained with different learning rates, it is convenient to define 3 separate optimizers, each with its own learning rate, updating one of these groups of parameters.
2. The loss function in policy-gradient RL is
$$ \mathcal{L}(\theta) = -\frac{1}{|\mathcal{B}|}\sum_{s_0,a_0,r_1,s_1,a_1, \ldots \in \mathcal{B}} \left(\sum_{t=0}^{H-1} \log(\pi_\theta(a_t|s_t)) \sum_{t'=1}^{H-t} \gamma^{t'} r_{t+t'} \right)$$
for a batch $\mathcal{B}$ of episodes $(s_0,a_0,r_1,s_1,a_1, \ldots)$ of interactions in the environment following the policy $\pi_\theta$. This is different from a supervised learning loss with fixed target values that the model should fit, which makes it impossible to use a simple function call like `model.fit` to train the policy. Instead, using a `tf.GradientTape` allows you to keep track of the computations involving the PQC (i.e., policy sampling) and store their contributions to the loss during the interaction. After running a batch of episodes, you can then apply backpropagation on these computations to get the gradients of the loss with respect to the PQC parameters and use the optimizers to update the policy model.
Start by defining a function that gathers episodes of interaction with the environment:
```
def gather_episodes(state_bounds, n_actions, model, n_episodes, env_name):
    """Interact with environment in batched fashion."""
    trajectories = [defaultdict(list) for _ in range(n_episodes)]
    envs = [gym.make(env_name) for _ in range(n_episodes)]

    done = [False for _ in range(n_episodes)]
    states = [e.reset() for e in envs]

    while not all(done):
        unfinished_ids = [i for i in range(n_episodes) if not done[i]]
        normalized_states = [s/state_bounds for i, s in enumerate(states) if not done[i]]

        for i, state in zip(unfinished_ids, normalized_states):
            trajectories[i]['states'].append(state)

        # Compute policy for all unfinished envs in parallel
        states = tf.convert_to_tensor(normalized_states)
        action_probs = model([states])

        # Store action and transition all environments to the next state
        states = [None for i in range(n_episodes)]
        for i, policy in zip(unfinished_ids, action_probs.numpy()):
            action = np.random.choice(n_actions, p=policy)
            states[i], reward, done[i], _ = envs[i].step(action)
            trajectories[i]['actions'].append(action)
            trajectories[i]['rewards'].append(reward)

    return trajectories
```
and a function that computes discounted returns $\sum_{t'=1}^{H-t} \gamma^{t'} r_{t+t'}$ out of the rewards $r_t$ collected in an episode:
```
def compute_returns(rewards_history, gamma):
    """Compute discounted returns with discount factor `gamma`."""
    returns = []
    discounted_sum = 0
    for r in rewards_history[::-1]:
        discounted_sum = r + gamma * discounted_sum
        returns.insert(0, discounted_sum)

    # Normalize them for faster and more stable learning
    returns = np.array(returns)
    returns = (returns - np.mean(returns)) / (np.std(returns) + 1e-8)
    returns = returns.tolist()

    return returns
```
Define the hyperparameters:
```
state_bounds = np.array([2.4, 2.5, 0.21, 2.5])
gamma = 1
batch_size = 10
n_episodes = 1000
```
Prepare the optimizers:
```
optimizer_in = tf.keras.optimizers.Adam(learning_rate=0.1, amsgrad=True)
optimizer_var = tf.keras.optimizers.Adam(learning_rate=0.01, amsgrad=True)
optimizer_out = tf.keras.optimizers.Adam(learning_rate=0.1, amsgrad=True)
# Assign the model parameters to each optimizer
w_in, w_var, w_out = 1, 0, 2
```
Implement a function that updates the policy using states, actions and returns:
```
@tf.function
def reinforce_update(states, actions, returns, model):
    states = tf.convert_to_tensor(states)
    actions = tf.convert_to_tensor(actions)
    returns = tf.convert_to_tensor(returns)

    with tf.GradientTape() as tape:
        tape.watch(model.trainable_variables)
        logits = model(states)
        p_actions = tf.gather_nd(logits, actions)
        log_probs = tf.math.log(p_actions)
        loss = tf.math.reduce_sum(-log_probs * returns) / batch_size
    grads = tape.gradient(loss, model.trainable_variables)
    for optimizer, w in zip([optimizer_in, optimizer_var, optimizer_out], [w_in, w_var, w_out]):
        optimizer.apply_gradients([(grads[w], model.trainable_variables[w])])
```
Now implement the main training loop of the agent.
Note: This agent may need to simulate several million quantum circuits and can take as much as ~20 minutes to finish training.
```
env_name = "CartPole-v1"
# Start training the agent
episode_reward_history = []
for batch in range(n_episodes // batch_size):
    # Gather episodes
    episodes = gather_episodes(state_bounds, n_actions, model, batch_size, env_name)

    # Group states, actions and returns in numpy arrays
    states = np.concatenate([ep['states'] for ep in episodes])
    actions = np.concatenate([ep['actions'] for ep in episodes])
    rewards = [ep['rewards'] for ep in episodes]
    returns = np.concatenate([compute_returns(ep_rwds, gamma) for ep_rwds in rewards])
    returns = np.array(returns, dtype=np.float32)

    id_action_pairs = np.array([[i, a] for i, a in enumerate(actions)])

    # Update model parameters.
    reinforce_update(states, id_action_pairs, returns, model)

    # Store collected rewards
    for ep_rwds in rewards:
        episode_reward_history.append(np.sum(ep_rwds))

    avg_rewards = np.mean(episode_reward_history[-10:])

    print('Finished episode', (batch + 1) * batch_size,
          'Average rewards: ', avg_rewards)

    if avg_rewards >= 500.0:
        break
```
Plot the learning history of the agent:
```
plt.figure(figsize=(10,5))
plt.plot(episode_reward_history)
plt.xlabel('Episode')
plt.ylabel('Collected rewards')
plt.show()
```
Congratulations, you have trained a quantum policy-gradient model on CartPole! The plot above shows the rewards collected by the agent per episode throughout its interaction with the environment. You should see that after a few hundred episodes, the performance of the agent gets close to optimal, i.e., 500 rewards per episode.
You can now visualize the performance of your agent using `env.render()` in a sample episode (uncomment/run the following cell only if your notebook has access to a display):
```
# from PIL import Image
# env = gym.make('CartPole-v1')
# state = env.reset()
# frames = []
# for t in range(500):
#     im = Image.fromarray(env.render(mode='rgb_array'))
#     frames.append(im)
#     policy = model([tf.convert_to_tensor([state/state_bounds])])
#     action = np.random.choice(n_actions, p=policy.numpy()[0])
#     state, _, done, _ = env.step(action)
#     if done:
#         break
# env.close()
# frames[1].save('./images/gym_CartPole.gif',
#                save_all=True, append_images=frames[2:], optimize=False, duration=40, loop=0)
```
<img src="./images/gym_CartPole.gif" width="700">
## 3. Deep Q-learning with PQC Q-function approximators
In this section, you will move to the implementation of the deep Q-learning algorithm presented in <a href="https://arxiv.org/abs/2103.15084" class="external">[2]</a>. As opposed to a policy-gradient approach, the deep Q-learning method uses a PQC to approximate the Q-function of the agent. That is, the PQC defines a function approximator:
$$ Q_\theta(s,a) = \langle O_a \rangle_{s,\theta} $$
where $\langle O_a \rangle_{s,\theta}$ are expectation values of observables $O_a$ (one per action) measured at the output of the PQC.
These Q-values are updated using a loss function derived from Q-learning:
$$ \mathcal{L}(\theta) = \frac{1}{|\mathcal{B}|}\sum_{s,a,r,s' \in \mathcal{B}} \left(Q_\theta(s,a) - [r +\max_{a'} Q_{\theta'}(s',a')]\right)^2$$
for a batch $\mathcal{B}$ of $1$-step interactions $(s,a,r,s')$ with the environment, sampled from the replay memory, and parameters $\theta'$ specifying the target PQC (i.e., a copy of the main PQC, whose parameters are sporadically copied from the main PQC throughout learning).
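A small numpy sketch of the target values $r + \gamma \max_{a'} Q_{\theta'}(s',a')$ appearing in this loss (the numbers are made-up, not from a trained model; terminal transitions have their bootstrap term zeroed out):

```python
import numpy as np

# Hypothetical batch of 1-step interactions: rewards, terminal flags, and
# target-network Q-values for the next states (one row per transition,
# one column per action).
rewards = np.array([1.0, 1.0, 1.0])
done = np.array([0.0, 0.0, 1.0])  # 1.0 marks a terminal transition
next_q = np.array([[0.2, 0.8],
                   [0.5, 0.4],
                   [0.9, 0.1]])
gamma = 0.99

# TD target: r + gamma * max_a' Q_target(s', a'), zeroed at terminal states.
targets = rewards + gamma * next_q.max(axis=1) * (1.0 - done)
print(targets)
```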
You can adopt the same observables used in <a href="https://arxiv.org/abs/2103.15084" class="external">[2]</a> for CartPole, namely a $Z_0Z_1$ Pauli product for action $0$ and a $Z_2Z_3$ Pauli product for action $1$. Both observables are re-scaled so their expectation values are in $[0,1]$ and weighted by an action-specific weight. To implement the re-scaling and weighting of the Pauli products, you can define again an extra `tf.keras.layers.Layer` that stores the action-specific weights and applies them multiplicatively on the expectation values $\left(1+\langle Z_0Z_1 \rangle_{s,\theta}\right)/2$ and $\left(1+\langle Z_2Z_3 \rangle_{s,\theta}\right)/2$.
```
class Rescaling(tf.keras.layers.Layer):
    def __init__(self, input_dim):
        super(Rescaling, self).__init__()
        self.input_dim = input_dim
        self.w = tf.Variable(
            initial_value=tf.ones(shape=(1, input_dim)), dtype="float32",
            trainable=True, name="obs-weights")

    def call(self, inputs):
        return tf.math.multiply((inputs+1)/2, tf.repeat(self.w, repeats=tf.shape(inputs)[0], axis=0))
```
Prepare the definition of your PQC and its observables:
```
n_qubits = 4 # Dimension of the state vectors in CartPole
n_layers = 5 # Number of layers in the PQC
n_actions = 2 # Number of actions in CartPole
qubits = cirq.GridQubit.rect(1, n_qubits)
ops = [cirq.Z(q) for q in qubits]
observables = [ops[0]*ops[1], ops[2]*ops[3]] # Z_0*Z_1 for action 0 and Z_2*Z_3 for action 1
```
Define a `tf.keras.Model` that, similarly to the PQC-policy model, constructs a Q-function approximator that is used to generate the main and target models of our Q-learning agent.
```
def generate_model_Qlearning(qubits, n_layers, n_actions, observables, target):
    """Generates a Keras model for a data re-uploading PQC Q-function approximator."""
    input_tensor = tf.keras.Input(shape=(len(qubits), ), dtype=tf.dtypes.float32, name='input')
    re_uploading_pqc = ReUploadingPQC(qubits, n_layers, observables, activation='tanh')([input_tensor])
    process = tf.keras.Sequential([Rescaling(len(observables))], name=target*"Target"+"Q-values")
    Q_values = process(re_uploading_pqc)
    model = tf.keras.Model(inputs=[input_tensor], outputs=Q_values)
    return model
model = generate_model_Qlearning(qubits, n_layers, n_actions, observables, False)
model_target = generate_model_Qlearning(qubits, n_layers, n_actions, observables, True)
model_target.set_weights(model.get_weights())
tf.keras.utils.plot_model(model, show_shapes=True, dpi=70)
tf.keras.utils.plot_model(model_target, show_shapes=True, dpi=70)
```
You can now implement the deep Q-learning algorithm and test it on the CartPole-v1 environment. For the policy of the agent, you can use an $\varepsilon$-greedy policy:
$$ \pi(a|s) =
\begin{cases}
\delta_{a,\text{argmax}_{a'} Q_\theta(s,a')}\quad \text{w.p.}\quad 1 - \varepsilon\\
\frac{1}{\text{num\_actions}}\quad \quad \quad \quad \text{w.p.}\quad \varepsilon
\end{cases} $$
where $\varepsilon$ is multiplicatively decayed at each episode of interaction.
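A sketch of this multiplicative decay, using the same hyperparameter values that are defined later in this section:

```python
epsilon, epsilon_min, decay_epsilon = 1.0, 0.01, 0.99
schedule = []
for episode in range(500):
    schedule.append(epsilon)
    epsilon = max(epsilon * decay_epsilon, epsilon_min)
print(schedule[0], schedule[-1])  # 1.0 0.01
```

The agent therefore explores heavily at first and becomes almost fully greedy (up to the `epsilon_min` floor) after a few hundred episodes.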
Start by defining a function that performs an interaction step in the environment:
```
def interact_env(state, model, epsilon, n_actions, env):
    # Preprocess state
    state_array = np.array(state)
    state = tf.convert_to_tensor([state_array])

    # Sample action
    coin = np.random.random()
    if coin > epsilon:
        q_vals = model([state])
        action = int(tf.argmax(q_vals[0]).numpy())
    else:
        action = np.random.choice(n_actions)

    # Apply sampled action in the environment, receive reward and next state
    next_state, reward, done, _ = env.step(action)

    interaction = {'state': state_array, 'action': action, 'next_state': next_state.copy(),
                   'reward': reward, 'done': float(done)}

    return interaction
```
and a function that updates the Q-function using a batch of interactions:
```
@tf.function
def Q_learning_update(states, actions, rewards, next_states, done, model, gamma, n_actions):
    states = tf.convert_to_tensor(states)
    actions = tf.convert_to_tensor(actions)
    rewards = tf.convert_to_tensor(rewards)
    next_states = tf.convert_to_tensor(next_states)
    done = tf.convert_to_tensor(done)

    # Compute their target q_values and the masks on sampled actions
    future_rewards = model_target([next_states])
    target_q_values = rewards + (gamma * tf.reduce_max(future_rewards, axis=1)
                                 * (1.0 - done))
    masks = tf.one_hot(actions, n_actions)

    # Train the model on the states and target Q-values
    with tf.GradientTape() as tape:
        tape.watch(model.trainable_variables)
        q_values = model([states])
        q_values_masked = tf.reduce_sum(tf.multiply(q_values, masks), axis=1)
        loss = tf.keras.losses.Huber()(target_q_values, q_values_masked)

    # Backpropagation
    grads = tape.gradient(loss, model.trainable_variables)
    for optimizer, w in zip([optimizer_in, optimizer_var, optimizer_out], [w_in, w_var, w_out]):
        optimizer.apply_gradients([(grads[w], model.trainable_variables[w])])
```
Define the hyperparameters:
```
gamma = 0.99
n_episodes = 2000
# Define replay memory
max_memory_length = 10000 # Maximum replay length
replay_memory = deque(maxlen=max_memory_length)
epsilon = 1.0 # Epsilon greedy parameter
epsilon_min = 0.01 # Minimum epsilon greedy parameter
decay_epsilon = 0.99 # Decay rate of epsilon greedy parameter
batch_size = 16
steps_per_update = 10 # Train the model every x steps
steps_per_target_update = 30 # Update the target model every x steps
```
Prepare the optimizers:
```
optimizer_in = tf.keras.optimizers.Adam(learning_rate=0.001, amsgrad=True)
optimizer_var = tf.keras.optimizers.Adam(learning_rate=0.001, amsgrad=True)
optimizer_out = tf.keras.optimizers.Adam(learning_rate=0.1, amsgrad=True)
# Assign the model parameters to each optimizer
w_in, w_var, w_out = 1, 0, 2
```
Now implement the main training loop of the agent.
Note: This agent may need to simulate several million quantum circuits and can take as much as ~40 minutes to finish training.
```
env = gym.make("CartPole-v1")
episode_reward_history = []
step_count = 0
for episode in range(n_episodes):
    episode_reward = 0
    state = env.reset()

    while True:
        # Interact with env
        interaction = interact_env(state, model, epsilon, n_actions, env)

        # Store interaction in the replay memory
        replay_memory.append(interaction)

        state = interaction['next_state']
        episode_reward += interaction['reward']
        step_count += 1

        # Update model
        if step_count % steps_per_update == 0:
            # Sample a batch of interactions and update Q_function
            training_batch = np.random.choice(replay_memory, size=batch_size)
            Q_learning_update(np.asarray([x['state'] for x in training_batch]),
                              np.asarray([x['action'] for x in training_batch]),
                              np.asarray([x['reward'] for x in training_batch], dtype=np.float32),
                              np.asarray([x['next_state'] for x in training_batch]),
                              np.asarray([x['done'] for x in training_batch], dtype=np.float32),
                              model, gamma, n_actions)

        # Update target model
        if step_count % steps_per_target_update == 0:
            model_target.set_weights(model.get_weights())

        # Check if the episode is finished
        if interaction['done']:
            break

    # Decay epsilon
    epsilon = max(epsilon * decay_epsilon, epsilon_min)
    episode_reward_history.append(episode_reward)
    if (episode+1) % 10 == 0:
        avg_rewards = np.mean(episode_reward_history[-10:])
        print("Episode {}/{}, average last 10 rewards {}".format(
            episode+1, n_episodes, avg_rewards))
        if avg_rewards >= 500.0:
            break
```
Plot the learning history of the agent:
```
plt.figure(figsize=(10,5))
plt.plot(episode_reward_history)
plt.xlabel('Episode')
plt.ylabel('Collected rewards')
plt.show()
```
Similarly to the plot above, you should see that after ~1000 episodes, the performance of the agent gets close to optimal, i.e., 500 rewards per episode. Learning takes longer for Q-learning agents since the Q-function is a "richer" function to be learned than the policy.
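A quick back-of-the-envelope check on the exploration schedule used above: with a per-episode decay of 0.99 and a floor of 0.01, epsilon stops decaying well before the ~1000 episodes the agent needs.

```python
import math

# With per-episode decay 0.99 and floor 0.01 (the values set above),
# epsilon reaches its floor after ceil(ln(0.01) / ln(0.99)) episodes.
episodes_to_floor = math.ceil(math.log(0.01) / math.log(0.99))
print(episodes_to_floor)  # 459
```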
## 4. Exercise
Now that you have trained two different types of models, try experimenting with different environments (and different numbers of qubits and layers). You could also try combining the PQC models of the last two sections into an [actor-critic agent](https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#actor-critic).
```
!pip install --upgrade tables
!pip install eli5
!pip install xgboost
!pip install hyperopt
import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_absolute_error as mae
from sklearn.model_selection import cross_val_score, KFold
from hyperopt import hp, fmin, tpe, STATUS_OK
import eli5
from eli5.sklearn import PermutationImportance
df = pd.read_hdf('/content/drive/My Drive/Colab Notebooks/dw_matrix2/car.h5')
df.shape
SUFFIX_CAT = '__cat'
for feat in df.columns:
if isinstance(df[feat][0],list):continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT] = factorized_values
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))
df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]))
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(x.split('cm')[0].replace(' ','')))
def run_model(model, feats):
X = df[feats].values
y = df['price_value'].values
scores = cross_val_score(model, X, y, cv=3, scoring = 'neg_mean_absolute_error')
return round(np.mean(scores),1), round(np.std(scores),1)
```
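The `__cat` loop above relies on `pd.factorize`, which replaces each distinct label with an integer code in order of first appearance; a minimal illustration with made-up fuel-type labels:

```python
import pandas as pd

# pd.factorize returns (codes, uniques): each distinct label gets an
# integer code in order of first appearance. This is what builds every
# '__cat' column in the loop above.
codes, uniques = pd.factorize(['diesel', 'petrol', 'diesel', 'lpg'])
print(list(codes))    # [0, 1, 0, 2]
print(list(uniques))  # ['diesel', 'petrol', 'lpg']
```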
## XGBoost
```
cat_feats = [
'param_napęd__cat',
'param_rok-produkcji',
'param_stan__cat',
'param_skrzynia-biegów__cat',
'param_faktura-vat__cat',
'param_moc',
'param_marka-pojazdu__cat',
'feature_kamera-cofania__cat',
'param_typ__cat',
'seller_name__cat',
'param_pojemność-skokowa',
'feature_wspomaganie-kierownicy__cat',
'param_model-pojazdu__cat',
'param_wersja__cat',
'param_kod-silnika__cat',
'feature_system-start-stop__cat',
'feature_asystent-pasa-ruchu__cat',
'feature_czujniki-parkowania-przednie__cat',
'feature_łopatki-zmiany-biegów__cat',
'feature_regulowane-zawieszenie__cat']
xgb_params = {
'max_depth':5,
'n_estimators':50,
'learning_rate':0.1,
'seed':0
}
run_model(xgb.XGBRegressor(**xgb_params), cat_feats)
def obj_funct(params):
print('Training with params: ')
print(params)
mean_mae, score_std = run_model(xgb.XGBRFRegressor(**params), cat_feats)
return {'loss': np.abs(mean_mae), 'status': STATUS_OK}
#space
xgb_reg_params = {
'learning_rate': hp.choice('learning_rate', np.arange(0.05,0.31, 0.05)),
'max_depth': hp.choice('max_depth', np.arange(5,16,1, dtype=int)),
'subsample': hp.quniform('subsample', 0.5,1,0.05),
'colsample_bytree': hp.quniform('colsample_bytree', 0.5,1,0.05),
'objective': 'reg:squarederror',
'n_estimators': 100,
'seed': 0,
}
## run
best = fmin(obj_funct, xgb_reg_params, algo=tpe.suggest, max_evals=25, return_argmin=False)
best
```
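For orientation, the `hp.choice` entries in the space above draw from small discrete grids; counting the candidates shows how much of the space TPE has to explore:

```python
import numpy as np

# Candidate counts for the discrete hp.choice grids in the space above.
learning_rates = np.arange(0.05, 0.31, 0.05)   # 0.05, 0.10, ..., 0.30
max_depths = np.arange(5, 16, 1, dtype=int)    # 5, 6, ..., 15
print(len(learning_rates), len(max_depths))    # 6 11
```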
# Self-Driving Car Engineer Nanodegree
## Deep Learning
## Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project.
The [rubric](https://review.udacity.com/#!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.
---
## Step 0: Load The Data
```
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = "/home/yang/dataset/train.p"
validation_file= "/home/yang/dataset/valid.p"
testing_file = "/home/yang/dataset/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
X_train.shape, X_valid.shape, X_test.shape, y_train[0]
```
---
## Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.
- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.
### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
```
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = X_train.shape[0]
# TODO: Number of validation examples
n_validation = X_valid.shape[0]
# TODO: Number of testing examples.
n_test = X_test.shape[0]
# TODO: What's the shape of a traffic sign image?
image_shape = X_train.shape[1:]
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(set(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
```
### Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.
**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
```
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
import numpy as np
np.set_printoptions(2)
```
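One compact way to count examples per class (the distribution question raised above, and the basis of the bar chart further down) is `np.bincount`; a toy sketch, with `toy_labels` standing in for `y_train`:

```python
import numpy as np

# np.bincount gives examples-per-class counts directly; index i of the
# result is the number of samples with label i.
toy_labels = np.array([0, 0, 1, 2, 2, 2])
counts = np.bincount(toy_labels, minlength=4)
print(list(counts))  # [2, 1, 3, 0]
```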
----
## Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
- Neural network architecture (is the network over or underfitting?)
- Play around preprocessing techniques (normalization, rgb to grayscale, etc)
- Number of examples per label (some have more than others).
- Generate fake data.
Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.
### Pre-process the Data Set (normalization, grayscale, etc.)
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project.
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
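As a quick check of the suggested `(pixel - 128)/128` normalization, the extreme uint8 values land just inside the interval [-1, 1):

```python
import numpy as np

# The quick normalization from the text: (pixel - 128) / 128 maps the
# uint8 range [0, 255] to -1.0, 0.0 and 0.9921875 at its extremes.
pixels = np.array([0, 128, 255], dtype=np.float32)
normalized = (pixels - 128.0) / 128.0
print(normalized)
```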
```
def assort_indices_by_class(y):
ret = [[] for class_ in range(n_classes)]
#print(ret)
for i in range(y.shape[0]):
ret[y[i]].append(i)
return ret
def show_samples_per_class(y):
indices_by_class = assort_indices_by_class(y)
plt.bar(range(n_classes), list(map(lambda i: len(indices_by_class[i]), range(n_classes))))
plt.xlabel("Class")
plt.ylabel("Number of training images")
plt.show()
#show_demo_grid(X_train, y_train, "Some images from the training set")
show_samples_per_class(y_train)
indices_by_class = assort_indices_by_class(y_train)
each_num_of_class = list(map(lambda i: len(indices_by_class[i]), range(n_classes)))
print('max:', np.max(each_num_of_class))
print('min:', np.min(each_num_of_class))
print('sum:', np.sum(each_num_of_class))
# [[1, 2, 3, 4], [5, 6, 7]] => [[1, 2, 3, 4, 1, 2, 3, 4], [5, 6, 7, 5, 6, 7, 5, 6]]
# => [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]
def extendEachClass(X_train, y_train, n_classes, extend_num):
indices_by_class = [[] for i in range(n_classes)]
for i in range(y_train.shape[0]):
indices_by_class[y_train[i]].append(i)
X = []; y = []
for i, indices in enumerate(indices_by_class):
n_ori = len(indices)
assert extend_num > n_ori
times = extend_num // n_ori
one_X = [X_train[i] for i in indices]
X.extend(one_X * times)
rest = extend_num - n_ori*times
X.extend(one_X[0:rest])
y.extend([i]*extend_num)
return np.array(X), np.array(y)
X_train, y_train = extendEachClass(X_train, y_train, n_classes, 4000)
show_samples_per_class(y_train)
n_train = X_train.shape[0]
n_train
```
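The comment above `extendEachClass` describes the oversampling scheme: repeat a class's samples a whole number of times, then pad with a prefix to reach the target. The same tiling logic in isolation:

```python
# Toy version of the tiling inside extendEachClass: repeat whole-number
# times, then pad with a prefix to reach the target length.
def tile_to_length(samples, target):
    times = target // len(samples)
    rest = target - len(samples) * times
    return samples * times + samples[:rest]

print(tile_to_length([1, 2, 3, 4], 8))  # [1, 2, 3, 4, 1, 2, 3, 4]
print(tile_to_length([5, 6, 7], 8))     # [5, 6, 7, 5, 6, 7, 5, 6]
```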
Augment data with random rotation and random noise.
```
import random
import skimage as sk
import skimage.transform  # explicit submodule imports: `import skimage as sk`
import skimage.util       # alone does not expose sk.transform / sk.util in older versions
image = X_train[0]
image = sk.transform.rotate(image, random.uniform(-10, 10))
image = sk.util.random_noise(image, var=0.001)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8,4))
ax1.imshow(X_train[0])
ax2.imshow(image)
for i, img in enumerate(X_train):
image = sk.transform.rotate(img, random.uniform(-10, 10))
if random.randint(1, 10) < 4:
image = sk.util.random_noise(image, var=0.0001)
X_train[i] = (image*255+0.5).astype(np.uint8)
#fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8,4))
#ax1.imshow(img)
#ax2.imshow(image)
#break
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
%matplotlib inline
import random
train_samples = []
# show image of 15 random data points
fig, axs = plt.subplots(3,5, figsize=(10,6))
axs = axs.ravel()
for i in range(15):
index = random.randint(0, n_train - 1)  # randint is inclusive on both ends
train_samples.append(index)
image = X_train[index]
axs[i].imshow(image)
axs[i].set_title('%d:%d'%(index, y_train[index]))
axs[i].axis('off')
```
### Model Architecture
```
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten
def LeNet(x, is_training=False):
#def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
weights = {
'wc1': tf.Variable(tf.truncated_normal([5, 5, 3, 6], mean=mu, stddev=sigma)),
'wc2': tf.Variable(tf.truncated_normal([5, 5, 6, 16], mean=mu, stddev=sigma)),
'wd1': tf.Variable(tf.truncated_normal([400, 120], mean=mu, stddev=sigma)),
'wd2': tf.Variable(tf.truncated_normal([120, 84], mean=mu, stddev=sigma)),
'out': tf.Variable(tf.truncated_normal([84, 43], mean=mu, stddev=sigma))
}
biases = {
'bc1': tf.Variable(tf.zeros(6)),
'bc2': tf.Variable(tf.zeros(16)),
'bd1': tf.Variable(tf.zeros(120)),
'bd2': tf.Variable(tf.zeros(84)),
'out': tf.Variable(tf.zeros(43))
}
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1 = tf.nn.conv2d(x, weights['wc1'], strides=[1, 1, 1, 1], padding='VALID')
conv1 = tf.nn.bias_add(conv1, biases['bc1'])
conv1 = tf.contrib.layers.batch_norm(inputs=conv1, decay=0.9, updates_collections=None, is_training=is_training)
# TODO: Activation.
conv1 = tf.nn.relu(conv1)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Layer 2: Convolutional. Output = 10x10x16.
conv2 = tf.nn.conv2d(conv1, weights['wc2'], strides=[1, 1, 1, 1], padding='VALID')
conv2 = tf.contrib.layers.batch_norm(inputs=conv2, decay=0.9, updates_collections=None, is_training=is_training)
# TODO: Activation.
conv2 = tf.nn.relu(conv2)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# TODO: Flatten. Input = 5x5x16. Output = 400.
fc0 = tf.contrib.layers.flatten(conv2)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1 = tf.add(tf.matmul(fc0, weights['wd1']), biases['bd1'])
fc1 = tf.contrib.layers.batch_norm(inputs=fc1, decay=0.9, updates_collections=None, is_training=is_training)
# TODO: Activation.
fc1 = tf.nn.relu(fc1)
fc1 = tf.layers.dropout(fc1, rate=0.5,training=is_training)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2 = tf.add(tf.matmul(fc1, weights['wd2']), biases['bd2'])
fc2 = tf.contrib.layers.batch_norm(inputs=fc2, decay=0.9, updates_collections=None, is_training=is_training)
# TODO: Activation.
fc2 = tf.nn.relu(fc2)
fc2 = tf.layers.dropout(fc2, rate=0.5, training=is_training)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 43.
logits = tf.add(tf.matmul(fc2, weights['out']), biases['out'])
return logits
```
### Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. Low accuracy on both the training and validation sets implies underfitting. High accuracy on the training set but low accuracy on the validation set implies overfitting.
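The two failure modes above can be turned into a quick sanity check; the 0.90 floor and 0.05 gap used here are illustrative assumptions, not fixed standards.

```python
# Rule-of-thumb diagnosis from the guidance above; thresholds are
# illustrative assumptions, not fixed standards.
def diagnose(train_acc, valid_acc, floor=0.90, gap=0.05):
    if train_acc < floor and valid_acc < floor:
        return 'underfitting'
    if train_acc - valid_acc > gap:
        return 'overfitting'
    return 'ok'

print(diagnose(0.99, 0.89))  # overfitting
print(diagnose(0.85, 0.84))  # underfitting
print(diagnose(0.97, 0.94))  # ok
```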
```
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
import tensorflow as tf
from sklearn.utils import shuffle
EPOCHS = 30
BATCH_SIZE = 256
LEARNING_RATE = 0.001
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
is_training = tf.placeholder(dtype=tf.bool)
#logits = LeNet(x)
logits = LeNet(x, is_training)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = LEARNING_RATE)
training_operation = optimizer.minimize(loss_operation)
#logits_eval = LeNet(x, is_training=False)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
total_loss = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, is_training:False})
loss = sess.run(loss_operation, feed_dict={x: batch_x, y: batch_y, is_training:False})
total_loss += loss
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples, total_loss
print('num_examples: ', len(X_train))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, is_training:True})
validation_accuracy, loss = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}, Loss = {:.3f}".format(validation_accuracy, loss))
print()
saver.save(sess, './lenet')
print("Model saved")
### Testing the model
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, "./lenet")
test_accuracy, _ = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
test_accuracy, _ = evaluate(X_train, y_train)
print("Train Accuracy = {:.3f}".format(test_accuracy))
test_accuracy, _ = evaluate(X_valid, y_valid)
print("Valid Accuracy = {:.3f}".format(test_accuracy))
```
---
## Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.
### Load and Output the Images
```
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import glob
import matplotlib.image as mpimg
import cv2
saver = tf.train.Saver()
my_images = []
for i, fn_img in enumerate(glob.glob('./test_images/*.png')):
image = mpimg.imread(fn_img)*255 + 0.5
image = image.astype(np.uint8)
image = cv2.resize(image, (32, 32))
if i % 3 == 0:
plt.figure()
f, axes = plt.subplots(1, 3, figsize=(10, 4))
f.tight_layout()
axes[i%3].imshow(image)
my_images.append(image)
# Uncomment below for adding some training images for testing the model
#for i in train_samples:
# my_images.append(X_train[i])
```
### Predict the Sign Type for Each Image
```
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
import csv
sing_types = {}
with open('signnames.csv', newline='') as csvfile:
spamreader = csv.reader(csvfile, delimiter=',')
for ClassID, SignName in list(spamreader)[1:]:
sing_types[int(ClassID)] = SignName
probs = tf.nn.softmax(logits)
top_k = tf.nn.top_k(probs, k=1)
with tf.Session() as sess:
saver.restore(sess, "./lenet")
output = sess.run(top_k, feed_dict={x: my_images, is_training:False})
for image, prob, idx in zip(my_images, output[0], output[1]):
fig, ax = plt.subplots(1, 2, figsize=(4,3))
fig.tight_layout()
ax[0].imshow(image)
ax[0].set_title('Predict: %.2f%%\n%s'%(prob*100, sing_types[idx[0]]))
example_img = X_train[np.argwhere(y_train == idx)[0][0]]
ax[1].imshow(example_img)
ax[1].set_title('Example:\n')
```
### Analyze Performance
```
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
my_labels = [33, 4, 3, 14, 32]
predict_labels = output[1].reshape((1,-1))
acc = (my_labels == predict_labels).sum()/len(my_labels)*100
print('%.2f%%'%(acc))
```
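The same accuracy computation, spelled out on hypothetical predictions (four of five labels match, so 80%):

```python
import numpy as np

# Fraction of matching labels, times 100. The prediction array here is
# made up for illustration, with one deliberate miss.
true_labels = np.array([33, 4, 3, 14, 32])
predicted = np.array([33, 4, 3, 14, 1])
accuracy = np.mean(true_labels == predicted) * 100
print('%.2f%%' % accuracy)  # 80.00%
```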
### Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:
```
# (5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:
```
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
```
Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
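For readers without a TensorFlow session at hand, the same top-k result for the first row of `a` can be reproduced with NumPy alone:

```python
import numpy as np

# NumPy-only top-k for the first row of `a` above: argsort ascending,
# reverse, then take the first k indices.
row = np.array([0.24879643, 0.07032244, 0.12641572, 0.34763842,
                0.07893497, 0.12789202])
top3 = np.argsort(row)[::-1][:3]
print(list(top3))       # [3, 0, 5]
print(row[top3])        # the three largest probabilities
```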
```
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
probs = tf.nn.softmax(logits)
top_k = tf.nn.top_k(probs, k=5)
import numpy as np
np.set_printoptions(4)
with tf.Session() as sess:
saver.restore(sess, "./lenet")
output = sess.run(top_k, feed_dict={x:my_images, is_training:False})
print(output)
for prob, idx in zip(output[0], output[1]):
print(prob, idx, [sing_types[i] for i in idx])
```
### Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
---
## Step 4 (Optional): Visualize the Neural Network's State with Test Images
This Section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network, you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process. For instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
```
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess, feed_dict={x:image_input, is_training:False})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6, 8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 and activation_max != -1:  # use `and`, not `&`: bitwise & binds tighter than !=
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
#with tf.Session() as sess:
# saver.restore(sess, "./lenet")
# [print(tensor.name) for tensor in sess.graph.as_graph_def().node]
#show the first layer before maxpooling
with tf.Session() as sess:
saver.restore(sess, "./lenet")
layer_Relu = sess.graph.get_tensor_by_name("Relu:0")
layer_MaxPool = sess.graph.get_tensor_by_name("MaxPool:0")
layer_Relu_1 = sess.graph.get_tensor_by_name("Relu_1:0")
layer_MaxPool_1 = sess.graph.get_tensor_by_name("MaxPool_1:0")
plt.figure(0, figsize=(4,3))
plt.imshow(my_images[0])
outputFeatureMap(my_images[0:1], layer_Relu, plt_num=1)
outputFeatureMap(my_images[0:1], layer_MaxPool, plt_num=2)
outputFeatureMap(my_images[0:1], layer_Relu_1, plt_num=3)
outputFeatureMap(my_images[0:1], layer_MaxPool_1, plt_num=4)
#show the first layer before maxpooling
with tf.Session() as sess:
saver.restore(sess, "./lenet")
layer_Relu = sess.graph.get_tensor_by_name("Relu:0")
layer_MaxPool = sess.graph.get_tensor_by_name("MaxPool:0")
layer_Relu_1 = sess.graph.get_tensor_by_name("Relu_1:0")
layer_MaxPool_1 = sess.graph.get_tensor_by_name("MaxPool_1:0")
plt.figure(0, figsize=(4,3))
plt.imshow(my_images[1])
outputFeatureMap(my_images[1:2], layer_Relu, plt_num=1)
outputFeatureMap(my_images[1:2], layer_MaxPool, plt_num=2)
outputFeatureMap(my_images[1:2], layer_Relu_1, plt_num=3)
outputFeatureMap(my_images[1:2], layer_MaxPool_1, plt_num=4)
```
# Continuous Control
---
You are welcome to use this coding environment to train your agent for the project. Follow the instructions below to get started!
### 1. Start the Environment
Run the next code cell to install a few packages. This line will take a few minutes to run!
```
%load_ext autoreload
%aimport agent
!pip -q install ./python
```
Both versions of the environment are already saved in the Workspace and can be accessed at the file paths provided below.
Please select one of the two options below for loading the environment.
```
from unityagents import UnityEnvironment
import numpy as np
from workspace_utils import active_session
# select this option to load version 1 (with a single agent) of the environment
#env = UnityEnvironment(file_name='/data/Reacher_One_Linux_NoVis/Reacher_One_Linux_NoVis.x86_64')
# select this option to load version 2 (with 20 agents) of the environment
env = UnityEnvironment(file_name='/data/Reacher_Linux_NoVis/Reacher.x86_64')
import agent
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Note that **in this coding environment, you will not be able to watch the agents while they are training**, and you should set `train_mode=True` to restart the environment.
```
# Udacity provided starter code
'''env_info = env.reset(train_mode=True)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
iteration = 0
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
print("\t", actions)
print("\t", type(actions))
env_info = env.step(actions)[brain_name] # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
if rewards[0] != 0:
print("rewards", rewards)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
print(iteration)
break
iteration += 1
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
'''
import queue
from collections import deque
# parameters
print_every = 100
# seems to be 1000 for the env anyway
max_ts = 1000
max_episodes = 500
curr_agent = agent.Agent(state_size, action_size, num_agents)
# can't use a queue.Queue because "Insertion will block once this size has been reached, until queue items are consumed"
scores = deque(maxlen=100) # initialize the score (for each agent)
scores_history = []
episode_won_i = 0
with active_session():
for i in range(max_episodes):
# initialize for the start of the episode
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
state = env_info.vector_observations # get the current state (for each agent)
# resets the noise class variable
curr_agent.reset()
score = np.zeros(num_agents)
for t in range(max_ts):
action = curr_agent.act(state.astype('float32', copy=False))
# env_info's variables are lists
env_info = env.step(action)[brain_name]
reward = env_info.rewards
next_state = env_info.vector_observations
done = env_info.local_done
score = score + reward
curr_agent.step(state, action, reward, next_state, done)
state = next_state
if np.any(done):
if t < 1000:
print("episode {} at {} ts; done reached".format(i, t))
break
# goal for 20 agent version: [version 2] the agent is able to receive an *average* reward (over 100 episodes, and over all 20 agents) of at least +30
scores_history.append(np.mean(score))
scores.append(np.mean(score))
if i % print_every == 0:
print("episode {}; average score past 100 episodes: {}".format(i, np.mean(scores)))
if np.mean(scores) >= 30:
episode_won_i = i
print("Solved in {} episodes".format(episode_won_i))
curr_agent.save()
break
print(scores_history)
scores_history = [0.25999999418854713, 0.82699998151510956, 0.80999998189508915, 1.111999975144863, 0.95199997872114184, 0.85899998079985385, 0.77649998264387254, 0.81699998173862698, 0.75399998314678673, 1.1524999742396176, 1.6509999630972743, 1.8824999579228461, 1.5514999653212727, 1.1974999732337892, 1.1789999736472965, 1.548999965377152, 1.3214999704621733, 1.549499965365976, 1.9759999558329582, 2.6074999417178333, 3.5399999208748341, 2.9739999335259197, 3.3299999255686998, 2.822999936901033, 3.5214999212883411, 3.6449999185279012, 4.3089999036863444, 3.8949999129399657, 4.1559999071061613, 4.3774999021552503, 5.2574998824857175, 5.2364998829551039, 5.741999871656299, 6.3619998577982191, 6.9109998455271127, 5.8834998684935274, 6.2294998607598249, 7.0354998427443203, 7.1714998397044836, 7.8859998237341644, 7.8044998255558315, 7.0139998432248829, 9.315499791782349, 7.1659998398274185, 7.6839998282492159, 8.6849998058751225, 9.0064997986890383, 8.8914998012594886, 9.8759997792541974, 10.670999761484563, 10.203999771922827, 11.658499739412218, 9.8984997787512832, 10.995499754231423, 10.244499771017582, 11.174999750219285, 9.4099997896701097, 9.7374997823499143, 11.192999749816954, 15.216999659873546, 12.333499724324792, 12.915499711316079, 13.408999700285495, 12.409499722626061, 12.672999716736376, 11.970499732438475, 14.337999679520726, 15.064499663282186, 15.943499643635004, 14.22699968200177, 19.195999570935964, 19.59499956201762, 17.685999604687094, 15.413999655470253, 14.908499666769057, 16.776499625016005, 16.713499626424163, 16.094999640248716, 15.719999648630619, 16.09449964025989, 18.615999583899974, 16.283499636035412, 15.947499643545598, 15.514499653223902, 13.14449970619753, 13.249999703839421, 14.709499671217054, 14.284999680705369, 13.541999697312713, 14.469499676581473, 14.261499681230635, 14.282499680761248, 17.449999609962106, 16.383499633800238, 16.893999622389675, 21.93099950980395, 18.545999585464596, 20.028999552316964, 22.999499485921113, 
22.242999502830209, 22.007499508094043, 22.127999505400659, 26.668499403912573, 23.677999470755459, 22.062999506853522, 26.560999406315386, 26.544499406684189, 27.429499386902897, 28.274499368015675, 31.920499286521228, 28.920999353565275, 31.113499304559081, 34.306999233178793, 33.544999250210822, 33.18849925817922, 31.337999299541117, 31.446499297115952, 31.223999302089215, 30.629999315366149, 31.040999306179582, 27.212999391742052, 27.107499394100159, 32.404999275691807, 31.149999303743243, 35.910999197326603, 33.48199925161898, 32.881499265041199, 29.100999349541961, 28.482499363366514, 29.815499333571644, 24.692499448079616, 31.880499287415297, 30.063999328017236, 35.754999200813472, 35.985999195650223, 35.194999213330448, 35.904999197460711, 33.893999242410061, 30.628999315388501, 32.559499272238462, 31.846499288175256, 33.198999257944521, 31.859499287884681, 29.424999342299998, 30.40249932045117, 30.468499318975955, 29.781499334331603, 31.499999295920134, 30.901999309286474, 32.562999272160233, 32.765999267622831, 30.801999311521648, 26.590999405644833, 31.830499288532884, 29.979499329905956, 26.513999407365919, 26.400999409891664, 24.0414994626306, 25.036999440379439, 26.001499418821187, 29.511999340355395, 29.885999331995844, 24.535999451577663, 24.166999459825455, 24.663499448727816, 28.851999355107544, 28.547499361913651, 29.66499933693558, 33.063499260973188, 30.591499316226692, 32.076499283034352, 34.222499235067517, 35.066999216191469, 35.596999204345046, 33.603499248903248, 32.843499265890571, 31.173999303206802, 33.451499252300707, 31.937499286141247, 31.882499287370592, 32.213499279972169, 33.936999241448937, 31.264999301172793, 34.738999223522839, 32.307499277871102, 34.133999237045643, 32.10899928230792, 30.538999317400158, 33.186499258223918, 30.563999316841365, 29.660499337036164, 28.236999368853866, 27.724999380297959, 25.028999440558255, 25.875999421626329, 27.581499383505435, 32.104499282408504]
# smoother plot
avg_scores = []
window_size = 100
for i in range(window_size, len(scores_history)):
avg_scores.append(sum(scores_history[i-window_size:i])/window_size)
# plot of rewards
import matplotlib.pyplot as plt
plt.title("Rewards over Time")
plt.xlabel("# of Epochs")
plt.ylabel("Reward")
plt.plot(range(1,len(scores_history) +1 ), scores_history)
print("starts at episode {} and averages the past {} episodes".format(window_size, window_size))
plt.title("Average Rewards (Averaged Over a Rolling Window of Previous 100 Epochs)")
plt.xlabel("# of Epochs")
plt.ylabel("Reward")
plt.plot(range(100,len(avg_scores) +100 ), avg_scores)
import importlib
importlib.reload(agent)
%autoreload 1
```
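The solved-environment check above hinges on a rolling mean over the most recent 100 episode scores, which `deque(maxlen=100)` provides for free by discarding the oldest entry on each append. A minimal sketch of that bookkeeping, using a window of 3 for illustration:

```python
from collections import deque

import numpy as np

scores = deque(maxlen=3)  # tiny window for illustration; the notebook uses 100
for s in [1.0, 2.0, 3.0, 4.0]:
    scores.append(s)  # once full, appending evicts the oldest score

# only the most recent 3 scores remain: [2.0, 3.0, 4.0]
print(np.mean(scores))  # 3.0
```

With `maxlen=100`, `np.mean(scores)` is exactly the "average reward over the last 100 episodes" used in the solve criterion.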
When finished, you can close the environment.
```
#env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! A few **important notes**:
- When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
- To structure your work, you're welcome to work directly in this Jupyter notebook, or you might like to start over with a new file! You can see the list of files in the workspace by clicking on **_Jupyter_** in the top left corner of the notebook.
- In this coding environment, you will not be able to watch the agents while they are training. However, **_after training the agents_**, you can download the saved model weights to watch the agents on your own machine!
# Fastqc Analysis: Headcrop15
## Author: Håkon Kaspersen
## Last updated: 16.08.2018
### This Notebook is used for visualizing fastqc-reports from multiple read sets.
### Load libraries
```
install.packages("pacman", repos = "http://cran.us.r-project.org")
pacman::p_load(fastqcr,dplyr,ggplot2,tidyr,viridis,ggsci,scales,svglite)
```
### Specify input directory
##### Please specify the location of your fastqc.zip files.
##### Note: Specify the location of the folder holding the files. If the fastqc.zip files are located in multiple folders (for instance for grouping purposes), please specify the path to the directory holding all the folders. Also note that R uses "/" and not "\" to specify folders; if a path is copy-pasted from Windows, please change "\" to "/".
```
input_dir <- "D:/R_Data/fastqc/fastqc_trim_testing/Headcrop15"
```
### Load functions
##### This activates all the functions used in this notebook.
```
# Function that lists all the file names in the input folder
file_names <- function(filepath, folder) {
files <- list.files(path = paste0(filepath, "/", folder), pattern = "_fastqc.zip")
return(files)
}
# Function that searches recursively for all filenames with fastqc.zip
file_names_recursive <- function(filepath) {
files <- list.files(path = filepath, pattern = "_fastqc.zip", recursive = TRUE)
return(files)
}
# Lists folders in the input folder
folder_names <- function(filepath) {
folders <- list.files(path = filepath)
return(folders)
}
# Identifies the names of the files and groups them according to their respective folders
get_grouping_variable <- function(path, folder) {
files <- file_names(path, folder)
df <- data.frame(files = files, group = folder, stringsAsFactors = FALSE)
df$files <- gsub("(.*?)_fastqc.zip", "\\1", df$files)
colnames(df) <- c("ref","group")
return(df)
}
# Creates a data frame with grouping information of the reports
create_group_df <- function(path) {
folders <- folder_names(path)
if (length(folders) > 1) {
df <- lapply(folders, function(folder) get_grouping_variable(path, folder))
df <- bind_rows(df)
} else {
df <- get_grouping_variable(path, folders)
}
return(df)
}
# Function that filter out duplicated counts
filter_counts <- function(df) {
df <- df %>%
mutate(dupl = duplicated(Count)) %>%
filter(dupl == FALSE)
return(df)
}
# Function that prepares the sequence length data for plotting
prepare_seq_len_data <- function(list) {
x <- split(list$sequence_length_distribution, list$sequence_length_distribution$group)
x <- lapply(x, filter_counts)
x <- bind_rows(x)
return(x)
}
# Function that imports and wrangles fastqc data
get_fastqc_data <- function(filepath) {
folders <- folder_names(filepath)
get_files <- file_names_recursive(filepath)
data_list <- lapply(get_files,
FUN = function(file) {
qc_read(paste0(filepath, "/", file),
modules = "all",
verbose = FALSE)
})
names(data_list) <- gsub("(.*?)/(.*?)_fastqc.zip", "\\2", get_files)
data_list <- purrr::transpose(data_list)
data_list$sequence_length_distribution <- NULL
data_list$kmer_content <- NULL
list_names <- names(data_list)
list_numbers <- 1:length(list_names)
for (i in list_numbers) {
assign(list_names[i], bind_rows(data_list[[i]], .id = "ref"))
}
df_list <- list(summary,
basic_statistics,
per_base_sequence_quality,
per_tile_sequence_quality,
per_sequence_quality_scores,
per_base_sequence_content,
per_sequence_gc_content,
per_base_n_content,
sequence_duplication_levels,
overrepresented_sequences,
adapter_content,
total_deduplicated_percentage)
names(df_list) <- list_names
df_list$basic_statistics <- df_list$basic_statistics %>%
spread(Measure,Value) %>%
left_join(group_df, by = "ref")
return(df_list)
}
```
### Create grouping data frame
##### This function creates a data frame with the grouping information of the files, based on the folder names that hold them. Each file is assigned a group name based on the folder it is in. This information is used to specify each group of files in the plots below.
##### Note: No changes should be made from here on!
```
group_df <- create_group_df(input_dir)
```
### Import data
##### This function imports the data from the location specified in the "input_dir" above.
```
df_list <- get_fastqc_data(input_dir)
```
### Adapter content
##### This plot presents the adapter content for all sequences in each group. Each adapter type is specified by color. The X-axis represents the read positions, where the left side represents the start of the read and the right side the end.
```
df_list$adapter_content %>%
# import information from another data frame from same list,
# to get grouping information etc.
left_join(., df_list$basic_statistics[, c("ref", "Sequence length", "group")], by = "ref") %>%
rename(seqlen = "Sequence length") %>%
# get data ready for boxplot by gathering the columns into two columns
gather(key,
value, -c(ref,
Position,
group,
seqlen)) %>%
# plotting
ggplot(aes(factor(
Position,
levels = unique(Position),
ordered = TRUE
), value, color = key)) +
# specifies the ends of the boxplot as errorbars
stat_boxplot(geom = "errorbar", width = 0.4) +
geom_boxplot(outlier.size = 0.5) +
labs(
x = "Position in Read",
y = "Percent (%) Adapter Content",
color = NULL,
title = "Adapter content"
) +
scale_colour_jama() +
scale_y_continuous(limits = c(0, 20)) +
theme_classic() +
theme(
axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
legend.position = "bottom"
) +
# creates separate windows for each group
facet_wrap(~ group, scales = "free", dir = "v")
```
### Per base sequence content
##### This plot presents the per base sequence content for all reads in each group. Each base is represented by color.
```
df_list$per_base_sequence_content %>%
# import information from another data frame from same list,
# to get grouping information etc.
left_join(., df_list$basic_statistics[, c("ref", "Sequence length", "group")], by = "ref") %>%
rename(seqlen = "Sequence length") %>%
gather(key, value, -c(ref, Base, group, seqlen)) %>%
ggplot(aes(factor(
Base, levels = unique(Base), ordered = TRUE
), value, color = key)) +
# specifies the ends of the boxplot as errorbars
stat_boxplot(geom = "errorbar", width = 0.4) +
geom_boxplot(outlier.size = 0.5) +
labs(
x = "Position in Read",
y = "Percent (%)",
color = NULL,
title = "Per base sequence content"
) +
theme_classic() +
scale_color_jama() +
theme(
axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
legend.position = "bottom"
) +
# creates separate windows for each group
facet_wrap( ~ group, scales = "free", dir = "v")
```
### Sequence duplication levels
##### This plot presents the duplication levels for all reads in each group. The X-axis contains "bins" of different duplication levels, where each number represents how many copies of the same read are present. For example, the bin "3" represents reads that occur in triplicate in the file.
```
df_list$sequence_duplication_levels %>%
# import information from another data frame from same list,
# to get grouping information etc.
left_join(., df_list$basic_statistics[, c("ref", "Sequence length", "group")], by = "ref") %>%
rename(seqlen = "Sequence length") %>%
gather(key, value, -c(ref, `Duplication Level`, group, seqlen)) %>%
ggplot(aes(
factor(
`Duplication Level`,
levels = unique(`Duplication Level`),
ordered = TRUE
),
value,
fill = key
)) +
# specifies the ends of the boxplot as errorbars
stat_boxplot(geom = "errorbar", width = 0.4) +
geom_boxplot(outlier.size = 0.5) +
scale_fill_manual(values = c("#ef8a62",
"#67a9cf")) +
theme_classic() +
labs(x = "Duplication Level",
y = "Percent (%) of Sequences",
fill = NULL,
title = "Sequence duplication levels") +
theme(legend.position = "bottom",
axis.text.x = element_text(
angle = 90,
hjust = 1,
vjust = 0.4
)) +
# creates separate windows for each group
facet_wrap( ~ group, scales = "free")
```
### Sequence quality scores
##### This plot presents the sequence quality scores for all reads in each group. The X-axis represents the phred-score for each read, and the Y-axis represents the number of reads per phred-score. A higher score is better.
```
df_list$per_sequence_quality_scores %>%
# import information from another data frame from same list,
# to get grouping information etc.
left_join(., df_list$basic_statistics[, c("ref", "Sequence length", "group")], by = "ref") %>%
rename(seqlen = "Sequence length") %>%
ggplot(aes(factor(Quality), Count, fill = factor(Quality))) +
# specifies the ends of the boxplot as errorbars
stat_boxplot(geom = "errorbar", width = 0.4) +
geom_boxplot(outlier.size = 0.5) +
scale_y_continuous(labels = comma) +
scale_x_discrete(breaks = c(0, 5, 10, 15, 20, 25, 30, 35, 40)) +
scale_fill_viridis(discrete = TRUE) +
labs(x = "Quality",
y = "Number of reads",
title = "Per sequence quality scores") +
guides(fill = FALSE) +
theme_classic() +
theme(axis.text.x = element_text(size = 10)) +
# creates separate windows for each group
facet_wrap( ~ group)
```
### Per sequence GC content
##### This plot presents the % GC-content for all reads in each group.
```
df_list$per_sequence_gc_content %>%
# import information from another data frame from same list,
# to get grouping information etc.
left_join(., df_list$basic_statistics[, c("ref", "Sequence length", "group")], by = "ref") %>%
rename(seqlen = "Sequence length") %>%
ggplot(aes(factor(`GC Content`), Count)) +
# specifies the ends of the boxplot as errorbars
stat_boxplot(geom = "errorbar", width = 0.4) +
geom_boxplot(outlier.size = 0.5) +
labs(x = "GC content (%)",
y = "Number of reads",
title = "Per sequence GC content") +
scale_y_continuous(labels = comma) +
scale_x_discrete(breaks = as.character(seq(
from = 0, to = 100, by = 10
))) +
theme_classic() +
# creates separate windows for each group
facet_wrap( ~ group, scales = "free")
```
### Per base N content
##### This plot presents the per base N content for all reads in each group. N's are added to the read instead of a base if the sequencer is unable to make the base call with sufficient confidence. As before, the X-axis represents the position in the read, where left is the start of the read and right is the end.
```
df_list$per_base_n_content %>%
# import information from another data frame from same list,
# to get grouping information etc.
left_join(., df_list$basic_statistics[, c("ref", "Sequence length", "group")], by = "ref") %>%
rename(seqlen = "Sequence length") %>%
ggplot(aes(factor(
Base, levels = unique(Base), ordered = TRUE
), `N-Count`)) +
# specifies the ends of the boxplot as errorbars
stat_boxplot(geom = "errorbar", width = 0.4) +
geom_boxplot(fill = "#e6e6e6",
outlier.size = 0.5) +
labs(x = "Position in read",
title = "Per base N content") +
guides(fill = FALSE) +
theme_classic() +
theme(axis.text.x = element_blank(),
axis.ticks.x = element_blank()) +
# creates separate windows for each group
facet_wrap( ~ group, scales = "free", dir = "v")
```
### Total deduplicated percentage
##### This plot presents the total deduplicated percentage for all reads in each group.
```
df_list$total_deduplicated_percentage %>%
gather(key = "ref", value = `1`) %>%
# import information from another data frame from same list,
# to get grouping information etc.
left_join(., df_list$basic_statistics[, c("ref", "Sequence length", "group")], by = "ref") %>%
rename(seqlen = "Sequence length",
perc = `1`) %>%
mutate(perc = as.numeric(perc)) %>%
ggplot(aes(factor(group), perc)) +
# specifies the ends of the boxplot as errorbars
stat_boxplot(geom = "errorbar", width = 0.4) +
geom_boxplot(fill = "#e6e6e6",
outlier.size = 0.5) +
scale_y_continuous(limits = c(0, 100)) +
labs(x = "Group",
y = "Total percentage of deduplicated reads",
title = "Total deduplicated percentage") +
theme_classic()
```
### Per base mean sequence quality
##### This plot presents the per base mean sequence quality for all reads in each group. The green line depicts the "good quality" limit of the phred-score 28. As before, the X-axis represents the position in each read.
```
df_list$per_base_sequence_quality %>%
# import information from another data frame from same list,
# to get grouping information etc.
left_join(., df_list$basic_statistics[, c("ref", "Sequence length", "group")], by = "ref") %>%
rename(seqlen = "Sequence length") %>%
mutate(Base = factor(Base,
levels = unique(Base),
ordered = TRUE)) %>%
group_by(group) %>%
mutate(xmax = length(unique(Base)) + 1) %>%
ungroup() %>%
ggplot(aes(Base,
Mean)) +
# specifies the ends of the boxplot as errorbars
stat_boxplot(geom = "errorbar", width = 0.4) +
geom_boxplot(outlier.size = 0.4,
fill = "#7f7f7f") +
geom_hline(aes(yintercept = 28),
color = "green") +
labs(x = "Position in read",
y = "Sequence quality",
title = "Per base mean sequence quality") +
scale_y_continuous(limits = c(0, 42)) +
theme_classic() +
theme(axis.text.x = element_blank(),
axis.ticks.x = element_blank()) +
# creates separate windows for each group
facet_wrap(~ group, scales = "free", dir = "v")
```
```
# %load ../startup.py
import os,sys
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
os.environ['PYTHONPATH']=os.environ.get('LIB')
if not os.environ.get('LIB') in sys.path:
sys.path.insert(0,os.environ.get('LIB'))
DAT_DIR = os.environ.get('DAT_DIR')
%load_ext autoreload
%autoreload 2
%pylab inline
%matplotlib inline
%load_ext rpy2.ipython
import matplotlib.text as text
import pandas as pd
import numpy as np
import pylab as pl
import scipy as sp
import sys
import rpy2
import os
from matplotlib import gridspec
from scipy.interpolate import splev, splrep
import xlwt
import seaborn as sns
from scipy import stats
import rpy2.robjects.packages as rpackages
import seaborn as sns
from statsmodels import robust
from heprnhci.db.mongo import *
from IPython.core.display import display, HTML
display(HTML("<style>.container {width:80% !important;}</style>")) # increase jupyter screen width to 80%
pd.options.display.max_colwidth = 500
pd.set_option('display.precision',2)
HCI=openMongo(db=os.environ.get('MONGO_HCIDB'),host=os.environ.get('MONGO_HOST'),auth=False)
```
# Goal: QC
* Outlier detection and removal
* Intraplate
* Distribution of cv per endpoint
* Interplate
* Distribution of corrcoef for ctrl+ FC
* Distribution of AC50 for ctrl+
# Data
```
Res=[]
Res =pd.DataFrame(list(HCI.heprn_raw.find({},dict(_id=0,plate_id=1,timeh=1,samples=1))))
Res.groupby('timeh').aggregate(dict(plate_id=len))
HCI.heprn_raw.find_one()['wells']
R0 = []
for X in HCI.heprn_raw.find():
R0.append(pd.DataFrame(X['wells']))
R0 = pd.concat(R0)
R0.groupby(['timeh','FN1']).aggregate(dict(dsstox_sid=len))
R0[['plate_id','row','col']].drop_duplicates().shape[0] * 4 * 6
(4+88)*2*6*10*10
```
## Outliers
```
from scipy.stats import iqr
def outHi(X): return np.percentile(X,75)+1.5*iqr(X)
def outLo(X): return np.percentile(X,25)-1.5*iqr(X)
Res = []
for X in HCI.heprn_raw.find():
print(X['plate_id'])
R0 = pd.DataFrame(X['wells'])\
.pivot_table(index=['dsstox_sid','sample_id','name','timeh',
'stype','conc','row','col'],
columns='FN1',
values='raw_value')
for ft in R0.columns:
hi,lo = outHi(R0[ft]),outLo(R0[ft])
nhi,nlo=(R0[ft]>hi).sum(),(R0[ft]<lo).sum()
R = dict(plate_id=X['plate_id'],timeh=X['timeh'],
ft=ft,n_gt_hi=nhi,n_lt_lo=nlo,n=R0.shape[0])
Res.append(R)
OUT0=pd.DataFrame(Res)
OUT0.loc[:,'nout']=OUT0.n_gt_hi+OUT0.n_lt_lo
OUT0.loc[:,'fout']=OUT0.nout/OUT0.n*1.0
OUT0.groupby(['plate_id','timeh']).aggregate(dict(fout=np.mean,nout=sum))
OUT0.head()
import seaborn as sns
sns.set(style='whitegrid')
g=sns.catplot(x='timeh',y='fout',data=OUT0,col='ft',col_wrap=3,size=2,aspect=1.5)
#sns.catplot(x='timeh',y='n_lt_lo',data=OUT0,row='ft',size=2,aspect=2,
# color='green')
g.savefig(FIG_DIR+'heprn-outlier-frac-v1a.png')
```
### Outlier summary
* there is a time-dependent increase in outliers
* less than 10% of points are outliers across all plates
* label outliers for later evaluation in conc-response analysis
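The flagging rule applied above is Tukey's fences: a point is flagged if it falls more than 1.5×IQR outside the quartiles. A self-contained sketch of the same rule, using plain NumPy in place of `scipy.stats.iqr`:

```python
import numpy as np

def tukey_fences(x):
    # Tukey's rule: flag points beyond Q1 - 1.5*IQR or Q3 + 1.5*IQR
    q1, q3 = np.percentile(x, 25), np.percentile(x, 75)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

x = np.array([1, 2, 3, 4, 5, 100])
lo, hi = tukey_fences(x)
print(x[(x < lo) | (x > hi)])  # [100]
```

The per-plate loop above does exactly this per endpoint, then records the counts on each side of the fences.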
# Distribution of CVs intraplate
```
Res = []
def cv(X): return np.std(X)/np.mean(X)
for X in HCI.heprn_raw.find(dict(timeh={'$in':[24,48,72]})):
print(X['plate_id'])
R0 = pd.DataFrame(X['wells'])\
.groupby(['dsstox_sid','sample_id','name','timeh',
'stype','FN1'])\
.aggregate(dict(raw_value=cv))
Res.append(R0.reset_index())
CV0=pd.concat(Res)
CV0.rename(columns=dict(raw_value='cv'),inplace=True)
#d.DataFrame(list(HCI.chemicals.find(dict(stype='ctrl+'),dict(_id=0))))
#CV0.head()
X = CV0.groupby(['timeh','FN1','stype']).aggregate(dict(cv=[np.mean]))
```
## Overall intraplate CVs
```
CV0.replace(dict(stype={'chem_test':'test chem','ctrl+':'pos. ctrl','ctrl-':'DMSO'}),inplace=True)
X = CV0[CV0.stype!='pos. ctrl'].pivot_table(index=['timeh','stype'],columns='FN1',
values='cv',aggfunc=np.mean)
#.query("stype!='ctrl+'")\
X.insert(X.shape[1],'Ave. cv (time)',X.mean(axis=1))
X.loc[('','Ave. cv (endpoint)'),:] = X.mean(axis=0)
X.to_excel(SUP_DIR+'S2-heprn-intra-plate-cv-stype.xlsx')
X
B = ''
for i,x in X.loc[('')].iloc[:,:-1].T.sort_values('Ave. cv (endpoint)').round(decimals=2).reset_index().iterrows():
B += " %s (%3.2f)," % tuple(x)
B
```
## Summary of CV intraplate
* The mean CVs for all endpoints across all time points was 0.13, and it varied from 0.04 for NS to 0.30 for Apoptosis
* The CVs for all endpoints showed a time-dependent increase
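The statistic summarized here is the coefficient of variation, std/mean, which makes spread comparable across endpoints measured on different scales. A standalone sketch of the `cv` helper used above:

```python
import numpy as np

def cv(x):
    # coefficient of variation: population std divided by the mean
    return np.std(x) / np.mean(x)

# same units cancel out, so CVs are comparable across endpoints
print(cv(np.array([10.0, 10.0, 10.0])))          # 0.0
print(round(cv(np.array([2.0, 4.0])), 4))        # 0.3333
```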
# Inter-plate CV
Use the effect information to determine the CV for the same treatment across plates using stype='ctrl+'
```
#HCI.heprn_ch.find_one()
FT = pd.DataFrame(list(HCI.hci_feats.find({},dict(_id=0))))
Res = []
def cv(X): return np.std(X)/np.mean(X)
for X in HCI.heprn_ch.find():
print(X['plate_id'])
R0 = pd.DataFrame(X['chem_ch']).query("stype=='ctrl+'")
Res.append(R0)
Z0=pd.concat(Res)
Z0 = Z0.merge(FT[['FA0','FN1']],on='FA0')
Z0.columns
Res = []
def cv(X): return np.abs(np.std(X)/np.mean(X))
X0 = Z0[Z0.timeh>6].pivot_table(index=['timeh','stype'],columns='FN1',
values='z_plt',aggfunc=cv)
X0
Z0[Z0.timeh>6].pivot_table(index=['timeh','stype'],columns='FN1',
values='z_ctrl',aggfunc=cv)
Z0[Z0.timeh>6].pivot_table(index=['timeh','stype'],columns='FN1',
values='l2fc_plt',aggfunc=cv)
```
# 04: Matrix - An Exercise in Parallelism
An early use for Spark has been Machine Learning. Spark's `MLlib` library of algorithms contains classes for vectors and matrices, which are important for many ML algorithms. This exercise uses a simpler representation of matrices to explore another topic: explicit parallelism.
The sample data is generated internally; there is no input that is read. The output is written to the file system as before.
See the corresponding Spark job [Matrix4.scala](https://github.com/deanwampler/spark-scala-tutorial/blob/master/src/main/scala/sparktutorial/Matrix4.scala).
Let's start with a class to represent a Matrix.
```
/**
* A special-purpose matrix case class. Each cell is given the value
* i*N + j for indices (i,j), counting from 0.
* Note: Must be serializable, which is automatic for case classes.
*/
case class Matrix(m: Int, n: Int) {
assert(m > 0 && n > 0, "m and n must be > 0")
private def makeRow(start: Long): Array[Long] =
Array.iterate(start, n)(i => i+1)
private val repr: Array[Array[Long]] =
Array.iterate(makeRow(0), m)(rowi => makeRow(rowi(0) + n))
/** Return row i, <em>indexed from 0</em>. */
def apply(i: Int): Array[Long] = repr(i)
/** Return the (i,j) element, <em>indexed from 0</em>. */
def apply(i: Int, j: Int): Long = repr(i)(j)
private val cellFormat = {
val maxEntryLength = (m*n - 1).toString.length
s"%${maxEntryLength}d"
}
private def rowString(rowI: Array[Long]) =
rowI map (cell => cellFormat.format(cell)) mkString ", "
override def toString = repr map rowString mkString "\n"
}
```
Some variables:
```
val nRows = 5
val nCols = 10
val out = "output/matrix4"
```
Let's create a matrix.
```
val matrix = Matrix(nRows, nCols)
```
With a Scala data structure like this, we can use `SparkContext.parallelize` to convert it into an `RDD`. In this case, we'll actually create an `RDD` of row indices, `1 to nRows`. Then we'll map over that `RDD` and use each index to compute the sum and average of the corresponding row's columns. Finally, we'll "collect" the results back to an `Array` in the driver.
```
val sums_avgs = sc.parallelize(1 to nRows).map { i =>
// Matrix indices count from 0.
val sum = matrix(i-1) reduce (_ + _) // Recall that "_ + _" is the same as "(i1, i2) => i1 + i2".
(sum, sum/nCols) // We'll return RDD[(sum, average)]
}.collect // ... then convert to an array
```
## Recap
`SparkContext.parallelize` is a convenient way to convert a data structure into an `RDD`.
## Exercises
### Exercise 1: Try different values of nRows and nCols
### Exercise 2: Try other statistics, like standard deviation
The code for the standard deviation that you would add is the following:
```scala
val row = matrix(i-1)
...
val sum = row reduce (_ + _)
val sumsquares = row.map(x => x*x).reduce(_+_)
// population standard deviation: sqrt(E[x^2] - E[x]^2); 1.0* => Double arithmetic for the sqrt!
val stddev = math.sqrt(1.0*sumsquares/nCols - (1.0*sum/nCols)*(1.0*sum/nCols))
```
Given the synthesized data in the matrix, are the average and standard deviation actually very meaningful here, if this were representative of real data?
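As a cross-check of the Spark version, the same per-row sum and (truncating integer) average can be sketched in plain Python, using the same cell values `i*N + j`:

```python
n_rows, n_cols = 5, 10

# cell (i, j) holds i*n_cols + j, matching the Matrix case class above
matrix = [[i * n_cols + j for j in range(n_cols)] for i in range(n_rows)]

# per-row (sum, average); // mirrors Scala's truncating Long division
sums_avgs = [(sum(row), sum(row) // n_cols) for row in matrix]
print(sums_avgs[0])  # (45, 4): row 0 holds 0..9
```

The Spark job distributes exactly this loop over the cluster; here the work is sequential, which is fine at this size.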
# Text Analysis - Dictionary of the Spanish language
- **Created by: Andrés Segura-Tinoco**
- **Created on: Aug 20, 2020**
- **Created on: Aug 02, 2021**
- **Data: Dictionary of the Spanish language**
### Text Analysis
1. Approximate number of words in the DSL
2. Number of words with acute accent in Spanish language
3. Frequency of words per size
4. Top 15 bigger words
5. Frequency of letters in DSL words
6. Vowel and consonant ratio
7. Frequency of words per letter of the alphabet
8. Most frequent n-grams
```
# Load Python libraries
import re
import codecs
import pandas as pd
from collections import Counter
# Import plot libraries
import matplotlib.pyplot as plt
```
### Util functions
```
# Util function - Read a plain text file
def read_file_lines(file_path):
lines = []
with codecs.open(file_path, encoding='utf-8') as f:
for line in f:
lines.append(line)
return lines
# Util function - Apply data quality to words
def apply_dq_word(word):
new_word = word.replace('\n', '')
# Get first token
if ',' in new_word:
new_word = new_word.split(',')[0]
# Remove extra whitespaces
new_word = new_word.strip()
# Remove digits
    while re.search(r"\d", new_word):
new_word = new_word[0:len(new_word)-1]
return new_word
# Util function - Plot column chart
def plot_col_chart(df, figsize, x_var, y_var, title, color='#1f77b4', legend=None, x_label=None):
fig, ax = plt.subplots()
df.plot.bar(ax=ax, x=x_var, y=y_var, color=color, figsize=figsize)
if legend:
ax.legend(legend)
else:
ax.get_legend().remove()
if x_label:
x = np.arange(len(x_label))
plt.xticks(x, x_label, rotation=0)
else:
plt.xticks(rotation=0)
plt.title(title, fontsize=16)
plt.xlabel(x_var.capitalize())
plt.ylabel(y_var.capitalize())
plt.show()
# Util function - Plot bar chart
def plot_bar_chart(df, figsize, x_var, y_var, title, color='#1f77b4', legend=None):
fig, ax = plt.subplots()
    df.plot.barh(ax=ax, x=x_var, y=y_var, color=color, figsize=figsize)
if legend:
ax.legend(legend)
else:
ax.get_legend().remove()
plt.title(title, fontsize=16)
plt.xlabel(y_var.capitalize())
plt.ylabel(x_var.capitalize())
plt.show()
```
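As a quick sanity check of the cleaning logic, here is `apply_dq_word` applied to a couple of hypothetical dictionary lines (the function is repeated so the snippet runs standalone):

```python
import re

def apply_dq_word(word):
    # Same logic as above, repeated so this snippet runs on its own
    new_word = word.replace('\n', '')
    if ',' in new_word:
        new_word = new_word.split(',')[0]  # keep the first comma-separated token
    new_word = new_word.strip()
    while re.search(r"\d", new_word):
        new_word = new_word[:-1]           # trim trailing digits
    return new_word

print(apply_dq_word("abadejo, abadejos\n"))  # -> abadejo
print(apply_dq_word("zanja1\n"))             # -> zanja
```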
## 1. Approximate number of words in the DSL
```
# Range of files by first letter of word
letter_list = list(map(chr, range(97, 123)))
letter_list.append('ñ')
len(letter_list)
# Read words by letter [a-z]
word_dict = Counter()
file_path = '../data/dics/'
# Read data only first time
for letter in letter_list:
filename = file_path + letter + '.txt'
word_list = read_file_lines(filename)
for word in word_list:
word = apply_dq_word(word)
word_dict[word] += 1
# Show results
n_words = len(word_dict)
print('Total of different words: %d' % n_words)
```
## 2. Number of words with acute accent in Spanish language
```
# Counting words with acute accent
aa_freq = Counter()
regexp = re.compile('[áéíóúÁÉÍÓÚ]')
for word in word_dict.keys():
match = regexp.search(word.lower())
if match:
l = match.group(0)
aa_freq[l] += 1
# Show results
count = sum(aa_freq.values())
perc_words = 100.0 * count / n_words
print('Total words with acute accent: %d (%0.2f %s)' % (count, perc_words, '%'))
# Cooking dataframe
df = pd.DataFrame.from_records(aa_freq.most_common(), columns = ['vowel', 'frequency']).sort_values(by=['vowel'])
df['perc'] = round(100.0 * df['frequency'] / count, 2)
df
# Plotting data
figsize = (12, 6)
x_var = 'vowel'
y_var = 'perc'
title = 'Frequency of accented vowels'
plot_col_chart(df, figsize, x_var, y_var, title)
```
## 3. Frequency of words per size
```
# Processing
word_size = Counter()
for word in word_dict.keys():
size = len(word)
word_size[size] += 1
# Cooking dataframe
df = pd.DataFrame.from_records(word_size.most_common(), columns = ['size', 'frequency']).sort_values(by=['size'])
df['perc'] = 100.0 * df['frequency'] / n_words
df
# Plotting data
figsize = (12, 6)
x_var = 'size'
y_var = 'frequency'
title = 'Frequency of words per size'
plot_col_chart(df, figsize, x_var, y_var, title)
```
## 4. Top 15 longest words
```
# Processing
top_size = Counter()
threshold = 21
for word in word_dict.keys():
size = len(word)
if size >= threshold:
top_size[word] = size
# Top 15 longest words
top_size.most_common(15)
```
## 5. Frequency of letters in DSL words
```
# Processing
letter_freq = Counter()
for word in word_dict.keys():
word = word.lower()
for l in word:
letter_freq[l] += 1
n_total = sum(letter_freq.values())
n_total
# Cooking dataframe
df = pd.DataFrame.from_records(letter_freq.most_common(), columns = ['letter', 'frequency']).sort_values(by=['letter'])
df['perc'] = 100.0 * df['frequency'] / n_total
df
# Plotting data
figsize = (12, 6)
x_var = 'letter'
y_var = 'frequency'
title = 'Letter frequency in DSL words'
plot_col_chart(df, figsize, x_var, y_var, title)
# Plotting sorted data
figsize = (12, 6)
x_var = 'letter'
y_var = 'perc'
title = 'Letter frequency in DSL words (Sorted)'
color = '#2ca02c'
plot_col_chart(df.sort_values(by='perc', ascending=False), figsize, x_var, y_var, title, color)
```
## 6. Vowel and consonant ratio
```
vowel_list = 'aeiouáéíóúèîü'
vowel_total = 0
consonant_total = 0
for ix, row in df.iterrows():
letter = str(row['letter'])
freq = int(row['frequency'])
if letter in vowel_list:
vowel_total += freq
elif letter.isalpha():
consonant_total += freq
letter_total = vowel_total + consonant_total
# Initialize list of lists
data = [['vowels', vowel_total, (100.0 * vowel_total / letter_total)],
['consonant', consonant_total, (100.0 * consonant_total / letter_total)]]
# Create the pandas DataFrame
df = pd.DataFrame(data, columns = ['type', 'frequency', 'perc'])
df
# Plotting data
figsize = (6, 6)
x_var = 'type'
y_var = 'perc'
title = 'Vowel and consonant ratio'
plot_col_chart(df, figsize, x_var, y_var, title)
```
## 7. Frequency of words per letter of the alphabet
```
norm_dict = {'á':'a', 'é':'e', 'í':'i', 'ó':'o', 'ú':'u'}
# Processing
first_letter_freq = Counter()
for word in word_dict.keys():
first_letter = word[0].lower()
if first_letter.isalpha():
if first_letter in norm_dict.keys():
first_letter = norm_dict[first_letter]
first_letter_freq[first_letter] += 1
# Cooking dataframe
df = pd.DataFrame.from_records(first_letter_freq.most_common(), columns = ['letter', 'frequency']).sort_values(by=['letter'])
df['perc'] = 100.0 * df['frequency'] / n_words
df
# Plotting data
figsize = (12, 6)
x_var = 'letter'
y_var = 'frequency'
title = 'Frequency of words per letter of the alphabet'
plot_col_chart(df, figsize, x_var, y_var, title)
# Plotting sorted data
figsize = (12, 6)
x_var = 'letter'
y_var = 'perc'
title = 'Frequency of words per letter of the alphabet (Sorted)'
color = '#2ca02c'
plot_col_chart(df.sort_values(by='perc', ascending=False), figsize, x_var, y_var, title, color)
```
## 8. Most frequent n-grams
```
# Processing
top_ngrams = 25
bi_grams = Counter()
tri_grams = Counter()
for word in word_dict.keys():
word = word.lower()
n = len(word)
size = 2
for i in range(size, n+1):
n_grams = word[i-size:i]
bi_grams[n_grams] += 1
size = 3
for i in range(size, n+1):
n_grams = word[i-size:i]
tri_grams[n_grams] += 1
# Cooking dataframe
df_bi = pd.DataFrame.from_records(bi_grams.most_common(top_ngrams), columns=['bi-grams', 'frequency'])
df_tri = pd.DataFrame.from_records(tri_grams.most_common(top_ngrams), columns=['tri-grams', 'frequency'])
# Plotting sorted data
figsize = (8, 10)
x_var = 'bi-grams'
y_var = 'frequency'
title = str(top_ngrams) + ' bi-grams most frequent in Spanish'
plot_bar_chart(df_bi.sort_values(by=['frequency']), figsize, x_var, y_var, title)
# Plotting sorted data
figsize = (8, 10)
x_var = 'tri-grams'
y_var = 'frequency'
title = str(top_ngrams) + ' tri-grams most frequent in Spanish'
plot_bar_chart(df_tri.sort_values(by=['frequency']), figsize, x_var, y_var, title)
```
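The sliding-window slicing used to build the n-grams can be checked on a single word:

```python
word = "casa"
size = 2
# Same slicing as in the loop above: word[i-size:i] for i in [size, len(word)]
bigrams = [word[i - size:i] for i in range(size, len(word) + 1)]
print(bigrams)  # ['ca', 'as', 'sa']
```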
---
<a href="https://ansegura7.github.io/DSL_Analysis/">« Home</a>
# Example Analysis
*CLARITE facilitates the quality control and analysis process for EWAS of metabolic-related traits*
Data from NHANES was used in an EWAS analysis utilizing the provided survey weight information. The first two cycles of NHANES (1999-2000 and 2001-2002) are assigned to a 'discovery' dataset and the next two cycles (2003-2004 and 2005-2006) are assigned to a 'replication' dataset.
```
import pandas as pd
import numpy as np
from scipy import stats
import clarite
pd.options.display.max_rows = 10
pd.options.display.max_columns = 6
```
## Load Data
```
data_folder = "../../../../data/NHANES_99-06/"
data_main_table_over18 = data_folder + "MainTable_keepvar_over18.tsv"
data_main_table = data_folder + "MainTable.csv"
data_var_description = data_folder + "VarDescription.csv"
data_var_categories = data_folder + "VarCat_nopf.txt"
output = "output"
```
### Data of all samples with age >= 18
```
# Data
nhanes = clarite.load.from_tsv(data_main_table_over18, index_col="ID")
nhanes.head()
```
### Variable Descriptions
```
var_descriptions = pd.read_csv(data_var_description)[["tab_desc","module","var","var_desc"]]\
.drop_duplicates()\
.set_index("var")
var_descriptions.head()
# Convert variable descriptions to a dictionary for convenience
var_descr_dict = var_descriptions["var_desc"].to_dict()
```
### Survey Weights, as provided by NHANES
Survey weight information is used so that the results apply to the US civilian non-institutionalized population.
This includes:
* SDMVPSU (Cluster ID)
* SDMVSTRA (Nested Strata ID)
* 2-year weights
* 4-year weights
Different variables require different weights, as many of them were measured on a subset of the full dataset. For example:
* *WTINT* is the survey weight for interview variables.
* *WTMEC* is the survey weight for variables measured in the Mobile Exam Centers (a subset of interviewed samples)
2-year and 4-year weights are provided. It is important to adjust the weights when combining multiple cycles, by computing the weighted average. In this case 4-year weights (covering the first 2 cycles) are provided by NHANES and the replication weights (the 3rd and 4th cycles) were computed from the 2-year weights prior to loading them here.
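That weight adjustment can be sketched as follows, with hypothetical IDs and weight values: when two 2-year cycles are pooled, each cycle covers half of the combined period, so each sample's 4-year weight is its 2-year weight divided by 2.

```python
import pandas as pd

# Hypothetical 2-year MEC weights for two cycles that will be combined
cycle_a = pd.DataFrame({"ID": [1, 2], "WTMEC2YR": [5000.0, 7000.0]})
cycle_b = pd.DataFrame({"ID": [3, 4], "WTMEC2YR": [6000.0, 8000.0]})

combined = pd.concat([cycle_a, cycle_b], ignore_index=True)
# Each cycle represents half of the combined 4-year period
combined["WTMEC4YR"] = combined["WTMEC2YR"] / 2
print(combined["WTMEC4YR"].tolist())  # [2500.0, 3500.0, 3000.0, 4000.0]
```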
```
survey_design_discovery = pd.read_csv(data_folder + "weights/weights_discovery.txt", sep="\t")\
.rename(columns={'SEQN':'ID'})\
.set_index("ID")\
.drop(columns="SDDSRVYR")
survey_design_discovery.head()
survey_design_replication = pd.read_csv(data_folder + "weights/weights_replication_4yr.txt", sep="\t")\
.rename(columns={'SEQN':'ID'})\
.set_index("ID")\
.drop(columns="SDDSRVYR")
survey_design_replication.head()
# These files map variables to their correct weights, and were compiled by reading through the NHANES codebook
var_weights = pd.read_csv(data_folder + "weights/VarWeights.csv")
var_weights.head()
# Convert the data to two dictionaries for convenience
weights_discovery = var_weights.set_index('variable_name')['discovery'].to_dict()
weights_replication = var_weights.set_index('variable_name')['replication'].to_dict()
```
### Survey Year data
Survey year is found in a separate file and can be matched using the *SEQN* ID value.
```
survey_year = pd.read_csv(data_main_table)[["SEQN", "SDDSRVYR"]].rename(columns={'SEQN':'ID'}).set_index("ID")
nhanes = clarite.modify.merge_variables(nhanes, survey_year, how="left")
```
## Define the phenotype and covariates
```
phenotype = "BMXBMI"
print(f"{phenotype} = {var_descriptions.loc[phenotype, 'var_desc']}")
covariates = ["female", "black", "mexican", "other_hispanic", "other_eth", "SES_LEVEL", "RIDAGEYR", "SDDSRVYR"]
```
## Initial cleanup / variable selection
### Remove any samples missing the phenotype or one of the covariates
```
nhanes = clarite.modify.rowfilter_incomplete_obs(nhanes, only=[phenotype] + covariates)
```
### Remove variables that aren't appropriate for the analysis
#### Physical fitness measures
These are measurements rather than proxies for environmental exposures
```
phys_fitness_vars = ["CVDVOMAX","CVDESVO2","CVDS1HR","CVDS1SY","CVDS1DI","CVDS2HR","CVDS2SY","CVDS2DI","CVDR1HR","CVDR1SY","CVDR1DI","CVDR2HR","CVDR2SY","CVDR2DI","physical_activity"]
for v in phys_fitness_vars:
print(f"\t{v} = {var_descr_dict[v]}")
nhanes = nhanes.drop(columns=phys_fitness_vars)
```
#### Lipid variables
These are likely correlated with BMI in some way
```
lipid_vars = ["LBDHDD", "LBDHDL", "LBDLDL", "LBXSTR", "LBXTC", "LBXTR"]
print("Removing lipid measurement variables:")
for v in lipid_vars:
print(f"\t{v} = {var_descr_dict[v]}")
nhanes = nhanes.drop(columns=lipid_vars)
```
#### Indeterminate variables
These variables don't have clear meanings
```
indeterminent_vars = ["house_type","hepa","hepb", "house_age", "current_past_smoking"]
print("Removing variables with indeterminate meanings:")
for v in indeterminent_vars:
print(f"\t{v} = {var_descr_dict[v]}")
nhanes = nhanes.drop(columns=indeterminent_vars)
```
### Recode "missing" values
```
# SMQ077 and DBD100 have Refused/Don't Know coded as "7" and "9"
nhanes = clarite.modify.recode_values(nhanes, {7: np.nan, 9: np.nan}, only=['SMQ077', 'DBD100'])
```
### Split the data into *discovery* and *replication*
```
discovery = (nhanes['SDDSRVYR']==1) | (nhanes['SDDSRVYR']==2)
replication = (nhanes['SDDSRVYR']==3) | (nhanes['SDDSRVYR']==4)
nhanes_discovery = nhanes.loc[discovery]
nhanes_replication = nhanes.loc[replication]
nhanes_discovery.head()
nhanes_replication.head()
```
## QC
### Minimum of 200 non-NA values in each variable
Drop variables that have too small of a sample size
```
nhanes_discovery = clarite.modify.colfilter_min_n(nhanes_discovery, skip=[phenotype] + covariates)
nhanes_replication = clarite.modify.colfilter_min_n(nhanes_replication, skip=[phenotype] + covariates)
```
### Categorize Variables
This is important, as different variable types must be processed in different ways. The number of unique values for each variable is a good heuristic for determining this. The default settings were used here, but different cutoffs can be specified. CLARITE reports the results in neatly formatted text:
```
nhanes_discovery = clarite.modify.categorize(nhanes_discovery)
nhanes_replication = clarite.modify.categorize(nhanes_replication)
```
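The unique-value heuristic can be sketched like this; the cutoffs below are illustrative assumptions, not CLARITE's actual defaults:

```python
import pandas as pd

def guess_type(series, cat_cutoff=6, cont_cutoff=15):
    # Assumed cutoffs for illustration only
    n = series.nunique(dropna=True)
    if n == 2:
        return "binary"
    elif n <= cat_cutoff:
        return "categorical"
    elif n >= cont_cutoff:
        return "continuous"
    return "unknown"

print(guess_type(pd.Series([0, 1, 0, 1, 0])))     # binary
print(guess_type(pd.Series([1, 2, 3, 1, 2, 3])))  # categorical
print(guess_type(pd.Series(range(100))))          # continuous
```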
### Checking categorization
#### Distributions of variables may be plotted using CLARITE:
``` python
clarite.plot.distributions(nhanes_discovery,
filename="discovery_distributions.pdf",
continuous_kind='count',
nrows=4,
ncols=3,
quality='medium')
```
#### One variable needed correcting where the heuristic was not correct
```
v = "L_GLUTAMINE_gm"
print(f"\t{v} = {var_descr_dict[v]}\n")
nhanes_discovery = clarite.modify.make_continuous(nhanes_discovery, only=[v])
nhanes_replication = clarite.modify.make_continuous(nhanes_replication, only=[v])
```
#### After examining all of the uncategorized variables, they are all continuous
```
discovery_types = clarite.describe.get_types(nhanes_discovery)
discovery_unknown = discovery_types[discovery_types == 'unknown'].index
for v in list(discovery_unknown):
print(f"\t{v} = {var_descr_dict[v]}")
nhanes_discovery = clarite.modify.make_continuous(nhanes_discovery, only=discovery_unknown)
replication_types = clarite.describe.get_types(nhanes_replication)
replication_unknown = replication_types[replication_types == 'unknown'].index
for v in list(replication_unknown):
print(f"\t{v} = {var_descr_dict[v]}")
nhanes_replication = clarite.modify.make_continuous(nhanes_replication, only=replication_unknown)
```
#### Types should match across discovery/replication
```
# Take note of which variables were differently typed in each dataset
print("Correcting differences in variable types between discovery and replication")
# Merge current type series
dtypes = pd.DataFrame({'discovery':clarite.describe.get_types(nhanes_discovery),
'replication':clarite.describe.get_types(nhanes_replication)
})
diff_dtypes = dtypes.loc[(dtypes['discovery'] != dtypes['replication']) &
(~dtypes['discovery'].isna()) &
(~dtypes['replication'].isna())]
# Discovery
# Binary -> Categorical
compare_bin_cat = list(diff_dtypes.loc[(diff_dtypes['discovery']=='binary') &
(diff_dtypes['replication']=='categorical'),].index)
if len(compare_bin_cat) > 0:
print(f"Bin vs Cat: {', '.join(compare_bin_cat)}")
nhanes_discovery = clarite.modify.make_categorical(nhanes_discovery, only=compare_bin_cat)
print()
# Binary -> Continuous
compare_bin_cont = list(diff_dtypes.loc[(diff_dtypes['discovery']=='binary') &
(diff_dtypes['replication']=='continuous'),].index)
if len(compare_bin_cont) > 0:
print(f"Bin vs Cont: {', '.join(compare_bin_cont)}")
nhanes_discovery = clarite.modify.make_continuous(nhanes_discovery, only=compare_bin_cont)
print()
# Categorical -> Continuous
compare_cat_cont = list(diff_dtypes.loc[(diff_dtypes['discovery']=='categorical') &
(diff_dtypes['replication']=='continuous'),].index)
if len(compare_cat_cont) > 0:
print(f"Cat vs Cont: {', '.join(compare_cat_cont)}")
nhanes_discovery = clarite.modify.make_continuous(nhanes_discovery, only=compare_cat_cont)
print()
# Replication
# Binary -> Categorical
compare_cat_bin = list(diff_dtypes.loc[(diff_dtypes['discovery']=='categorical') &
(diff_dtypes['replication']=='binary'),].index)
if len(compare_cat_bin) > 0:
print(f"Cat vs Bin: {', '.join(compare_cat_bin)}")
nhanes_replication = clarite.modify.make_categorical(nhanes_replication, only=compare_cat_bin)
print()
# Binary -> Continuous
compare_cont_bin = list(diff_dtypes.loc[(diff_dtypes['discovery']=='continuous') &
(diff_dtypes['replication']=='binary'),].index)
if len(compare_cont_bin) > 0:
print(f"Cont vs Bin: {', '.join(compare_cont_bin)}")
nhanes_replication = clarite.modify.make_continuous(nhanes_replication, only=compare_cont_bin)
print()
# Categorical -> Continuous
compare_cont_cat = list(diff_dtypes.loc[(diff_dtypes['discovery']=='continuous') &
(diff_dtypes['replication']=='categorical'),].index)
if len(compare_cont_cat) > 0:
print(f"Cont vs Cat: {', '.join(compare_cont_cat)}")
nhanes_replication = clarite.modify.make_continuous(nhanes_replication, only=compare_cont_cat)
print()
```
### Filtering
These are a standard set of filters with default settings
```
# 200 non-na samples
discovery_1_min_n = clarite.modify.colfilter_min_n(nhanes_discovery)
replication_1_min_n = clarite.modify.colfilter_min_n(nhanes_replication)
# 200 samples per category
discovery_2_min_cat_n = clarite.modify.colfilter_min_cat_n(discovery_1_min_n, skip=[c for c in covariates + [phenotype] if c in discovery_1_min_n.columns])
replication_2_min_cat_n = clarite.modify.colfilter_min_cat_n(replication_1_min_n, skip=[c for c in covariates + [phenotype] if c in replication_1_min_n.columns])
# 90 percent zero filter
discovery_3_pzero = clarite.modify.colfilter_percent_zero(discovery_2_min_cat_n)
replication_3_pzero = clarite.modify.colfilter_percent_zero(replication_2_min_cat_n)
# Those without weights
keep = set(weights_discovery.keys()) | set([phenotype] + covariates)
discovery_4_weights = discovery_3_pzero[[c for c in list(discovery_3_pzero) if c in keep]]
keep = set(weights_replication.keys()) | set([phenotype] + covariates)
replication_4_weights = replication_3_pzero[[c for c in list(replication_3_pzero) if c in keep]]
```
### Summarize
```
# Summarize Results
print("\nDiscovery:")
clarite.describe.summarize(discovery_4_weights)
print('-'*50)
print("Replication:")
clarite.describe.summarize(replication_4_weights)
```
### Keep only variables that passed QC in both datasets
```
both = set(list(discovery_4_weights)) & set(list(replication_4_weights))
discovery_final = discovery_4_weights[both]
replication_final = replication_4_weights[both]
print(f"{len(both)} variables in common")
```
## Checking the phenotype distribution
The phenotype appears to be skewed, so it will need to be corrected. CLARITE makes it easy to plot distributions and to transform variables.
```
title = f"Discovery: Skew of BMXBMI = {stats.skew(discovery_final['BMXBMI']):.6}"
clarite.plot.histogram(discovery_final, column="BMXBMI", title=title, bins=100)
# Log-transform
discovery_final = clarite.modify.transform(discovery_final, transform_method='log', only='BMXBMI')
#Plot
title = f"Discovery: Skew of BMXBMI after log transform = {stats.skew(discovery_final['BMXBMI']):.6}"
clarite.plot.histogram(discovery_final, column="BMXBMI", title=title, bins=100)
title = f"Replication: Skew of BMXBMI = {stats.skew(replication_final['BMXBMI']):.6}"
clarite.plot.histogram(replication_final, column="BMXBMI", title=title, bins=100)
# Log-transform
replication_final = clarite.modify.transform(replication_final, transform_method='log', only='BMXBMI')
#Plot
title = f"Replication: Skew of BMXBMI after log transform = {stats.skew(replication_final['BMXBMI']):.6}"
clarite.plot.histogram(replication_final, column="BMXBMI", title=title, bins=100)
```
## EWAS
### Survey Design Spec
When utilizing survey data, a survey design spec object must be created.
```
sd_discovery = clarite.survey.SurveyDesignSpec(survey_df=survey_design_discovery,
strata="SDMVSTRA",
cluster="SDMVPSU",
nest=True,
weights=weights_discovery,
single_cluster='centered')
```
### EWAS
This can then be passed into the EWAS function
```
ewas_discovery = clarite.analyze.ewas(phenotype, covariates, discovery_final, sd_discovery)
```
There is a separate function for adding pvalues with multiple-test-correction applied.
```
clarite.analyze.add_corrected_pvalues(ewas_discovery)
```
Saving results is straightforward
```
ewas_discovery.to_csv(output + "/BMI_Discovery_Results.txt", sep="\t")
```
### Selecting top results
Variables with an FDR less than 0.1 were selected (using standard functionality from the Pandas library, since the ewas results are simply a Pandas DataFrame).
```
significant_discovery_variables = ewas_discovery[ewas_discovery['pvalue_fdr']<0.1].index.get_level_values('Variable')
print(f"Using {len(significant_discovery_variables)} variables based on FDR-corrected pvalues from the discovery dataset")
```
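For reference, the FDR adjustment behind `pvalue_fdr` (Benjamini-Hochberg) can be sketched as follows; this is the textbook construction and not necessarily identical to CLARITE's implementation:

```python
import numpy as np

def bh_fdr(pvalues):
    # Benjamini-Hochberg adjusted p-values (textbook construction)
    p = np.asarray(pvalues, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # Enforce monotone non-decreasing adjusted p-values from the largest rank down
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

print(bh_fdr([0.001, 0.008, 0.039, 0.041]))  # adjusted: 0.004, 0.016, 0.041, 0.041
```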
## Replication
The variables with low FDR in the discovery dataset were analyzed in the replication dataset
### Filter out variables
```
keep_cols = list(significant_discovery_variables) + covariates + [phenotype]
replication_final_sig = clarite.modify.colfilter(replication_final, only=keep_cols)
clarite.describe.summarize(replication_final_sig)
```
### Run Replication EWAS
```
survey_design_replication
sd_replication = clarite.survey.SurveyDesignSpec(survey_df=survey_design_replication,
strata="SDMVSTRA",
cluster="SDMVPSU",
nest=True,
weights=weights_replication,
single_cluster='centered')
ewas_replication = clarite.analyze.ewas(phenotype, covariates, replication_final_sig, sd_replication)
clarite.analyze.add_corrected_pvalues(ewas_replication)
ewas_replication.to_csv(output + "/BMI_Replication_Results.txt", sep="\t")
```
## Compare results
```
# Combine results
ewas_keep_cols = ['pvalue', 'pvalue_bonferroni', 'pvalue_fdr']
combined = pd.merge(ewas_discovery[['Variable_type'] + ewas_keep_cols],
ewas_replication[ewas_keep_cols],
left_index=True, right_index=True, suffixes=("_disc", "_repl"))
# FDR < 0.1 in both
fdr_significant = combined.loc[(combined['pvalue_fdr_disc'] <= 0.1) & (combined['pvalue_fdr_repl'] <= 0.1),]
fdr_significant = fdr_significant.assign(m=fdr_significant[['pvalue_fdr_disc', 'pvalue_fdr_repl']].mean(axis=1))\
.sort_values('m').drop('m', axis=1)
fdr_significant.to_csv(output + "/Significant_Results_FDR_0.1.txt", sep="\t")
print(f"{len(fdr_significant)} variables had FDR < 0.1 in both discovery and replication")
# Bonferroni < 0.05 in both
bonf_significant05 = combined.loc[(combined['pvalue_bonferroni_disc'] <= 0.05) & (combined['pvalue_bonferroni_repl'] <= 0.05),]
bonf_significant05 = bonf_significant05.assign(m=bonf_significant05[['pvalue_bonferroni_disc', 'pvalue_bonferroni_repl']].mean(axis=1))\
    .sort_values('m').drop('m', axis=1)
bonf_significant05.to_csv(output + "/Significant_Results_Bonferroni_0.05.txt", sep="\t")
print(f"{len(bonf_significant05)} variables had Bonferroni < 0.05 in both discovery and replication")
# Bonferroni < 0.01 in both
bonf_significant01 = combined.loc[(combined['pvalue_bonferroni_disc'] <= 0.01) & (combined['pvalue_bonferroni_repl'] <= 0.01),]
bonf_significant01 = bonf_significant01.assign(m=bonf_significant01[['pvalue_bonferroni_disc', 'pvalue_bonferroni_repl']].mean(axis=1))\
    .sort_values('m').drop('m', axis=1)
bonf_significant01.to_csv(output + "/Significant_Results_Bonferroni_0.01.txt", sep="\t")
print(f"{len(bonf_significant01)} variables had Bonferroni < 0.01 in both discovery and replication")
bonf_significant01.head()
```
## Manhattan Plots
CLARITE provides functionality for generating highly customizable Manhattan plots from EWAS results
```
data_categories = pd.read_csv(data_var_categories, sep="\t").set_index('Variable')
data_categories.columns = ['category']
data_categories = data_categories['category'].to_dict()
clarite.plot.manhattan({'discovery': ewas_discovery, 'replication': ewas_replication},
categories=data_categories, title="Weighted EWAS Results", filename=output + "/ewas_plot.png",
figsize=(14, 10))
```
<!--TITLE: Maximum Pooling-->
# Introduction #
In Lesson 2 we began our discussion of how the base in a convnet performs feature extraction. We learned about how the first two operations in this process occur in a `Conv2D` layer with `relu` activation.
In this lesson, we'll look at the third (and final) operation in this sequence: **condense** with **maximum pooling**, which in Keras is done by a `MaxPool2D` layer.
# Condense with Maximum Pooling #
Adding the condensing step to the model we had before gives us this:
```
import tensorflow.keras as keras
import tensorflow.keras.layers as layers
model = keras.Sequential([
layers.Conv2D(filters=64, kernel_size=3), # activation is None
layers.MaxPool2D(pool_size=2),
# More layers follow
])
```
A `MaxPool2D` layer is much like a `Conv2D` layer, except that it uses a simple maximum function instead of a kernel, with the `pool_size` parameter analogous to `kernel_size`. Unlike a convolutional layer with its kernel, however, a `MaxPool2D` layer has no trainable weights.
Let's take another look at the extraction figure from the last lesson. Remember that `MaxPool2D` is the **Condense** step.
<figure>
<!-- <img src="./images/2-show-extraction.png" width="1200" alt="An example of the feature extraction process."> -->
<img src="https://i.imgur.com/IYO9lqp.png" width="600" alt="An example of the feature extraction process.">
</figure>
Notice that after applying the ReLU function (**Detect**) the feature map ends up with a lot of "dead space," that is, large areas containing only 0's (the black areas in the image). Having to carry these 0 activations through the entire network would increase the size of the model without adding much useful information. Instead, we would like to *condense* the feature map to retain only the most useful part -- the feature itself.
This in fact is what **maximum pooling** does. Max pooling takes a patch of activations in the original feature map and replaces them with the maximum activation in that patch.
<figure>
<!-- <img src="./images/3-max-pooling.png" width="600" alt="Maximum pooling replaces a patch with the maximum value in that patch."> -->
<img src="https://imgur.com/hK5U2cd.png" width="400" alt="Maximum pooling replaces a patch with the maximum value in that patch.">
</figure>
When applied after the ReLU activation, it has the effect of "intensifying" features. The pooling step increases the proportion of active pixels to zero pixels.
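On a tiny array, the patch-wise maximum looks like this (a plain NumPy sketch, independent of Keras):

```python
import numpy as np

fm = np.array([[0, 1, 0, 0],
               [2, 0, 0, 1],
               [0, 0, 0, 1],
               [0, 0, 3, 0]])

# 2x2 max pooling with stride 2: split into 2x2 patches, keep each patch's max
pooled = fm.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[2 1]
#  [0 3]]
```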
# Example - Apply Maximum Pooling #
Let's add the "condense" step to the feature extraction we did in the example in Lesson 2. This next hidden cell will take us back to where we left off.
```
#$HIDE_INPUT$
import tensorflow as tf
import matplotlib.pyplot as plt
import warnings
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore") # to clean up output cells
# Read image
image_path = '../input/computer-vision-resources/car_feature.jpg'
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image)
# Define kernel
kernel = tf.constant([
[-1, -1, -1],
[-1, 8, -1],
[-1, -1, -1],
], dtype=tf.float32)
# Reformat for batch compatibility.
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
image = tf.expand_dims(image, axis=0)
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
# Filter step
image_filter = tf.nn.conv2d(
input=image,
filters=kernel,
# we'll talk about these two in the next lesson!
strides=1,
padding='SAME'
)
# Detect step
image_detect = tf.nn.relu(image_filter)
# Show what we have so far
plt.figure(figsize=(12, 6))
plt.subplot(131)
plt.imshow(tf.squeeze(image), cmap='gray')
plt.axis('off')
plt.title('Input')
plt.subplot(132)
plt.imshow(tf.squeeze(image_filter))
plt.axis('off')
plt.title('Filter')
plt.subplot(133)
plt.imshow(tf.squeeze(image_detect))
plt.axis('off')
plt.title('Detect')
plt.show();
```
We'll use another one of the functions in `tf.nn` to apply the pooling step, `tf.nn.pool`. This is a Python function that does the same thing as the `MaxPool2D` layer you use when model building, but, being a simple function, is easier to use directly.
```
import tensorflow as tf
image_condense = tf.nn.pool(
input=image_detect, # image in the Detect step above
window_shape=(2, 2),
pooling_type='MAX',
# we'll see what these do in the next lesson!
strides=(2, 2),
padding='SAME',
)
plt.figure(figsize=(6, 6))
plt.imshow(tf.squeeze(image_condense))
plt.axis('off')
plt.show();
```
Pretty cool! Hopefully you can see how the pooling step was able to intensify the feature by condensing the image around the most active pixels.
# Translation Invariance #
We called the zero-pixels "unimportant". Does this mean they carry no information at all? In fact, the zero-pixels carry *positional information*. The blank space still positions the feature within the image. When `MaxPool2D` removes some of these pixels, it removes some of the positional information in the feature map. This gives a convnet a property called **translation invariance**. This means that a convnet with maximum pooling will tend not to distinguish features by their *location* in the image. ("Translation" is the mathematical word for changing the position of something without rotating it or changing its shape or size.)
Watch what happens when we repeatedly apply maximum pooling to the following feature map.
<figure>
<!-- <img src="./images/4-two-dots.png" width="800" alt="Pooling tends to destroy positional information."> -->
<img src="https://i.imgur.com/97j8WA1.png" width="800" alt="Pooling tends to destroy positional information.">
</figure>
The two dots in the original image became indistinguishable after repeated pooling. In other words, pooling destroyed some of their positional information. Since the network can no longer distinguish between them in the feature maps, it can't distinguish them in the original image either: it has become *invariant* to that difference in position.
In fact, pooling only creates translation invariance in a network *over small distances*, as with the two dots in the image. Features that begin far apart will remain distinct after pooling; only *some* of the positional information was lost, but not all of it.
<figure>
<!-- <img src="./images/4-two-dots-2.png" width="800" alt="Pooling tends to destroy positional information."> -->
<img src="https://i.imgur.com/kUMWdcP.png" width="800" alt="But only over small distances. Two dots far apart stay separated">
</figure>
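A small NumPy sketch makes both claims concrete: a single 2x2 pooling step already merges a one-pixel shift, while a large offset survives it.

```python
import numpy as np

def maxpool2x2(x):
    # 2x2 max pooling with stride 2 on a single-channel feature map
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

near_a = np.zeros((8, 8)); near_a[2, 2] = 1.0  # a feature at (2, 2)
near_b = np.zeros((8, 8)); near_b[3, 3] = 1.0  # the same feature shifted by one pixel
far_a = np.zeros((8, 8)); far_a[0, 0] = 1.0
far_b = np.zeros((8, 8)); far_b[7, 7] = 1.0    # a feature far away

# A small shift disappears after pooling...
print(np.array_equal(maxpool2x2(near_a), maxpool2x2(near_b)))  # True
# ...but a large offset survives.
print(np.array_equal(maxpool2x2(far_a), maxpool2x2(far_b)))    # False
```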
This invariance to small differences in the positions of features is a nice property for an image classifier to have. Just because of differences in perspective or framing, the same kind of feature might be positioned in various parts of the original image, but we would still like for the classifier to recognize that they are the same. Because this invariance is *built into* the network, we can get away with using much less data for training: we no longer have to teach it to ignore that difference. This gives convolutional networks a big efficiency advantage over a network with only dense layers. (You'll see another way to get invariance for free in **Lesson 6** with **Data Augmentation**!)
# Conclusion #
In this lesson, we learned about the last step of feature extraction: **condense** with `MaxPool2D`. In Lesson 4, we'll finish up our discussion of convolution and pooling with *moving windows*.
# Your Turn #
Now, start the [**Exercise**](#$NEXT_NOTEBOOK_URL$) to finish the extraction you started in Lesson 2, see this invariance property in action, and also learn about another kind of pooling: **average pooling**!
# Weight Sampling Tutorial
If you want to fine-tune one of the trained original SSD models on your own dataset, chances are that your dataset doesn't have the same number of classes as the trained model you're trying to fine-tune.
This notebook explains a few options for how to deal with this situation. In particular, one solution is to sub-sample (or up-sample) the weight tensors of all the classification layers so that their shapes correspond to the number of classes in your dataset.
This notebook explains how this is done.
## 0. Our example
I'll use a concrete example to make the process clear, but of course the process explained here is the same for any dataset.
Consider the following example. You have a dataset on road traffic objects. Let this dataset contain annotations for the following object classes of interest:
`['car', 'truck', 'pedestrian', 'bicyclist', 'traffic_light', 'motorcycle', 'bus', 'stop_sign']`
That is, your dataset contains annotations for 8 object classes.
You would now like to train an SSD300 on this dataset. However, instead of going through all the trouble of training a new model from scratch, you would instead like to use the fully trained original SSD300 model that was trained on MS COCO and fine-tune it on your dataset.
The problem is: The SSD300 that was trained on MS COCO predicts 80 different classes, but your dataset has only 8 classes. The weight tensors of the classification layers of the MS COCO model don't have the right shape for your model that is supposed to learn only 8 classes. Bummer.
So what options do we have?
### Option 1: Just ignore the fact that we need only 8 classes
The perhaps not elegant but simplest option is: We could just ignore the fact that the trained MS COCO model predicts 80 different classes while we only want to fine-tune it on 8 classes. We could simply map the 8 classes in our annotated dataset to any 8 indices out of the 80 that the MS COCO model predicts. The class IDs in our dataset could be the indices 1-8, they could be `[3, 8, 1, 2, 10, 4, 6, 12]`, or any other 8 out of the 80; whatever we choose them to be. The point is that we would be training only 8 out of every 80 neurons that predict the class for a given box, and the other 72 would simply not be trained. Nothing would happen to them, because their gradient would always be zero, since their indices never appear in our dataset.
This would work, and it wouldn't even be a terrible option. Since only 8 out of the 80 classes would get trained, the model might get gradually worse at predicting the other 72 classes, but we don't care about them anyway, at least not right now. And if we ever realize that we now want to predict more than 8 different classes, our model would be expandable in that sense. Any new class we want to add could just get any one of the remaining free indices as its ID. We wouldn't need to change anything about the model; it would just be a matter of having the dataset annotated accordingly.
Still, in this example we don't want to take this route. We don't want to carry around the computational overhead of having overly complex classifier layers, 90 percent of which we don't use anyway, but still their whole output needs to be computed in every forward pass.
So what else could we do instead?
### Option 2: Just ignore those weights that are causing problems
We could build a new SSD300 with 8 classes and load into it the weights of the MS COCO SSD300 for all layers except the classification layers. Would that work? Yes, that would work. The only conflict is with the weights of the classification layers, and we can avoid this conflict by simply ignoring them. While this solution would be easy, it has a significant downside: If we're not loading trained weights for the classification layers of our new SSD300 model, then they will be initialized randomly. We'd still benefit from the trained weights for all the other layers, but the classifier layers would need to be trained from scratch.
Not the end of the world, but we like pre-trained stuff, because it saves us a lot of training time. So what else could we do?
### Option 3: Sub-sample the weights that are causing problems
Instead of throwing the problematic weights away like in option 2, we could also sub-sample them. If the weight tensors of the classification layers of the MS COCO model don't have the right shape for our new model, we'll just **make** them have the right shape. This way we can still benefit from the pre-trained weights in those classification layers. Seems much better than option 2.
The great thing in this example is: MS COCO happens to contain all of the eight classes that we care about. So when we sub-sample the weight tensors of the classification layers, we won't just do so randomly. Instead, we'll pick exactly those elements from the tensor that are responsible for the classification of the 8 classes that we care about.
However, even if the classes in your dataset were entirely different from the classes in any of the fully trained models, it would still make a lot of sense to use the weights of the fully trained model. Any trained weights are always a better starting point for the training than random initialization, even if your model will be trained on entirely different object classes.
And of course, in case you happen to have the opposite problem, where your dataset has **more** classes than the trained model you would like to fine-tune, then you can simply do the same thing in the opposite direction: Instead of sub-sampling the classification layer weights, you would then **up-sample** them. Works just the same way as what we'll be doing below.
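Before using the repo's `sample_tensors()` utility, here is a minimal NumPy sketch of what sub-sampling and up-sampling the last axis of a classifier kernel amounts to. The shapes here are toy values, not the real SSD300 tensors:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "classifier kernel": 4 boxes x 5 source classes = 20 output channels.
n_classes_source = 5
kernel = rng.normal(size=(3, 3, 2, 20))

# Sub-sample: keep classes [0, 2, 4] for each of the 4 boxes.
keep = [0, 2, 4]
idx = np.concatenate([np.array(keep) + i * n_classes_source for i in range(4)])
sub_kernel = kernel[..., idx]
print(sub_kernel.shape)  # (3, 3, 2, 12)

# Up-sample: append randomly initialized channels (small stddev, like a fresh
# layer init) to grow the last axis, e.g. from 20 to 28 output channels.
extra = rng.normal(scale=0.005, size=kernel.shape[:-1] + (8,))
up_kernel = np.concatenate([kernel, extra], axis=-1)
print(up_kernel.shape)  # (3, 3, 2, 28)
```

Sub-sampling is just fancy integer indexing along the last axis; up-sampling is concatenation with freshly initialized slices. `sample_tensors()` generalizes this to arbitrary axes and handles the bias vectors, too.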
Let's get to it.
```
import h5py
import numpy as np
import shutil
from misc_utils.tensor_sampling_utils import sample_tensors
```
## 1. Load the trained weights file and make a copy
First, we'll load the HDF5 file that contains the trained weights that we need (the source file). In our case this is "`VGG_coco_SSD_300x300_iter_400000.h5`" (download link available in the README of this repo), which are the weights of the original SSD300 model that was trained on MS COCO.
Then, we'll make a copy of that weights file. That copy will be our output file (the destination file).
```
# TODO: Set the path for the source weights file you want to load.
weights_source_path = 'VGG_coco_SSD_300x300_iter_400000.h5'
# TODO: Set the path and name for the destination weights file
# that you want to create.
weights_destination_path = 'VGG_coco_SSD_300x300_iter_400000_subsampled_4_classes.h5'
# Make a copy of the weights file.
shutil.copy(weights_source_path, weights_destination_path)
# Load both the source weights file and the copy we made.
# We will load the original weights file in read-only mode so that we can't mess up anything.
weights_source_file = h5py.File(weights_source_path, 'r')
weights_destination_file = h5py.File(weights_destination_path, 'a') # Open the copy in read/write mode.
```
## 2. Figure out which weight tensors we need to sub-sample
Next, we need to figure out exactly which weight tensors we need to sub-sample. As mentioned above, the weights for all layers except the classification layers are fine, we don't need to change anything about those.
So which are the classification layers in SSD300? Their names are:
```
classifier_names = ['conv4_3_norm_mbox_conf',
'fc7_mbox_conf',
'conv6_2_mbox_conf',
'conv7_2_mbox_conf',
'conv8_2_mbox_conf',
'conv9_2_mbox_conf']
```
## 3. Figure out which slices to pick
The following section is optional. I'll look at one classification layer and explain what we want to do, just for your understanding. If you don't care about that, just skip ahead to the next section.
We know which weight tensors we want to sub-sample, but we still need to decide which (or at least how many) elements of those tensors we want to keep. Let's take a look at the first of the classifier layers, "`conv4_3_norm_mbox_conf`". Its two weight tensors, the kernel and the bias, have the following shapes:
```
conv4_3_norm_mbox_conf_kernel = weights_source_file[classifier_names[0]][classifier_names[0]]['kernel:0']
conv4_3_norm_mbox_conf_bias = weights_source_file[classifier_names[0]][classifier_names[0]]['bias:0']
print("Shape of the '{}' weights:".format(classifier_names[0]))
print()
print("kernel:\t", conv4_3_norm_mbox_conf_kernel.shape)
print("bias:\t", conv4_3_norm_mbox_conf_bias.shape)
```
So the last axis has 324 elements. Why is that?
- MS COCO has 80 classes, but the model also has one 'background' class, so that makes 81 classes effectively.
- The 'conv4_3_norm_mbox_loc' layer predicts 4 boxes for each spatial position, so the 'conv4_3_norm_mbox_conf' layer has to predict one of the 81 classes for each of those 4 boxes.
That's why the last axis has 4 * 81 = 324 elements.
So how many elements do we want in the last axis for this layer?
Let's do the same calculation as above:
- Our dataset has 8 classes, but our model will also have a 'background' class, so that makes 9 classes effectively.
- We need to predict one of those 9 classes for each of the four boxes at each spatial position.
That makes 4 * 9 = 36 elements.
Now we know that we want to keep 36 elements in the last axis and leave all other axes unchanged. But which 36 elements out of the original 324 elements do we want?
Should we just pick them randomly? If the object classes in our dataset had absolutely nothing to do with the classes in MS COCO, then choosing those 36 elements randomly would be fine (and the next section covers this case, too). But in our particular example case, choosing these elements randomly would be a waste. Since MS COCO happens to contain exactly the 8 classes that we need, instead of sub-sampling randomly, we'll just take exactly those elements that were trained to predict our 8 classes.
Here are the indices of the 9 classes in MS COCO that we are interested in:
`[0, 1, 2, 3, 4, 6, 8, 10, 12]`
The indices above represent the following classes in the MS COCO dataset:
`['background', 'person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign']`
How did I find out those indices? I just looked them up in the annotations of the MS COCO dataset.
While these are the classes we want, we don't want them in this order. In our dataset, the classes happen to be in the following order as stated at the top of this notebook:
`['background', 'car', 'truck', 'pedestrian', 'bicyclist', 'traffic_light', 'motorcycle', 'bus', 'stop_sign']`
For example, '`traffic_light`' is class ID 5 in our dataset but class ID 10 in the SSD300 MS COCO model. So the order in which I actually want to pick the 9 indices above is this:
`[0, 3, 8, 1, 2, 10, 4, 6, 12]`
So out of every 81 in the 324 elements, I want to pick the 9 elements above. This gives us the following 36 indices:
```
n_classes_source = 81
classes_of_interest = [0, 3, 8, 1, 2, 10, 4, 6, 12]
subsampling_indices = []
for i in range(int(324/n_classes_source)):
indices = np.array(classes_of_interest) + i * n_classes_source
subsampling_indices.append(indices)
subsampling_indices = list(np.concatenate(subsampling_indices))
print(subsampling_indices)
```
These are the indices of the 36 elements that we want to pick from both the bias vector and from the last axis of the kernel tensor.
This was the detailed example for the '`conv4_3_norm_mbox_conf`' layer. And of course we haven't actually sub-sampled the weights for this layer yet, we have only figured out which elements we want to keep. The piece of code in the next section will perform the sub-sampling for all the classifier layers.
## 4. Sub-sample the classifier weights
The code in this section iterates over all the classifier layers of the source weights file and performs the following steps for each classifier layer:
1. Get the kernel and bias tensors from the source weights file.
2. Compute the sub-sampling indices for the last axis. The first three axes of the kernel remain unchanged.
3. Overwrite the corresponding kernel and bias tensors in the destination weights file with our newly created sub-sampled kernel and bias tensors.
The second step does what was explained in the previous section.
In case you want to **up-sample** the last axis rather than sub-sample it, simply set the `classes_of_interest` variable below to the length you want it to have. The added elements will be initialized either randomly or optionally with zeros. Check out the documentation of `sample_tensors()` for details.
```
# TODO: Set the number of classes in the source weights file. Note that this number must include
# the background class, so for MS COCO's 80 classes, this must be 80 + 1 = 81.
n_classes_source = 81
# TODO: Set the indices of the classes that you want to pick for the sub-sampled weight tensors.
# In case you would like to just randomly sample a certain number of classes, you can just set
# `classes_of_interest` to an integer instead of the list below. Either way, don't forget to
# include the background class. That is, if you set an integer, and you want `n` positive classes,
# then you must set `classes_of_interest = n + 1`.
classes_of_interest = [0, 3, 8, 1, 2]
# classes_of_interest = 9 # Uncomment this in case you want to just randomly sub-sample the last axis instead of providing a list of indices.
for name in classifier_names:
# Get the trained weights for this layer from the source HDF5 weights file.
kernel = weights_source_file[name][name]['kernel:0'][()] # `[()]` reads the full dataset; `.value` was removed in h5py 3.x.
bias = weights_source_file[name][name]['bias:0'][()]
# Get the shape of the kernel. We're interested in sub-sampling
# the last dimension, 'o'.
height, width, in_channels, out_channels = kernel.shape
# Compute the indices of the elements we want to sub-sample.
# Keep in mind that each classification predictor layer predicts multiple
# bounding boxes for every spatial location, so we want to sub-sample
# the relevant classes for each of these boxes.
if isinstance(classes_of_interest, (list, tuple)):
subsampling_indices = []
for i in range(int(out_channels/n_classes_source)):
indices = np.array(classes_of_interest) + i * n_classes_source
subsampling_indices.append(indices)
subsampling_indices = list(np.concatenate(subsampling_indices))
elif isinstance(classes_of_interest, int):
subsampling_indices = int(classes_of_interest * (out_channels/n_classes_source))
else:
raise ValueError("`classes_of_interest` must be either an integer or a list/tuple.")
# Sub-sample the kernel and bias.
# The `sample_tensors()` function used below provides extensive
# documentation, so don't hesitate to read it if you want to know
# what exactly is going on here.
new_kernel, new_bias = sample_tensors(weights_list=[kernel, bias],
sampling_instructions=[height, width, in_channels, subsampling_indices],
axes=[[3]], # The one bias dimension corresponds to the last kernel dimension.
init=['gaussian', 'zeros'],
mean=0.0,
stddev=0.005)
# Delete the old weights from the destination file.
del weights_destination_file[name][name]['kernel:0']
del weights_destination_file[name][name]['bias:0']
# Create new datasets for the sub-sampled weights.
weights_destination_file[name][name].create_dataset(name='kernel:0', data=new_kernel)
weights_destination_file[name][name].create_dataset(name='bias:0', data=new_bias)
# Make sure all data is written to our output file before this sub-routine exits.
weights_destination_file.flush()
```
That's it, we're done.
Let's just quickly inspect the shapes of the weights of the '`conv4_3_norm_mbox_conf`' layer in the destination weights file:
```
conv4_3_norm_mbox_conf_kernel = weights_destination_file[classifier_names[0]][classifier_names[0]]['kernel:0']
conv4_3_norm_mbox_conf_bias = weights_destination_file[classifier_names[0]][classifier_names[0]]['bias:0']
print("Shape of the '{}' weights:".format(classifier_names[0]))
print()
print("kernel:\t", conv4_3_norm_mbox_conf_kernel.shape)
print("bias:\t", conv4_3_norm_mbox_conf_bias.shape)
```
Nice! The last axis now has exactly as many elements as we picked class indices times the number of boxes per spatial position (here, 4 boxes times the 5 indices in `classes_of_interest`, i.e. 20 elements). Now the weights are compatible with our new SSD300 model.
This is the end of the relevant part of this tutorial, but we can do one more thing and verify that the sub-sampled weights actually work. Let's do that in the next section.
## 5. Verify that our sub-sampled weights actually work
In the section above we sub-sampled the fully trained weights of the SSD300 model trained on MS COCO down to just the classes that we needed.
We can now create a new SSD300 with that reduced number of classes, load our sub-sampled weights into it, and see how the model performs on a few test images that contain objects for some of those classes. Let's do it.
```
from keras.optimizers import Adam
from keras import backend as K
from keras.models import load_model
from models.keras_ssd300 import ssd_300
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.object_detection_2d_patch_sampling_ops import RandomMaxCropFixedAR
from data_generator.object_detection_2d_geometric_ops import Resize
from math import ceil
```
### 5.1. Set the parameters for the model.
As always, set the parameters for the model. We're going to set the configuration for the SSD300 MS COCO model.
```
img_height = 500 # Height of the input images
img_width = 500 # Width of the input images
img_channels = 3 # Number of color channels of the input images
subtract_mean = [123, 117, 104] # The per-channel mean of the images in the dataset
swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR; setting this to `[2, 1, 0]` reverses the channel order of the RGB input images.
# TODO: Set the number of classes.
n_classes = 4 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO
scales = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets.
# scales = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets.
aspect_ratios = [[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters
two_boxes_for_ar1 = True
steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer.
clip_boxes = False # Whether or not you want to limit the anchor boxes to lie entirely within the image boundaries
variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are scaled as in the original implementation
normalize_coords = True
```
### 5.2. Build the model
Build the model and load our newly created, sub-sampled weights into it.
```
# 1: Build the Keras model
K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
#mode='inference',
mode='training',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=subtract_mean,
divide_by_stddev=None,
swap_channels=swap_channels,
confidence_thresh=0.5,
iou_threshold=0.45,
top_k=200,
nms_max_output_size=400,
return_predictor_sizes=False)
print("Model built.")
# 2: Load the sub-sampled weights into the model.
# Load the weights that we've just created via sub-sampling.
weights_path = weights_destination_path
model.load_weights(weights_path, by_name=True)
print("Weights file loaded:", weights_path)
# 3: Instantiate an Adam optimizer and the SSD loss function and compile the model.
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
```
### 5.3. Load some images to test our model on
We sub-sampled some of the road traffic categories from the trained SSD300 MS COCO weights, so let's try out our model on a few road traffic images. The Udacity road traffic dataset linked to in the `ssd7_training.ipynb` notebook lends itself to this task. Let's instantiate a `DataGenerator` and load the Udacity dataset. Everything here is preset already, but if you'd like to learn more about the data generator and its capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository.
```
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.
# Optional: If you have enough memory, consider loading the images into memory for the reasons explained above.
train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
# 2: Parse the image and label lists for the training and validation datasets. This can take a while.
# TODO: Set the paths to the datasets here.
# The directories that contain the images.
images_train_dir = '/home/rblin/Images/BD_QCAV/train_polar/PARAM_POLAR/RetinaNet_I'
images_test_dir = '/home/rblin/Images/BD_QCAV/test_polar/PARAM_POLAR/RetinaNet_I'
# The directories that contain the annotations.
annotations_train_dir = '/home/rblin/Images/BD_QCAV/train_polar/LABELS'
annotations_test_dir = '/home/rblin/Images/BD_QCAV/test_polar/LABELS'
# The paths to the image sets.
train_image_set_filename = '/home/rblin/Images/BD_QCAV/train_polar/PARAM_POLAR/train_polar.txt'
test_image_set_filename = '/home/rblin/Images/BD_QCAV/test_polar/PARAM_POLAR/test_polar.txt'
"""VOC_2012_train_image_set_filename = '../../datasets/VOCdevkit/VOC2012/ImageSets/Main/train.txt'
VOC_2007_val_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/val.txt'
VOC_2012_val_image_set_filename = '../../datasets/VOCdevkit/VOC2012/ImageSets/Main/val.txt'
VOC_2007_trainval_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/trainval.txt'
VOC_2012_trainval_image_set_filename = '../../datasets/VOCdevkit/VOC2012/ImageSets/Main/trainval.txt'
VOC_2007_test_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/test.txt'"""
# The XML parser needs to know which object class names to look for and in which order to map them to integers.
classes = ['background', 'bike', 'car', 'motorbike', 'person']
train_dataset.parse_xml(images_dirs=[images_train_dir],
image_set_filenames=[train_image_set_filename],
annotations_dirs=[annotations_train_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=False,
ret=False)
val_dataset.parse_xml(images_dirs=[images_test_dir],
image_set_filenames=[test_image_set_filename],
annotations_dirs=[annotations_test_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=True,
ret=False)
# Optional: Convert the dataset into an HDF5 dataset. This will require more disk space, but will
# speed up the training. Doing this is not relevant in case you activated the `load_images_into_memory`
# option in the constructor, because in that case the images are in memory already anyway. If you don't
# want to create HDF5 datasets, comment out the subsequent two function calls.
train_dataset.create_hdf5_dataset(file_path='dataset_QCAV_trainval.h5',
resize=False,
variable_image_size=True,
verbose=True)
val_dataset.create_hdf5_dataset(file_path='dataset_QCAV_test.h5',
resize=False,
variable_image_size=True,
verbose=True)
```
Make sure the batch generator outputs images of the model's input size: here we simply resize every image to `(img_height, img_width)`.
```
# 3: Set the batch size.
batch_size = 1 # Change the batch size if you like, or if you run into GPU memory issues.
resize = Resize(height=img_height, width=img_width)
# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.
# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [model.get_layer('conv4_3_norm_mbox_conf').output_shape[1:3],
model.get_layer('fc7_mbox_conf').output_shape[1:3],
model.get_layer('conv6_2_mbox_conf').output_shape[1:3],
model.get_layer('conv7_2_mbox_conf').output_shape[1:3],
model.get_layer('conv8_2_mbox_conf').output_shape[1:3],
model.get_layer('conv9_2_mbox_conf').output_shape[1:3]]
ssd_input_encoder = SSDInputEncoder(img_height=img_height,
img_width=img_width,
n_classes=n_classes,
predictor_sizes=predictor_sizes,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
matching_type='multi',
pos_iou_threshold=0.5,
neg_iou_limit=0.5,
normalize_coords=normalize_coords)
# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.
train_generator = train_dataset.generate(batch_size=batch_size,
shuffle=True,
transformations=[resize],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
val_generator = val_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[resize],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
# Get the number of samples in the training and validations datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
# Define a learning rate schedule.
def lr_schedule(epoch):
if epoch < 80:
return 0.001
elif epoch < 100:
return 0.0001
else:
return 0.00001
# Define model callbacks.
# TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath='ssd300_itsc_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='auto',
period=1)
#model_checkpoint.best =
csv_logger = CSVLogger(filename='ssd300_itsc_training_log.csv',
separator=',',
append=True)
learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule,
verbose=1)
terminate_on_nan = TerminateOnNaN()
callbacks = [model_checkpoint,
csv_logger,
learning_rate_scheduler,
terminate_on_nan]
# If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly.
initial_epoch = 0
final_epoch = 120
steps_per_epoch = 1000
history = model.fit_generator(generator=train_generator,
steps_per_epoch=steps_per_epoch,
epochs=final_epoch,
callbacks=callbacks,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
```
# Numerical Analysis project
Movie recommendation system
```
from scipy.sparse import csr_matrix
from scipy.stats import pearsonr
from numpy.linalg import matrix_rank
from tqdm.notebook import tqdm
from enum import IntEnum
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import time
# Upload the dataset
movies = pd.read_csv('movies.csv')
ratings = pd.read_csv('ratings.csv')
# Create 2 sets containing all the possible user and movie ids
userIds_available = set()
movieIds_available = set()
for id in np.array(ratings['userId'] , dtype = int):
userIds_available.add(id)
for id in np.array(movies['movieId'] , dtype = int):
movieIds_available.add(id)
# Convert the sets in 2 lists
userIds_available = list(userIds_available)
movieIds_available = list(movieIds_available)
# Order the list
userIds_available.sort()
movieIds_available.sort()
print(len(userIds_available) , len(movieIds_available))
def binary_search(array , x):
low = 0
high = len(array) - 1
while(high >= low):
mid = int((high + low) / 2)
if array[mid] == x:
return mid
elif array[mid] > x:
high = mid - 1
else:
low = mid + 1
print("Element %d not found" % x)
return -1
#ratings # 105339 users' ratings , 668 different users
#movies # 10329 movies
rows = np.array(ratings['userId'])
cols = np.array(ratings['movieId'])
vals = np.array(ratings['rating'])
n = rows.max() # Max user id
p = cols.max() # Max movie id
N = len(vals) # Number of ratings
# Update the arrays rows/cols with the true position instead of the ids
for i_user in tqdm(range(len(rows))):
rows[i_user] = binary_search(userIds_available , rows[i_user])
for i_movie in tqdm(range(len(cols))):
cols[i_movie] = binary_search(movieIds_available , cols[i_movie])
n , p , N
# Command for analyse input data matrix
movies.head()
ratings.head()
movies.info()
ratings.info()
movies.describe()
ratings.describe()
sns.distplot(ratings['rating'])
sns.distplot(ratings['movieId'])
sns.scatterplot(data = ratings , x = 'userId' , y = 'movieId' , hue = 'rating')
ratings.corr()
# Shuffle the data
indexes = np.arange(N)
np.random.seed(0) # for reproducibility
np.random.shuffle(indexes)
indexes
# Reordering the arrays
rows = rows[indexes]
cols = cols[indexes]
vals = vals[indexes]
```
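As a quick sanity check, the bisection logic used above can be compared against Python's built-in `list.index`. The helper is restated here (without the "not found" print) so the snippet runs on its own:

```python
def binary_search(array, x):
    """Return the index of x in the sorted list `array`, or -1 if absent."""
    low, high = 0, len(array) - 1
    while high >= low:
        mid = (high + low) // 2
        if array[mid] == x:
            return mid
        elif array[mid] > x:
            high = mid - 1
        else:
            low = mid + 1
    return -1

sorted_ids = [2, 5, 9, 14, 30, 31, 77]
# Every present element is found at the same position list.index reports.
assert all(binary_search(sorted_ids, x) == sorted_ids.index(x) for x in sorted_ids)
# A missing element yields -1.
assert binary_search(sorted_ids, 10) == -1
print("binary_search OK")
```

Binary search is what makes the id-to-position remapping above feasible: each lookup costs O(log n) instead of the O(n) of `list.index`, which matters when remapping all 105,339 ratings.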
Building the train set (80%) and the validation set (20%)
```
# Split data in training and testing
num_training = int(N * 0.8)
rows_train = rows[:num_training]
cols_train = cols[:num_training]
vals_train = vals[:num_training]
rows_test = rows[num_training:]
cols_test = cols[num_training:]
vals_test = vals[num_training:]
print(len(rows_train) , len(cols_train) , len(vals_train))
```
Building the 'Ratings matrix'
Users on the rows and Movies on the columns
Initializing all the elements to 0 and then update position (i,j) with the rating of movie j by user i if it's present
```
ratings_matrix = np.zeros((len(userIds_available) , len(movieIds_available)))
def init_ratings_matrix():
# Initialize the matrix with all zeros
ratings_matrix = np.zeros((len(userIds_available) , len(movieIds_available)))
# Update the matrix with the known values (contained in vals_train array)
ratings_matrix[rows_train, cols_train] = vals_train
return ratings_matrix
ratings_matrix = init_ratings_matrix()
frame = pd.DataFrame(ratings_matrix, index = userIds_available , columns = movieIds_available)
print(frame)
# Count the number of missing values
def count_missing_values(matrix):
missing_values = 0
for i_user in tqdm(range(matrix.shape[0])):
for j_movie in range(matrix.shape[1]):
# If the movie in position j_movie doesn't have a rating
if matrix[i_user , j_movie] == 0:
missing_values += 1
print("There are %d missing values" % (missing_values))
print("There are %d values inserted" % (matrix.shape[0] * matrix.shape[1] - missing_values))
print("There are %d values" % (matrix.shape[0] * matrix.shape[1]))
count_missing_values(ratings_matrix)
```
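The double loop above gives the right counts but is slow in pure Python. As a sketch of an alternative, NumPy can count the zeros in one vectorized pass (this assumes, like the loop does, that a 0 entry always means "no rating"):

```python
import numpy as np

def count_missing_values_fast(matrix):
    """Vectorized equivalent of the double loop above."""
    missing = int((matrix == 0).sum())  # number of zero entries
    total = matrix.size
    print("There are %d missing values" % missing)
    print("There are %d values inserted" % (total - missing))
    print("There are %d values" % total)
    return missing

# Tiny example: 2 of the 6 entries are zero.
demo = np.array([[1.0, 0.0, 3.5],
                 [0.0, 2.0, 4.5]])
count_missing_values_fast(demo)
```

On the full 668 x 10329 ratings matrix this runs in milliseconds rather than iterating over nearly seven million cells in Python.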
Building movie-genre correlation matrix M
$$
M_{i,j} =
\begin{cases}
1 & \text{if movie i is of genre j}\\
0 & \text{otherwise}
\end{cases}
$$
```
# Put in a set all the genres available
genre_available = set()
for i in range(movies.shape[0]):
genres = movies['genres'][i].split('|')
for g in genres: genre_available.add(g)
# print("All genres available are: " , id_available , genre_available)
num_movies = len(movieIds_available)
num_genres = len(genre_available)
print("Max movie id: " , max(movies['movieId']))
print("Number of movies is: " , num_movies)
print("Number of genres is: " , num_genres)
# Initialize the matrix with all zeros of int8 type
correlation_matrix = np.zeros((num_movies , num_genres) , dtype = np.int8)
# Update the table with the correspondence
for i in tqdm(range(movies.shape[0])):
id = movies['movieId'][i]
# Take the right position in the matrix
id = movieIds_available.index(id)
genres = movies['genres'][i].split('|')
for pos , g in enumerate(genre_available):
if g in genres:
correlation_matrix[id , pos] = 1
frame = pd.DataFrame(correlation_matrix, index = movieIds_available , columns = genre_available)
print(frame)
```
Next step:
create a movie-movie similarity structure to find similar movies: movies that cover the same genres
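Before clustering, it helps to see how cosine similarity behaves on genre indicator rows; in this standalone toy sketch (hypothetical vectors), a pair sharing most genres clears the 0.7 threshold used below, while a disjoint pair scores 0:

```python
import numpy as np

def cosine_similarity(v1, v2):
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

a = np.array([1, 1, 0, 1])  # e.g. Action|Comedy|Thriller
b = np.array([1, 1, 0, 0])  # e.g. Action|Comedy
c = np.array([0, 0, 1, 0])  # e.g. Drama only

sim_ab = cosine_similarity(a, b)  # 2 / (sqrt(3) * sqrt(2)) ~= 0.816
sim_ac = cosine_similarity(a, c)  # no shared genres -> 0.0
print(round(sim_ab, 3), sim_ac)
```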
```
def cosine_similarity(vector1 , vector2):
"""
vector1 and vector2 are rows of correlation_matrix or of ratings_matrix
"""
return np.dot(vector1, vector2)/(np.linalg.norm(vector1) * np.linalg.norm(vector2))
def cosine_similarity_users(vector1 , vector2):
'''
Apply this similarity between users -> we want to find similar behaviour in the rating of common movies,
and then use it to predict ratings for movies that one of the two hasn't watched yet
vector1 and vector2 are vectors containing the ratings of two users
'''
common_vector1 = []
common_vector2 = []
# Take just the movies rated in both the array to find a similarity between the two users
for i in range(len(vector1)):
if vector1[i] != 0 and vector2[i] != 0:
common_vector1.append(vector1[i])
common_vector2.append(vector2[i])
# If the two vectors (users) have more than 5 common ratings
if len(common_vector1) > 5:
return np.dot(common_vector1, common_vector2)/(np.linalg.norm(common_vector1) * np.linalg.norm(common_vector2))
else:
return 0
# Creating clusters for movies
# Key is the number of the cluster, value is a list of movie ids
movie_cluster = {}
# Movies above this threshold are considered similar
threshold = 0.7
# Key is the movie id, value is the cluster's number of the movie
index_cluster = {}
# Create a copy of the ids available
movieIds_copy = movieIds_available.copy()
num_cluster = 0
index1 = 1
# To show the progress bar
pbar = tqdm(total = len(movieIds_copy))
# While there is a movie not yet assigned to a cluster
while len(movieIds_copy) > 0:
# Pick the first of the actual list
id_x = movieIds_copy[0]
# Create an empty list that will contain all the movies similar to id_x
list_movies = []
# Set the cluster for the current movie
index_cluster[id_x] = num_cluster
# Add the current movie in the current cluster
list_movies.append(id_x)
# Pick all the others not yet in a cluster and check if they are similar to id_x
while (index1 < len(movieIds_copy)):
id_y = movieIds_copy[index1]
sim = cosine_similarity(correlation_matrix[movieIds_available.index(id_x)], correlation_matrix[movieIds_available.index(id_y)])
# If they are similar enough
if sim >= threshold:
# Set the cluster for id_y
index_cluster[id_y] = num_cluster
# Add id_y in the list
list_movies.append(id_y)
# Remove id_y from the list of movies not yet assigned
movieIds_copy.remove(id_y)
# Update the bar when an element is deleted
pbar.update(1)
else:
# Increment the index
index1 += 1
# Remove id_x from the list of movies not yet assigned
movieIds_copy.remove(id_x)
# Update the bar when an element is deleted
pbar.update(1)
# Set the list of movies to the cluster
movie_cluster[num_cluster] = list_movies
num_cluster += 1
index1 = 1
# Close the bar
pbar.close()
print("Number of clusters is: " , num_cluster)
# Creating clusters for users
# Key is the number of the cluster, value is a list of user ids
users_cluster = {}
# Users above this threshold are considered similar
threshold = 0.95
# Key is the user id, value is the cluster's number of the user
user_index_cluster = {}
# Create a copy of the ids available
userIds_copy = userIds_available.copy()
num_cluster = 0
index2 = 1
# To show the progress bar
pbar = tqdm(total = len(userIds_copy))
# While there is a user not yet assigned to a cluster
while len(userIds_copy) > 0:
# Pick the first of the actual list
id_x = userIds_copy[0]
# Create an empty list that will contain all the users similar to id_x
list_users = []
# Set the cluster for the current user
user_index_cluster[id_x] = num_cluster
# Add the current user to the current cluster
list_users.append(id_x)
# Pick all the others not yet in a cluster and check if they are similar to id_x
while ( index2 < len(userIds_copy)):
id_y = userIds_copy[index2]
sim = cosine_similarity_users(ratings_matrix[userIds_available.index(id_x)], ratings_matrix[userIds_available.index(id_y)])
# If they are similar enough
if sim >= threshold:
# Set the cluster for id_y
user_index_cluster[id_y] = num_cluster
# Add id_y in the list
list_users.append(id_y)
# Remove id_y from the list of users not yet assigned
userIds_copy.remove(id_y)
# Update the bar when an element is deleted
pbar.update(1)
else :
# Increment the index
index2 += 1
# Remove id_x from the list of users not yet assigned
userIds_copy.remove(id_x)
# Update the bar when an element is deleted
pbar.update(1)
# Set the list of users to the cluster
users_cluster[num_cluster] = list_users
num_cluster += 1
index2 = 1
# Close the bar
pbar.close()
print("Number of clusters is: " , num_cluster)
# Order each sublist of the dictionaries to reduce the complexity of the search
for key , value in movie_cluster.items():
new_value = value
new_value.sort()
movie_cluster[key] = new_value
for key , value in users_cluster.items():
new_value = value
new_value.sort()
users_cluster[key] = new_value
# Array that contains the position of each ratings (used as mapping)
ratings_position_array = list([0.5 , 1 , 1.5 , 2 , 2.5 , 3 , 3.5 , 4 , 4.5 , 5])
# Given an array with the number of times each rating appears, return the mean
# of the most common ones
def get_rating_to_assign(array):
# To save the max count present
max_rating_count = 0
# To save the quantity of ratings present
count_of_ratings = 0
# For each rating (element of the array)
for i in range(len(array)):
# Add this rating's count to the total
count_of_ratings += array[i]
# If greater than the current maximum count, update it
if array[i] > max_rating_count:
max_rating_count = array[i]
# If there aren't ratings
if count_of_ratings < 1:
return 0
# Fill the list with the more common ratings
list_of_max = set()
for i in range(len(array)):
# If the current rating appears max_rating_count times, consider it
if array[i] == max_rating_count:
# Add the rating corresponding to this position to the set
list_of_max.add(ratings_position_array[i])
if len(list_of_max) == 0:
return 0
# Calculate the average of the ratings that appear most often
rating = 0
for r in list_of_max:
rating += r
return rating / len(list_of_max)
```
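`get_rating_to_assign` above implements "mean of the tied modes"; an equivalent standalone sketch on the same 0.5-to-5 rating scale makes the tie-handling easy to verify:

```python
import numpy as np

ratings_scale = [0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5]

def most_common_rating(counts):
    counts = np.asarray(counts)
    if counts.sum() == 0:
        return 0.0  # no ratings at all
    top = np.flatnonzero(counts == counts.max())  # indices of all tied modes
    return float(np.mean([ratings_scale[i] for i in top]))

print(most_common_rating([0, 0, 0, 0, 0, 1, 2, 1, 0, 0]))  # 3.5 appears most often -> 3.5
print(most_common_rating([0, 0, 0, 0, 0, 0, 0, 2, 0, 2]))  # tie between 4 and 5 -> 4.5
```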
Collaborative Filtering
```
# Filling the matrix with some ratings based on user similarities, assigning
# each movie the most common rating within the user's cluster
partial_ratings_matrix = ratings_matrix.copy() # to maintain the original
# To count the number of predicted values in this phase
num_of_predicted_value = 0
# For each user
for i_user in tqdm(range(partial_ratings_matrix.shape[0])):
# Take the cluster key for the user
cluster = user_index_cluster[userIds_available[i_user]]
# Take all the similar users
sim_users_ids = users_cluster[cluster]
# If there is at least a similar user
if len(sim_users_ids) > 1:
# For each movie
for j_movie in range(partial_ratings_matrix.shape[1]):
# If the user hasn't watched the movie yet
if ratings_matrix[i_user , j_movie] == 0:
# For each movie calculate the most common rating and assign it
# Array that will contain the counts of all the different ratings the movie received
ratings_array = np.zeros(10)
# Since the lists are ordered, pos is used to find all the ids in the list with at most n iterations
pos = 0
# For each user similar to i_user
for user_id in sim_users_ids:
# Take the row corresponding to the user
pos = userIds_available.index(user_id , pos)
# If the similar user has watched it
if ratings_matrix[pos , j_movie] != 0:
# Take the position of the rating in the array from a "map array"
position_in_array = ratings_position_array.index(ratings_matrix[pos , j_movie])
# Sum 1 in the "counter array" in the corresponding position
ratings_array[position_in_array] += 1
# Retrieve the rating to assign
rating = get_rating_to_assign(ratings_array)
# If it's a valid rating
if rating > 0:
# Assign it
partial_ratings_matrix[i_user , j_movie] = rating
num_of_predicted_value += 1
print(num_of_predicted_value)
# Other possibility
# Filling the matrix with some ratings based on user similarities, assigning
# each movie the average of all the ratings within the user's cluster
partial_ratings_matrix = ratings_matrix.copy() # to maintain the original
# To count the number of predicted values in this phase
num_of_predicted_value = 0
# For each user
for i_user in tqdm(range(partial_ratings_matrix.shape[0])):
# Take the cluster key for the user
cluster = user_index_cluster[userIds_available[i_user]]
# Take all the similar users
sim_users_ids = users_cluster[cluster]
# If there is at least a similar user
if len(sim_users_ids) > 1:
# For each movie
for j_movie in range(partial_ratings_matrix.shape[1]):
# If the user hasn't watched the movie yet
if ratings_matrix[i_user , j_movie] == 0:
# For each movie calculate the avg rating given by similar users
ratings_sum = 0
total_contributions = 0
# Since the lists are ordered, pos is used to find all the ids in the list with at most n iterations
pos = 0
# For each user similar to i_user
for user_id in sim_users_ids:
# Take the row corresponding to the user
pos = userIds_available.index(user_id , pos)
# If the similar user has watched it
if ratings_matrix[pos , j_movie] != 0:
ratings_sum += ratings_matrix[pos , j_movie]
total_contributions += 1
# If at least a similar user has watched the movie
if total_contributions > 0:
# Calculate the mean and assign it
average = ratings_sum / total_contributions
partial_ratings_matrix[i_user , j_movie] = average
num_of_predicted_value += 1
print(num_of_predicted_value)
# Count the number of missing values
count_missing_values(partial_ratings_matrix)
```
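The "average cluster rating" variant just computes per-movie column means over a user cluster; a toy sketch (hypothetical 3-user cluster, 0 marking an unrated cell):

```python
import numpy as np

R = np.array([[4.0, 0.0, 3.0],
              [5.0, 2.0, 0.0],
              [3.0, 4.0, 3.0]])

filled = R.copy()
for j in range(R.shape[1]):
    rated = R[:, j] != 0
    if rated.any():
        filled[~rated, j] = R[rated, j].mean()  # impute only the missing cells

print(filled[0, 1])  # user 0's gap: mean(2, 4) = 3.0
print(filled[1, 2])  # user 1's gap: mean(3, 3) = 3.0
```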
Content-based Filtering
```
# Filling the matrix with some ratings based on content similarities, taking the
# mean between the content-based and collaborative-filtering predictions
possible_ratings_matrix = partial_ratings_matrix.copy() # to maintain the original
# To count the number of predicted values in this phase
num_of_predicted_value = 0
# For each user
for i_user in tqdm(range(possible_ratings_matrix.shape[0])):
# For each movie cluster
for key , cluster in movie_cluster.items():
# Calculate the average rate and assign it to all the elements in it
pos = 0
ratings_sum = 0
elements_in_sum = 0
# List containing all the movies that i_user still has to watch
movie_to_rate = list()
# For each movie in the current cluster
for movie_id in cluster:
# Take the position in the matrix of that movie
pos = movieIds_available.index(movie_id , pos)
# If the movie doesn't have a rate
if ratings_matrix[i_user , pos] == 0:
# Add in the new list
movie_to_rate.append(pos)
else:
# Sum the rate for the avg
ratings_sum += ratings_matrix[i_user , pos]
elements_in_sum += 1
# If there is at least a rating
if elements_in_sum > 0:
rating = ratings_sum / elements_in_sum
else:
continue
# For all the movies in the cluster that haven't been watched yet
for movie_pos in movie_to_rate:
# If the movie hasn't a prediction from the collaborative filtering
if possible_ratings_matrix[i_user , movie_pos] == 0:
possible_ratings_matrix[i_user , movie_pos] = rating
num_of_predicted_value += 1
else:
# If the movie has a prediction from the collaborative filtering -> take the average
possible_ratings_matrix[i_user , movie_pos] = (possible_ratings_matrix[i_user , movie_pos] + rating) / 2
print(num_of_predicted_value)
# Filling the matrix with some ratings based on content similarities
# -> standard version -> use this in case of content filtering only
possible_ratings_matrix = partial_ratings_matrix.copy() # to maintain the original
# If using content filtering without collaborative filtering
#possible_ratings_matrix = ratings_matrix.copy()
# To count the number of predicted values in this phase
num_of_predicted_value = 0
# For each user
for i_user in tqdm(range(possible_ratings_matrix.shape[0])):
# For each movie
for j_movie in range(possible_ratings_matrix.shape[1]):
# If user i_user has watched and rated movie j_movie
if ratings_matrix[i_user , j_movie] >= 0.5:
# Take movies similar to j_movie
cluster = index_cluster[movieIds_available[j_movie]]
sim_movies_ids = movie_cluster[cluster]
# Calculate the avg rating for the cluster
pos = 0
ratings_sum = 0
elements_in_sum = 0
# List containing all the movies that i_user still has to watch
movie_to_rate = list()
# For each movie similar to j_movie
for movie_id in sim_movies_ids:
# Take the position in the matrix of that movie
pos = movieIds_available.index(movie_id , pos)
# If the movie doesn't have a rate
if ratings_matrix[i_user , pos] == 0:
# Add in the new list
movie_to_rate.append(pos)
else:
# Sum the rate for the avg
ratings_sum += ratings_matrix[i_user , pos]
elements_in_sum += 1
# If there is at least a rating
if elements_in_sum > 0:
rating = ratings_sum / elements_in_sum
else:
continue
# For all the movies in the cluster that haven't been rated yet, insert the cluster's average
for movie_pos in movie_to_rate:
if possible_ratings_matrix[i_user , movie_pos] == 0:
# Assign the average rating
possible_ratings_matrix[i_user , movie_pos] = rating
num_of_predicted_value += 1
print(num_of_predicted_value)
# Count the number of missing values
count_missing_values(possible_ratings_matrix)
# Content + collaborative filtering
#np.savetxt('content_collaborative_filtering_matrix.csv' , possible_ratings_matrix , delimiter = ',' , fmt = '%1.1f')
# Content + collaborative filtering
#possible_ratings_matrix = np.loadtxt('content_collaborative_filtering_matrix.csv', delimiter=',')
print(ratings_matrix)
print("===============================")
print(possible_ratings_matrix)
```
# Singular value thresholding (SVT) based recommender system
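The loop below repeatedly soft-thresholds the singular values of the current estimate; one such step in isolation looks like this (the threshold value here is purely illustrative):

```python
import numpy as np

def svt_step(X, tau):
    # One soft-thresholding step: shrink every singular value by tau, clipping at zero
    U, s, VT = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ VT

X = np.array([[3.0, 0.0],
              [0.0, 1.0]])   # singular values 3 and 1
Y = svt_step(X, 1.0)         # becomes diag(2, 0): the small component is removed
print(np.round(Y, 6))
```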
```
# Analyzing the errors/precision/recall/F1 score on the predicted ratings only
# Initialize the list for the evaluation of the initial errors
rows_test_limited = list()
cols_test_limited = list()
vals_test_limited = list()
# Fill the lists
def fill_test_lists():
for i in range(len(rows_test)):
# Add just the positions filled by the algorithm
if possible_ratings_matrix[rows_test[i]][cols_test[i]] != 0:
rows_test_limited.append(rows_test[i])
cols_test_limited.append(cols_test[i])
vals_test_limited.append(vals_test[i])
# Calculate RMSE and rho
def analyze_starting_errors():
vals_pred_limited = possible_ratings_matrix[rows_test_limited, cols_test_limited]
err = vals_test_limited - vals_pred_limited
RMSE = np.sqrt(np.mean(err**2))
rho = pearsonr(vals_test_limited, vals_pred_limited)[0]
return RMSE , rho
# Perform some evaluations
def precision_and_recall_initial_state():
total_recommended = 0 # true positive + false negative
predicted_recommended_items = 0 # true positive + false positive
predicted_true_recommended_items = 0 # true positive
# A movie is recommended if its rating is at least this value
recommendation_value = 3
for i in range(len(rows_test_limited)):
true_rating = vals_test_limited[i]
predicted_value = possible_ratings_matrix[rows_test_limited[i]][cols_test_limited[i]]
# Calculate true positive
if true_rating >= recommendation_value:
total_recommended += 1
if predicted_value >= recommendation_value:
predicted_true_recommended_items += 1
# Calculate true positive + false positive
if predicted_value >= recommendation_value:
predicted_recommended_items += 1
print("True positive: " , predicted_true_recommended_items)
print("True positive + false positive: " , predicted_recommended_items)
print("True positive + false negative: " , total_recommended)
precision = predicted_true_recommended_items / predicted_recommended_items
recall = predicted_true_recommended_items / total_recommended
print("Precision: " , precision)
print("Recall: " , recall)
return precision , recall
def F1_measure(precision_value , recall_value):
return 2 * precision_value * recall_value / ( precision_value + recall_value)
fill_test_lists()
print(analyze_starting_errors())
print("At this stage %d values have already been predicted" % len(rows_test_limited))
precision , recall = precision_and_recall_initial_state()
F1_measure(precision , recall)
# Reconstruct rows_train, cols_train, vals_train with all the value of the input + already predicted values
counter = 0
rows_train_updated = list()
cols_train_updated = list()
vals_train_updated = list()
for i_user in tqdm(range(possible_ratings_matrix.shape[0])):
for j_movie in range(possible_ratings_matrix.shape[1]):
# If it is a known or predicted value, save the position
if possible_ratings_matrix[i_user][j_movie] != 0:
rows_train_updated.append(i_user)
cols_train_updated.append(j_movie)
vals_train_updated.append(possible_ratings_matrix[i_user][j_movie])
counter += 1
print("Saved %d values" % counter)
def errors():
vals_pred = X_hat[rows_test, cols_test]
err = vals_test - vals_pred
RMSE = np.sqrt(np.mean(err**2))
rho = pearsonr(vals_test, vals_pred)[0]
return RMSE , rho
# Initialize the matrix on which perform the SVT
X_hat = possible_ratings_matrix
# Perform some evaluations on the whole test set
def precision_and_recall():
total_recommended = 0 # true positive + false negative
predicted_recommended_items = 0 # true positive + false positive
predicted_true_recommended_items = 0 # true positive
# A movie is recommended if its rating is at least this value
recommendation_value = 3
for i in range(len(rows_test)):
true_rating = vals_test[i]
predicted_value = X_hat[rows_test[i]][cols_test[i]]
# Calculate true positive
if true_rating >= recommendation_value:
total_recommended += 1
if predicted_value >= recommendation_value:
predicted_true_recommended_items += 1
# Calculate true positive + false positive
if predicted_value >= recommendation_value:
predicted_recommended_items += 1
#print("True positive: " , predicted_true_recommended_items)
#print("True positive + false positive: " , predicted_recommended_items)
#print("True positive + false negative: " , total_recommended)
precision = predicted_true_recommended_items / predicted_recommended_items
recall = predicted_true_recommended_items / total_recommended
#print("Precision: " , precision)
#print("Recall: " , recall)
return precision , recall
precision , recall = precision_and_recall()
F1_measure(precision , recall)
# Max number of iterations
n_max_iter = 100
# Minimum increment (stopping tolerance)
increment_tol = 1e-1
# Threshold parameters
a = 0.01
b = 200
RMSE_list = list()
rho_list = list()
precision_list = list()
recall_list = list()
f1_score_list = list()
# Calculating errors / parameters
RMSE , rho = errors()
precision , recall = precision_and_recall()
f1_score = F1_measure(precision , recall)
# Add the calculated values in the lists
RMSE_list.append(RMSE)
rho_list.append(rho)
precision_list.append(precision)
recall_list.append(recall)
f1_score_list.append(f1_score)
for k in tqdm(range(n_max_iter)):
# Copy the current matrix X_hat
X_old = X_hat.copy()
# Performing the SVD of the current matrix
U,s,VT = np.linalg.svd(X_hat, full_matrices=False)
# Update the threshold
threshold = b * np.exp(-k * a)
#threshold = 50
# Update the singular values
s[s > 0] = s[s > 0] - threshold
s[s < 0] = 0
# Calculating the new matrix through its SVD
X_hat = U @ np.diag(s) @ VT
# Maintain the default values
X_hat[rows_train_updated,cols_train_updated] = vals_train_updated
# Some negative values could appear -> set them to 0
X_hat[X_hat < 0] = 0
# Calculate the increment -> how much the new matrix differs from the previous one
increment = np.linalg.norm(X_hat - X_old)
# Every 10 iterations calculate the values
if k % 10 == 9:
# Calculate the errors
RMSE , rho = errors()
# Add the errors in the lists
RMSE_list.append(RMSE)
rho_list.append(rho)
precision , recall = precision_and_recall()
f1_score = F1_measure(precision , recall)
precision_list.append(precision)
recall_list.append(recall)
f1_score_list.append(f1_score)
# Show the errors
print('================== iter %d - threshold %1.2f - increment %1.3e' % (k+1, threshold, increment))
print('RMSE: %1.3f' % RMSE)
print('rho : %1.3f' % rho)
print('precision: %1.3f' % precision)
print('recall: %1.3f' % recall)
print('F1-score: %1.3f' % f1_score)
# If the increment is below the tolerance -> stop the algorithm
if increment < increment_tol:
break
# Save the result as a CSV file
#np.savetxt('final_ratings_matrix.csv', X_hat, delimiter=',' , fmt='%1.1f')
# Load the matrix final_ratings_matrix from the CSV file
#X_hat = np.loadtxt('final_ratings_matrix.csv', delimiter=',')
# Calculate the final precision/recall/F1-score
precision , recall = precision_and_recall()
F1_measure(precision , recall)
# Function that retrieves a list of movies to recommend to a specified user
def retrieve_recommended_items(userId):
# Take all the movies and all their titles
movie_ids = np.array(movies['movieId'])
movie_titles = np.array(movies['title'])
# Initialize a matrix with all zeros
true_ratings_matrix = np.zeros((len(userIds_available) , len(movieIds_available)))
# Update the matrix with the known values
true_ratings_matrix[rows, cols] = vals
# Take the position of the user required in the matrix (which row)
user_position = userIds_available.index(userId)
# Create a list to contain all the movies to recommend
movie_to_recommend = list()
# List containing the predicted ratings
ratings_list =list()
# Set the max rating to look for at the beginning
max_rating = 5
# While fewer than 10 movies have been chosen and the rating is still good enough (>= 3)
while len(movie_to_recommend) < 10 and max_rating >= 3:
# For each movie
for movie_i in range(X_hat.shape[1]):
# If it's a movie to recommend and the user hasn't rated it
if (X_hat[user_position , movie_i] >= max_rating and X_hat[user_position , movie_i] < (max_rating + 0.5)
and true_ratings_matrix[user_position , movie_i] == 0):
# Add the movie id in the list
movie_to_recommend.append(movieIds_available[movie_i])
# Add the current max_rating in the list
ratings_list.append(max_rating)
# Reduce the max rating to look for
max_rating -= 0.5
# Create a list for the titles
if len(movie_to_recommend) > 10:
title_list = movie_to_recommend[0:10]
else:
title_list = movie_to_recommend
ratings_list = ratings_list[0:len(title_list)]
# In each position of the list substitute the title corresponding to the id
for i in range(len(title_list)):
found = False
# Search the position
for j in range(len(movie_ids)):
if movie_ids[j] == title_list[i]:
found = True
break
# If it's been found
if found:
title_list[i] = movie_titles[j]
# Return the two lists
return title_list , ratings_list
# Take a random user
user_id = np.random.randint(0 , 668)
print("User id is: " , user_id)
# Retrieve the recommended items with the predicted ratings
user_list , ratings_list = retrieve_recommended_items(user_id)
# Put the two lists together
if len(user_list) > 0:
M = np.block([user_list , ratings_list])
frame = pd.DataFrame(M.T, index = np.linspace(1 , len(user_list) , len(user_list) , dtype = int) , columns = np.array(["Title" , "Rating"]))
pd.set_option('colheader_justify', 'center')
print(frame)
else:
print("Sorry, no movie to recommend! Watch more!")
# Show the variations
plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True
fig , axis = plt.subplots(5 , 1 , figsize = (14 , 16))
axis[0].plot(RMSE_list)
axis[0].set_title("RMSE")
axis[1].plot(rho_list)
axis[1].set_title("Rho")
axis[2].plot(precision_list)
axis[2].set_title("Precision")
axis[3].plot(recall_list)
axis[3].set_title("Recall")
axis[4].plot(f1_score_list)
axis[4].set_title("F1-measure")
```
```
# Imports (some imports may not be necessary)
import numpy as np
import math
from qiskit import IBMQ, Aer
from qiskit.providers.ibmq import least_busy
from qiskit import QuantumCircuit, assemble, transpile
from qiskit.visualization import plot_histogram
import qiskit.quantum_info as qi
# Functions to define QFT gate in Qiskit
def qft_rotations(circuit, n):
"""Performs qft on the first n qubits in circuit (without swaps)"""
if n == 0:
return circuit
n -= 1
circuit.h(n)
for qubit in range(n):
circuit.cp(math.pi/2**(n-qubit), qubit, n)
# At the end of our function, we call the same function again on
# the next qubits (we reduced n by one earlier in the function)
qft_rotations(circuit, n)
def swap_registers(circuit, n):
for qubit in range(n//2):
circuit.swap(qubit, n-qubit-1)
return circuit
def qftCircuit(circuit, n):
"""QFT on the first n qubits in circuit"""
qft_rotations(circuit, n)
swap_registers(circuit, n)
return circuit
# 1 qubit QFT
n = 1
qc1 = QuantumCircuit(n,n)
qftCircuit(qc1,n)
qc1.draw()
state1 = qi.Statevector.from_instruction(qc1)
stateVec1 = state1.__array__()
print(stateVec1)
# Add a local pgmpy checkout to the path (comment these lines out if pgmpy is installed normally)
import sys
sys.path.insert(0, 'C:\\Users\\masch\\QuantumComputing\\QComp\\pgmpy')
# Imports
import cmath
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete.CPD import TabularCPD
from pgmpy.inference import VariableElimination
from pgmpy.inference import BeliefPropagation
#QFT gate modeled using pgmpy
N = 2
omega_N = cmath.exp(2*math.pi*1j/N)
qft1 = BayesianNetwork([('q0m0','q0m1')])
cpd_q0m0 = TabularCPD(variable = 'q0m0', variable_card = 2, values = [[1], [0]])
cpd_q0m1 = TabularCPD(variable='q0m1', variable_card = 2, values = [[1/np.sqrt(N),1/np.sqrt(N)], [1/np.sqrt(N),omega_N/np.sqrt(N)]], evidence = ['q0m0'], evidence_card = [2])
"""
U_QFT =
[1/sqrt(2) 1/sqrt(2)]
[1/sqrt(2) -1/sqrt(2)]
"""
qft1.add_cpds(cpd_q0m0,cpd_q0m1)
qftInfer1 = VariableElimination(qft1)
q1 = qftInfer1.query(['q0m1'])
print(q1)
# Obtain the ordering of the variables in the display above, as well as their values
q1Vars = q1.variables
q1Values = q1.values
print(q1Vars)
print(q1Values)
# 2 qubit QFT
n = 2
qc2 = QuantumCircuit(n,n)
qftCircuit(qc2,n)
qc2.draw()
state2 = qi.Statevector.from_instruction(qc2)
stateVec2 = state2.__array__()
print(stateVec2)
#QFT with 2 qubits
N = 4
omega_N = cmath.exp(2*math.pi*1j/N)
qft2 = BayesianNetwork([('q0m0','q0m1'), ('q1m0','q0m1'), ('q0m0','q1m1'), ('q1m0','q1m1')])
cpd_q0m0 = TabularCPD(variable = 'q0m0', variable_card = 2, values = [[1], [0]])
cpd_q1m0 = TabularCPD(variable = 'q1m0', variable_card = 2, values = [[1], [0]])
cpd_q0m1 = TabularCPD(variable='q0m1', variable_card = 2, values = [[1/np.sqrt(2),1/np.sqrt(2),1/np.sqrt(2),1/np.sqrt(2)], [1/np.sqrt(2),(omega_N**2)/np.sqrt(2),(omega_N**4)/np.sqrt(2),(omega_N**6)/np.sqrt(2)]], evidence = ['q0m0','q1m0'], evidence_card = [2,2])
cpd_q1m1 = TabularCPD(variable='q1m1', variable_card = 2, values = [[1/np.sqrt(2),1/np.sqrt(2),1/np.sqrt(2),1/np.sqrt(2)], [1/np.sqrt(2),(omega_N**1)/np.sqrt(2),(omega_N**2)/np.sqrt(2),(omega_N**3)/np.sqrt(2)]], evidence = ['q0m0','q1m0'], evidence_card = [2,2])
"""
U_QFT =
[1/2 1/2 1/2 1/2]
[1/2 i/2 -1/2 -i/2]
[1/2 -1/2 1/2 -1/2]
[1/2 -i/2 -1/2 i/2]
"""
qft2.add_cpds(cpd_q0m0,cpd_q0m1,cpd_q1m0,cpd_q1m1)
qftInfer2 = VariableElimination(qft2)
q2 = qftInfer2.query(['q0m1','q1m1'])
print(q2)
"""
U_QFT(00) = 1/2 00 + 1/2 01 + 1/2 10 + 1/2 11 = (1/sqrt(2)* (0 + 1)) * (1/sqrt(2)* (0 + 1))
U_QFT(01) = 1/2 00 + i/2 01 - 1/2 10 - i/2 11 = (1/sqrt(2)* (0 - 1)) * (1/sqrt(2)* (0 + i*1))
"""
# Obtain the ordering of the variables in the display above, as well as their values
q2Vars = q2.variables
q2Values = q2.values
print(q2Vars)
print(q2Values)
# 3 qubit QFT
n = 3
qc3 = QuantumCircuit(n,n)
qftCircuit(qc3,n)
qc3.draw()
state3 = qi.Statevector.from_instruction(qc3)
stateVec3 = state3.__array__()
print(stateVec3)
#QFT with 3 qubits
N = 8
A = 1/(np.sqrt(2))
omega_N = cmath.exp(2*math.pi*1j/N)
qft3 = BayesianNetwork([('q0m0','q0m1'), ('q0m0','q1m1'), ('q0m0','q2m1'), ('q1m0','q0m1'), ('q1m0','q1m1'), ('q1m0','q2m1'), ('q2m0','q0m1'), ('q2m0','q1m1'), ('q2m0','q2m1')])
cpd_q0m0 = TabularCPD(variable = 'q0m0', variable_card = 2, values = [[1], [0]])
cpd_q1m0 = TabularCPD(variable = 'q1m0', variable_card = 2, values = [[1], [0]])
cpd_q2m0 = TabularCPD(variable = 'q2m0', variable_card = 2, values = [[1], [0]])
cpd_q0m1 = TabularCPD(variable='q0m1', variable_card = 2, values = [[A,A,A,A,A,A,A,A], [A,A*(omega_N**4),A*(omega_N**8),A*(omega_N**12),A*(omega_N**16),A*(omega_N**20),A*(omega_N**24),A*(omega_N**28)]], evidence = ['q0m0','q1m0','q2m0'], evidence_card = [2,2,2])
cpd_q1m1 = TabularCPD(variable='q1m1', variable_card = 2, values = [[A,A,A,A,A,A,A,A], [A,A*(omega_N**2),A*(omega_N**4),A*(omega_N**6),A*(omega_N**8),A*(omega_N**10),A*(omega_N**12),A*(omega_N**14)]], evidence = ['q0m0','q1m0','q2m0'], evidence_card = [2,2,2])
cpd_q2m1 = TabularCPD(variable='q2m1', variable_card = 2, values = [[A,A,A,A,A,A,A,A], [A,A*(omega_N**1),A*(omega_N**2),A*(omega_N**3),A*(omega_N**4),A*(omega_N**5),A*(omega_N**6),A*(omega_N**7)]], evidence = ['q0m0','q1m0','q2m0'], evidence_card = [2,2,2])
"""
Let w = e^(i*pi/4)
U_QFT =
1/(2*sqrt(2))*
[1 1 1 1 1 1 1 1 ]
[1 w w^2 w^3 w^4 w^5 w^6 w^7 ]
[1 w^2 w^4 w^6 w^8 w^10 w^12 w^14]
[1 w^3 w^6 w^9 w^12 w^15 w^18 w^21]
[1 w^4 w^8 w^12 w^16 w^20 w^24 w^28]
[1 w^5 w^10 w^15 w^20 w^25 w^30 w^35]
[1 w^6 w^12 w^18 w^24 w^30 w^36 w^42]
[1 w^7 w^14 w^21 w^28 w^35 w^42 w^49]
"""
qft3.add_cpds(cpd_q0m0,cpd_q0m1,cpd_q1m0,cpd_q1m1,cpd_q2m0,cpd_q2m1)
qftInfer3 = VariableElimination(qft3)
q3 = qftInfer3.query(['q0m1','q1m1','q2m1'])
print(q3)
# Obtain the ordering of the variables in the display above, as well as their values
q3Vars = q3.variables
q3Values = q3.values
print(q3Vars)
print(q3Values)
def bitListBack(n):
N = 2**n
numList = []
numFormat = "0" + str(n) + "b"
for i in range(N):
numList.append((str(format(i,numFormat))[::-1]))
return numList
def QiskitDict(stateVec,n):
qbits = bitListBack(n)
QbitDict = {}
for i in range(2**n):
QbitDict[qbits[i]]=np.round(stateVec[i],4)
return QbitDict
print("1 qubit qft")
print(QiskitDict(stateVec1,1))
print("2 qubit qft")
print(QiskitDict(stateVec2,2))
print("3 qubit qft")
print(QiskitDict(stateVec3,3))
# Obtain the ordering of the variables in the display above, as well as their values
valArr = q3.variables
valuesArr = q3.values
def create_var_order(orderArr):
currNum = 0
numArr = []
for order in orderArr:
if len(order) == 4:
currNum = order[1]
numArr.append(currNum)
return numArr
def bitList(n):
N = 2**n
numList = []
numFormat = "0" + str(n) + "b"
for i in range(N):
numList.append((str(format(i,numFormat))))
return numList
def columnize(listOfBits):
n = len(listOfBits[0])
holder = []
for i in range(n):
col = []
for bit in listOfBits:
col.append(bit[i])
holder.append(col)
return holder
def reform():
varOrderArr = create_var_order(valArr)
listOfBits = bitList(len(varOrderArr))
columns = columnize(listOfBits)
rearrangedColumns = [None]*len(columns)
for index, order in enumerate(varOrderArr):
rearrangedColumns[index] = columns[int(order)]
numOfCols = len(rearrangedColumns)
bitStr = ""
finalBitArr = []
for bitIndex in range(len(rearrangedColumns[0])):
for num in range(numOfCols):
bitStr+=str(rearrangedColumns[num][bitIndex])
finalBitArr.append(bitStr)
bitStr = ""
return finalBitArr
def createHashTable():
resHash = {}
bitOrder=reform()
valuesFlat = valuesArr.flatten()
for index, key in enumerate(bitOrder):
resHash[key] = np.round(valuesFlat[index], 4)
return resHash
PgmpyHash = createHashTable()
print(PgmpyHash == QiskitDict(stateVec3,3))
print(PgmpyHash)
print(QiskitDict(stateVec3,3))
"""
TO DO LIST:
1. Implement a function that automates the comparison between pgmpy and qiskit
2. Organize files to look nice
3. Push to Github
SOON:
4. Implement density matrices into pgmpy - eventually incorporate error events/Soham's work
5. Implement a function that generates the CPDs for hadamard, qft, pauli matrix gates, etc... save time
"""
```
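The unitaries spelled out in the comments above are just normalized DFT matrices; a quick NumPy sketch reproduces the N = 4 case and checks unitarity:

```python
import cmath
import math
import numpy as np

def qft_matrix(N):
    # (j, k) entry is omega^(j*k) / sqrt(N), with omega = exp(2*pi*i / N)
    omega = cmath.exp(2j * math.pi / N)
    return np.array([[omega ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

U = qft_matrix(4)
# Row for input |01>: (1/2) * [1, i, -1, -i], matching the 2-qubit U_QFT above
assert np.allclose(U[1] * 2, [1, 1j, -1, -1j])
# U is unitary: U^dagger U = I
assert np.allclose(U.conj().T @ U, np.eye(4))
print("checks passed")
```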
```
import pandas as pds
import sklearn as skl
import seaborn as sns
import numpy as num
planetas = pds.read_csv("cumulative.csv")
planetas.replace('',num.nan,inplace = True)
planetas.dropna(inplace = True)
```
# To demonstrate how the different kernel functions behave in an SVM, let's take a non-trivial dataset and try to classify it with our model.
* For this, we chose NASA's "Kepler Exoplanet Search Results" dataset.
* This dataset contains information about the many planets detected by the Kepler telescope, whose mission is to find exoplanets (planets that orbit stars other than the Sun) across the universe.
# The *kResult* column gives the status of each planet ID:
* It is an exoplanet: CONFIRMED.
* It is not an exoplanet: FALSE POSITIVE.
* It may be an exoplanet: CANDIDATE.
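Before modelling, it is worth checking how these three labels are distributed. A minimal sketch with `pandas` `value_counts`, using a small synthetic stand-in rather than the actual Kepler file:

```python
import pandas as pd

# Hypothetical stand-in for the Kepler dataframe; the real one is loaded above.
planets = pd.DataFrame({
    "kResult": ["CONFIRMED", "FALSE POSITIVE", "CANDIDATE",
                "CONFIRMED", "FALSE POSITIVE"]
})
counts = planets["kResult"].value_counts()
print(counts)  # CONFIRMED: 2, FALSE POSITIVE: 2, CANDIDATE: 1
```

A strongly imbalanced distribution would suggest using stratified splits or class weights later on.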
# Let's build a model that classifies whether a planet is (or may be) an exoplanet or not.
```
planetas.head(20)
```
# The pair plots below show clearly that the samples in our dataset are far from trivial and are not linearly separable. So, if we want to use an SVM to classify the data, we will have to try different kernel functions and tune them for better precision.
```
sns.pairplot(planetas.head(1000), vars = planetas.columns[8:14],hue = 'kResult')
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn import svm
yPlanetas = planetas['kResult'].copy()
xPlanetas = planetas.drop(['kResult','kName','kID','Not Transit-Like Flag','Stellar Eclipse Flag','Centroid Offset Flag','Ephemeris Match Indicates Contamination Flag'],axis = 1)
xTreino, xTeste, yTreino, yTeste = train_test_split(xPlanetas, yPlanetas, test_size=0.80, random_state=3)
```
# Here we create 4 models with different kernel functions (the *kernel* parameter) and train each of them on our training data, so we can pick the one with the most satisfactory precision in the end.
```
modeloLinear = svm.SVC(kernel = 'linear')
modeloPoly = svm.SVC(kernel = 'poly')
modeloRBF = svm.SVC(kernel = 'rbf')
modeloSigmoid = svm.SVC(kernel = 'sigmoid')
```
<img src = "./SVM-Kernel-Function-Types.png">
```
modeloLinear.fit(xTreino,yTreino)
modeloPoly.fit(xTreino,yTreino)
modeloRBF.fit(xTreino,yTreino)
modeloSigmoid.fit(xTreino,yTreino)
```
# Here we show the "score" of each of our models, i.e. how accurate the model was against reality, along with the decision-function values.
* Note that the Linear, Polynomial and RBF models performed very similarly, which reflects the complexity of the dataset. To improve precision, we will have to tune the parameters (or *coefficients*) of each model by hand.
* Note also that the mean score of around 60% shows our model is reasonably effective, roughly double what a random model would achieve (a score of about 33% for three classes). This demonstrates that the kernel trick is effective for handling extremely complex data such as this.
```
print(" Score = ", modeloLinear.score(xTeste, yTeste), "\n")
print(" Decision function values: \n\n", modeloLinear.decision_function(xTeste))
print(" Score = ", modeloPoly.score(xTeste, yTeste), "\n")
print(" Decision function values: \n\n", modeloPoly.decision_function(xTeste))
print(" Score = ", modeloRBF.score(xTeste, yTeste), "\n")
print(" Decision function values: \n\n", modeloRBF.decision_function(xTeste))
print(" Score = ", modeloSigmoid.score(xTeste, yTeste), "\n")
print(" Decision function values: \n\n", modeloSigmoid.decision_function(xTeste))
```
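Since improving precision means tuning each kernel's parameters, that search can also be automated. A hedged sketch with scikit-learn's `GridSearchCV`, run on a toy dataset (iris) rather than the Kepler data, so the grid values are illustrative assumptions:

```python
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = datasets.load_iris(return_X_y=True)  # stand-in for the Kepler features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

param_grid = {
    "kernel": ["rbf", "poly"],
    "C": [0.1, 1, 10],        # regularization strength
    "gamma": ["scale", 0.1],  # kernel coefficient for rbf/poly
}
search = GridSearchCV(svm.SVC(), param_grid, cv=3)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```

The best parameter combination found on the toy data will not transfer to the Kepler dataset, but the same pattern applies: define a grid, cross-validate on the training split, and only then score on the held-out test split.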
| github_jupyter |
```
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
#from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
from sklearn.decomposition import PCA
from scipy.stats.mstats import zscore # used to standardize the features
from keras.callbacks import ModelCheckpoint
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D, Conv2D
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.layers import Flatten
from keras.optimizers import Adam
%matplotlib inline
np.random.seed(1)
```
## 1. Load Data
```
import math as M
from matplotlib import mlab
from matplotlib.colors import Normalize
from obspy.imaging.cm import obspy_sequential
import matplotlib.pyplot as plt
from skimage.transform import resize
import scipy
def getSpectogram(data, samp_rate, per_lap=0.9, wlen=None, log=False,
outfile=None, fmt=None, axes=None, dbscale=False,
mult=8.0, zorder=None, title=None,
show=True, sphinx=False, clip=[0.0, 1.0]):
# enforce float for samp_rate
samp_rate = float(samp_rate)
# set wlen from samp_rate if not specified otherwise
if not wlen:
wlen = samp_rate / 100.
npts = len(data)
# nfft needs to be an integer, otherwise a deprecation will be raised
# XXX add condition for too many windows => calculation takes for ever
nfft = int(_nearest_pow_2(wlen * samp_rate))
if nfft > npts:
nfft = int(_nearest_pow_2(npts / 8.0))
if mult is not None:
mult = int(_nearest_pow_2(mult))
mult = mult * nfft
nlap = int(nfft * float(per_lap))
data = data - data.mean()
end = npts / samp_rate
specgram, freq, time = mlab.specgram(data, Fs=samp_rate, NFFT=nfft,
pad_to=mult, noverlap=nlap)
# db scale and remove zero/offset for amplitude
if dbscale:
specgram = 10 * np.log10(specgram[1:, :])
else:
specgram = np.sqrt(specgram[1:, :])
freq = freq[1:]
vmin, vmax = clip
if vmin < 0 or vmax > 1 or vmin >= vmax:
msg = "Invalid parameters for clip option."
raise ValueError(msg)
_range = float(specgram.max() - specgram.min())
vmin = specgram.min() + vmin * _range
vmax = specgram.min() + vmax * _range
norm = Normalize(vmin, vmax, clip=True)
return freq,time,specgram
def _nearest_pow_2(x):
"""
Find power of two nearest to x
>>> _nearest_pow_2(3)
2.0
>>> _nearest_pow_2(15)
16.0
:type x: float
:param x: Number
:rtype: Int
:return: Nearest power of 2 to x
"""
a = M.pow(2, M.ceil(np.log2(x)))
b = M.pow(2, M.floor(np.log2(x)))
if abs(a - x) < abs(b - x):
return a
else:
return b
events = np.load("NewDatasets/Data_D11.npy")
label = np.load("NewDatasets/Label_D11.npy")
times = np.load("NewDatasets/Time_D11.npy")
events=events.reshape([events.shape[0],events.shape[1]])
times=times[:,:label.shape[0]]
#print(times)
#events = np.load("Datasets/DataDetection_M_2.8_R_0.5_S_4_Sec_256.npy")
#label = np.load("Datasets/LabelDetection_M_2.8_R_0.5_S_4_Sec_256.npy")
#times=np.load("Datasets/TimeDetection_M_2.8_R_0.5_S_4_Sec_256.npy")
print(events.shape)
print(label.shape)
print(times.shape)
#times = np.load("Datasets/TimeDetection_M_2.8_R_0.5_S_4_Sec_256.npy") # features, # samples
times = (times - times[0,:]) * 3600 * 24 # set time to 0 and in seconds
fs = (times[:,0] < 60).nonzero()[0].shape[0] / 60 # sampling frequency
print(fs)
fs=100
from scipy.signal import spectrogram
eventNumber = 0
freq , time, Sxx = getSpectogram(events[:,eventNumber], fs, dbscale = True)
#Sxx = resize(Sxx, (64, 64))
# scipy.misc.imresize was removed in SciPy >= 1.3; skimage.transform.resize (imported above) replaces it
Sxx = resize(Sxx, (64, 128))
spectrogram_shape = Sxx.shape
print(spectrogram_shape)
print(events.shape)
print(label.shape)
plt.imshow(Sxx)
print(label[eventNumber])
plt.figure()
plt.plot(events[:,0])
plt.figure()
plt.plot(events[:,600])
plt.figure()
plt.plot(events[:,-1])
print(label[0])
print(label[600])
print(label[-1])
label[0]
print(events.shape)
print(label.shape)
print(times.shape)
#print(label.shape[0])
#0:label.shape[0]
print(times.shape)
print(fs)
print(times)
data = np.zeros((events.shape[1], spectrogram_shape[0], spectrogram_shape[1]))
for i in range(events.shape[1]):
_, _, Sxx = getSpectogram(events[:,i], fs)
    Sxx = resize(Sxx, (64, 128))  # scipy.misc.imresize was removed in SciPy >= 1.3
data[i, :, :] = (Sxx - np.mean(Sxx)) / np.std(Sxx)
#data[i, :, :] = zscore(np.log10(Sxx))
data = data[:,:,:,np.newaxis]
def split_reshape_dataset(X, Y, ratio):
#X = X.T[:,:,np.newaxis, np.newaxis]
#Y = Y.T
m = X.shape[0] # number of samples
sortInd = np.arange(m)
np.random.shuffle(sortInd)
nTrain = int(ratio * m)
X_train = X[sortInd[:nTrain], :, :, :]
Y_train = Y[sortInd[:nTrain],:]
X_test = X[sortInd[nTrain:], :, :, :]
Y_test = Y[sortInd[nTrain:],:]
return X_train, X_test, Y_train, Y_test
#data = data[300:700,:]
#data = (data - np.mean(data, axis = 0, keepdims= True)) / np.std(data, axis = 0, keepdims = True)
#data=zscore(data)
RatioTraining=0.8; # 0.8 before
X_train, X_test, Y_train, Y_test = split_reshape_dataset(data, label, RatioTraining)
Y_train =convert_to_one_hot(Y_train,2).T
Y_test = convert_to_one_hot(Y_test,2).T
print(X_train.shape)
print(Y_train.shape)
print(data.shape)
print(label.shape)
i = 104
def ComputeModel(input_shape):
"""
Function creating the model's graph in Keras.
Argument:
input_shape -- shape of the model's input data (using Keras conventions)
Returns:
model -- Keras model instance
"""
X_input = Input(shape = input_shape)
# keras.layers.Conv1D(filters, kernel_size, strides=1, padding='valid', data_format='channels_last', dilation_rate=1, activation=None,
# Step 1: CONV layer
X = Conv1D(filters=196,kernel_size=16,strides=4)(X_input) #None # CONV1D
X = BatchNormalization()(X) #None # Batch normalization
X = Activation(activation='relu')(X) #None # ReLu activation
X = Dropout(rate=0.8)(X) #None # dropout (use 0.8)
# Step 2: First GRU Layer
X = GRU(units=128, return_sequences=True)(X)#None # GRU (use 128 units and return the sequences)
X = Dropout(rate=.8)(X) #None # dropout (use 0.8)
X = BatchNormalization()(X) #None # Batch normalization
# Step 3: Second GRU Layer
X = GRU(units=128, return_sequences=True)(X) #None # GRU (use 128 units and return the sequences)
X = Dropout(rate=0.8)(X) #None # dropout (use 0.8)
X = BatchNormalization()(X) #None # Batch normalization
X = Dropout(rate=0.8)(X) #None # dropout (use 0.8)
# Step 3: Second GRU Layer
'''
X = GRU(units=128, return_sequences=True)(X) #None # GRU (use 128 units and return the sequences)
X = Dropout(rate=0.8)(X) #None # dropout (use 0.8)
X = BatchNormalization()(X) #None # Batch normalization
X = Dropout(rate=0.8)(X) #None
X = GRU(units=128, return_sequences=True)(X) #None # GRU (use 128 units and return the sequences)
X = Dropout(rate=0.8)(X) #None # dropout (use 0.8)
X = BatchNormalization()(X) #None # Batch normalization
X = Dropout(rate=0.8)(X) #None
'''
# Step 4: Time-distributed dense layer (≈1 line)
X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed (sigmoid)
X = Flatten()(X)
X = Dense(1, activation = "sigmoid")(X) # time distributed (sigmoid)
### END CODE HERE ###
model = Model(inputs = X_input, outputs = X)
return model
Tx=2.5E-2
print(spectrogram_shape)
print(X_train.shape)
print(spectrogram_shape)
model = ComputeModel(input_shape = (spectrogram_shape[0],spectrogram_shape[1]))
model.summary()
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"])
#X_train_reshape=np.squeeze(X_train)
#X_train_reshape.shape
Y_train.shape
Y_train2=Y_train[:,0]
#Y_train2=Y_train2[:,np.newaxis,np.newaxis]
print(np.squeeze(X_train).shape)
print(Y_train2.shape)
model.fit(np.squeeze(X_train), Y_train2, batch_size = 50, epochs=200)
loss, acc = model.evaluate(np.squeeze(X_test), Y_test[:,0])
print("Dev set accuracy = ", acc)
```
| github_jupyter |
```
import pandas as pd
from lxml import etree
import numpy as np
from pyproj import Proj,transform
pd.set_option('display.max_columns', 500)
import datetime as dt
import sys
# NOTE
# 1. For MAY the parking garage must be EMPTY
# 2. For FEB there is 1 empty parkingPlaceWithRegulations in between
#    (removed manually for now)
# start date and time of PAID parking, on street and in the garage
starttime_paid = "2018-05-01T09:00:00"
# end date and time of PAID parking, on street and in the garage
endtime_paid = "2018-05-02T00:00:00"
# start date and time of FREE on-street parking
starttime_free = "2018-05-01T00:00:00"
# end date and time of FREE on-street parking
endtime_free = "2018-05-01T09:00:00"
# number of spaces in the garage
# before opening: 0
# after opening: 600
num_spaces_off_street = 0
num_parking_places_per_floor = 0
num_floors = 0
height_garage = '210.0'
# number of on-street spaces
# before removal: 567
# after removal: 81
num_spaces_on_street = 567
# spatial reference system
crs = 'EPSG:4326'
```
# parkingRegulations
Tariff change date:
14/04/2019
Street
- 2018: €4/h, 9:00-24:00, Mon-Sat
- 2019: €6/h, 9:00-24:00, Mon-Sat
Parking garage
- 2018: €4.29/h, Mon-Sun, daily maximum €47.50
- 2019: €5.10/h, Mon-Sun, daily maximum €51
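The rate tables built in the cells below encode these tariffs as a cumulative price per elapsed minute, capped at the daily maximum. The underlying calculation can be sketched independently of the XML machinery, using the 2018 garage figures quoted above:

```python
def parking_cost(minutes, rate_per_hour=4.29, daily_max=47.50):
    """Cumulative cost of a stay of `minutes`, capped at the daily maximum."""
    return round(min(minutes / 60.0 * rate_per_hour, daily_max), 2)

print(parking_cost(60))    # one hour at the 2018 garage rate: 4.29
print(parking_cost(1440))  # a full day hits the daily cap: 47.5
```

The `rate` elements generated below tabulate exactly this function at fixed duration steps, which is why the price column is filled with the daily maximum once the cap is reached.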
## Residents
Check schedules generator 7.1.2
```
ovin_permitholder_opid_list = ['10101113',
'10105032',
'10110021',
'10115097',
'10119034',
'10122049',
'10130011',
'10130056',
'10207088',
'10208002',
'10209022',
'10215041',
'10216071',
'10221100',
'10223039',
'10234069',
'10240096',
'10250068',
'10318057',
'10326066',
'10333058',
'10334126',
'10338065',
'10343133',
'10350074',
'10350075',
'10401020',
'10401029',
'10404122',
'10405094',
'10409192',
'10412072',
'10421081',
'10428096',
'10430061',
'10437102',
'10442064',
'10445037',
'10452076',
'10509144',
'10525053',
'10534094',
'10534147',
'10541092',
'10547126',
'11109055',
'11111005',
'11113019',
'11116075',
'11120027',
'11121008',
'11123026',
'11124005',
'11131033',
'11139007',
'11206090',
'11207122',
'11210110',
'11211103',
'11214035',
'11214102',
'11234138',
'11306049',
'11311055',
'11316091',
'11318058',
'11328083',
'11329088',
'11333116',
'11337069',
'11344053',
'11402077',
'11406103',
'11408010',
'11414095',
'11419048',
'11419084',
'11419085',
'11423092',
'11425120',
'11427063',
'11428048',
'11429105',
'11432034',
'11441131',
'11442044',
'11442081',
'11449089',
'11504044',
'11505109',
'11505128',
'11508024',
'11510117',
'11511055',
'11517088',
'11523131',
'11525049',
'11525072',
'11530021',
'11534023',
'11538120',
'11545151',
'11550064',
'11551055',
'12104006',
'12110036',
'12113001',
'12115091',
'12123092',
'12127082',
'12130011',
'12131002',
'12131056',
'12133001',
'12133035',
'12133065',
'12134054',
'12136028',
'12138065',
'12145009',
'12149104',
'12202162',
'12215063',
'12219061',
'12229069',
'12229092',
'12232037',
'12239069',
'12245141',
'12304054',
'12315047',
'12324101',
'12325040',
'12325089',
'12326106',
'12327119',
'12328043',
'12332129',
'12333112',
'12334084',
'12335060',
'12340060',
'12348109',
'12418036',
'12419135',
'12421034',
'12421049',
'12421076',
'12425089',
'12425090',
'12430026',
'12440108',
'12440131',
'12502046',
'12503074',
'12505153',
'12510044',
'12511071',
'12514108',
'12515078',
'12523136',
'12528060',
'12529081',
'12533042',
'12540098',
'12542037',
'12545053',
'13101004',
'13106006',
'13107007',
'13125052',
'13135056',
'13138009',
'13141096',
'13143063',
'13148010',
'13203072',
'13207040',
'13209021',
'13210103',
'13211049',
'13223095',
'13226048',
'13227100',
'13228084',
'13234040',
'13237085',
'13242109',
'13243107',
'13245053',
'13250043',
'13251042',
'13303066',
'13308102',
'13309042',
'13316104',
'13323112',
'13327083',
'13328094',
'13340052',
'13343117',
'13345059',
'13349124',
'13403102',
'13404038',
'13404111',
'13409131',
'13412056',
'13414054',
'13415042',
'13416047',
'13424127',
'13443114',
'13452026',
'13510093',
'13513117',
'13514047',
'13517093',
'13518122',
'13529065',
'13533055',
'13540050',
'13546064',
'13547043',
'13547120',
'14103070',
'14110031',
'14111035',
'14130033',
'14132057',
'14133004',
'14133035',
'14134060',
'14138036',
'14143007',
'14151002',
'14152059',
'14215045',
'14227041',
'14228113',
'14234050',
'14236115',
'14246066',
'14246154',
'14246156',
'14250149',
'14302082',
'14305109',
'14314059',
'14315096',
'14317090',
'14321070',
'14335120',
'14340058',
'14341056',
'14342051',
'14342053',
'14342112',
'14347058',
'14412086',
'14415095',
'14415103',
'14420034',
'14426041',
'14427128',
'14445068',
'14451181',
'14502122',
'14517110',
'14524046',
'14525098',
'14526092',
'14528046',
'14528048',
'14534069']
ovin_permitholde_opid_list_less_people = ['10101113', '10105032', '10110021', '10115097', '10119034', '10122049', '10130011', '10130056', '10207088', '10208002', '10209022', '10215041', '10216071', '10221100', '10223039', '10234069', '10240096', '10250068', '10318057', '10326066', '10333058', '10334126', '10338065', '10343133', '10350074', '10350075', '10401020', '10401029', '10404122', '10405094', '10409192', '10412072', '10421081', '10428096', '10430061', '10437102', '10442064', '10445037', '10452076', '10509144', '10525053', '10534094', '10534147', '10541092', '10547126', '11109055', '11111005', '11113019', '11116075', '11120027', '11121008', '11123026', '11124005', '11131033', '11139007', '11206090', '11207122', '11210110', '11211103', '11214035', '11214102', '11234138', '11306049', '11311055', '11316091', '11318058', '11328083', '11329088', '11333116', '11337069', '11344053', '11402077', '11406103', '11408010', '11414095', '11419048', '11419084', '11419085', '11423092', '11425120', '11427063', '11428048', '11429105', '11432034', '11441131', '11442044', '11442081', '11449089', '11504044', '11505109', '11505128', '11508024', '11510117', '11511055', '11517088', '11523131', '11525049', '11525072', '11530021', '11534023', '11538120', '11545151', '11550064', '11551055', '12104006', '12110036', '12113001', '12115091', '12123092', '12127082', '12130011', '12131002', '12131056', '12133001', '12133035', '12133065', '12134054', '12136028', '12138065', '12145009', '12149104', '12202162', '12215063', '12219061', '12229069', '12229092', '12232037', '12239069', '12245141', '12304054', '12315047', '12324101', '12325040', '12325089', '12326106', '12327119', '12328043', '12332129', '12333112', '12334084', '12335060', '12340060', '12348109', '12418036', '12419135', '12421034', '12421049', '12421076', '12425089', '12425090', '12430026', '12440108', '12440131', '12502046', '12503074', '12505153', '12510044', '12511071', '12514108', '12515078', '12523136', '12528060', '12529081', 
'12533042', '12540098', '12542037', '12545053', '13101004', '13106006', '13107007', '13125052', '13135056', '13138009', '13141096', '13143063', '13148010', '13203072', '13207040', '13209021', '13210103', '13211049', '13223095', '13226048', '13227100', '13228084', '13234040', '13237085', '13242109', '13243107', '13245053', '13250043', '13251042', '13303066', '13308102', '13309042', '13316104', '13323112', '13327083', '13328094', '13340052', '13343117', '13345059', '13349124', '13403102', '13404038', '13404111', '13409131', '13412056', '13414054', '13415042', '13416047', '13424127', '13443114', '13452026', '13510093', '13513117', '13514047', '13517093', '13518122', '13529065', '13533055', '13540050', '13546064', '13547043', '13547120', '14103070', '14110031', '14111035', '14130033', '14132057', '14133004', '14133035', '14134060', '14138036', '14143007', '14151002', '14152059', '14215045', '14227041', '14228113', '14234050', '14236115', '14246066', '14246154', '14246156', '14250149', '14302082', '14305109', '14314059', '14315096', '14317090']
def parking_regulations_residents(resident_id_list):
    parkingRegulations = etree.Element("parkingRegulations")
    parkingRegulationResidents = etree.SubElement(parkingRegulations, 'parkingRegulationResidents')
    parkingRegulationResidents.set('id','amsterdam_permitholder_regulation')
    residents = etree.SubElement(parkingRegulationResidents, 'residents')
    # iterate over the argument (the original looped over a global list,
    # silently ignoring whichever list was passed in)
    for i in resident_id_list:
        resident = etree.SubElement(residents, 'resident')
        resident.set('id',i)
    etree.dump(parkingRegulations)
    return parkingRegulations
parkingRegulations = parking_regulations_residents(ovin_permitholder_opid_list)
```
## Off street
```
duration_list_18 = np.arange(0, 1441, 7)
price_list_18 = np.arange(0,47.5,0.5)
max_pay_18 = '47.5'
duration_list_19 = np.arange(0, 1441, 2)
price_list_19 = np.arange(0,51,0.17)
max_pay_19 ='51'
def lists_to_dataframe(duration_list,price_list,max_pay):
d = {'duration':duration_list,'price':price_list}
df = pd.DataFrame({ key:pd.Series(value) for key, value in d.items() })
df['price'] = df['price'].fillna(value=max_pay)
df = df.round(2)
df = df.applymap(str)
return df
df_18 = lists_to_dataframe(duration_list_18,price_list_18,max_pay_18)
df_19 = lists_to_dataframe(duration_list_19,price_list_19,max_pay_19)
def parking_regulations_offstreet(df_tariffs):
parkingRegulationPaidMaxDuration = etree.SubElement(parkingRegulations, 'parkingRegulationPaidMaxDuration')
parkingRegulationPaidMaxDuration.set('id','off_street_paid')
rates = etree.SubElement(parkingRegulationPaidMaxDuration, 'rates')
for index,row in df_tariffs.iterrows():
rate = etree.SubElement(rates, 'rate')
rate.set('duration',row['duration'])
rate.set('price',row['price'])
return parkingRegulations
parkingRegulations = parking_regulations_offstreet(df_18)
# parkingRegulationPaidMaxDuration_offstreet_18 = parking_regulations_offstreet(df_19)
```
## On street
```
duration_list = np.arange(0, 901, 60)
price_list_18 = np.arange(0,61,4)
price_list_19 = np.arange(0,91,6)
d = {'duration':duration_list,'price18':price_list_18,'price19':price_list_19}
tariffs_onstreet = pd.DataFrame(data=d)
tariffs_onstreet = tariffs_onstreet.applymap(str)
tariffs_onstreet
def parking_regulations_onstreet(df_tariffs,endtime_paid,starttime_paid,endtime_free,starttime_free,price):
# free range
parkingRegulationFreeRange = etree.SubElement(parkingRegulations, 'parkingRegulationFreeRange')
parkingRegulationFreeRange.set('id','on_street_free')
range = etree.SubElement(parkingRegulationFreeRange, 'range')
range.set('end',endtime_free)
range.set('start',starttime_free)
# paid range
parkingRegulationPaidRangeMaxDuration = etree.SubElement(parkingRegulations, 'parkingRegulationPaidRangeMaxDuration')
parkingRegulationPaidRangeMaxDuration.set('id','on_street_paid')
range = etree.SubElement(parkingRegulationPaidRangeMaxDuration, 'range')
range.set('end',endtime_paid)
range.set('start',starttime_paid)
rates = etree.SubElement(parkingRegulationPaidRangeMaxDuration, 'rates')
for index,row in df_tariffs.iterrows():
rate = etree.SubElement(rates, 'rate')
rate.set('duration',row['duration'])
rate.set('price',row[price])
return parkingRegulations
parkingRegulations = parking_regulations_onstreet(tariffs_onstreet,endtime_paid,starttime_paid,endtime_free,starttime_free,'price18')
etree.dump(parkingRegulations)
```
# Parking places
## parkingPlaceWithoutRegulations
```
d = {'numbers':np.arange(num_spaces_off_street)}
df = pd.DataFrame(data=d)
df['id_part_1']='albertCuypGarage_parkingPlace'
df["id"] = df["id_part_1"].map(str) + df["numbers"].map(str)
df = df[['id']]
df['x'] = "4.886937"
df['y'] = "52.355543"
df['length'] = '500.0'
df['width'] = '300.0'
parkingPlaces = etree.Element("parkingPlaces")
etree.dump(parkingPlaces)
def parking_spaces_off_street(df,crs):
df = df.applymap(str)
parkingPlaces = etree.Element("parkingPlaces")
for index,row in df.iterrows():
parkingPlaceWithoutRegulations = etree.SubElement(parkingPlaces,"parkingPlaceWithoutRegulations")
parkingPlaceWithoutRegulations.set('id',row['id'])
parkingPlaceWithoutRegulations.set('type','undefined')
coordinates = etree.SubElement(parkingPlaceWithoutRegulations, 'coordinates')
coordinates.set('x',row['x'])
coordinates.set('y',row['y'])
coordinates.set('crs',crs)
# link = etree.SubElement(parkingPlaceWithoutRegulations, 'link')
# link.set('fraction','0.72')
# link.set('id','182')
dimensions = etree.SubElement(parkingPlaceWithoutRegulations, 'dimensions')
dimensions.set('length',row['length'])
dimensions.set('width',row['width'])
supportedVehicles = etree.SubElement(parkingPlaceWithoutRegulations, 'supportedVehicles')
supportedVehicle = etree.SubElement(supportedVehicles, 'supportedVehicle')
supportedVehicle.set('type','car')
electricChargingStation = etree.SubElement(parkingPlaceWithoutRegulations, 'electricChargingStation')
electricChargingStation.set('available','false')
return parkingPlaces
parkingPlaces = parking_spaces_off_street(df,crs)
etree.dump(parkingPlaces)
```
## parkingPlaceWithRegulations
```
filepaths = ['/Users/miloubisseling/Documents/UvA/DataScience/Thesis/parkeervakken/Levering_201805/20180502/Zuid_parkeerhaven_CENTROIDE_20180502.csv',
'/Users/miloubisseling/Documents/UvA/DataScience/Thesis/parkeervakken/Levering_201902/20190226/Zuid_parkeerhaven_CENTROIDE_20190226.csv'
]
inProj = Proj(init='EPSG:28992')
outProj = Proj(init='EPSG:4326')
# EPSG:28992 to EPSG:4326
def convert_coords(row):
x2,y2 = transform(inProj,outProj,row['X'],row['Y'])
return pd.Series({'newX':x2,'newY':y2})
def extract_parking_places_coordinates(path, buurtcode):
    df = pd.read_csv(path, sep=';', encoding="ISO-8859-1", decimal=',')
    df[['newX','newY']] = df.apply(convert_coords, axis=1)
    df = df[['PARKEER_ID','BUURTCODE','STRAATNAAM','newX','newY','SOORT','TYPE']]
    df = df.loc[df['BUURTCODE'] == buurtcode]  # was hardcoded to 'K24c', ignoring the argument
    df = df.loc[df['SOORT'] == 'FISCAAL']
    return df
dataframes = [extract_parking_places_coordinates(filepath,'K24c') for filepath in filepaths]
def length_width_parking_place(df):
# if type is Vissengraat or Haaks minimal length and width is 4.5x2.4m
# else (if type is File or Langs) minimal length and width is 6x2.5m
# source: http://selectoo.nl/parkeerplaats-afmetingen-breedte-lengte.html
df['length'] = np.where((df['TYPE'] == 'Vissengraat')|(df['TYPE'] == 'Haaks'),4.5,6)
df['width'] = np.where((df['TYPE'] == 'Vissengraat')|(df['TYPE'] == 'Haaks'),2.4,2.5)
return df
dataframes_len_wid = [length_width_parking_place(df) for df in dataframes]
parking_places_may = dataframes_len_wid[0]
parking_places_feb = dataframes_len_wid[1]
# create id for parkingplace
parking_places_may['id']= parking_places_may.groupby(['PARKEER_ID'],sort=False).ngroup()
len(parking_places_may)
def parking_spaces_onstreet(df,crs):
df = df.applymap(str)
# parkingPlaces = etree.Element("parkingPlaces")
for index,row in df.iterrows():
parkingPlaceWithRegulations = etree.SubElement(parkingPlaces,"parkingPlaceWithRegulations")
id_part_1 = 'generatedParkingPlace_'
id_part_2 = row['id']
parkingPlaceWithRegulations.set('id',(id_part_1 + id_part_2))
parkingPlaceWithRegulations.set('type','undefined')
coordinates = etree.SubElement(parkingPlaceWithRegulations, 'coordinates')
coordinates.set('crs',crs)
coordinates.set('x',row['newX'])
coordinates.set('y',row['newY'])
# link = etree.SubElement(parkingPlaceWithRegulations, 'link')
# link.set('fraction','0.72')
# link.set('id','182')
dimensions = etree.SubElement(parkingPlaceWithRegulations, 'dimensions')
dimensions.set('length',row['length'])
dimensions.set('width',row['width'])
supportedVehicles = etree.SubElement(parkingPlaceWithRegulations, 'supportedVehicles')
supportedVehicle = etree.SubElement(supportedVehicles, 'supportedVehicle')
supportedVehicle.set('type','car')
electricChargingStation = etree.SubElement(parkingPlaceWithRegulations, 'electricChargingStation')
electricChargingStation.set('available','false')
operativeParkingRegulations = etree.SubElement(parkingPlaceWithRegulations,'operativeParkingRegulations')
operativeParkingRegulation = etree.SubElement(operativeParkingRegulations,'operativeParkingRegulation')
operativeParkingRegulation.set('id','on_street_paid')
operativeParkingRegulation = etree.SubElement(operativeParkingRegulations,'operativeParkingRegulation')
operativeParkingRegulation.set('id','on_street_free')
operativeParkingRegulation = etree.SubElement(operativeParkingRegulations,'operativeParkingRegulation')
operativeParkingRegulation.set('id','amsterdam_permitholder_regulation')
return parkingPlaces
parkingPlaces = parking_spaces_onstreet(parking_places_may,crs)
etree.dump(parkingPlaces)
```
# parkingFloors
```
def parking_floors(num_floors,num_spots_per_floor):
total_num_spots = num_floors * num_spots_per_floor
start = 0
stop = num_spots_per_floor
parkingFloors = etree.Element('parkingFloors')
# for i in range(num_floors):
# parkingFloor = etree.SubElement(parkingFloors,'parkingFloor')
# parkingFloor.set('id','albertCuypGarage_parkingFloor' + str(i))
# parkingFloor.set('level', str(i))
# supportedEngines = etree.SubElement(parkingFloor,'supportedEngines')
# supportedEngine = etree.SubElement(supportedEngines,'supportedEngine')
# supportedEngine.set('type','diesel')
# supportedEngine = etree.SubElement(supportedEngines,'supportedEngine')
# supportedEngine.set('type','electric')
# supportedEngine = etree.SubElement(supportedEngines,'supportedEngine')
# supportedEngine.set('type','gasoline')
# supportedEngine = etree.SubElement(supportedEngines,'supportedEngine')
# supportedEngine.set('type','hybrid')
# parkingPlaceReferences = etree.SubElement(parkingFloor,'parkingPlaceReferences')
# for j in range(start,stop):
# parkingPlaceReference = etree.SubElement(parkingPlaceReferences,'parkingPlaceReference')
# parkingPlaceReference.set('id',('albertCuypGarage_parkingPlace' + str(j)))
# start = stop
# stop = start + num_spots_per_floor
return parkingFloors
parkingFloors = parking_floors(num_floors,num_parking_places_per_floor)
```
# parkingGarages
```
def parking_garages(name,x,y,crs,height,regulation):
parkingGarages = etree.Element('parkingGarages')
# parkingGarage = etree.SubElement(parkingGarages,'parkingGarage')
# parkingGarage.set('id',name)
# coordinates = etree.SubElement(parkingGarage,'coordinates')
# coordinates.set('x',x)
# coordinates.set('y',y)
# coordinates.set('crs',crs)
# dimensions = etree.SubElement(parkingGarage,'dimensions')
# dimensions.set('height',height)
# supportedEngines = etree.SubElement(parkingGarage,'supportedEngines')
# supportedEngine = etree.SubElement(supportedEngines,'supportedEngine')
# supportedEngine.set('type','diesel')
# supportedEngine = etree.SubElement(supportedEngines,'supportedEngine')
# supportedEngine.set('type','electric')
# supportedEngine = etree.SubElement(supportedEngines,'supportedEngine')
# supportedEngine.set('type','gasoline')
# supportedEngine = etree.SubElement(supportedEngines,'supportedEngine')
# supportedEngine.set('type','hybrid')
# parkingFloorReferences = etree.SubElement(parkingGarage,'parkingFloorReferences')
# parkingFloorReference = etree.SubElement(parkingFloorReferences,'parkingFloorReference')
# parkingFloorReference.set('id','albertCuypGarage_parkingFloor0')
# parkingFloorReference = etree.SubElement(parkingFloorReferences,'parkingFloorReference')
# parkingFloorReference.set('id','albertCuypGarage_parkingFloor1')
# operativeParkingRegulations = etree.SubElement(parkingGarage,'operativeParkingRegulations')
# operativeParkingRegulation = etree.SubElement(operativeParkingRegulations,'operativeParkingRegulation')
# operativeParkingRegulation.set('id',regulation)
# operativeParkingRegulation = etree.SubElement(operativeParkingRegulations,'operativeParkingRegulation')
# operativeParkingRegulation.set('id','amsterdam_permitholder_regulation')
return parkingGarages
parkingGarages = parking_garages('albertCuypGarage','4.886937','52.355543',crs,height_garage,'off_street_paid')
etree.dump(parkingGarages)
```
# onStreetParkingPlaces
```
def onstreet_parking_places(total_num_spots):
onStreetParkingPlaces = etree.Element('onStreetParkingPlaces')
parkingPlaceReferences = etree.SubElement(onStreetParkingPlaces,'parkingPlaceReferences')
for i in range(total_num_spots):
parkingPlaceReference = etree.SubElement(parkingPlaceReferences,'parkingPlaceReference')
parkingPlaceReference.set('id',('generatedParkingPlace_' + str(i)))
return onStreetParkingPlaces
onStreetParkingPlaces = onstreet_parking_places(num_spaces_on_street)
onStreetParkingPlaces
etree.dump(onStreetParkingPlaces)
```
# Write to XML file
```
with open('/Users/miloubisseling/Documents/UvA/DataScience/Thesis/datascience-thesis/data/simpark/generated/parking_may2.xml','wb') as f:
f.write(b'<?xml version="1.0" encoding="UTF-8"?><parking:parking xmlns:parking="ns://be.uhasselt.imob/parking">')
f.write(etree.tostring(parkingRegulations,pretty_print=True))
f.write(etree.tostring(parkingPlaces,pretty_print=True))
f.write(etree.tostring(parkingFloors,pretty_print=True))
f.write(etree.tostring(parkingGarages,pretty_print=True))
f.write(b'<parkingLots/>')
f.write(etree.tostring(onStreetParkingPlaces,pretty_print=True))
f.write(b'</parking:parking>')
```
| github_jupyter |
# Federated Learning with Clara Train SDK
Medical data is sensitive and needs to be protected. And even after anonymization processes,
it is often infeasible to collect and share patient data from several institutions in a centralised data lake.
This poses challenges for training machine learning algorithms, such as deep convolutional networks,
which require extensive and balanced data sets for training and validation.
Federated learning (FL) is a learning paradigm that sidesteps this difficulty:
instead of pooling the data, the machine learning process is executed locally at each participating institution and only intermediate model training updates are shared among them.
It thereby allows algorithms to be trained collaboratively without exchanging the underlying datasets, and neatly addresses the problems of data governance and privacy that arise when pooling medical data.
There are different FL communication architectures, such as the client-server (hub-and-spoke) approach, decentralized peer-to-peer architectures, or hybrid variants.
The FL tool in the Clara Train SDK uses a client-server architecture,
in which a federated server manages the aggregation and distribution of model updates, as shown below.
<br><br>
## Prerequisites
- None. This notebook explains FL.
## Resources
You can watch the free GTC 2021 talks covering Clara Train SDK:
- [Clara Train 4.0 - 101 Getting Started [SE2688]](https://gtc21.event.nvidia.com/media/Clara%20Train%204.0%20-%20101%20Getting%20Started%20%5BSE2688%5D/1_0qgfrql2)
- [Clara Train 4.0 - 201 Federated Learning [SE3208]](https://gtc21.event.nvidia.com/media/Clara%20Train%204.0%20-%20201%20Federated%20Learning%20%5BSE3208%5D/1_m48t6b3y)
- [What’s New in Clara Train 4.0 [D3114]](https://gtc21.event.nvidia.com/media/What%E2%80%99s%20New%20in%20Clara%20Train%204.0%20%5BD3114%5D/1_umvjidt2)
- [Take Medical AI from Concept to Production using Clara Imaging [S32482]](https://gtc21.event.nvidia.com/media/Take%20Medical%20AI%20from%20Concept%20to%20Production%20using%20Clara%20Imaging%20%20%5BS32482%5D/1_6bvnvyg7)
- [Federated Learning for Medical AI [S32530]](https://gtc21.event.nvidia.com/media/Federated%20Learning%20for%20Medical%20AI%20%5BS32530%5D/1_z26u15uk)
# Why use Clara FL
<br><img src="screenShots/WhyUseClaraFL.png" alt="Drawing" style="height: 400px;width: 600px"/><br>
### Resources
- Watch talk covering Clara Train SDK basics [S22563](https://developer.nvidia.com/gtc/2020/video/S22563)
Clara Train Getting Started: covers basics, BYOC, AIAA, AutoML
- GTC 2020 talk [Federated Learning for Medical Imaging: Collaborative AI without Sharing Patient Data](https://developer.nvidia.com/gtc/2020/video/s21536-vid)
- [Federated learning blog](https://blogs.nvidia.com/blog/2019/10/13/what-is-federated-learning/)
- [Federated learning blog at RSNA](https://blogs.nvidia.com/blog/2019/12/01/clara-federated-learning/)
# Overview
Federated Learning in Clara Train SDK uses a client-server architecture.
The image below gives you an overview.
For details about the components, please see our [documentation](https://docs.nvidia.com/clara/tlt-mi/clara-train-sdk-v4.0/nvmidl/additional_features/federated_learning.html?highlight=federated).
The key things to note are:
* A server is responsible for **managing training, keeping the best model, and aggregating gradients**.
* Clients are responsible for **training the local model** and sending updates (gradients) to the server.
* **No data from the dataset is shared** between clients or with the server.
* To ensure **privacy**, all communication with the server is secured.
* Additional privacy-preserving mechanisms can be enabled.
The figure below shows these concepts and how updates are communicated to the server.
<br><img src="screenShots/FLDetails.png" alt="Drawing" style="height: 400px;width: 600px"/><br>
## Server
The following diagram shows the server workflow:
<br><img src="screenShots/fl_server_workflow.png" alt="Drawing" style="width: 1000px"/><br>
A federated server is responsible for:
1. Initialising a global model at federated round 0
1. Sharing the global model with all clients
1. Synchronising model updates from multiple clients
1. Updating the global model when sufficient model updates have been received
## Client
The following diagram shows the client workflow:
<br><img src="screenShots/fl_client_workflow.png" alt="Drawing" style="height: 350px"/><br>
A federated client will:
1. Download the global model
1. Train the model with local training data
1. Upload `delta_w` (the difference between the updated model and the global model) to the server
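The aggregation the server performs over the uploaded `delta_w` updates can be sketched as a weighted average, in the style of the FedAvg algorithm. This is a minimal illustration only, not the Clara Train implementation; weighting each client by its local dataset size is one common choice, and the function and variable names here are hypothetical.

```python
import numpy as np

def federated_average(global_weights, client_deltas, client_sizes):
    """Aggregate client updates (delta_w) into a new global model.

    Each client's delta is weighted by its local dataset size,
    as in FedAvg-style averaging.
    """
    total = sum(client_sizes)
    aggregated = np.zeros_like(global_weights)
    for delta, size in zip(client_deltas, client_sizes):
        aggregated += (size / total) * delta
    return global_weights + aggregated

# Toy round: one global weight vector, two clients with different data sizes
w_global = np.array([0.5, -0.2])
deltas = [np.array([0.1, 0.0]), np.array([-0.1, 0.2])]
sizes = [100, 300]
print(federated_average(w_global, deltas, sizes))
```

In a real deployment the weights would be full network parameters rather than a toy vector, but the arithmetic of the round is the same.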
# FL Challenges
In order to run a federated learning experiment, you need to think about:
1. Software development:
    1. Security:
        1. Secure connections
        2. Authentication
        3. SSL certificates
    2. Deadlocks: clients join / die / re-join
    3. Unstable client-server connections
    4. Scaling: how to run a large FL experiment with 20 or 100 clients
    5. Heterogeneous sites: different sites have different data sizes, so how do we enable local training with multiple GPUs and assign weights to different clients?
    6. Audit trails: clients need to know who did what, and when
2. Logistics: <span style="color:red">(Most challenging)</span>
    1. An FL study is typically conducted through multiple experiments to tune hyperparameters.
    How do we synchronize these runs?
    2. Keeping track of experiments across sites.
    3. FL's most important feature is improving the off-diagonal metric:
    clients share results (validation metrics) but NOT the data.
    <span style="color:red"> This is the hardest to do </span> since you need to distribute the models from each client to the rest.
3. Research:
    1. How to aggregate model weights from each site
    2. Privacy for model weight sharing

Clara software engineers have taken care of the first and second bullets so researchers can focus on the third.
Moreover, in Clara Train V3.1, FL comes with a new provisioning tool that simplifies the process.
Let's start by defining some terminology used throughout the FL discussion:
- __Study__: An FL project with preset goals (e.g. train the EXAM model) and identified participants.
- __Org__: The organization that participates in the study.
- __Site__: The computing system that runs FL application as part of the study.
There are two kinds of sites: Server and Clients.
Each client belongs to an organization.
- __Provisioning Tool__: The tool used for provisioning all participants of the study.
- __FL Server__: An application responsible for client coordination based on FL federation rules and model aggregation.
- __FL Client__: An application running on a client site that performs model training with its local datasets and collaborates with the FL Server for federated study.
- __Admin Client__: An application running on a user’s machine that allows the user to perform FL system operations with a command line interface.
- __Lead IT__: The person responsible for provisioning the participants and coordinating IT personnel from all sites for the study.
The Lead IT is also responsible for the management of the Server.
- __Site IT__: The person responsible for the management of the Site of his/her organization.
- __Lead Researcher__: The scientist who works with Site Researchers to ensure the success of the study.
- __Site Researcher__: The scientist who works with the Lead Researcher to make sure the Site is properly prepared for the study.
NOTE: In certain projects, a person could play several of the above-mentioned roles.
<br><img src="screenShots/FLWorkFlow.png" alt="Drawing" style="height: 400px"/><br>
The diagram above shows the high-level steps of an FL study:
1. The Lead IT configures everything in a config.yaml file and runs provisioning, which generates zip packages for each client.
2. These packages contain everything an FL client needs (Docker startup scripts, SSL certificates, etc.) to start and complete the FL experiment.
3. Each client starts the Docker container and runs the FL client using the provided Startup Kit.
4. Similarly, the server site starts the Docker container and runs the FL server using the provided Startup Kit.
5. Finally, the Admin can either use Docker or pip-install the admin tool, which connects to the server and starts the FL experiment.
## With this in mind, we have created 3 sub-notebooks:
1. [Provisioning](Provisioning.ipynb) walks you through the configurations you set and how to run the tool
2. [Client](Client.ipynb) walks you through an FL client
3. [Admin](Admin.ipynb) walks you through how the FL admin data scientist would conduct the FL experiment once the server and clients are up and running
# 2.04 Figure 4
---
Author: Riley X. Brady
Date: 11/19/20
This plots the distribution of biogeochemical tracers at their statistical origin relative to their 1000 m crossing. See notebook `1.03` for the calculation of tracers at their memory time origin and `1.04` for finding their ambient mixed layer temperatures.
```
%load_ext lab_black
%load_ext autoreload
%autoreload 2
import numpy as np
import xarray as xr
import proplot as plot
import gsw
import PyCO2SYS as pyco2
print(f"numpy: {np.__version__}")
print(f"xarray: {xr.__version__}")
print(f"proplot: {plot.__version__}")
print(f"gsw: {gsw.__version__}")
print(f"pyCO2: {pyco2.__version__}")
REGIONS = ["drake", "crozet", "kerguelan", "campbell"]
```
First, we start with dissolved inorganic carbon. Our model outputs DIC with units of mmol m$^{-3}$. Here, we use *in situ* density to convert to $\mu$mol kg$^{-1}$ to account for density effects.
```
dicdata = []
for region_name in REGIONS:
    region = xr.open_dataset(f"../data/postproc/{region_name}.1000m.tracer.origin.nc")
    # Load into memory as a numpy array.
    dic = region.DIC.values
    S = region.S.values
    T = region.T.values
    z = region.z.values * -1
    # Convert to kg m-3
    rho = gsw.density.rho(S, T, z)
    dic = dic[~np.isnan(dic)]
    rho = rho[~np.isnan(rho)]
    conversion = 1000 * (1 / rho)
    dicdata.append(list(dic * conversion))
```
Next, we calculate potential density referenced to the surface ($\sigma_{0}$). We use this as a simple marker for water mass types.
```
sigmadata = []
for region_name in REGIONS:
    data = xr.open_dataset(f"../data/postproc/{region_name}.1000m.tracer.origin.nc")
    T = data.T.values
    T = T[~np.isnan(T)]
    S = data.S.values
    S = S[~np.isnan(S)]
    sigma0 = gsw.sigma0(S, T)
    sigmadata.append(list(sigma0))
```
Lastly, we calculate the potential pCO$_{2}$. This is the pCO$_{2}$ the particle would have if it were warmed or cooled to the ambient temperature it eventually experiences once it upwells into the mixed layer after its last 1000 m crossing. This assumes that there are no modifications to the carbon content of the particle due to air-sea gas exchange, biological processes, mixing, etc. It's a way to relate the pCO$_{2}$ at any point in its trajectory to the outgassing or uptake potential it would have upon reaching the surface ocean.
This works, since:
F$_{\mathrm{CO}_{2}}$ = k $\cdot$ S $\cdot$ (pCO$_{2}^{O}$ - pCO$_{2}^{A}$)
In other words, F$_{\mathrm{CO}_{2}}$ $\propto$ pCO$_{2}^{O}$, which holds particularly well in our simulation since we have a fixed atmospheric pCO$_{2}^{A}$ of 360 $\mu$atm.
```
pco2sigmadata = []
for region_name in REGIONS:
    # Load in tracers at origin and append on the ambient temperature calculated for each
    # particle.
    data = xr.open_dataset(f"../data/postproc/{region_name}.1000m.tracer.origin.nc")
    ambient = xr.open_dataarray(
        f"../data/postproc/{region_name}.ambient.temperature.nc"
    )
    data["T_ambient"] = ambient
    # Calculate in situ density and temperature
    rho = gsw.density.rho(data.S, data.T, data.z * -1)
    t_insitu = gsw.pt_from_t(data.S, data.T, 0, data.z * -1)
    # Use in situ density to convert all units from mmol m-3 to umol kg-1
    conversion = 1000 * (1 / rho)
    # PyCO2SYS is a Python implementation of the well-known MATLAB CO2SYS package.
    # We use it to diagnostically calculate in situ pCO2 along the particle trajectory.
    pCO2 = pyco2.CO2SYS_nd(
        data.ALK * conversion,
        data.DIC * conversion,
        1,
        2,
        salinity=data.S,
        temperature=t_insitu,
        pressure=data.z * -1,
        total_silicate=data.SiO3 * conversion,
        total_phosphate=data.PO4 * conversion,
    )["pCO2"]
    # pCO2 if brought to the ambient temperature when the given particle upwells to
    # 200 m. We're just using a known lab-derived relationship that pCO2 varies by
    # ~4% for a degree change in temperature.
    pCO2sigma = pCO2 * (1 + 0.0423 * (data.T_ambient - t_insitu))
    # Subtract out fixed atmospheric CO2 to get a gradient with the atmosphere.
    pCO2sigma -= 360
    pCO2sigma = pCO2sigma.dropna("nParticles").values
    pco2sigmadata.append(list(pCO2sigma))
```
## Visualize
I could probably modularize this and make it a lot nicer, but who cares? It's a plot for the paper and I like it. **NOTE**: I did a little bit of adjustment of text in Illustrator after the fact.
```
# suppresses some numpy warnings in the plot.
import warnings
warnings.filterwarnings("ignore")
plot.rc["abc.border"] = False
plot.rc.fontsize = 8
f, axs = plot.subplots(
ncols=3,
figsize=(6.9, 6.9 * 3.5 / 7.48),
share=0,
)
# Some global attributes for the plot
VIOLINCOLOR = "gray5"
EXTREMEMARKER = "."
EXTREMESIZE = 1.0
BOXWIDTH = 0.1
WHISKERLW = 0.5
CAPLW = 0.5
###########
# DIC PANEL
###########
axs[1].boxplot(
dicdata,
whis=(5, 95),
marker=EXTREMEMARKER,
markersize=EXTREMESIZE,
widths=BOXWIDTH,
fillcolor="black",
mediancolor="white",
boxcolor="black",
medianlw=1,
fillalpha=1,
caplw=CAPLW,
whiskerlw=WHISKERLW,
whiskercolor="black",
capcolor="black",
)
axs[1].violinplot(
dicdata,
fillcolor=VIOLINCOLOR,
fillalpha=1,
lw=0,
edgecolor=VIOLINCOLOR,
widths=0.65,
points=100,
)
axs[1].format(
yreverse=True,
xticklabels=REGIONS,
xtickminor=False,
xrotation=45,
ylabel="dissolved inorganic carbon [$\mu$mol kg$^{-1}$]",
)
#############
# SIGMA PANEL
#############
axs[0].boxplot(
sigmadata,
whis=(5, 95),
marker=EXTREMEMARKER,
markersize=EXTREMESIZE,
widths=BOXWIDTH,
fillcolor="black",
mediancolor="white",
boxcolor="black",
medianlw=1,
fillalpha=1,
caplw=CAPLW,
whiskerlw=WHISKERLW,
whiskercolor="black",
capcolor="black",
)
axs[0].violinplot(
sigmadata,
fillcolor=VIOLINCOLOR,
fillalpha=1,
lw=0,
edgecolor=VIOLINCOLOR,
widths=0.65,
points=100,
)
axs[0].area([0.5, 5], 26.5, 27.2, color="#edf8fb", alpha=0.25, zorder=0)
axs[0].area([0.5, 5], 27.2, 27.5, color="#b3cde3", alpha=0.25, zorder=0)
axs[0].area([0.5, 5], 27.5, 27.8, color="#8c96c6", alpha=0.25, zorder=0)
axs[0].area([0.5, 5], 27.8, 27.9, color="#88419d", alpha=0.25, zorder=0)
axs[0].text(4.05, 26.55, "SAMW", color="k")
axs[0].text(4.2, 27.25, "AAIW", color="k")
axs[0].text(4.25, 27.55, "CDW", color="k")
axs[0].text(4.1, 27.85, "AABW", rotation=0, color="k")
axs[0].format(
yreverse=True,
xlim=(0.5, 5.00),
ylim=(26.5, 27.9),
xticklabels=REGIONS,
xtickminor=False,
xrotation=45,
ylabel="potential density, $\sigma_{0}$ [kg m$^{-3}$]",
)
######################
# POTENTIAL PCO2 PANEL
######################
axs[2].boxplot(
pco2sigmadata,
whis=(5, 95),
marker=EXTREMEMARKER,
markersize=EXTREMESIZE,
widths=BOXWIDTH,
fillcolor="black",
mediancolor="white",
boxcolor="black",
medianlw=1,
fillalpha=1,
caplw=CAPLW,
whiskerlw=WHISKERLW,
whiskercolor="black",
capcolor="black",
)
axs[2].violinplot(
pco2sigmadata,
fillcolor=VIOLINCOLOR,
fillalpha=1,
lw=0,
edgecolor=VIOLINCOLOR,
widths=0.65,
points=100,
)
axs[2].format(
ylim=(-150, 530),
yreverse=False,
ylabel="potential pCO$_{2}$ gradient with atmosphere [$\mu$atm]",
xticklabels=REGIONS,
xtickminor=False,
xrotation=45,
)
axs[2].area([0.5, 4.5], -300, 0, color="blue2", alpha=0.3, zorder=0)
axs[2].area([0.5, 4.5], 0, 530, color="red2", alpha=0.3, zorder=0)
axs[2].text(4.7, -200, "into ocean", rotation=90, color="k")
axs[2].text(4.7, 80, "into atmosphere", rotation=90, color="k")
axs.format(abc=True, abcstyle="(a)", abcloc="ul")
```
# Day 9: Text Processing and Data Sample Clustering
https://github.com/Make-School-Courses/DS-2.1-Machine-Learning/blob/master/Notebooks/remote_simple_kmeans.ipynb
## Learning Outcomes
1. Transform text data into numerical vectors
2. Group or cluster the data samples we have
### By the end of class you'll be able to
- Define Bag-of-Words
- Write K-means code to group text data
## Text Vectorization
- The process to transform text data to numerical vectors
### Why do we need text vectorization?
Think back to when we learned about **Label Encoding** and **One-Hot Encoding**: We took categories (text) and transformed them into numerical values.
Text vectorization is similar in that we are taking text and turning it into something a machine can understand and manipulate by translating a word into a unique vector of numbers. For example, we could associate the unique vector (0, 1, 0, 1) with the word queen.
**Question: What are some other use cases for text vectorization?**
### Use Cases for Text Vectorization
- Count the number of unique words in each sentence (Bag-of-Words, we'll discuss this shortly!)
- Assign weights to each word in the sentence.
- Map each word to a number (a dictionary with words as keys and numbers as values) and represent each sentence as a sequence of numbers
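The third option above (mapping each word to a number) can be sketched in a few lines of plain Python. This is a minimal illustration; the tokenization (lowercasing and stripping trailing periods) is a deliberate simplification.

```python
sentences = ['This is the first sentence.',
             'This one is the second sentence.']

# Assign each unique (lowercased) word an integer id, in order of first appearance
vocab = {}
for s in sentences:
    for word in s.lower().rstrip('.').split():
        if word not in vocab:
            vocab[word] = len(vocab)

# Represent each sentence as a sequence of word ids
encoded = [[vocab[w] for w in s.lower().rstrip('.').split()] for s in sentences]
print(encoded)  # -> [[0, 1, 2, 3, 4], [0, 5, 1, 2, 6, 4]]
```

Note that sequences built this way preserve word order, unlike the Bag-of-Words representation discussed next.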
## Bag-of-Words Matrix
- Bag-of-Words (BoW) is a matrix where its rows are sentences and its columns are unique words seen across all of the sentences
### BoW Example
We have the following 4 sentences:
1. This is the first sentence.
2. This one is the second sentence.
3. And this is the third one.
4. Is this the first sentence?
Question: Given the above sentences, how many unique words are there?
A BoW matrix would look like the following, where 0 means the word does not appear in the sentence, and 1 means the word does appear in the sentence
<img src="../static/screenshots/day9-1.png">
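To see exactly what the matrix encodes, the BoW for the four example sentences can also be built by hand, without sklearn. This is a minimal sketch: tokenization is simplified to lowercasing and stripping punctuation, and columns are sorted alphabetically (which matches the column order sklearn's `CountVectorizer` produces).

```python
sentences = ['This is the first sentence.',
             'This one is the second sentence.',
             'And this is the third one.',
             'Is this the first sentence?']

def tokenize(s):
    # lowercase and strip the punctuation used in these examples
    return s.lower().replace('.', '').replace('?', '').split()

# Columns: unique words across all sentences, sorted alphabetically
vocab = sorted({w for s in sentences for w in tokenize(s)})

# Each row counts how often each vocabulary word appears in that sentence
bow = [[tokenize(s).count(w) for w in vocab] for s in sentences]

print(vocab)
for row in bow:
    print(row)
```

Sentences 1 and 4 contain exactly the same words, so their rows are identical: BoW discards word order and punctuation.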
### BoW Worksheet (7 min)
**Complete the following worksheet on your own:**
- Copy [this blank table](https://docs.google.com/presentation/d/1B7v33fPEwblhHYBCSrCvKRBZz776Df4T_t2jcPXt4k8/edit#slide=id.g74c1153bdd_0_15), and create the BoW matrix for the following sentences:
1. Data Science is the best.
2. Data Science has cool topics.
3. Are these the best topics?
4. Is Data Science the best track?
## BoW in Sklearn
We can write a function to return a BoW matrix
Below, we will see how we can build a BoW matrix by calling [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html?highlight=countvectorizer#sklearn-feature-extraction-text-countvectorizer) in sklearn
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
sentences = ['This is the first sentence.',
'This one is the second sentence.',
'And this is the third one.',
'Is this the first sentence?']
vectorizer = CountVectorizer()
# create a term-document matrix: assign each word a tuple:
# first number is the sentence, and the second is the unique number that corresponds to the word
# for example, if the word "one" is assigned the number 3,
# then the word "one" that is used in the third sentence is represented by the tuple (2,3)
X = vectorizer.fit_transform(sentences)
# from the term-document matrix, create the BoW matrix
print(X.toarray())
```
## How do we get unique words?
```
# Get the unique words
print(vectorizer.get_feature_names())
```
### Activity: Worksheet --> sklearn (7 min)
Use sklearn to take the 4 sentences you used in the worksheet and create the BoW matrix using sklearn
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
sentences = ["Data Science is the best.", "Data Science has cool topics.",
"Are these the best topics?", "Is Data Science the best track?"]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sentences)
# from the term-document matrix, create the BoW matrix
print(X.toarray())
print(vectorizer.get_feature_names())
```
## Clustering
- Clustering is an unsupervised learning method. A cluster is a **group of data points that are grouped together due to similarities in their features**
- This is very often used because we usually **don’t have labeled data**
- K-Means clustering is a popular clustering algorithms: it **finds a fixed number (k) of clusters in a set of data.**
- The goal of any cluster algorithm is to **find groups (clusters) in the given data**
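Under the hood, k-means alternates between two steps: assign each point to its nearest center, then move each center to the mean of its assigned points. The following is a minimal numpy sketch of that loop, for illustration only; sklearn's implementation adds smarter initialization (k-means++), convergence checks, and multiple restarts.

```python
import numpy as np

def simple_kmeans(X, k, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centers by picking k distinct data points at random
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: label each sample with its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each center to the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated blobs of two points each
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
centers, labels = simple_kmeans(X, k=2)
print(labels)
```

The first two points end up in one cluster and the last two in the other, regardless of which points are picked as initial centers in this tiny example.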
### Question: What are some use cases of clustering?
Examples of clustering:
- Clustering a movie dataset -> we expect movies with similar genres to be clustered in the same group
- News article clustering -> we want news related to science in one group and news related to sports in another
## Demo of K-means
```
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
# create a sample dataset with 300 data points and 4 cluster centers
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
# plot the data
plt.scatter(X[:, 0], X[:, 1])
# from figures import plot_kmeans_interactive
# plot_kmeans_interactive()
```
### Question: How many samples do we have in each cluster?
```
from sklearn.cluster import KMeans
# k-means algorithm where k = 4
km = KMeans(n_clusters=4)
# perform k-means clustering on the previous dataset
km.fit(X)
# print the 4 cluster centers
print(km.cluster_centers_)
```
### Answer:
```
import pandas as pd
print(km.predict(X))
#then obtain the histogram of the above list
dict(pd.Series(km.predict(X)).value_counts())
```
## How to choose the optimal number (K) of clusters?
- We could always choose a high number, but we may be wasting a lot of time and resources when a smaller number would give us the same results. How do we know the best K to pick so that we are running k-means as efficiently as possible?
#### Possible minimum and maximum values of K
- k=1 (one big cluster) -> the minimum number of clusters
- k=300 (the number of samples) -> the maximum number of clusters
## The Elbow Method
We can find the optimal K by utilizing the **Elbow Method**: a method that **assigns a score to each K**. When we plot these scores, we will get a line that looks like an arm bending at the elbow. The **K value that is closest to the "elbow" point of the graph is our optimal K**
Scores can be calculated two different ways:
1. **Distortion**: the average of the squared distances from each sample to its closest cluster center. Typically, the Euclidean distance metric is used. The lower the distortion, the better the score
- For numbers 1 to k, compute the following:
- Euclidean squared distance formula: $\sum_{j=1}^{k} (a_j-b_j)^2$
- For each sample, find the squared distance between the sample and all k cluster centers, and then pick the closest center (shortest distance)
- Take the average of the above
2. Inertia: the sum of squared distances of samples to their closest cluster center. The lower the inertia, the better the score
- We'll use the same Euclidean squared distance formula here as well.
Either scoring method is valid, and will give you the same optimal K value. Below we will look at how to implement both scoring methods:
## Distortion
```
import numpy as np
from scipy.spatial import distance
distortions = []
K = range(1, 10)
for k in K:
    # fit the k-means for a given k to the data (X)
    km = KMeans(n_clusters=k)
    km.fit(X)
    # distance.cdist finds the pairwise distances from each sample to each center
    # axis=1 allows us to keep the min for each sample, not just the min across the entire dataset
    # find the closest distance for each sample to a center, and take the average
    distortions.append(sum(np.min(distance.cdist(X, km.cluster_centers_, 'euclidean'), axis=1)) / X.shape[0])
# Plot the elbow: bx- = use a solid (-) blue (b) line,
# and mark the x-axis points with an x (x)
plt.plot(K, distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()
```
## Inertia
```
sum_of_squared_distances = []
K = range(1, 15)
for k in K:
    km = KMeans(n_clusters=k)
    km.fit(X)
    # inertia is an attribute of km!
    # https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans
    sum_of_squared_distances.append(km.inertia_)
# Plot the elbow
plt.plot(K, sum_of_squared_distances, 'bx-')
plt.xlabel('k')
plt.ylabel('Inertia')
plt.title('The Elbow Method showing the optimal k')
plt.show()
```
## Activity - Elbow Method (7 min)
Using the starter code below, show that 6 is the optimal K for clustering the data with k-means, using the elbow method. You can use either Distortion or Inertia.
```
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
Data, blob_y = make_blobs(n_samples=500, centers=6,
random_state=0, cluster_std=0.80)
# plot the data
plt.scatter(Data[:, 0], Data[:, 1])
import numpy as np
from scipy.spatial import distance
def get_k_distortion(data, max_range):
    distortions = []
    K = range(1, max_range)
    for k in K:
        km = KMeans(n_clusters=k)
        km.fit(data)
        distortions.append(sum(np.min(distance.cdist(data, km.cluster_centers_, 'euclidean'), axis=1)) / data.shape[0])
    plt.plot(K, distortions, 'bx-')
    plt.xlabel('k')
    plt.ylabel('Distortion')
    plt.title('The Elbow Method showing the optimal k')
    plt.show()
get_k_distortion(Data, 12)
import numpy as np
from scipy.spatial import distance
def get_k_inertia(data, max_range):
    sum_of_squared_distances = []
    K = range(1, max_range)
    for k in K:
        km = KMeans(n_clusters=k)
        km.fit(data)
        sum_of_squared_distances.append(km.inertia_)
    plt.plot(K, sum_of_squared_distances, 'bx-')
    plt.xlabel('k')
    plt.ylabel('Inertia')
    plt.title('The Elbow Method showing the optimal k')
    plt.show()
get_k_inertia(Data, 12)
```
## Activity: Combine Text Vectorization and Clustering the Texts (30 min)
**Complete the activity below in groups of 3**
- We want to cluster the given sentences
- To do this: We want to use both concepts we learned today:
- Vectorize the sentences (text-vectorization)
- Apply Kmeans to cluster our vectorized sentences
- **Note**: We want to remove stop words from our sentences (and, or, is, etc.). To do this, we add stop_words='english' to our call to CountVectorizer
- **Hint**: Look at the sentences in the starter code. How would you cluster the data if you were doing the clustering? Use that number as your K to start with.
#### My Solution - Do not use LOL
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
import pandas as pd
sentences = ["This little kitty came to play when I was eating at a restaurant.",
"Merley has the best squooshy kitten belly.",
"Google Translate app is incredible.",
"If you open 100 tab in google you get a smiley face.",
"Best cat photo I've ever taken.",
"Climbing ninja cat.",
"Impressed with google map feedback.",
"Key promoter extension for Google Chrome."]
def vectorize_sentences(sentences, clusters):
    # 1. Vectorize the sentences with Bag-of-Words
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(sentences)
    # 2. Cluster the vectorized sentences into the given number of groups with K-Means
    km = KMeans(n_clusters=clusters)
    km.fit(X)
    # print(km.cluster_centers_)
    # print(km.predict(X))
    # 3. Obtain which group S1 will be mapped to
    # print(dict(pd.Series(km.predict(X)).value_counts()))  # histogram of cluster assignments
    # 4. Do step 3 for all of S1 ... S8
    for sentence in sentences:
        y = vectorizer.transform([sentence])
        prediction = km.predict(y)
        print(prediction)
vectorize_sentences(sentences, 2)
```
### Milad's Solution
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
sentences = ["This little kitty came to play when I was eating at a restaurant.",
"Merley has the best squooshy kitten belly.",
"Google Translate app is incredible.",
"If you open 100 tab in google you get a smiley face.",
"Best cat photo I've ever taken.",
"Climbing ninja cat.",
"Impressed with google map feedback.",
"Key promoter extension for Google Chrome."]
def vectorize_sentences_solution(sentences):
    # remove stop words (and, or, is, ...) and instantiate the Bag-of-Words vectorizer
    vectorizer = CountVectorizer(stop_words='english')  # highly recommended: removes English stop words
    # transform sentences into numerical arrays (the BoW matrix)
    X = vectorizer.fit_transform(sentences)
    # print unique words (vocabulary)
    print(vectorizer.get_feature_names())
    print(X.shape)
    # We know there are two groups of sentences -> Group 1: cats | Group 2: Google
    true_k = 2
    model = KMeans(n_clusters=true_k, init='k-means++')  # init='k-means++' is the default initialization
    model.fit(X)
    # Testing our model: for a new sentence, let's see how the model clusters it.
    # First we convert the sentence to a numerical array
    Y = vectorizer.transform(["chrome browser to open."])  # vector that represents this sentence
    print('Y:')
    print(Y.toarray())  # most entries are 0; words seen during training (e.g. 'chrome', 'open') are 1
    prediction = model.predict(Y)
    print("Y Sentences Prediction", prediction)
    # Let's do the same for another sentence
    Y = vectorizer.transform(["My cat is hungry."])
    prediction = model.predict(Y)
    print(prediction)
    # Let's see the model prediction for the training sentences
    print("Sentences Prediction", model.predict(X))
vectorize_sentences_solution(sentences)
```
## Other clustering methods and comparison:
http://scikit-learn.org/stable/modules/clustering.html
## Resources:
- https://www.youtube.com/watch?v=FrmrHyOSyhE
- https://jakevdp.github.io/PythonDataScienceHandbook/05.11-k-means.html
## Summary
- In order to work with text, we should transform sentences into vectors of numbers
- We learned a method for text vectorization -> Bag-of-Words (CountVectorizer)
- We will learn TFIDF Vectorizer next session
- Clustering is an unsupervised learning algorithm that obtains groups based on the geometric positions of features
- K-means is one clustering method that separates the data into K number of clusters. The Elbow method can be used to find the optimal K
## Optional: Obtain the centers (centroids) of the two clusters: which words are closest to the centroids?
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
sentences = ["This little kitty came to play when I was eating at a restaurant.",
"Merley has the best squooshy kitten belly.",
"Google Translate app is incredible.",
"If you open 100 tab in google you get a smiley face.",
"Best cat photo I've ever taken.",
"Climbing ninja cat.",
"Impressed with google map feedback.",
"Key promoter extension for Google Chrome."]
# vectorizer = TfidfVectorizer(stop_words='english')
vectorizer = CountVectorizer(stop_words='english')
X = vectorizer.fit_transform(sentences)
print(vectorizer.get_feature_names())
print(X.shape)
true_k = 2
model = KMeans(n_clusters=true_k, init='k-means++')
model.fit(X)
# print('M:')
# print(model.cluster_centers_.argsort())
# print(model.cluster_centers_.argsort()[:, ::-1])
# print("Top terms per cluster:")
# order_centroids = model.cluster_centers_.argsort()[:, ::-1]
# terms = vectorizer.get_feature_names()
# for i in range(true_k):
# print("Cluster %d:" % i),
# for ind in order_centroids[i, :10]:
# print(' %s' % terms[ind]),
# print("\n")
# print("Prediction")
Y = vectorizer.transform(["chrome browser to open."])
print('Y:')
print(Y.toarray())
prediction = model.predict(Y)
print(prediction)
Y = vectorizer.transform(["My cat is hungry."])
prediction = model.predict(Y)
print(prediction)
# Lets see the model prediction for training docs
print(model.predict(X))
```
# Day 10: Naive Bayes
https://github.com/Make-School-Courses/DS-2.1-Machine-Learning/blob/master/Notebooks/remote_simple_naive_Bayes.ipynb
**During extended Day 9**
## Learning Objectives
By the end of today's class, you should be able to...
- Review Bayes' formula for conditional probability
- Apply Bayes' rule for text classification
- Write a Python function for text classification with Naive Bayes
## Text Classification
Text classification is the **process of attaching labels to bodies of text**, e.g., tax document, medical form, etc. based on the content of the text itself.
Think of your spam folder in your email. How does your email provider know that a particular message is spam or “ham” (not spam)?
#### Question: How do you tell if an email is spam or ham? What are the signs?
##### Followup: How does your process differ from a text classifier's?
## Review of conditional probability and its application on Text
- Assume this small dataset is given:
<img src="../static/screenshots/day10-1.png">
## Question: What is the probability that an email is spam? What is the probability that an email is ham?
$P(spam) = ?$
$P(ham) = ?$
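Assuming the dataset in the image matches the `spam_text`/`ham_text` lists used in the next activity (4 spam emails and 2 ham emails), the priors follow directly from the email counts:

```python
# Priors computed from the email counts in the activity lists below.
spam_text = ['Send us your password', 'review us', 'Send your password', 'Send us your account']
ham_text = ['Send us your review', 'review your password']

total = len(spam_text) + len(ham_text)
p_spam = len(spam_text) / total   # 4/6
p_ham = len(ham_text) / total     # 2/6

print(p_spam, p_ham)
```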
## Activity: Create spam and ham dictionary
- Create two dictionaries for spam and ham where keys are unique words and values are the frequency of each word
- Example: if the word "password" shows up 4 times in the text, then in the dictionary, the key would be "password" and the value would be 4
- Create the dictionaries programmatically using for loops
- Use the below text to create your dictionaries:
- spam_text= ['Send us your password', 'review us', 'Send your password', 'Send us your account']
- ham_text= ['Send us your review', 'review your password']
```
spam_text= ['Send us your password', 'review us', 'Send your password', 'Send us your account']
ham_text= ['Send us your review', 'review your password']
spam = {}
for i in spam_text:
for j in i.lower().split(' '):
if j not in spam:
spam[j] = 1
else:
spam[j] += 1
print("Spam Dictionary:")
print(spam)
print("\n")
ham = {}
for i in ham_text:
for j in i.lower().split(' '):
if j not in ham:
ham[j] = 1
else:
ham[j] += 1
print("Ham Dictionary:")
print(ham)
```
## Question: We know an email is spam, what is the probability that "password" is a word in it?
What is the frequency of "password" in a spam email?
Answer:
$P(password \mid spam) = 2/(3+3+3+2+1+1) = 2/13 \approx 15.38\%$
```
# or
p_password_given_spam = spam['password'] / sum(spam.values())
print(p_password_given_spam)
```
## Question: We know an email is ham, what is the probability that "password" is a word in it?
What is the frequency of "password" in a ham email?
Answer:
$P(password \mid ham) = 1/(1+1+2+1+2) = 1/7 \approx 14.29\%$
```
# or
p_password_given_ham = ham['password'] / sum(ham.values())
print(p_password_given_ham)
```
## Question: Assume we have seen the word "password" in an email, what is the probability that the email is spam?
- $P(spam \mid password) = ?$
- Hint: Use Bayes' rule and Law of Total Probability (LOTP):
- Bayes' Rule: $P(spam \mid password) = (P(password \mid spam) P(spam))/ P(password)$
- LOTP: $P(password) = P(password \mid spam) P(spam) + P(password \mid ham) P(ham)$
```
# Calculated by viewing our dataset
p_spam = spam['password'] / (spam['password'] + ham['password'])
p_ham = ham['password'] / (spam['password'] + ham['password'])
# LOTP
p_password = p_password_given_spam * p_spam + p_password_given_ham * p_ham
print("Probability of Password:", p_password)
# Bayes Rule
p_spam_given_password = p_password_given_spam * p_spam / p_password
print("Probability of spam given password:", p_spam_given_password)
```
#### End of Day 9 lecture; Day 10 continues below
## Naive Bayes Classifier (Math)
The Bayes Theorem: $P(spam | w_1, w_2, ..., w_n) = \frac{P(w_1, w_2, ..., w_n | spam)P(spam)}{P(w_1, w_2, ..., w_n)}$
**The Naive Bayes assumption is that each word is independent of all other words. In reality, this is not true!** But let's try it out for our spam/ham examples:
Applying Bayes' Rule, the above relationship becomes simple for both spam and ham with the Naive Bayes assumption:
$P(spam | w_1, w_2, ..., w_n) = \frac{P(w_1| spam)P(w_2| spam) ... P(w_n| spam)P(spam)}{P(w_1, w_2, ..., w_n)}$
$P(ham | w_1, w_2, ..., w_n) = \frac{P(w_1| ham)P(w_2| ham) ... P(w_n| ham)P(ham)}{P(w_1, w_2, ..., w_n)}$
The denominator $P(w_1, w_2, ..., w_n)$ is independent of spam and ham, so we can remove it to simplify our equations, as we only care about labeling, and proportional relationships:
$P(spam | w_1, w_2, ..., w_n) \propto P(w_1| spam)P(w_2| spam) ... P(w_n| spam)P(spam)$
$P(ham | w_1, w_2, ..., w_n) \propto P(w_1| ham)P(w_2| ham) ... P(w_n| ham)P(ham)$
This is **easier to express if we can write it as a summation. To do so, we can take the log of both sides of the equation**, because the **log of a product is the sum of the logs.**
$logP(spam | w_1, w_2, ..., w_n) \propto {\sum_{i=1}^{n}log P(w_i| spam)+ log P(spam)}$
$logP(ham | w_1, w_2, ..., w_n) \propto {\sum_{i=1}^{n}log P(w_i| ham)+ log P(ham)}$
##### Given the above, we can therefore, say that if:
${\sum_{i=1}^{n}log P(w_i| spam)+ log P(spam)} > {\sum_{i=1}^{n}log P(w_i| ham)+ log P(ham)}$
#### then that sentence is spam. Otherwise, the sentence is ham!
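The comparison above can be turned into a tiny classifier using the spam/ham word-frequency dictionaries built earlier. This is a hedged sketch (the `score` and `classify` helpers are ours, not from the lesson), with +1 Laplace smoothing added so unseen words don't produce log(0):

```python
import math

# Word-frequency dictionaries as built in the earlier activity.
spam = {'send': 3, 'us': 3, 'your': 3, 'password': 2, 'review': 1, 'account': 1}
ham = {'send': 1, 'us': 1, 'your': 2, 'password': 1, 'review': 2}
p_spam, p_ham = 4/6, 2/6  # priors from 4 spam and 2 ham emails

def score(words, counts, prior, vocab_size):
    total = sum(counts.values())
    s = math.log(prior)
    for w in words:
        # +1 Laplace smoothing avoids log(0) for words unseen in this class
        s += math.log((counts.get(w, 0) + 1) / (total + vocab_size))
    return s

def classify(sentence):
    words = sentence.lower().split()
    vocab = set(spam) | set(ham)
    spam_score = score(words, spam, p_spam, len(vocab))
    ham_score = score(words, ham, p_ham, len(vocab))
    return 'spam' if spam_score > ham_score else 'ham'

print(classify('send us your password'))  # → spam
```

Whichever log-score is larger decides the label, exactly as in the inequality above.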
## Pseudo-code for Naive Bayes for spam/ham dataset:
- Assume the following small dataset is given
- The first column is the labels of received emails
- The second column is the body of the email (sentences)
<img src="../static/screenshots/day10-2.png">
1. Based on the given dataset above, create the following two dictionaries:
Ham -> D_ham = {'Jos': 1, 'ask': 1, 'you': 1, ... }
Spam -> D_spam = {'Did': 1, 'you': 3, ... }
Each dictionary represents all words for the spam and ham emails and their frequency (as the value of dictionaries)
2. For any new given sentences, having $w_1$, $w_2$, ... $w_n$ words, assuming the sentence is ham, calculate the following:
$P(w_1| ham)$, $P(w_2| ham)$, ..., $P(w_n| ham)$
$log(P(w_1| ham))$, $log(P(w_2| ham))$, ..., $log(P(w_n| ham))$
then add them all together to create one value
3. Calculate what percentage of labels are ham -> $P(ham)$ -> then take the log -> $log(P(ham))$
4. Add the value from step (2) and (3)
5. Do Steps (2) - (4) again, but assume the given new sentence is spam
6. Compare the two values. The greater value indicates which label (class) the sentence should be given
## Activity: Apply the naive Bayes to spam/ham email dataset:
In groups of 3, complete the following activity
1. Please read this article, starting at the **Naive Bayes Assumption** section: https://pythonmachinelearning.pro/text-classification-tutorial-with-naive-bayes/
2. We will use the [Spam Dataset](https://render.githubusercontent.com/view/Datasets/spam.csv)
3. In the article, for the codeblock of the fit method, which line(s) of the method calculates the probability of ham and spam?
4. For the same fit method, which line(s) of the method calculates the spam and ham dictionaries?
5. In the article, for the codeblock of the predict method, which line(s) compares the scores of ham or spam based on log probabilities?
We will discuss as a class after working in groups.
## Activity: Find the Naive Bayes core parts in the SpamDetector Class
Assume we have written the SpamDetector class from the article. Train this model from the given [Spam Dataset](https://render.githubusercontent.com/view/Datasets/spam.csv), and use it to make a prediction!
Use the starter code below, and then fill in the TODOs in the main.
#### Hints:
- you will need to use train_test_split from sklearn to obtain your training and test (prediction) data
- You will need to instantiate your SpamDetector, fit the training data to it, predict using the test values, and then measure the accuracy
- To calculate accuracy: add up all the correct predictions divided by the total number of predictions
- Use the following code to get your data ready for transforming/manipulating:
```
data = pd.read_csv('Datasets/spam.csv',encoding='latin-1')
data = data.drop(["Unnamed: 2", "Unnamed: 3", "Unnamed: 4"], axis=1)
data = data.rename(columns={"v1":'label', "v2":'text'})
print(data.head())
tags = data["label"]
texts = data["text"]
X, y = texts, tags
```
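The accuracy formula from the hints above can be sketched as a small helper (illustrative only; the solution below computes it inline):

```python
# Accuracy = correct predictions / total predictions.
def accuracy(pred, true):
    correct = sum(1 for p, t in zip(pred, true) if p == t)
    return correct / len(pred)

print(accuracy(['spam', 'ham', 'ham'], ['spam', 'spam', 'ham']))  # → 2/3
```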
```
import os
import re
import string
import math
import pandas as pd
class SpamDetector(object):
"""Implementation of Naive Bayes for binary classification"""
# clean up our string by removing punctuation
def clean(self, s):
translator = str.maketrans("", "", string.punctuation)
return s.translate(translator)
# tokenize our string into words
def tokenize(self, text):
text = self.clean(text).lower()
return re.split(r"\W+", text)
# count up how many of each word appears in a list of words.
def get_word_counts(self, words):
word_counts = {}
for word in words:
word_counts[word] = word_counts.get(word, 0.0) + 1.0
return word_counts
def fit(self, X, Y):
"""Fit our classifier
Arguments:
X {list} -- list of document contents
y {list} -- correct labels
"""
self.num_messages = {}
self.log_class_priors = {}
self.word_counts = {}
self.vocab = set()
# Compute log class priors (the probability that any given message is spam/ham),
# by counting how many messages are spam/ham,
# dividing by the total number of messages, and taking the log.
n = len(X)
self.num_messages['spam'] = sum(1 for label in Y if label == 'spam')
self.num_messages['ham'] = sum(1 for label in Y if label == 'ham')
self.log_class_priors['spam'] = math.log(self.num_messages['spam'] / n )
self.log_class_priors['ham'] = math.log(self.num_messages['ham'] / n )
self.word_counts['spam'] = {}
self.word_counts['ham'] = {}
# for each (document, label) pair, tokenize the document into words.
for x, y in zip(X, Y):
c = 'spam' if y == 'spam' else 'ham'
counts = self.get_word_counts(self.tokenize(x))
# For each word, either add it to the vocabulary for spam/ham,
# if it isn’t already there, and update the number of counts.
for word, count in counts.items():
# Add that word to the global vocabulary.
if word not in self.vocab:
self.vocab.add(word)
if word not in self.word_counts[c]:
self.word_counts[c][word] = 0.0
self.word_counts[c][word] += count
# function to actually output the class label for new data.
def predict(self, X):
result = []
# Given a document...
for x in X:
counts = self.get_word_counts(self.tokenize(x))
spam_score = 0
ham_score = 0
# We iterate through each of the words...
for word, _ in counts.items():
if word not in self.vocab: continue
# ... and compute log p(w_i|Spam), and sum them all up. The same will happen for Ham
# add Laplace smoothing
# https://medium.com/syncedreview/applying-multinomial-naive-bayes-to-nlp-problems-a-practical-explanation-4f5271768ebf
log_w_given_spam = math.log( (self.word_counts['spam'].get(word, 0.0) + 1) / (self.num_messages['spam'] + len(self.vocab)) )
log_w_given_ham = math.log( (self.word_counts['ham'].get(word, 0.0) + 1) / (self.num_messages['ham'] + len(self.vocab)) )
spam_score += log_w_given_spam
ham_score += log_w_given_ham
# Then we add the log class priors...
spam_score += self.log_class_priors['spam']
ham_score += self.log_class_priors['ham']
# ... and check to see which score is bigger for that document.
# Whichever is larger, that is the predicted label!
if spam_score > ham_score:
result.append('spam')
else:
result.append('ham')
return result
# TODO: Fill in the below function to make a prediction,
# your answer should match the final number in the below output (0.9641)
if __name__ == '__main__':
pass
```
### Solution
```
if __name__ == '__main__':
from sklearn.model_selection import train_test_split
# import/clean/label your data
data = pd.read_csv('dataset/spam.csv',encoding='latin-1')
data = data.drop(["Unnamed: 2", "Unnamed: 3", "Unnamed: 4"], axis=1)
data = data.rename(columns={"v1":'label', "v2":'text'})
print(data.head())
tags = data["label"]
texts = data["text"]
# create texts and tags
X, y = texts, tags
print(len(X))
# transform text into numerical vectors
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# instantiate your SpamDetector
MNB = SpamDetector()
# fit to model, with the trained part of the dataset
MNB.fit(X_train.values, y_train.values)
print(MNB.num_messages)
# print(MNB.word_counts)
# make predictions
pred = MNB.predict(X_test.values)
true = y_test.values
# test for accuracy
accuracy = sum(1 for i in range(len(pred)) if pred[i] == true[i]) / float(len(pred))
print("{0:.4f}".format(accuracy))
```
## Activity: use sklearn CountVectorizer and MultinomialNB to spam email dataset
- Article: [Vectorization, Multinomial Naive Bayes Classifier and Evaluation](https://www.ritchieng.com/machine-learning-multinomial-naive-bayes-vectorization/)
As we've seen with previous topics, sklearn has a lot of built-in functionality that can save us from writing the code from scratch. We are going to solve the same problem as in the previous activity, but using sklearn!
For example, the SpamDetector class in the previous activity is an example of a **Multinomial Naive Bayes (MNB)** model. An MNB lets us know that each conditional probability we're looking at (i.e. $P(spam | w_1, w_2, ..., w_n)$) is a multinomial (several terms, polynomial) distribution, rather than another type of distribution.
##### In groups of 3, complete the activity by using the provided starter code and following the steps below:
1. Split the dataset
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
2. Vectorize the dataset : vect = CountVectorizer()
3. Transform training data into a document-term matrix (BoW): X_train_dtm = vect.fit_transform(X_train)
4. Build and evaluate the model
#### Hints:
- Remember how you prepared/cleaned/labeled the dataset, created texts and tags, and split the data into train vs test from the previous activity. You'll need to do so again here
- Review the [CountVectorizer documentation](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to see how you can transform text into numerical vectors
- Need more help? Check out this [MNB Vectorization](https://www.ritchieng.com/machine-learning-multinomial-naive-bayes-vectorization/) article and see what you can use from it.
```
## Solution
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.model_selection import train_test_split
# Prepare the dataset
data = pd.read_csv('dataset/spam.csv',encoding='latin-1')
data = data.drop(["Unnamed: 2", "Unnamed: 3", "Unnamed: 4"], axis=1)
data = data.rename(columns={"v1":'label', "v2":'text'})
print(data.head())
tags = data["label"]
texts = data["text"]
# create texts and tags
X, y = texts, tags
# split the data into train vs test
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# transform text into numerical vectors
vectorizer = CountVectorizer()
X_train_dtm = vectorizer.fit_transform(X_train)
print(X_train_dtm)
# instantiate Multinomial Naive Bayes model
nb = MultinomialNB()
# fit to model, with the trained part of the dataset
nb.fit(X_train_dtm, y_train)
X_test_dtm = vectorizer.transform(X_test)
# make prediction
y_pred_class = nb.predict(X_test_dtm)
# test accuracy of prediction
metrics.accuracy_score(y_test, y_pred_class)
```
# Day 11: TF-IDF and its application
https://github.com/Make-School-Courses/DS-2.1-Machine-Learning/blob/master/Notebooks/tf_idf_and_its_application.ipynb
### Learning Objectives
- Extract keywords from a corpus (collection of texts) using TF-IDF
- Explain what TF-IDF is
- Describe applications of keyword extraction algorithms and Word2Vec
### Review: What pre-processing steps are needed to apply a machine learning algorithm to text data?
1. The text must be parsed to words, called tokenization
2. Then the words need to be encoded as integers or floating point values
3. scikit-learn library offers easy-to-use tools to perform both tokenization and feature extraction of text data
## What is TF-IDF Vectorizer?
- Word counts are a good starting point, but are very basic
An alternative is to calculate word frequencies, and by far the most popular method is called TF-IDF.
**Term Frequency**: This summarizes how often a given word appears within a document
**Inverse Document Frequency**: This downscales words that appear a lot across documents
## Intuitive idea behind TF-IDF:
- If a word appears frequently in a document, it's important. Give the word a high score
- But if a word appears in many documents, it's not a unique identifier. Give the word a low score
<img src="../static/screenshots/day11-1.png">
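The intuition above corresponds to the classic formula $tfidf(w, d) = tf(w, d) \times log(N / df(w))$, where $N$ is the number of documents and $df(w)$ is how many documents contain the word. A hand computation on a toy corpus (note: sklearn's TfidfVectorizer uses a smoothed IDF variant plus L2 normalization, so its numbers will differ):

```python
import math

docs = [['the', 'sky', 'is', 'blue'],
        ['the', 'sun', 'is', 'bright']]
N = len(docs)

def tfidf(word, doc):
    tf = doc.count(word) / len(doc)               # term frequency within the document
    df = sum(1 for d in docs if word in d)        # how many documents contain the word
    return tf * math.log(N / df)                  # downscale words common across documents

print(tfidf('the', docs[0]))  # appears in every doc -> idf = log(2/2) = 0
print(tfidf('sky', docs[0]))  # 0.25 * log(2), distinctive for this document
```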
## Activity: Obtain the keywords from TF-IDF
1. First obtain the TF-IDF matrix for given corpus
2. Do column-wise addition
3. Sort the score from highest to lowest
4. Return the associated words based on step 3
```
import numpy as np
def sort_dic_by_value(dictionary):
key_list = np.array(list(dictionary.keys()))
val_list = np.array(list(dictionary.values()))
print(val_list, key_list)
ind_sorted_val = np.argsort(val_list)[::-1]
print(ind_sorted_val)
return key_list[ind_sorted_val]
D = {'bright': 0.7, 'blue': 0.86, 'sun': 0.75}
print(sort_dic_by_value(D))
```
## Using SKLearn
```
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
import numpy as np
def keyword_sklearn(docs, k):
vectorizer = TfidfVectorizer(stop_words='english')
tfidf_matrix = vectorizer.fit_transform(docs)
print(tfidf_matrix.toarray())
print(vectorizer.get_feature_names())
tfidf_scores = np.sum(tfidf_matrix, axis=0)
tfidf_scores = np.ravel(tfidf_scores)
return sorted(dict(zip(vectorizer.get_feature_names(), tfidf_scores)).items(), key=lambda x: x[1], reverse=True)[:k]
documents = ['The sky is blue', 'The sun is bright', 'The sun in the sky is bright', 'we can see the shining sun, the bright sun']
print(keyword_sklearn(documents, 3))
```
## Word2Vec
- Data scientists have assigned a vector to each English word
- This process of assigning vectors to each word is called Word2Vec
- In DS 2.4, we will learn how this Word2Vec task is accomplished
- Download this huge Word2Vec file: https://nlp.stanford.edu/projects/glove/
- Do not open the extracted file
## What is the property of vectors associated to each word in Word2Vec?
- Words with similar meanings would be closer to each other in Euclidean Space
- For example if $V_{pizza}$, $V_{food}$ and $V_{sport}$ represent the vector associated to pizza, food and sport then:
${\| V_{pizza} - V_{food}}\|$ < ${\| V_{pizza} - V_{sport}}\|$
## Activity: Obtain the vector associated with pizza in GloVe
```
import codecs
glove_dataset_path = '/Users/macbookpro15/Desktop/MakeSchool/Term6/DS2.1/classwork/dataset/glove.840B.300d.txt'
with codecs.open(glove_dataset_path, 'r') as f:
for c, r in enumerate(f):
sr = r.split()
if sr[0] == 'pizza':
print(sr[0])
print([float(i) for i in sr[1:]])
print(len([float(i) for i in sr[1:]]))
break
```
## Activity: Obtain the vectors associated with pizza, food and sport in GloVe
```
import codecs
with codecs.open(glove_dataset_path, 'r') as f:
ls = {}
for c, r in enumerate(f):
sr = r.split()
if sr[0] in ['pizza', 'food', 'sport']:
ls[sr[0]] =[float(i) for i in sr[1:]]
if len(ls) == 3:
break
print(ls)
```
## Activity: Show that the vector for pizza is closer to the vector for food than to the vector for sport
```
import numpy as np
np.linalg.norm(np.array(ls['pizza']) - np.array(ls['food']))
np.linalg.norm(np.array(ls['pizza']) - np.array(ls['sport']))
np.linalg.norm(np.array(ls['food']) - np.array(ls['sport']))
```
# Day 12: Ensemble Methods
- Ensemble Methods are machine learning algorithms that **rely on the "Wisdom of the Crowd"**
- Many weak algorithms working together do better than 1 big, monolithic algorithm
- There are two major groups of ensemble methods: **Random Forests** and **Gradient Boosted Trees**
<img src="../static/screenshots/day12-1.png">
## Random Forest
- Random Forest is a name for a **type of supervised learning**
- Random Forest is just a **collection of many small Decision Trees**
Assume we have a dataset with 10 columns, and thousands of rows. The Random forest algorithm would start by randomly selecting around 2/3 of the rows, and then randomly selecting 6 columns in the data
<img src="../static/screenshots/day12-2.png">
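The sampling step described above (a random subset of rows plus a random subset of columns for each tree) can be sketched with NumPy. The fractions follow the paragraph (about 2/3 of the rows, 6 of 10 columns); exact defaults vary by implementation, and the names here are illustrative:

```python
import numpy as np

rng = np.random.RandomState(42)
n_rows, n_cols = 1000, 10
data = rng.randn(n_rows, n_cols)

# bootstrap-style sample: ~2/3 of the rows (sampled with replacement here)
row_idx = rng.choice(n_rows, size=int(n_rows * 2 / 3), replace=True)
# random subset of 6 of the 10 columns (without replacement)
col_idx = rng.choice(n_cols, size=6, replace=False)

# the sub-table one tree in the forest would be trained on
subset = data[np.ix_(row_idx, col_idx)]
print(subset.shape)  # (666, 6)
```

Each tree in the forest gets its own independent row/column sample, which is what makes the trees diverse "weak learners".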
## Activity: Apply Random Forest to iris dataset
Read : https://www.datacamp.com/community/tutorials/random-forests-classifier-python
Finish the tutorial on your own, and then answer the following questions:
- What was the feature importance as described in the tutorial: clf.feature_importances_
- Change number of estimator (n_estimators) and compare the accuracy result
```
#Import scikit-learn dataset library
from sklearn import datasets
#Load dataset
iris = datasets.load_iris()
# print the label species(setosa, versicolor,virginica)
print(iris.target_names)
# print the names of the four features
print(iris.feature_names)
# print the iris data (top 5 records)
print(iris.data[0:5])
# print the iris labels (0:setosa, 1:versicolor, 2:virginica)
print(iris.target)
# Creating a DataFrame of given iris dataset.
import pandas as pd
data=pd.DataFrame({
'sepal length':iris.data[:,0],
'sepal width':iris.data[:,1],
'petal length':iris.data[:,2],
'petal width':iris.data[:,3],
'species':iris.target
})
data.head()
# Import train_test_split function
from sklearn.model_selection import train_test_split
X=data[['sepal length', 'sepal width', 'petal length', 'petal width']] # Features
y=data['species'] # Labels
# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) # 70% training and 30% test
#Import Random Forest Model
from sklearn.ensemble import RandomForestClassifier
#Create a Random Forest classifier
clf=RandomForestClassifier(n_estimators=100)
#Train the model using the training set
clf.fit(X_train,y_train)
y_pred=clf.predict(X_test)
#Import scikit-learn metrics module for accuracy calculation
from sklearn import metrics
# Model Accuracy, how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
clf.predict([[3, 5, 4, 2]])
from sklearn.ensemble import RandomForestClassifier
#Create a Random Forest classifier
clf=RandomForestClassifier(n_estimators=100)
#Train the model using the training set
clf.fit(X_train,y_train)
import pandas as pd
feature_imp = pd.Series(clf.feature_importances_,index=iris.feature_names).sort_values(ascending=False)
feature_imp
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Creating a bar plot
sns.barplot(x=feature_imp, y=feature_imp.index)
# Add labels to your graph
plt.xlabel('Feature Importance Score')
plt.ylabel('Features')
plt.title("Visualizing Important Features")
plt.legend()
plt.show()
# Import train_test_split function
from sklearn.model_selection import train_test_split
# Split dataset into features and labels
X=data[['petal length', 'petal width','sepal length']] # Removed feature "sepal width"
y=data['species']
# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=5) # 70% training and 30% test
from sklearn.ensemble import RandomForestClassifier
#Create a Random Forest classifier
clf=RandomForestClassifier(n_estimators=100)
#Train the model using the training set
clf.fit(X_train,y_train)
# prediction on test set
y_pred=clf.predict(X_test)
#Import scikit-learn metrics module for accuracy calculation
from sklearn import metrics
# Model Accuracy, how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
```
### Conclusion
In this tutorial, you have learned what random forests are, how they work, how to find important features, the comparison between random forests and decision trees, and their advantages and disadvantages. You have also learned about model building, evaluation, and finding important features in scikit-learn.
## Gradient Boosting
In boosting, the trees are built sequentially such that **each subsequent tree aims to reduce the errors of the previous tree**
The tree that grows next in the sequence **will learn from an updated version of the residuals**
- Residuals: The differences between observed and predicted values of data.
## Activity: We want to build a model for a prediction problem with Boosting method
- Consider the following data, where the years of experience is predictor variable (feature) and salary (in thousand dollars) is the target
<img src="../static/screenshots/day12-3.png">
- Create a scatter plot
- Using regression trees as base learners, we can create a model to predict the salary
- As the first step, obtain the mean value of target: F0 = np.mean(Y)
- Now build the simplest decision tree regressor with the Feature as X and Y-F0 as the target: Below is the code
### Solution
**Reference**: https://www.analyticsvidhya.com/blog/2018/09/an-end-to-end-guide-to-understand-the-math-behind-xgboost/
```
from sklearn.tree import DecisionTreeRegressor
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Feature, years of work experience
X = np.array([5, 7, 12, 23, 25, 28, 29, 34, 35, 40])
# Target, salary in thousands of dollars
Y = np.array([82, 80, 103, 118, 172, 127, 204, 189, 99, 166])
plt.scatter(X,Y)
f_0 = Y.mean()
f_0
y_minus_f_0 = Y - f_0
y_minus_f_0
# Milad's solution
from sklearn.tree import DecisionTreeRegressor
import numpy as np
# from sklearn.tree import export_graphviz
# import pydotplus
# Feature, years of work experience
X = np.array([5, 7, 12, 23, 25, 28, 29, 34, 35, 40])
# Target, salary in thousands of dollars
Y = np.array([82, 80, 103, 118, 172, 127, 204, 189, 99, 166])
# Compute the mean of target and subtract from target
F0 = np.mean(Y)
print(F0)
# Build and train the simple Regression Model with DT
regre = DecisionTreeRegressor(max_depth=1)
regre.fit(X.reshape(-1, 1), (Y-F0).reshape(-1, 1))
# Draw graph
# dot_data = export_graphviz(regre, out_file=None)
# graph = pydotplus.graph_from_dot_data(dot_data)
# graph.write_png('simple_reg_tree_step1.png')
```
<img src="../static/screenshots/day12-6.png">
- As the second step, obtain h1 as the output of the decision tree regressor with X as input, and set F1 = F0 + h1
- As the third step, build another simple decision tree regressor with X as the feature and Y - F1 as the target
- By repeating these steps, we can predict the salary Y from years of experience X
<img src="../static/screenshots/day12-4.png">
## Pseudocode for Boosting
<img src="../static/screenshots/day12-7.png">
```
from sklearn.tree import DecisionTreeRegressor
import numpy as np
# from sklearn.tree import export_graphviz
# import pydotplus
# Feature, years of work experience
X = np.array([5, 7, 12, 23, 25, 28, 29, 34, 35, 40])
# Target, salary in thousands of dollars
Y = np.array([82, 80, 103, 118, 172, 127, 204, 189, 99, 166])
# Compute the mean of target and subtract from target
F0 = np.mean(Y)
print(F0)
# M1 ---
regre = DecisionTreeRegressor(max_depth=1)
regre.fit(X.reshape(-1, 1), (Y-F0).reshape(-1, 1))
h1 = regre.predict(X.reshape(-1,1))
print("Prediction from model 1:", h1)
# M2 ---
F1 = F0 + h1
regre = DecisionTreeRegressor(max_depth=1)
regre.fit(X.reshape(-1, 1), (Y-F1).reshape(-1, 1))
h2 = regre.predict(X.reshape(-1,1))
print("Prediction from model 2:", h2)
# M3 ---
F2 = F1 + h2
regre = DecisionTreeRegressor(max_depth=1)
regre.fit(X.reshape(-1, 1), (Y-F2).reshape(-1, 1))
h3 = regre.predict(X.reshape(-1,1))
print("Prediction from model 3:", h3)
### THIS IS WRONG, Check Milad's implementation
def boosting(X, Y, model_count):
F = np.mean(Y)
predictions = []
for i in range(model_count):
regre = DecisionTreeRegressor(max_depth=1)
regre.fit(X.reshape(-1, 1), (Y-F).reshape(-1, 1))
h = regre.predict(X.reshape(-1,1))
F = F + h
predictions.append(h)
plt.plot(X, F)
plt.scatter(X, Y)
return predictions
predictions = boosting(X, Y, 3)
for i, prediction in enumerate(predictions):
print("Model ", i, ":", prediction)
```
## Milad's Implementation of Boosting
```
# Iteratively predict Y from X using Boosting method
from sklearn.tree import DecisionTreeRegressor
import numpy as np
import matplotlib.pyplot as plt
# Feature, years of work experience
X = np.array([5, 7, 12, 23, 25, 28, 29, 34, 35, 40])
# Target, salary in thousands of dollars
Y = np.array([82, 80, 103, 118, 172, 127, 204, 189, 99, 166])
iteration = 3
F = np.zeros((iteration+1, len(Y)))
for i in range(iteration):
regre = DecisionTreeRegressor(max_depth=1)
if i == 0:
F[i] = np.mean(Y)
regre.fit(X.reshape(-1, 1), (Y-F[i]).reshape(-1, 1))
# h[i] = regre.predict(X.reshape(-1, 1)), we do not need to define separate variable for h
F[i+1] = F[i] + regre.predict(X.reshape(-1, 1))
plt.plot(X, F[-1])
plt.scatter(X, Y)
```
## Optional: Pseudocode of Boosting Algorithm:
<img src="../static/screenshots/day12-8.png">
## Xgboost
XGBoost is short for eXtreme Gradient Boosting. It is
- **One of the best ways to create a model**
- An open-sourced tool
- Computation in C++
- R/python/Julia interface provided
- A variant of the gradient boosting machine
- Tree-based model
- The winning model for several Kaggle competitions
Apply XGBoost to the Boston housing dataset (https://www.datacamp.com/community/tutorials/xgboost-in-python)
Plot the feature importance
## Optional Reading: XGBoost's hyperparameters
At this point, before building the model, you should be aware of the tuning parameters that XGBoost provides. Well, there are a plethora of tuning parameters for tree-based learners in XGBoost and you can read all about them here. But the most common ones that you should know are:
**learning_rate**: step size shrinkage used to prevent overfitting. Range is [0,1]
**max_depth**: determines how deeply each tree is allowed to grow during any boosting round.
**subsample**: percentage of samples used per tree. Low value can lead to underfitting.
**colsample_bytree**: percentage of features used per tree. High value can lead to overfitting.
**n_estimators**: number of trees you want to build.
**objective**: determines the loss function to be used, e.g. reg:linear for regression problems, reg:logistic for classification problems with decision only, and binary:logistic for classification problems with probability output.
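These hyperparameters are typically collected in a parameter dictionary before training. This is a configuration sketch only; the values are arbitrary placeholders, not tuned recommendations:

```python
# Example XGBoost parameter dictionary using the hyperparameters above.
# Values are illustrative placeholders only.
params = {
    'objective': 'reg:linear',   # loss function (regression here)
    'learning_rate': 0.1,        # step size shrinkage, in [0, 1]
    'max_depth': 5,              # maximum depth of each tree per boosting round
    'subsample': 0.8,            # fraction of samples used per tree
    'colsample_bytree': 0.8,     # fraction of features used per tree
    'n_estimators': 100,         # number of trees to build
}
print(sorted(params))
```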
## Summary
- Ensemble Methods are machine learning algorithms that rely on the "Wisdom of the Crowd"
- Many weak algorithms working together do better than 1 big, monolithic algorithm
- In boosting, each tree will learn from an updated version of the residuals
- There are two major groups of ensemble methods:
- Random Forests
- Gradient Boosted Trees
- The Ensemble methods are able to obtain and rank the feature importance
## Resources:
- https://www.datacamp.com/community/tutorials/random-forests-classifier-python
- https://www.analyticsvidhya.com/blog/2018/09/an-end-to-end-guide-to-understand-the-math-behind-xgboost/
- https://www.datacamp.com/community/tutorials/xgboost-in-python
```
import matplotlib.pyplot as plt
import numpy as np
from enmspring.graphs_bigtraj import BackboneMeanModeAgent
from enmspring.kappa_mat import KMat
from enmspring.backbone_k import BackboneResidPlot, BackboneResidPlotWithNext
big_traj_folder = '/home/ytcdata/bigtraj_fluctmatch/500ns'
df_folder = '/home/yizaochen/Documents/dna_2021_drawzone/backbone_k'
draw_folder = '/home/yizaochen/Desktop/drawzone_temp'
```
### Part 1: Initialize s-agent
```
host = 'gcgc_21mer'
interval_time = 500
s_agent = BackboneMeanModeAgent(host, big_traj_folder, interval_time)
```
### Part 2: Initialize K-Matrix
```
kmat_agent = KMat(s_agent)
```
### Part 3: Initialize Plot Agent
```
plot_agent = BackboneResidPlot(host, s_agent, kmat_agent)
```
### Part 4: Set $m$ and $n$
```
m = 1
n = s_agent.n_node # s_agent.n_node
```
### Part 5: Set d-pair
```
d_pair = {
'A': {'atomname_i': "O4'", 'atomname_j': "O5'"},
'T': {'atomname_i': "O4'", 'atomname_j': "O5'"},
'G': {'atomname_i': "O4'", 'atomname_j': "O5'"},
'C': {'atomname_i': "O4'", 'atomname_j': "O5'"}
}
```
### Part 6: Plot
```
figsize = (4, 4)
ylims = (0, 12.5)
yticks = np.arange(0, 10.1, 2.5)
assist_hlines = np.arange(0, 10.1, 2.5)
fig, d_axes = plot_agent.plot_two_strands(figsize, m, n, ylims, d_pair, yticks, assist_hlines)
plt.tight_layout()
plt.savefig(f'/home/yizaochen/Desktop/drawzone_temp/{host}_backbone_resids.svg', dpi=300, transparent=False)
plt.show()
figsize = (4, 4)
ylims = (0, 32)
yticks = range(5, 31, 5)
assist_hlines = range(5, 31, 5)
fig, d_axes = plot_agent.plot_two_strands(figsize, m, n, ylims, d_pair, yticks, assist_hlines)
plt.tight_layout()
plt.savefig(f'/home/yizaochen/Desktop/drawzone_temp/{host}_backbone_resids.svg', dpi=300, transparent=False)
plt.show()
```
### Part 6: C2'-P
```
d_pair = {
'A': {'atomname_i': "C3'", 'atomname_j': "O2P"},
'T': {'atomname_i': "C3'", 'atomname_j': "O2P"},
'G': {'atomname_i': "C3'", 'atomname_j': "O2P"},
'C': {'atomname_i': "C3'", 'atomname_j': "O2P"}
}
plot_agent = BackboneResidPlotWithNext(host, s_agent, kmat_agent)
m = 1
n = s_agent.n_node # s_agent.n_node
figsize = (4, 4)
ylims = (0, 12)
yticks = np.arange(0, 10.1, 2)
assist_hlines = np.arange(0, 10.1, 2)
fig, d_axes = plot_agent.plot_two_strands(figsize, m, n, ylims, d_pair, yticks, assist_hlines)
plt.tight_layout()
plt.savefig(f'/home/yizaochen/Desktop/drawzone_temp/{host}_backbone_resids.svg', dpi=300, transparent=False)
plt.show()
```
### D-Pairs
```
d_pair = {
'A': {'atomname_i': "C1'", 'atomname_j': "N3"},
'T': {'atomname_i': "C1'", 'atomname_j': "O2"},
'G': {'atomname_i': "C1'", 'atomname_j': "N3"},
'C': {'atomname_i': "C1'", 'atomname_j': "O2"}
}
d_pair = {
'A': {'atomname_i': "C2'", 'atomname_j': "P"},
'T': {'atomname_i': "C2'", 'atomname_j': "P"},
'G': {'atomname_i': "C2'", 'atomname_j': "P"},
'C': {'atomname_i': "C2'", 'atomname_j': "P"}
}
```
```
import spotipy
import pandas as pd
import numpy as np
from spotipy.oauth2 import SpotifyClientCredentials #To access authorised Spotify data
lyrics = pd.read_csv("FINAL_v1.csv")
lyrics.head()
client_id = "3030c6beb3ea48d393146c3a4c559684"
client_secret = "f0e01dbe13784094b3bb532617f14765"
client_credentials_manager = SpotifyClientCredentials(client_id=client_id, client_secret=client_secret)
sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager) #spotify object to access API
name = "Pusha T" #chosen artist
result = sp.search(name) #search query
result_length = len(result['tracks']['items'])
result['tracks']['items'][2]['artists']
name = "Drake" #chosen artist
result = sp.search(name) #search query
result_length = len(result['tracks']['items'])
for i in range(result_length):
print (result['tracks']['items'][i]['artists'])
print ("\n")
result['tracks']['items'][0]['artists']
#Extract Artist's uri
artist_uri = result['tracks']['items'][0]['artists'][0]['uri']
#Pull all of the artist's albums
sp_albums = sp.artist_albums(artist_uri, album_type='album')
#Store artist's albums' names' and uris in separate lists
album_names = []
album_uris = []
for i in range(len(sp_albums['items'])):
album_names.append(sp_albums['items'][i]['name'])
album_uris.append(sp_albums['items'][i]['uri'])
album_names
album_uris
#Keep names and uris in same order to keep track of duplicate albums
def albumSongs(uri):
album = uri # album uri, used as the key for this album's dictionary
spotify_albums[album] = {} #Creates dictionary for that specific album
#Create keys-values of empty lists inside nested dictionary for album
spotify_albums[album]['album'] = [] #create empty list
spotify_albums[album]['track_number'] = []
spotify_albums[album]['id'] = []
spotify_albums[album]['name'] = []
spotify_albums[album]['uri'] = []
tracks = sp.album_tracks(album) #pull data on album tracks
for n in range(len(tracks['items'])): #for each song track
spotify_albums[album]['album'].append(album_names[album_count]) #append album name tracked via album_count
spotify_albums[album]['track_number'].append(tracks['items'][n]['track_number'])
spotify_albums[album]['id'].append(tracks['items'][n]['id'])
spotify_albums[album]['name'].append(tracks['items'][n]['name'])
spotify_albums[album]['uri'].append(tracks['items'][n]['uri'])
spotify_albums = {}
album_count = 0
for i in album_uris: #each album
albumSongs(i)
print("Album " + str(album_names[album_count]) + " songs has been added to spotify_albums dictionary")
album_count+=1 #Updates album count once all tracks have been added
def audio_features(album):
#Add new key-values to store audio features
spotify_albums[album]['acousticness'] = []
spotify_albums[album]['danceability'] = []
spotify_albums[album]['energy'] = []
spotify_albums[album]['instrumentalness'] = []
spotify_albums[album]['liveness'] = []
spotify_albums[album]['loudness'] = []
spotify_albums[album]['speechiness'] = []
spotify_albums[album]['tempo'] = []
spotify_albums[album]['valence'] = []
spotify_albums[album]['popularity'] = []
#create a track counter
track_count = 0
for track in spotify_albums[album]['uri']:
#pull audio features per track
features = sp.audio_features(track)
#Append to relevant key-value
spotify_albums[album]['acousticness'].append(features[0]['acousticness'])
spotify_albums[album]['danceability'].append(features[0]['danceability'])
spotify_albums[album]['energy'].append(features[0]['energy'])
spotify_albums[album]['instrumentalness'].append(features[0]['instrumentalness'])
spotify_albums[album]['liveness'].append(features[0]['liveness'])
spotify_albums[album]['loudness'].append(features[0]['loudness'])
spotify_albums[album]['speechiness'].append(features[0]['speechiness'])
spotify_albums[album]['tempo'].append(features[0]['tempo'])
spotify_albums[album]['valence'].append(features[0]['valence'])
#popularity is stored elsewhere
pop = sp.track(track)
spotify_albums[album]['popularity'].append(pop['popularity'])
track_count+=1
import time
#import numpy as np
sleep_min = 2
sleep_max = 5
start_time = time.time()
request_count = 0
for i in spotify_albums:
audio_features(i)
request_count+=1
if request_count % 5 == 0:
print(str(request_count) + " albums completed")
time.sleep(np.random.uniform(sleep_min, sleep_max))
print('Loop #: {}'.format(request_count))
print('Elapsed Time: {} seconds'.format(time.time() - start_time))
dic_df = {}
dic_df['album'] = []
dic_df['track_number'] = []
dic_df['id'] = []
dic_df['name'] = []
dic_df['uri'] = []
dic_df['acousticness'] = []
dic_df['danceability'] = []
dic_df['energy'] = []
dic_df['instrumentalness'] = []
dic_df['liveness'] = []
dic_df['loudness'] = []
dic_df['speechiness'] = []
dic_df['tempo'] = []
dic_df['valence'] = []
dic_df['popularity'] = []
for album in spotify_albums:
for feature in spotify_albums[album]:
dic_df[feature].extend(spotify_albums[album][feature])
len(dic_df['album'])
df = pd.DataFrame.from_dict(dic_df)
df
final_df = df.sort_values('popularity', ascending=False).drop_duplicates('name').sort_index()
print(len(final_df))
final_df
```
```
import numpy as np
import pandas as pd
import os
import matplotlib.cm as cm
from matplotlib import pyplot as plt
from scipy import stats as st
import seaborn as sns
from IPython.core.pylabtools import figsize
import numpy.random as r
from pylab import *
from matplotlib.gridspec import GridSpec
import sys
sys.path.insert(0, '../../utils')
import splicing_utils as spu
import single_cell_plots as scp
from single_cell_plots import *
plt.rcParams["axes.edgecolor"] = "black"
plt.rcParams["axes.linewidth"] = 1
plt.rcParams["axes.facecolor"] = 'white'
import matplotlib as mpl
import numpy as np
from matplotlib import pyplot as plt
mpl.rcParams["mathtext.fontset"] = "stix"
data_dir = '/mnt/c/Users/ferna/Desktop/SingleCell/data/'
%run -i '../../utils/load_data.py'
from sklearn.decomposition import PCA
from scipy.stats import spearmanr
import rpy2
import rpy2.robjects.packages as rpackages
import rpy2.robjects as robjects
import rpy2.robjects.numpy2ri as rpyn
from statsmodels.stats.multitest import multipletests
dt = rpy2.robjects.packages.importr('diptest')
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
from scipy.stats import hypergeom
def hyper_test(M, n, N, k):
hpd = hypergeom(M, n, N)
p_depleted = hpd.cdf(k)
p_enriched = hpd.sf(k-1)
return p_depleted, p_enriched
mpl.rcParams["mathtext.fontset"] = "stix"
```
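The `hyper_test` helper defined above returns one-sided depletion and enrichment p-values from the hypergeometric distribution. A quick sanity check, with the helper restated so the snippet is self-contained (the counts are arbitrary toy numbers, not from the datasets):

```python
from scipy.stats import hypergeom

def hyper_test(M, n, N, k):
    # M: population size, n: successes in the population,
    # N: sample size, k: successes observed in the sample
    hpd = hypergeom(M, n, N)
    return hpd.cdf(k), hpd.sf(k - 1)  # (P(depleted), P(enriched))

# Drawing 10 items from a pool of 100 containing 50 "successes",
# observing only 1 success is strong evidence of depletion.
p_depleted, p_enriched = hyper_test(100, 50, 10, 1)
print(p_depleted < 0.05, p_enriched > 0.5)
```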
# Table 1: Number of skipped exons events per dataset
We consider an alternative exon as observed if the following two conditions are true:
* At least one informative junction read is observed in at least 10% of the cells.
* The observations amount to an average $\hat{\Psi}$ between 0.05 and 0.95.
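These two criteria map directly onto the boolean filters used in the cells below. A toy sketch on a synthetic PSI table (the exon and cell counts are made up for illustration):

```python
import numpy as np
import pandas as pd

# toy PSI matrix: rows = exons, columns = cells (NaN = no informative reads)
rng = np.random.default_rng(1)
psi = pd.DataFrame(rng.uniform(0, 1, size=(5, 20)),
                   index=[f'exon_{i}' for i in range(5)])
psi.iloc[0, :19] = np.nan   # informative in <10% of cells -> fails criterion 1
psi.iloc[1, :] = 0.99       # mean PSI > 0.95 -> fails criterion 2

observed = psi.isna().mean(axis=1) <= 0.9            # >=10% of cells informative
intermediate = psi.mean(axis=1).between(0.05, 0.95)  # pandas mean skips NaN
selected = psi.index[observed & intermediate]
print(len(selected))
```

Only the three rows passing both filters survive, mirroring the `isna().mean(axis=1) <= 0.9` and mean-PSI bounds used on the real datasets below.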
```
print('Chen dataset')
print('Total observed exons:')
print(np.sum((chen_PSI.mean(axis=1) <= 0.95) & (chen_PSI.mean(axis=1) >= 0.05) & (chen_PSI.isna().mean(axis=1) <= 0.9)))
print('Mean reads per event')
print(round(chen_read_counts.loc[(chen_PSI.mean(axis=1) <= 0.95) & (chen_PSI.mean(axis=1) >= 0.05) & (chen_PSI.isna().mean(axis=1) <= 0.9)].mean(axis=1).mean(),1) )
print('Lescroart dataset')
print('Total observed exons:')
print(np.sum((lescroart_PSI.mean(axis=1) <= 0.95) & (lescroart_PSI.mean(axis=1) >= 0.05) & (lescroart_PSI.isna().mean(axis=1) <= 0.9)))
print('Mean reads per event')
print(round(lescroart_read_counts.loc[(lescroart_PSI.mean(axis=1) <= 0.95) & (lescroart_PSI.mean(axis=1) >= 0.05) & (lescroart_PSI.isna().mean(axis=1) <= 0.9)].mean(axis=1).mean(), 1))
print('Trapnell dataset')
print('Total observed exons:')
print(np.sum((trapnell_PSI.mean(axis=1) <= 0.95) & (trapnell_PSI.mean(axis=1) >= 0.05) & (trapnell_PSI.isna().mean(axis=1) <= 0.9)))
print('Mean reads per event')
print(round(trapnell_read_counts.loc[(trapnell_PSI.mean(axis=1) <= 0.95) & (trapnell_PSI.mean(axis=1) >= 0.05) & (trapnell_PSI.isna().mean(axis=1) <= 0.9)].mean(axis=1).mean(), 1))
print('Song dataset')
print('Total observed exons:')
print(np.sum((song_PSI.mean(axis=1) <= 0.95) & (song_PSI.mean(axis=1) >= 0.05) & (song_PSI.isna().mean(axis=1) <= 0.9)))
print('Mean reads per event')
print(round(song_read_counts.loc[(song_PSI.mean(axis=1) <= 0.95) & (song_PSI.mean(axis=1) >= 0.05) & (song_PSI.isna().mean(axis=1) <= 0.9)].mean(axis=1).mean(), 1))
print('Fletcher dataset')
print('Total observed exons:')
print(np.sum((das_PSI.mean(axis=1) <= 0.95) & (das_PSI.mean(axis=1) >= 0.05) & (das_PSI.isna().mean(axis=1) <= 0.9)))
print('Mean reads per event')
print(round(das_read_counts.loc[(das_PSI.mean(axis=1) <= 0.95) & (das_PSI.mean(axis=1) >= 0.05) & (das_PSI.isna().mean(axis=1) <= 0.9)].mean(axis=1).mean(), 1))
print('Shalek dataset')
print('Total observed exons:')
print(np.sum((shalek_PSI.mean(axis=1) <= 0.95) & (shalek_PSI.mean(axis=1) >= 0.05) & (shalek_PSI.isna().mean(axis=1) <= 0.9)))
print('Mean reads per event')
print(round(shalek_read_counts.loc[(shalek_PSI.mean(axis=1) <= 0.95) & (shalek_PSI.mean(axis=1) >= 0.05) & (shalek_PSI.isna().mean(axis=1) <= 0.9)].mean(axis=1).mean(), 1))
```
# Extent of bimodality among exons
In this notebook, we quantify how many events qualify as bimodal. For this, we use the following ad hoc quartile definition of bimodality:
An intermediate exon ($0.2 \leq \mu(\hat{\Psi}) \leq 0.8$) is bimodal if both of the following are true:
1. The first quartile of $\hat{\Psi}$ is less than or equal to 0.25.
2. The third quartile of $\hat{\Psi}$ is greater than or equal to 0.75.
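A sketch of this quartile rule on synthetic $\hat{\Psi}$ vectors (the helper name and the toy data are ours, for illustration only):

```python
import pandas as pd

def is_bimodal(psi_row):
    """Quartile definition: intermediate mean, Q1 <= 0.25 and Q3 >= 0.75."""
    mean = psi_row.mean()
    if not (0.2 <= mean <= 0.8):
        return False  # not an intermediate exon
    return psi_row.quantile(0.25) <= 0.25 and psi_row.quantile(0.75) >= 0.75

cells = 40
bimodal = pd.Series([0.0] * (cells // 2) + [1.0] * (cells // 2))  # half 0, half 1
unimodal = pd.Series([0.5] * cells)                               # all intermediate
print(is_bimodal(bimodal), is_bimodal(unimodal))
```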
### Shalek bimodal events
We check some of the exons reviewed in the Shalek et al., 2013 paper that describes bimodal splicing. None of the exons we checked are bimodal by the quartile definition.
```
shalek_int_genes, shalek_int_exons = spu.get_int_events(shalek_PSI, shalek_mrna_counts, 0.05)
shalek_int_exons = [x for x in shalek_int_exons if x in mrna_per_event_shalek.index]
shalek_PSI_filtered, shalek_PSI_mrna_filtered, shalek_good_exons, mrna_filtered, reads_filtered = filter_psi(shalek_PSI,
shalek_int_exons, mrna_per_event_shalek, shalek_coverage_tab['SJ_coverage'],
shalek_read_counts, 10,
cell_min=0.5)
good_cells = shalek_PSI_filtered.columns[shalek_PSI_filtered.isna().mean() <= 0.5]
shalek_PSI_good = shalek_PSI_filtered[good_cells]
shalek_paper_bin_exon = ['Acpp_AE', 'Clec7a_other_1', 'Irgm1_1', 'Irf7_1', 'Clec4n_2', 'Sat1_nmdSE_1', 'Zfp207',
'Abi1_7', 'Srsf7_nmdSE_1', 'Psmg4_1']
for event in [x for x in shalek_paper_bin_exon if x in shalek_PSI_filtered.index]:
print(event)
print(shalek_PSI.loc[event, mrna_per_event_shalek.columns].quantile(0.25))
print(shalek_PSI.loc[event, mrna_per_event_shalek.columns].quantile(0.75))
```
This is using the MISO calculations of $\hat{\Psi}$ from the Shalek paper.
```
sra_meta = pd.read_csv(data_dir + 'shalek/shalek.meta.tab', sep='\t', index_col=0)
shalek_PSI_paper = pd.read_csv(data_dir + 'shalek/shalek.psi_paper.csv', index_col = 0)
shalek_counts_paper = pd.read_csv(data_dir + 'shalek/shalek.expression_paper.csv', index_col = 0)
shalek_pca = pd.read_csv(data_dir + 'shalek/shalek.pca.tab', index_col = 0, sep='\t')
shalek_cells = shalek_PSI_paper.columns[1:18]
for event in shalek_paper_bin_exon:
gen = event.split('_')[0]
if len(shalek_PSI_paper.loc[shalek_PSI_paper.gene==gen, shalek_cells].index) >= 1:
print(event)
print(shalek_PSI_paper.loc[shalek_PSI_paper.loc[shalek_PSI_paper.gene==gen, shalek_cells].index[0],
shalek_cells].quantile(0.25))
print(shalek_PSI_paper.loc[shalek_PSI_paper.loc[shalek_PSI_paper.gene==gen, shalek_cells].index[0],
shalek_cells].quantile(0.75))
```
### Song bimodal events
Here we compare the modality of observations as determined in the Song et al., 2017 paper versus the quartile definition of bimodality. We also report the proportion of exons defined as bimodal (either by Song et al. or by the quartile definition) that are selected by the 10-mRNA filter.
```
song_modalities = pd.read_csv(data_dir + 'song/song_event_modalities.tab', sep='\t', index_col = 0)
song_outrigger = pd.read_csv(data_dir + 'song/song_outrigger_psi.tab', index_col = 0)
outrigger_iPSC = [x for x in list(song_outrigger.index) if x[0] == 'P']
outrigger_NPC = [x for x in list(song_outrigger.index) if ((x[0] == 'C') or (x[0] == 'N'))]
outrigger_MN = [x for x in list(song_outrigger.index) if x[0] == 'M']
def get_proportions(PSI_tab, subpop, mrna_counts, mrna_per_event, read_counts, coverage_tab, psi_int = 0.1, mrna_min = 10,
outrigger_tab = '', outrigger_modalities = '', outrigger_subpop = '',
outrigger_cell_type = '', cell_min=0.5):
PSI_filtered = process_subpop(subpop, PSI_tab, mrna_counts,
mrna_per_event, read_counts, coverage_tab['SJ_coverage'], psi_int,
mrna_min, cell_min=cell_min)
PSI_observed = (PSI_tab[subpop].isna().mean(axis = 1) <= (1-cell_min))
PSI_int = (np.abs(0.5-PSI_tab[subpop].mean(axis = 1)) <= (0.5-psi_int))
PSI_unfiltered = PSI_tab.loc[(PSI_observed & PSI_int), subpop]
filtered_bimodal_by_song = 0
unfiltered_bimodal_by_song = 0
assert np.all([x in PSI_unfiltered.index for x in PSI_filtered[0].index])
suma = len(PSI_unfiltered.index)
suma_f = len(PSI_filtered[0].index)
suma_unfiltered = 0
suma_filtered_all = 0
suma_filtered = 0
suma_outrigger = 0
suma_outrigger_total = 0
for evento in PSI_unfiltered.index:
q25_unfiltered = PSI_unfiltered.loc[evento].quantile(0.25)
q75_unfiltered = PSI_unfiltered.loc[evento].quantile(0.75)
if (q25_unfiltered <= 0.25) and (q75_unfiltered >= 0.75):
suma_unfiltered += 1
if len(outrigger_modalities) >= 1:
if evento in outrigger_modalities.index:
outrigger_event = outrigger_modalities.loc[evento, 'song_annotation']
if outrigger_modalities.loc[evento, outrigger_cell_type] == 'bimodal':
unfiltered_bimodal_by_song += 1
if evento in PSI_filtered[0].index:
q25_filtered_all = PSI_tab.loc[evento, subpop].quantile(0.25)
q75_filtered_all = PSI_tab.loc[evento, subpop].quantile(0.75)
if (q25_filtered_all <= 0.25) and (q75_filtered_all >= 0.75):
suma_filtered_all += 1
q25_filtered = PSI_filtered[0].loc[evento].quantile(0.25)
q75_filtered = PSI_filtered[0].loc[evento].quantile(0.75)
if (q25_filtered <= 0.25) and (q75_filtered >= 0.75):
suma_filtered += 1
if len(outrigger_modalities) >= 1:
if evento in outrigger_modalities.index:
outrigger_event = outrigger_modalities.loc[evento, 'song_annotation']
if outrigger_modalities.loc[evento, outrigger_cell_type] == 'bimodal':
filtered_bimodal_by_song += 1
if len(outrigger_modalities) >= 1:
if evento in outrigger_modalities.index:
suma_outrigger_total += 1
outrigger_event = outrigger_modalities.loc[evento, 'song_annotation']
q25_outrigger = outrigger_tab.loc[outrigger_subpop, outrigger_event].quantile(0.25)
q75_outrigger = outrigger_tab.loc[outrigger_subpop, outrigger_event].quantile(0.75)
if (q25_outrigger <= 0.25) and (q75_outrigger >= 0.75):
suma_outrigger += 1
p_deplete = hyper_test(suma, suma_unfiltered, suma_f, suma_filtered)[0]
print('Total intermediate exons: ' + str(suma))
print('Total intermediate exons that are bimodal by quartile definition: ' + str(suma_unfiltered))
print('#################')
if len(outrigger_modalities) >= 1:
print('Intermediate exons observed in Song et al.: ' + str(suma_outrigger_total))
print('Song et al. exons that are bimodal by Song et al. definition: ' + str(unfiltered_bimodal_by_song))
print(str(round(100*unfiltered_bimodal_by_song/suma_outrigger_total))+'%')
print('Song et al. exons that are bimodal by quartile definition: ' + str(suma_outrigger))
print(str(round(100*suma_outrigger/suma_outrigger_total))+'%')
print('#################')
print('Selected exons: ' + str(suma_f))
print('Selected exons that are bimodal by Song et al. definition: ' + str(filtered_bimodal_by_song))
print(str(round(100*filtered_bimodal_by_song/suma_f))+'%')
print('Selected exons that are bimodal by quartile definition: ' + str(suma_filtered))
print(str(round(100*suma_filtered/suma_f))+'%')
print('#################')
print('Depletion of quartile bimodality in selected exon (p-value): ' + str(p_deplete))
return (suma, suma_f, suma_unfiltered, suma_filtered_all, suma_filtered, suma_outrigger, suma_outrigger_total,
p_deplete, filtered_bimodal_by_song, unfiltered_bimodal_by_song)
```
#### Percent bimodal, from the supplementary data from Song et al., 2017
Modalities as reported in **GSE85908_modalities_tidy.csv.gz**; see process_data.ipynb for the code that we used to map the exon IDs. We only use skipped exons that are reported both by Song et al. and by us.
##### Percent of shared skipped exons that are reported as bimodal in the Song et al. paper
(All shared exons)
```
print('Total exons in iPSC: ' + str((len(song_modalities) - (song_modalities.iPSC == '-').sum())))
print('Bimodal exons in iPSC: ' + str((song_modalities.iPSC == 'bimodal').sum()))
print('Percentage bimodal in iPSC: ' + str(round(((song_modalities.iPSC == 'bimodal').sum() / (len(song_modalities) - (song_modalities.iPSC == '-').sum())) * 100, 2))+'%')
print('Total exons in NPC: ' + str((len(song_modalities) - (song_modalities.NPC == '-').sum())))
print('Bimodal exons in NPC: ' + str((song_modalities.NPC == 'bimodal').sum()))
print('Percentage bimodal in NPC: ' + str(round(((song_modalities.NPC == 'bimodal').sum() / (len(song_modalities) - (song_modalities.NPC == '-').sum())) * 100, 2))+'%')
print('Total exons in MN: ' + str((len(song_modalities) - (song_modalities.MN == '-').sum())))
print('Bimodal exons in MN: ' + str((song_modalities.MN == 'bimodal').sum()))
print('Percentage bimodal in MN: ' + str(round(((song_modalities.MN == 'bimodal').sum() / (len(song_modalities) - (song_modalities.MN == '-').sum())) * 100, 2))+'%')
```
##### Percent bimodal in intermediate exons
We break it down as: 1) percent bimodal among intermediate exons according to Song et al.; 2) percent bimodal according to the quartile definition; 3) percent bimodal after filtering according to Song et al.; 4) percent bimodal after filtering according to the quartile definition.
Here we do the analysis per cell type as labeled in the original paper. The reason is that: 1) the modality definitions in Song et al. are assigned to cell types; and 2) we were unable to match cells individually, as their IDs are not matched to the accession run IDs in the supplementary materials of Song et al.
We observe that:
* The percent of exons that are labeled as bimodal by Song et al is larger for intermediate exons than for all exons.
* The percent of selected intermediate exons that are bimodal decreases after filtering, under both definitions.
```
get_proportions(song_PSI, song_iPSC, song_mrna_counts, mrna_per_event_song, song_read_counts, song_coverage_tab,
psi_int = 0.2, mrna_min = 10,
outrigger_tab = song_outrigger, outrigger_modalities = song_modalities, outrigger_subpop = outrigger_iPSC,
outrigger_cell_type = 'iPSC')
get_proportions(song_PSI, song_NPC, song_mrna_counts, mrna_per_event_song, song_read_counts, song_coverage_tab,
psi_int = 0.2, mrna_min = 10,
outrigger_tab = song_outrigger, outrigger_modalities = song_modalities, outrigger_subpop = outrigger_NPC,
outrigger_cell_type = 'NPC')
get_proportions(song_PSI, song_MN, song_mrna_counts, mrna_per_event_song, song_read_counts, song_coverage_tab,
psi_int = 0.2, mrna_min = 10,
outrigger_tab = song_outrigger, outrigger_modalities = song_modalities, outrigger_subpop = outrigger_MN,
outrigger_cell_type = 'MN')
```
##### The issue with Song et al.'s definition of bimodality
There are some events that are called bimodal, but they certainly look more like unimodal exons with an inflation of 0 and 1 values; or, at the very least, like included/excluded modalities.
```
song_modalities.loc['GNAS_6'].song_annotation
plt.hist(song_outrigger.loc[outrigger_iPSC,
'exon:chr20:57470667-57470739:+@exon:chr20:57473996-57474040:+@exon:chr20:57478586-57478640:+'].dropna())
plt.title('GNAS_6 in iPS cells', fontsize=20)
plt.ylabel('frequency', fontsize=20)
plt.xlabel('cell $\Psi$', fontsize=20)
plt.show()
```
#### Table of bimodality by given cell type
Comparison of the intermediate exons that are bimodal by the quartile definition, before and after filtering.
```
def test_dset(PSI_tab, subpop_list, mrna_counts, mrna_per_event, read_counts, coverage_tab, psi_int = 0.1, mrna_min = 10,
cell_min=0.5):
total = []
bimodal_total = []
filtered = []
bimodal_filtered = []
pvals = []
for subpop in subpop_list:
proport = get_proportions(PSI_tab, subpop, mrna_counts, mrna_per_event, read_counts, coverage_tab,
psi_int = psi_int, mrna_min = mrna_min, cell_min=cell_min)
total.append(proport[0])
bimodal_total.append(proport[2])
filtered.append(proport[1])
bimodal_filtered.append(proport[4]) # filtered
#bimodal_filtered.append(proport[3]) # unfiltered
pvals.append(proport[7])
return total, bimodal_total, filtered, bimodal_filtered, pvals
total = []
bimodal_total = []
filtered = []
bimodal_filtered = []
pvals =[]
cells_in_cluster = []
cells_in_cluster.extend([len(x) for x in [chen_ES2i, chen_ES, chen_Epi, chen_MN]])
cells_in_cluster.extend([len(x) for x in [lescroart_E6, lescroart_E7]])
cells_in_cluster.extend([len(x) for x in [trapnell_M00, trapnell_M24, trapnell_M48, trapnell_M72]])
cells_in_cluster.extend([len(x) for x in [song_iPSC, song_NPC, song_MN]])
cells_in_cluster.append(len(shalek_PSI.columns))
cells_in_cluster.append(len(das_PSI.columns))
chen_processed = test_dset(chen_PSI, [chen_ES2i, chen_ES, chen_Epi, chen_MN], chen_mrna_counts, mrna_per_event_chen,
chen_read_counts, chen_coverage_tab, psi_int = 0.2, mrna_min = 10)
lescroart_processed = test_dset(lescroart_PSI, [lescroart_E6, lescroart_E7], lescroart_mrna_counts, mrna_per_event_lescroart,
lescroart_read_counts, lescroart_coverage_tab, psi_int = 0.2, mrna_min = 10)
trapnell_processed = test_dset(trapnell_PSI, [trapnell_M00, trapnell_M24, trapnell_M48, trapnell_M72],
trapnell_mrna_counts, mrna_per_event_trapnell, trapnell_read_counts,
trapnell_coverage_tab, psi_int = 0.2, mrna_min = 10)
song_processed = test_dset(song_PSI, [song_iPSC, song_NPC, song_MN], song_mrna_counts, mrna_per_event_song,
song_read_counts, song_coverage_tab, psi_int = 0.2, mrna_min = 10)
shalek_processed = test_dset(shalek_PSI, [shalek_PSI.columns], shalek_mrna_counts, mrna_per_event_shalek,
shalek_read_counts, shalek_coverage_tab, psi_int = 0.2, mrna_min = 10)
das_processed = test_dset(das_PSI, [das_PSI.columns], das_mrna_counts, mrna_per_event_das,
das_read_counts, das_coverage_tab, psi_int = 0.2, mrna_min = 10)
total.extend(chen_processed[0])
total.extend(lescroart_processed[0])
total.extend(trapnell_processed[0])
total.extend(song_processed[0])
total.extend(shalek_processed[0])
total.extend(das_processed[0])
bimodal_total.extend(chen_processed[1])
bimodal_total.extend(lescroart_processed[1])
bimodal_total.extend(trapnell_processed[1])
bimodal_total.extend(song_processed[1])
bimodal_total.extend(shalek_processed[1])
bimodal_total.extend(das_processed[1])
filtered.extend(chen_processed[2])
filtered.extend(lescroart_processed[2])
filtered.extend(trapnell_processed[2])
filtered.extend(song_processed[2])
filtered.extend(shalek_processed[2])
filtered.extend(das_processed[2])
bimodal_filtered.extend(chen_processed[3])
bimodal_filtered.extend(lescroart_processed[3])
bimodal_filtered.extend(trapnell_processed[3])
bimodal_filtered.extend(song_processed[3])
bimodal_filtered.extend(shalek_processed[3])
bimodal_filtered.extend(das_processed[3])
pvals.extend(chen_processed[4])
pvals.extend(lescroart_processed[4])
pvals.extend(trapnell_processed[4])
pvals.extend(song_processed[4])
pvals.extend(shalek_processed[4])
pvals.extend(das_processed[4])
pval_adj = multipletests(pvals, method='fdr_bh')[1]
cell_type = ['mES2i', 'mES', 'Epi', 'Motor neuron', 'Heart E6.75', 'Heart E7.25',
'Myoblast 00h', 'Myoblast 24h', 'Myoblast 48h', 'Myoblast 72h',
'iPSC', 'NPC', 'Motor neuron', 'BMDC', 'Olfactory neurons']
dataset = ['Chen']*4 + ['Lescroart']*2 + ['Trapnell']*4 + ['Song']*3 + ['Shalek'] + ['Fletcher']
organism = ['Mouse']*6+['Human']*7+['Mouse']*2
bimodality_table = pd.DataFrame()
bimodality_table['dataset'] = dataset
bimodality_table['organism'] = organism
bimodality_table['cell_type'] = cell_type
bimodality_table['cells_in_cluster'] = cells_in_cluster
bimodality_table['total_exons'] = total
bimodality_table['bimodal_exons'] = bimodal_total
bimodality_table['bimodal_percent'] = [str(round(x*100, 2))+'%' for x in np.array(bimodal_total)/np.array(total)]
bimodality_table['selected_exons'] = filtered
bimodality_table['selected_bimodal'] = bimodal_filtered
bimodality_table['bimodal_percent_selected'] = [str(round(x*100, 2))+'%' for x in np.array(bimodal_filtered)/np.array(filtered)]
bimodality_table['p-val'] = pvals
bimodality_table['p-val (adj)'] = pval_adj
bimodality_table
bimodality_table.to_csv('selected_int_exons.csv', index=False, header=True)
```
#### Table of bimodality by given agglomerative clustering
```
total = []
bimodal_total = []
filtered = []
bimodal_filtered = []
pvals =[]
cells_in_cluster = []
cells_in_cluster.extend([len(x) for x in [chen_clust_filter[x][0].columns for x in range(len(chen_clust_filter))]])
cells_in_cluster.extend([len(x) for x in [lescroart_E6, lescroart_E7]])
cells_in_cluster.extend([len(x) for x in [trapnell_clust_filter[x][0].columns for x in range(len(trapnell_clust_filter))]])
cells_in_cluster.extend([len(x) for x in [song_clust_filter[x][0].columns for x in range(len(song_clust_filter))]])
cells_in_cluster.append(len(shalek_PSI.columns))
cells_in_cluster.append(len(das_PSI.columns))
chen_processed = test_dset(chen_PSI, [chen_clust_filter[x][0].columns for x in range(len(chen_clust_filter))],
chen_mrna_counts, mrna_per_event_chen,
chen_read_counts, chen_coverage_tab, psi_int = 0.2, mrna_min = 10)
lescroart_processed = test_dset(lescroart_PSI, [lescroart_E6, lescroart_E7], lescroart_mrna_counts, mrna_per_event_lescroart,
lescroart_read_counts, lescroart_coverage_tab, psi_int = 0.2, mrna_min = 10)
trapnell_processed = test_dset(trapnell_PSI, [trapnell_clust_filter[x][0].columns for x in range(len(trapnell_clust_filter))],
trapnell_mrna_counts, mrna_per_event_trapnell, trapnell_read_counts,
trapnell_coverage_tab, psi_int = 0.2, mrna_min = 10)
song_processed = test_dset(song_PSI, [song_clust_filter[x][0].columns for x in range(len(song_clust_filter))],
song_mrna_counts, mrna_per_event_song,
song_read_counts, song_coverage_tab, psi_int = 0.2, mrna_min = 10)
shalek_processed = test_dset(shalek_PSI, [shalek_PSI.columns], shalek_mrna_counts, mrna_per_event_shalek,
shalek_read_counts, shalek_coverage_tab, psi_int = 0.2, mrna_min = 10)
das_processed = test_dset(das_PSI, [das_PSI.columns], das_mrna_counts, mrna_per_event_das,
das_read_counts, das_coverage_tab, psi_int = 0.2, mrna_min = 10)
total.extend(chen_processed[0])
total.extend(lescroart_processed[0])
total.extend(trapnell_processed[0])
total.extend(song_processed[0])
total.extend(shalek_processed[0])
total.extend(das_processed[0])
bimodal_total.extend(chen_processed[1])
bimodal_total.extend(lescroart_processed[1])
bimodal_total.extend(trapnell_processed[1])
bimodal_total.extend(song_processed[1])
bimodal_total.extend(shalek_processed[1])
bimodal_total.extend(das_processed[1])
filtered.extend(chen_processed[2])
filtered.extend(lescroart_processed[2])
filtered.extend(trapnell_processed[2])
filtered.extend(song_processed[2])
filtered.extend(shalek_processed[2])
filtered.extend(das_processed[2])
bimodal_filtered.extend(chen_processed[3])
bimodal_filtered.extend(lescroart_processed[3])
bimodal_filtered.extend(trapnell_processed[3])
bimodal_filtered.extend(song_processed[3])
bimodal_filtered.extend(shalek_processed[3])
bimodal_filtered.extend(das_processed[3])
pvals.extend(chen_processed[4])
pvals.extend(lescroart_processed[4])
pvals.extend(trapnell_processed[4])
pvals.extend(song_processed[4])
pvals.extend(shalek_processed[4])
pvals.extend(das_processed[4])
pval_adj = multipletests(pvals, method='fdr_bh')[1]
cell_type = ['ES', 'Epi, early', 'Epi, late', 'Neuron, early', 'Neuron, late', 'Heart E6.75', 'Heart E7.25',
'Myoblast 00h', 'Myoblast 24h', 'Myoblast 48h', 'Myoblast 72h',
'iPSC', 'NPC', 'Motor neuron', 'BMDC', 'Olfactory neurons']
dataset = ['Chen']*5 + ['Lescroart']*2 + ['Trapnell']*4 + ['Song']*3 + ['Shalek'] + ['Fletcher']
organism = ['Mouse']*7+['Human']*7+['Mouse']*2
bimodality_table = pd.DataFrame()
bimodality_table['dataset'] = dataset
bimodality_table['organism'] = organism
bimodality_table['cell_type'] = cell_type
bimodality_table['cells_in_cluster'] = cells_in_cluster
bimodality_table['total_exons'] = total
bimodality_table['bimodal_exons'] = bimodal_total
bimodality_table['bimodal_percent'] = [str(round(x*100, 2))+'%' for x in np.array(bimodal_total)/np.array(total)]
bimodality_table['selected_exons'] = filtered
bimodality_table['selected_bimodal'] = bimodal_filtered
bimodality_table['bimodal_percent_selected'] = [str(round(x*100, 2))+'%' for x in np.array(bimodal_filtered)/np.array(filtered)]
bimodality_table['p-val'] = pvals
bimodality_table['p-val (adj)'] = pval_adj
bimodality_table
bimodality_table.to_csv('selected_int_exons_agg_clusters.csv', index=False, header=True)
```
```
import pandas as pd
import numpy as np
data = pd.read_csv('features_30_sec.csv')
data.head()
dataset = data[data['label'].isin(['blues', 'classical', 'country', 'disco', 'hiphop', 'jazz', 'metal', 'pop', 'reggae', 'rock'])].drop(['filename','length'],axis=1)
dataset.iloc[:, :-15].head()
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn import preprocessing
y = LabelEncoder().fit_transform(dataset.iloc[:,-1])
y.shape
X = StandardScaler().fit_transform(np.array(dataset.iloc[:, :-15], dtype = float))
X.shape
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.30,random_state=42)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
import tensorflow.keras as keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout, BatchNormalization
from keras.layers import LSTM
# Model 1 (FNN)
# defining our model (a softmax classifier, kept under its original name regression_model_1)
n_cols = dataset.iloc[:, :-15].shape[1]
def regression_model_1():
# structure of our model
model = Sequential()
model.add(Dense(256, activation='relu', input_shape=(n_cols,)))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(64, activation='relu',))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(10,activation='softmax'))
# compile model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
return model
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
earlystop = EarlyStopping(patience=10)
learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy',
patience=10,
verbose=1,
)
Callbacks = [earlystop, learning_rate_reduction]
#build the model
# model_1 = regression_model_1()
#fit the model
# model_1.fit(X_train,y_train, callbacks=Callbacks , validation_data=(X_test,y_test) ,epochs=100,batch_size=150)
# model_1.save('Keras_reg_30sec_10.h5')
from tensorflow.keras.models import load_model
model = load_model('Keras_reg_30sec_10.h5')
# predict_classes was removed in newer Keras versions; take the argmax instead
predictions = np.argmax(model.predict(X_test), axis=-1)
score = model.evaluate(X_test,y_test, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))
print(y_test)
print(predictions)
from sklearn.metrics import confusion_matrix
cf_matrix = confusion_matrix(y_test,predictions)
print(cf_matrix)
import seaborn as sns
%matplotlib inline
classes=['blues', 'classical', 'country', 'disco', 'hiphop', 'jazz', 'metal', 'pop', 'reggae', 'rock']
sns.heatmap(cf_matrix, annot=True , cmap='Blues',xticklabels=classes,yticklabels=classes)
```
##### Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Optimizers in TensorFlow Probability
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Optimizers_in_TensorFlow_Probability"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Optimizers_in_TensorFlow_Probability.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Optimizers_in_TensorFlow_Probability.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Optimizers_in_TensorFlow_Probability.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Abstract
In this colab we demonstrate how to use the various optimizers implemented in TensorFlow Probability.
## Dependencies & Prerequisites
```
#@title Import { display-mode: "form" }
%matplotlib inline
import contextlib
import functools
import os
import time
import numpy as np
import pandas as pd
import scipy as sp
from six.moves import urllib
from sklearn import preprocessing
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
```
## BFGS and L-BFGS Optimizers
Quasi-Newton methods are a popular class of first-order optimization algorithms. These methods use a positive-definite approximation to the exact Hessian to find the search direction.
The Broyden-Fletcher-Goldfarb-Shanno
algorithm ([BFGS](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm)) is a specific implementation of this general idea. It is the method of choice for medium-sized problems
where the gradient is continuous everywhere (e.g. linear regression with an $L_2$ penalty).
[L-BFGS](https://en.wikipedia.org/wiki/Limited-memory_BFGS) is a limited-memory version of BFGS that is useful for solving larger problems whose Hessian matrices cannot be computed at a reasonable cost or are not sparse. Instead of storing fully dense $n \times n$ approximations of Hessian matrices, they only save a few vectors of length $n$ that represent these approximations implicitly.
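As a rough sketch of that implicit representation (plain NumPy, not TFP's implementation), L-BFGS applies the inverse-Hessian approximation to a gradient with the classic two-loop recursion over the stored pairs $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$:

```python
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """Two-loop recursion: multiply the gradient by the implicit
    inverse-Hessian approximation encoded by the stored (s, y) pairs."""
    q = np.asarray(grad, dtype=float).copy()
    alphas = []
    # First loop: newest pair to oldest.
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        rho = 1.0 / np.dot(y, s)
        a = rho * np.dot(s, q)
        q -= a * y
        alphas.append(a)
    # Scale by a common initial-Hessian heuristic.
    if s_hist:
        q *= np.dot(s_hist[-1], y_hist[-1]) / np.dot(y_hist[-1], y_hist[-1])
    # Second loop: oldest pair to newest.
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        rho = 1.0 / np.dot(y, s)
        b = rho * np.dot(y, q)
        q += (a - b) * s
    return -q  # the quasi-Newton descent direction
```

With no stored pairs this reduces to plain gradient descent; each stored pair refines the curvature estimate without ever forming an $n \times n$ matrix.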
```
#@title Helper functions
CACHE_DIR = os.path.join(os.sep, 'tmp', 'datasets')
def make_val_and_grad_fn(value_fn):
@functools.wraps(value_fn)
def val_and_grad(x):
return tfp.math.value_and_gradient(value_fn, x)
return val_and_grad
@contextlib.contextmanager
def timed_execution():
t0 = time.time()
yield
dt = time.time() - t0
print('Evaluation took: %f seconds' % dt)
def np_value(tensor):
"""Get numpy value out of possibly nested tuple of tensors."""
if isinstance(tensor, tuple):
return type(tensor)(*(np_value(t) for t in tensor))
else:
return tensor.numpy()
def run(optimizer):
"""Run an optimizer and measure it's evaluation time."""
optimizer() # Warmup.
with timed_execution():
result = optimizer()
return np_value(result)
```
### L-BFGS on a simple quadratic function
```
# Fix numpy seed for reproducibility
np.random.seed(12345)
# The objective must be supplied as a function that takes a single
# (Tensor) argument and returns a tuple. The first component of the
# tuple is the value of the objective at the supplied point and the
# second value is the gradient at the supplied point. The value must
# be a scalar and the gradient must have the same shape as the
# supplied argument.
# The `make_val_and_grad_fn` decorator helps transform a function
# returning the objective value into one that returns both the value
# and the gradient. It works in both eager and graph mode.
dim = 10
minimum = np.ones([dim])
scales = np.exp(np.random.randn(dim))
@make_val_and_grad_fn
def quadratic(x):
return tf.reduce_sum(scales * (x - minimum) ** 2, axis=-1)
# The minimization routine also requires you to supply an initial
# starting point for the search. For this example we choose a random
# starting point.
start = np.random.randn(dim)
# Finally, an optional argument called `tolerance` lets you choose the
# stopping point of the search. The tolerance specifies the maximum
# (supremum) norm of the gradient vector at which the algorithm terminates.
# If you don't have a specific need for higher or lower accuracy, leaving
# this parameter unspecified (and hence using the default value of 1e-8)
# should be good enough.
tolerance = 1e-10
@tf.function
def quadratic_with_lbfgs():
return tfp.optimizer.lbfgs_minimize(
quadratic,
initial_position=tf.constant(start),
tolerance=tolerance)
results = run(quadratic_with_lbfgs)
# The optimization results contain multiple pieces of information. The most
# important fields are: 'converged' and 'position'.
# Converged is a boolean scalar tensor. As the name implies, it indicates
# whether the norm of the gradient at the final point was within tolerance.
# Position is the location of the minimum found. It is important to check
# that converged is True before using the value of the position.
print('L-BFGS Results')
print('Converged:', results.converged)
print('Location of the minimum:', results.position)
print('Number of iterations:', results.num_iterations)
```
### Same problem with BFGS
```
@tf.function
def quadratic_with_bfgs():
return tfp.optimizer.bfgs_minimize(
quadratic,
initial_position=tf.constant(start),
tolerance=tolerance)
results = run(quadratic_with_bfgs)
print('BFGS Results')
print('Converged:', results.converged)
print('Location of the minimum:', results.position)
print('Number of iterations:', results.num_iterations)
```
## Linear Regression with L1 penalty: Prostate Cancer data
Example from the Book: *The Elements of Statistical Learning, Data Mining, Inference, and Prediction* by Trevor Hastie, Robert Tibshirani and Jerome Friedman.
Note that this is an optimization problem with an L1 penalty.
### Obtain dataset
```
def cache_or_download_file(cache_dir, url_base, filename):
"""Read a cached file or download it."""
filepath = os.path.join(cache_dir, filename)
if tf.io.gfile.exists(filepath):
return filepath
if not tf.io.gfile.exists(cache_dir):
tf.io.gfile.makedirs(cache_dir)
url = url_base + filename
print("Downloading {url} to {filepath}.".format(url=url, filepath=filepath))
urllib.request.urlretrieve(url, filepath)
return filepath
def get_prostate_dataset(cache_dir=CACHE_DIR):
"""Download the prostate dataset and read as Pandas dataframe."""
url_base = 'http://web.stanford.edu/~hastie/ElemStatLearn/datasets/'
return pd.read_csv(
cache_or_download_file(cache_dir, url_base, 'prostate.data'),
delim_whitespace=True, index_col=0)
prostate_df = get_prostate_dataset()
```
### Problem definition
```
np.random.seed(12345)
feature_names = ['lcavol', 'lweight', 'age', 'lbph', 'svi', 'lcp',
'gleason', 'pgg45']
# Normalize features
scaler = preprocessing.StandardScaler()
prostate_df[feature_names] = pd.DataFrame(
    scaler.fit_transform(
        prostate_df[feature_names].astype('float64')))
# select training set
prostate_df_train = prostate_df[prostate_df.train == 'T']
# Select features and labels
features = prostate_df_train[feature_names]
labels = prostate_df_train[['lpsa']]
# Create tensors
feat = tf.constant(features.values, dtype=tf.float64)
lab = tf.constant(labels.values, dtype=tf.float64)
dtype = feat.dtype
regularization = 0 # regularization parameter
dim = 8 # number of features
# We pick a random starting point for the search
start = np.random.randn(dim + 1)
def regression_loss(params):
"""Compute loss for linear regression model with L1 penalty
Args:
params: A real tensor of shape [dim + 1]. The zeroth component
is the intercept term and the rest of the components are the
beta coefficients.
Returns:
The mean square error loss including L1 penalty.
"""
params = tf.squeeze(params)
intercept, beta = params[0], params[1:]
pred = tf.matmul(feat, tf.expand_dims(beta, axis=-1)) + intercept
mse_loss = tf.reduce_sum(
tf.cast(
tf.losses.mean_squared_error(y_true=lab, y_pred=pred), tf.float64))
l1_penalty = regularization * tf.reduce_sum(tf.abs(beta))
total_loss = mse_loss + l1_penalty
return total_loss
```
### Solving with L-BFGS
Fit using L-BFGS. Even though the L1 penalty introduces derivative discontinuities, L-BFGS still works quite well in practice.
```
@tf.function
def l1_regression_with_lbfgs():
return tfp.optimizer.lbfgs_minimize(
make_val_and_grad_fn(regression_loss),
initial_position=tf.constant(start),
tolerance=1e-8)
results = run(l1_regression_with_lbfgs)
minimum = results.position
fitted_intercept = minimum[0]
fitted_beta = minimum[1:]
print('L-BFGS Results')
print('Converged:', results.converged)
print('Intercept: Fitted ({})'.format(fitted_intercept))
print('Beta: Fitted {}'.format(fitted_beta))
```
### Solving with Nelder Mead
The [Nelder Mead method](https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method) is one of the most popular derivative-free minimization methods. This optimizer doesn't use gradient information and makes no assumptions about the differentiability of the target function; it is therefore appropriate for non-smooth objective functions, for example optimization problems with an L1 penalty.
For an optimization problem in $n$-dimensions it maintains a set of
$n+1$ candidate solutions that span a non-degenerate simplex. It successively modifies the simplex based on a set of moves (reflection, expansion, shrinkage and contraction) using the function values at each of the vertices.
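A simplified sketch of those moves (illustrative only: reflection, expansion, and contraction on a NumPy simplex; TFP's implementation also performs the shrink move):

```python
import numpy as np

def nelder_mead_step(simplex, f):
    """One simplified Nelder-Mead iteration on an (n+1) x n simplex:
    reflect the worst vertex through the centroid of the others, expand
    if the reflection is a new best, otherwise contract toward the
    centroid."""
    simplex = simplex[np.argsort([f(v) for v in simplex])]  # best ... worst
    centroid = simplex[:-1].mean(axis=0)
    worst = simplex[-1]
    reflected = centroid + (centroid - worst)
    if f(reflected) < f(simplex[0]):          # new best: try expanding
        expanded = centroid + 2.0 * (centroid - worst)
        simplex[-1] = expanded if f(expanded) < f(reflected) else reflected
    elif f(reflected) < f(simplex[-2]):       # decent: accept reflection
        simplex[-1] = reflected
    else:                                     # poor: contract inward
        simplex[-1] = centroid + 0.5 * (worst - centroid)
    return simplex
```

Iterating this step on a convex bowl steadily drags the simplex toward the minimum, since only the worst vertex is ever replaced.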
```
# Nelder mead expects an initial_vertex of shape [n + 1, 1].
initial_vertex = tf.expand_dims(tf.constant(start, dtype=dtype), axis=-1)
@tf.function
def l1_regression_with_nelder_mead():
return tfp.optimizer.nelder_mead_minimize(
regression_loss,
initial_vertex=initial_vertex,
func_tolerance=1e-10,
position_tolerance=1e-10)
results = run(l1_regression_with_nelder_mead)
minimum = results.position.reshape([-1])
fitted_intercept = minimum[0]
fitted_beta = minimum[1:]
print('Nelder Mead Results')
print('Converged:', results.converged)
print('Intercept: Fitted ({})'.format(fitted_intercept))
print('Beta: Fitted {}'.format(fitted_beta))
```
## Logistic Regression with L2 penalty
For this example, we create a synthetic data set for classification and use the L-BFGS optimizer to fit the parameters.
```
np.random.seed(12345)
dim = 5 # The number of features
n_obs = 10000 # The number of observations
betas = np.random.randn(dim) # The true beta
intercept = np.random.randn() # The true intercept
features = np.random.randn(n_obs, dim) # The feature matrix
probs = sp.special.expit(
np.matmul(features, np.expand_dims(betas, -1)) + intercept)
labels = sp.stats.bernoulli.rvs(probs) # The true labels
regularization = 0.8
feat = tf.constant(features)
lab = tf.constant(labels, dtype=feat.dtype)
@make_val_and_grad_fn
def negative_log_likelihood(params):
"""Negative log likelihood for logistic model with L2 penalty
Args:
params: A real tensor of shape [dim + 1]. The zeroth component
is the intercept term and the rest of the components are the
beta coefficients.
Returns:
The negative log likelihood plus the penalty term.
"""
intercept, beta = params[0], params[1:]
logit = tf.matmul(feat, tf.expand_dims(beta, -1)) + intercept
log_likelihood = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(
labels=lab, logits=logit))
l2_penalty = regularization * tf.reduce_sum(beta ** 2)
total_loss = log_likelihood + l2_penalty
return total_loss
start = np.random.randn(dim + 1)
@tf.function
def l2_regression_with_lbfgs():
return tfp.optimizer.lbfgs_minimize(
negative_log_likelihood,
initial_position=tf.constant(start),
tolerance=1e-8)
results = run(l2_regression_with_lbfgs)
minimum = results.position
fitted_intercept = minimum[0]
fitted_beta = minimum[1:]
print('Converged:', results.converged)
print('Intercept: Fitted ({}), Actual ({})'.format(fitted_intercept, intercept))
print('Beta:\n\tFitted {},\n\tActual {}'.format(fitted_beta, betas))
```
## Batching support
Both BFGS and L-BFGS support batched computation, for example, to optimize a single function from many different starting points, or multiple parametric functions from a single point.
### Single function, multiple starting points
Himmelblau's function is a standard optimization test case. The function is given by:
$$f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2$$
The function has four minima located at:
- (3, 2),
- (-2.805118, 3.131312),
- (-3.779310, -3.283186),
- (3.584428, -1.848126).
All these minima may be reached from appropriate starting points.
```
# The function to minimize must take as input a tensor of shape [..., n]. In
# this case, n=2 is the size of the domain of the input and [...] are batching
# dimensions. The return value must be of shape [...], i.e. a batch of scalars
# with the objective value of the function evaluated at each input point.
@make_val_and_grad_fn
def himmelblau(coord):
x, y = coord[..., 0], coord[..., 1]
return (x * x + y - 11) ** 2 + (x + y * y - 7) ** 2
starts = tf.constant([[1, 1],
[-2, 2],
[-1, -1],
[1, -2]], dtype='float64')
# The stopping_condition argument lets you further specify when the search should stop.
# The default, tfp.optimizer.converged_all, will proceed until all points have
# either converged or failed. There is also a tfp.optimizer.converged_any to
# stop as soon as the first point converges, or all have failed.
@tf.function
def batch_multiple_starts():
return tfp.optimizer.lbfgs_minimize(
himmelblau, initial_position=starts,
stopping_condition=tfp.optimizer.converged_all,
tolerance=1e-8)
results = run(batch_multiple_starts)
print('Converged:', results.converged)
print('Minima:', results.position)
```
### Multiple functions
For demonstration purposes, in this example we simultaneously optimize a large number of high dimensional randomly generated quadratic bowls.
```
np.random.seed(12345)
dim = 100
batches = 500
minimum = np.random.randn(batches, dim)
scales = np.exp(np.random.randn(batches, dim))
@make_val_and_grad_fn
def quadratic(x):
return tf.reduce_sum(input_tensor=scales * (x - minimum)**2, axis=-1)
# Make all starting points (1, 1, ..., 1). Note not all starting points need
# to be the same.
start = tf.ones((batches, dim), dtype='float64')
@tf.function
def batch_multiple_functions():
return tfp.optimizer.lbfgs_minimize(
quadratic, initial_position=start,
stopping_condition=tfp.optimizer.converged_all,
max_iterations=100,
tolerance=1e-8)
results = run(batch_multiple_functions)
print('All converged:', np.all(results.converged))
print('Largest error:', np.max(results.position - minimum))
```
YAML support is provided by PyYAML at http://pyyaml.org/. This notebook depends on it.
```
import yaml
```
The following cell provides an initial example of a *note* in our system.
A *note* is nothing more than a YAML document. The idea of notetaking is to keep it simple, so a note should make no assumptions about formatting whatsoever.
In our current thinking, we have the following sections:
- title: an optional title (text)
- tags: one or more keywords (text, sequence of text, no nesting)
- mentions: one or more mentions (text, sequence of text, no nesting)
- outline: one or more items (text, sequence of text, nesting is permitted)
- dates (numeric text, sequence, must follow established historical ways of representing dates)
- text (text from the source as multiline string)
- bibtex, ris, or inline (text for the bibliographic item; will be syntax checked)
- bibkey (text, a hopefully unique identifier for referring to this source in other Zettels)
- cite: Used to cite a bibkey from the same or other notes. In addition, the citation may be represented as a list, where the first item is the bibkey and subsequent items are pages or ranges of page numbers. See below for a good example of how this will work.
- note (any additional details that you wish to hide from indexing)
In most situations, freeform text is permitted. If you need to do crazy things, you must put quotes around the text so YAML can process it. However, words separated by whitespace and punctuation seem to work fine in most situations.
These are all intended to be string data, so there are no restrictions on what can be in any field; however, we will likely limit tags, mentions, and dates in some way as we go forward. Fields such as bibtex, ris, and inline are also subject to validity checking.
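As an illustration of the kind of validity checking mentioned above (the rules here are hypothetical — the real checks are not yet defined), a field validator might look like:

```python
def validate_note(note):
    """Sketch of a validator for the note schema above: unknown fields
    are flagged, and 'tags'/'mentions' must be flat lists (only
    'outline' permits nesting)."""
    allowed = {'title', 'tags', 'mentions', 'outline', 'dates', 'text',
               'bibtex', 'ris', 'inline', 'bibkey', 'cite', 'note'}
    problems = [k for k in note if k not in allowed]
    for field in ('tags', 'mentions'):
        items = note.get(field, [])
        if isinstance(items, list) and any(isinstance(i, list) for i in items):
            problems.append('%s must not be nested' % field)
    return problems
```

An empty result means the note passes; otherwise the list names each offending field.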
Print the document to the console (nothing special here).
```
myFirstZettel="""
title: First BIB Note for Castells
tags:
- Castells
- Network Society
- Charles Babbage is Awesome
- Charles Didn't do Everything
mentions:
- gkt
- dbdennis
dates: 2016
cite:
- Castells Rise 2016
- ii-iv
- 23-36
outline:
- Introduction
- - Computers
- People
- Conclusions
- - Great Ideas of Computing
text: |
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam eleifend est sed diam maximus rutrum. Quisque sit amet imperdiet odio, id tristique libero. Aliquam viverra convallis mauris vel tristique. Cras ac dolor non risus porttitor molestie vel at nisi. Donec vitae finibus quam. Phasellus vehicula urna sed nibh condimentum, ultrices interdum velit eleifend. Nam suscipit dolor eu rutrum fringilla. Sed pulvinar purus purus, sit amet venenatis enim convallis a. Duis fringilla nisl sit amet erat lobortis dictum. Nunc fringilla arcu nec ex blandit, a gravida purus commodo. Vivamus lacinia tellus dui, vel maximus lacus ornare id.
Vivamus euismod justo sit amet luctus bibendum. Integer non mi ullamcorper enim fringilla vulputate sit amet in urna. Nullam eu sodales ipsum. Curabitur id convallis ex. Duis a condimentum lorem. Nulla et urna massa. Duis in nibh eu elit lobortis vehicula. Mauris congue mauris mollis metus lacinia, ut suscipit mi egestas. Donec luctus ante ante, eget viverra est mollis vitae.
Vivamus in purus in erat dictum scelerisque. Aliquam dictum quis ligula ac euismod. Mauris elementum metus vel scelerisque feugiat. Vivamus bibendum massa eu pellentesque sodales. Nulla nec lacus dolor. Donec scelerisque, nibh sed placerat gravida, nunc turpis tristique nibh, ac feugiat enim massa ut eros. Nulla finibus, augue egestas hendrerit accumsan, tellus augue tempor eros, in sagittis dolor turpis nec mi. Nunc fringilla mi non malesuada aliquet.
bibkey:
Castells Rise 1996
bibtex: |
@book{castells_rise_1996,
address = {Cambridge, Mass.},
series = {Castells, {Manuel}, 1942- {Information} age . v},
title = {The rise of the network society},
isbn = {978-1-55786-616-5},
language = {eng},
publisher = {Blackwell Publishers},
author = {Castells, Manuel},
year = {1996},
keywords = {Information networks., Information society., Information technology Economic aspects., Information technology Social aspects., Technology and civilization.}
}
note:
George likes this new format.
"""
print(myFirstZettel)
```
This shows how to load just the YAML portion of the document, resulting in a Python dictionary data structure. Observe that the Python dictionary has { key : value, ... }. So we can extract the YAML fields from the Python dictionary data structure.
Notice that when you write a YAML list of mentions, there is a nested Python list ['gkt', 'dbdennis'].
```
doc = yaml.safe_load(myFirstZettel)  # yaml.load without an explicit Loader is deprecated and unsafe
```
Closing the loop, the following shows how to *iterate* the keys of the data structure.
```
for key in doc.keys():
print(key, "=", doc[key])
```
And this shows how to get any particular item of interest. In this case, we're extracting the *bibtex* key so we can do something with the embedded BibTeX (e.g. print it).
```
print(doc['bibkey'])
print(doc['bibtex'])
```
Adapted from http://stackoverflow.com/questions/12472338/flattening-a-list-recursively. There really must be a nicer way to do stuff like this. I will rewrite this using a walker so we can have custom processing of the list items.
```
def flatten(item):
    if not isinstance(item, list):
return [str(item)]
if item == []:
return item
if isinstance(item[0], list):
return flatten(item[0]) + flatten(item[1:])
return item[:1] + flatten(item[1:])
flatten("George was here")
flatten(['A', ['B', 'C'], ['D', ['E']]])
```
Now we are onto some `sqlite3` explorations.
Ordinarily, I would use some sort of mapping framework to handle database operations. However, it's not clear that FTS support is part of any ORM (yet). I will continue to research, but since there is likely only one table, it might not be worth the trouble.
Next we will actually add the Zettel to the database and do a test query. Almost there.
```
import sqlite3
# This is for showing data structures only.
import pprint
printer = pprint.PrettyPrinter(indent=2)
class SQLiteFTS(object):
def __init__(self, db_name, table_name, field_names):
self.db_name = db_name
self.conn = sqlite3.connect(db_name)
self.cursor = self.conn.cursor()
self.table_name = table_name
self.fts_field_names = field_names
self.fts_field_refs = ['?'] * len(self.fts_field_names) # for sqlite insert template generation
self.fts_field_init = [''] * len(self.fts_field_names)
self.fts_fields = dict(zip(self.fts_field_names, self.fts_field_refs))
self.fts_default_record = dict(zip(self.fts_field_names, self.fts_field_init))
def bind(self, doc):
self.record = self.fts_default_record.copy()
for k in doc.keys():
if k in self.record.keys():
self.record[k] = doc[k]
else:
print("Unknown fts field %s" % k)
self.record.update(doc)
def drop_table(self):
self.conn.execute("DROP TABLE IF EXISTS %s" % self.table_name)
    def create_table(self):
        sql_fields = ",".join(self.fts_default_record.keys())
        # Use the configured table name instead of hard-coding "zettels"
        print("CREATE VIRTUAL TABLE %s USING fts4(%s)" % (self.table_name, sql_fields))
        self.conn.execute("CREATE VIRTUAL TABLE %s USING fts4(%s)" % (self.table_name, sql_fields))
    def insert_into_table(self):
        sql_params = ",".join(self.fts_fields.values())
        #printer.pprint(self.record)
        #printer.pprint(self.record.values())
        sql_insert_values = [",".join(flatten(value)) for value in list(self.record.values())]
        # Use the configured table name instead of hard-coding "zettels"
        print("INSERT INTO %s VALUES (%s)" % (self.table_name, sql_params))
        print(self.record.keys())
        printer.pprint(sql_insert_values)
        self.conn.execute("INSERT INTO %s VALUES (%s)" % (self.table_name, sql_params), sql_insert_values)
def done(self):
self.conn.commit()
self.conn.close()
sql = SQLiteFTS('zettels.db', 'zettels', ['title', 'tags', 'mentions', 'outline', 'cite', 'dates', 'summary', 'text', 'bibkey', 'bibtex', 'ris', 'inline', 'note' ])
#doc_keys = list(doc.keys())
#doc_keys.sort()
#rec_keys = list(sql.record.keys())
#rec_keys.sort()
#print("doc keys %s" % doc_keys)
#print("record keys %s" % rec_keys)
sql.drop_table()
sql.create_table()
printer.pprint(doc)
sql.bind(doc)
sql.insert_into_table()
sql.done()
#sql_insert_values = [ str(field) for field in sql.record.values()]
#print(sql_insert_values)
#print(record)
with open("xyz.txt") as datafile:
text = datafile.read()
print(text)
bibkey = 'blahblahblah'
bibtex = text
import yaml
from collections import OrderedDict
class quoted(str): pass
def quoted_presenter(dumper, data):
return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='"')
yaml.add_representer(quoted, quoted_presenter)
class literal(str): pass
def literal_presenter(dumper, data):
return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='|')
yaml.add_representer(literal, literal_presenter)
def ordered_dict_presenter(dumper, data):
return dumper.represent_dict(data.items())
yaml.add_representer(OrderedDict, ordered_dict_presenter)
d = OrderedDict(bibkey=bibkey, bibtex=literal(bibtex))
print(yaml.dump(d))
```
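To close the loop on the FTS table, here is a minimal sketch of the full-text query it enables. It is self-contained with an in-memory database and one sample row; point `connect()` at `zettels.db` to search the table built above:

```python
import sqlite3

# Build a tiny fts4 table with one sample row (stand-in for zettels.db).
conn = sqlite3.connect(':memory:')
conn.execute("CREATE VIRTUAL TABLE zettels USING fts4(title, bibkey, text)")
conn.execute("INSERT INTO zettels VALUES (?, ?, ?)",
             ("First BIB Note for Castells", "Castells Rise 1996", "Lorem ipsum"))

# MATCH runs a tokenized full-text query over every indexed column;
# the default "simple" tokenizer is case-insensitive for ASCII.
hits = list(conn.execute(
    "SELECT bibkey, title FROM zettels WHERE zettels MATCH ?", ("castells",)))
print(hits)
conn.close()
```

Unlike a `LIKE '%...%'` scan, MATCH uses the full-text index, so it stays fast as the collection of notes grows.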
# 1. LGDE.com Daily Metrics Hands-On, Day 1 (Solutions)
#### Jupyter Notebook Shortcuts (Windows)
| Shortcut | Description | Notes |
| --- | --- | --- |
| Alt+Enter | Run current cell, then insert a new cell below | Mostly used during initial development |
| Shift+Enter | Run current cell, then move to the next cell | Mostly used when testing the whole notebook |
| Ctrl+Enter | Run current cell and stay | Used for step-by-step checks or debugging |
| Ctrl+/ | Toggle comments | Select multiple lines with Shift to comment/uncomment them together |
| Ctrl+s | Save all | - |
### Jupyter Notebook Notes
* Every cell is one of three types — Code, Markdown, or Raw — and **Python code** must be run in ***Code*** mode
* If the current cell falls into an infinite loop or takes too long, you can stop just that cell via ***Kernel - Interrupt Kernel*** in the top menu
* If the notebook stops working properly due to memory or other issues, you can restart it via ***Kernel - Restart Kernel...*** in the top menu
## 5. Exploring the Collected Data
### 5-1. Creating a Spark Session
```
from pyspark.sql import *
from pyspark.sql.functions import *
from pyspark.sql.types import *
from IPython.display import display, display_pretty, clear_output, JSON
spark = (
SparkSession
.builder
.config("spark.sql.session.timeZone", "Asia/Seoul")
.getOrCreate()
)
# Settings for rendering DataFrames as tables in the notebook
spark.conf.set("spark.sql.repl.eagerEval.enabled", True) # display enabled
spark.conf.set("spark.sql.repl.eagerEval.truncate", 100) # display output columns size
# Common data locations
home_jovyan = "/home/jovyan"
work_data = f"{home_jovyan}/work/data"
work_dir=!pwd
work_dir = work_dir[0]
# Optimizations for the local environment
spark.conf.set("spark.sql.shuffle.partitions", 5) # the number of partitions to use when shuffling data for joins or aggregations.
spark.conf.set("spark.sql.streaming.forceDeleteTempCheckpointLocation", "true")
spark
user25 = spark.read.parquet("user/20201025")
user25.printSchema()
user25.show(truncate=False)
display(user25)
purchase25 = spark.read.parquet("purchase/20201025")
purchase25.printSchema()
display(purchase25)
access25 = spark.read.option("inferSchema", "true").json("access/20201025")
access25.printSchema()
display(access25)
```
### 5-2. Creating Temporary Tables for the Collected User, Purchase, and Access Data
```
user25.createOrReplaceTempView("user25")
purchase25.createOrReplaceTempView("purchase25")
access25.createOrReplaceTempView("access25")
spark.sql("show tables '*25'")
```
### 5-3. Creating a DataFrame per Table with Spark SQL
```
u_signup_condition = "u_signup >= '20201025' and u_signup < '20201026'"
# u_signup must be in the projection for the filter to resolve; drop it afterwards
user = spark.sql("select u_id, u_name, u_gender, u_signup from user25").where(u_signup_condition).drop("u_signup")
user.createOrReplaceTempView("user")
p_time_condition = "p_time >= '2020-10-25 00:00:00' and p_time < '2020-10-26 00:00:00'"
purchase = spark.sql("select from_unixtime(p_time) as p_time, p_uid, p_id, p_name, p_amount from purchase25").where(p_time_condition)
purchase.createOrReplaceTempView("purchase")
access = spark.sql("select a_id, a_tag, a_timestamp, a_uid from access25")
access.createOrReplaceTempView("access")
spark.sql("show tables")
```
### 5-4. Exploring the Created Tables with SQL
```
whereCondition = "u_gender = '남'"  # '남' means 'male' in the source data
spark.sql("select * from user").where(whereCondition)
selectClause = "select * from purchase where p_amount > 2000000"
spark.sql(selectClause)
groupByClause="select a_id, count(1) from access group by a_id"
spark.sql(groupByClause)
```
## 6. Creating Basic Metrics
### 6-1. Create the DAU (Daily Active User) Metric
```
display(access)
distinctAccessUser = "select count(distinct a_uid) as DAU from access"
dau = spark.sql(distinctAccessUser)
display(dau)
```
### 6-2. Create the PU (Paying User) Metric
```
display(purchase)
distinctPayingUser = "select count(distinct p_uid) as PU from purchase"
pu = spark.sql(distinctPayingUser)
display(pu)
```
### 6-3. Create the DR (Daily Revenue) Metric
```
display(purchase)
sumOfDailyRevenue = "select sum(p_amount) as DR from purchase"
dr = spark.sql(sumOfDailyRevenue)
display(dr)
```
### 6-4. Create the ARPU (Average Revenue Per User) Metric
```
v_dau = dau.collect()[0]["DAU"]
v_pu = pu.collect()[0]["PU"]
v_dr = dr.collect()[0]["DR"]
print("ARPU : {}".format(v_dr / v_dau))
```
### 6-5. Create the ARPPU (Average Revenue Per Paying User) Metric
```
print("ARPPU : {}".format(v_dr / v_pu))
```
## 7. Creating Advanced Metrics
### 7-1. Design the Dimension Table
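A compact summary of the dimension table the following steps assemble (the column names come from steps 7-7 and 7-8; the descriptions are my reading of those steps):

```python
# Sketch of the target dimension schema, one entry per column.
dimension_columns = {
    "d_uid": "user id (from the access logs)",
    "d_name": "user name",
    "d_gender": "user gender",
    "d_acount": "number of accesses on the day",
    "d_pamount": "total purchase amount (0 if none)",
    "d_pcount": "number of purchases (0 if none)",
    "d_first_purchase": "time of the user's first purchase (null if none)",
}
for name, meaning in dimension_columns.items():
    print(name.ljust(18), meaning)
```

Keeping this schema fixed and well-named is what makes the dimension reusable on later days.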
### 7-2. Create a DataFrame of Every User Who Accessed on Launch Day, with Their Access Counts
```
access.printSchema()
countOfAccess = "select a_uid, count(a_uid) as a_count from access group by a_uid order by a_uid asc"
accs = spark.sql(countOfAccess)
display(accs)
```
### 7-3. Create a DataFrame with Each User's Daily Total Purchase Amount and Purchase Count
```
purchase.printSchema()
sumOfCountAndAmount = "select p_uid, count(p_uid) as p_count, sum(p_amount) as p_amount from purchase group by p_uid order by p_uid asc"
amts = spark.sql(sumOfCountAndAmount)
display(amts)
```
### 7-4. Join the Access Data with the Purchase Data
```
accs.printSchema()
amts.printSchema()
joinCondition = accs.a_uid == amts.p_uid
joinHow = "left_outer"
dim1 = accs.join(amts, joinCondition, joinHow)
dim1.printSchema()
display(dim1.orderBy(asc("a_uid")))
```
### 7-5. Add the User Information
```
dim1.printSchema()
user.printSchema()
joinCondition = dim1.a_uid == user.u_id
joinHow = "left_outer"
dim2 = dim1.join(user, joinCondition, joinHow)
dim2.printSchema()
display(dim2.orderBy(asc("a_uid")))
```
### 7-6. Drop the Duplicate ID Columns and Fill Null Numeric Fields with 0
```
dim2.printSchema()
dim3 = dim2.drop("p_uid", "u_id")
fillDefaultValue = { "p_count":0, "p_amount":0 }
dim4 = dim3.na.fill(fillDefaultValue)
dim4.printSchema()
display(dim4.orderBy(asc("a_uid")))
```
### 7-7. Rename the Columns So the Resulting User Table Is Reusable
```
dim4.printSchema()
dim5 = (
dim4
.withColumnRenamed("a_uid", "d_uid")
.withColumnRenamed("a_count", "d_acount")
.withColumnRenamed("p_amount", "d_pamount")
.withColumnRenamed("p_count", "d_pcount")
.withColumnRenamed("u_name", "d_name")
.withColumnRenamed("u_gender", "d_gender")
.drop("a_uid", "a_count", "p_amount", "p_count", "u_name", "u_gender")
.select("d_uid", "d_name", "d_gender", "d_acount", "d_pamount", "d_pcount")
)
display(dim5.orderBy(asc("d_uid")))
```
### 7-8. Add Each User's First-Purchase Information
```
purchase.printSchema()
selectFirstPurchaseTime = "select p_uid, min(p_time) as p_time from purchase group by p_uid"
first_purchase = spark.sql(selectFirstPurchaseTime)
dim6 = dim5.withColumn("d_first_purchase", lit(None))
dim6.printSchema()
exprFirstPurchase = expr("case when d_first_purchase is null then p_time else d_first_purchase end")
dim7 = (
dim6.join(first_purchase, dim5.d_uid == first_purchase.p_uid, "left_outer")
.withColumn("first_purchase", exprFirstPurchase)
.drop("d_first_purchase", "p_uid", "p_time")
.withColumnRenamed("first_purchase", "d_first_purchase")
)
dimension = dim7.orderBy(asc("d_uid"))
dimension.printSchema()
display(dimension)
```
### 7-9. Save the generated dimension to storage
```
dimension.printSchema()
target_dir="dimension/dt=20201025"
dimension.write.mode("overwrite").parquet(target_dir)
```
### 7-10. Read the saved dimension back and display it
```
newDimension = spark.read.parquet(target_dir)
newDimension.printSchema()
display(newDimension)
```
### 7-11. Save today's metrics to a MySQL table
```
print("DT:{}, DAU:{}, PU:{}, DR:{}".format("2020-10-25", v_dau, v_pu, v_dr))
today = "2020-10-25"
jdbc_url = "jdbc:mysql://mysql:3306/testdb"
jdbc_props = {"user": "sqoop", "password": "sqoop"}
lgde_origin = (
    spark.read.jdbc(jdbc_url, "testdb.lgde", properties=jdbc_props)
    .where(col("dt") < lit(today))
)
lgde_today = spark.createDataFrame([(today, v_dau, v_pu, v_dr)], ["DT", "DAU", "PU", "DR"])
lgde = lgde_origin.union(lgde_today)
lgde.write.mode("overwrite").jdbc(jdbc_url, "testdb.lgde", properties=jdbc_props)
```
## What does Data Preprocessing mean?
Data preprocessing is a data mining technique that transforms raw data into an understandable format. Real-world data is often incomplete, inconsistent, and lacking in certain behaviors or trends, and is likely to contain many errors. Data collection methods are loosely controlled, so collected data has a wide range of values, impossible combinations, and missing entries. Data quality directly affects any analysis and the learning of the model. Data preprocessing is a proven way to resolve such issues, and it prepares raw data for further processing.
Data goes through a series of steps during preprocessing:
1. Data Cleaning: Data is cleansed through processes such as filling in missing values, smoothing the noisy data, or resolving the inconsistencies in the data.
2. Data Integration: Data with different representations are put together and conflicts within the data are resolved.
3. Data Transformation: Data is normalized, aggregated and generalized.
4. Data Reduction: This step aims to present a reduced representation of the data.
## Preprocessing in Python
In Python, the scikit-learn library provides pre-built functionality under `sklearn.preprocessing` that lets us handle cleaning, transformation and integration of the data. The pandas library also helps us deal with missing values and outliers in the dataset.
```
#import the general libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#get the dataset
test = pd.read_csv('test.csv')
train = pd.read_csv('train.csv')
#Combine test and train into one file
train['source']='train'
test['source']='test'
dataset = pd.concat([train, test],ignore_index=True)
#Building a dataset based on the hypothesis generated
dataset = dataset[['source', 'Neighborhood', 'BldgType', 'HouseStyle', 'OverallQual',
'YearBuilt', 'YearRemodAdd', 'TotalBsmtSF','CentralAir', '1stFlrSF',
'2ndFlrSF', 'GrLivArea', 'TotRmsAbvGrd', 'GarageCars',
'KitchenAbvGr', 'YrSold', 'FullBath','HalfBath', 'SalePrice', ]]
dataset.head()
#Check missing values
dataset.isnull().sum()
dataset.describe()
```
### Dealing with missing values and outliers
From the description of the dataset (including the values in each attribute) we have several observations to deal with:
1. The minimum number of kitchens and full baths recorded is zero, yet no house should be without a kitchen and at least one full bath
2. The minimum GrLivArea is 334 sq feet, which is very small for a house!
3. Missing basement surface area
##### 1. The minimum number of kitchens and full baths cannot be zero!
```
Kitchen_zero = dataset
Kitchen_zero_points = Kitchen_zero['KitchenAbvGr'] == 0
Kitchen_zero[Kitchen_zero_points]
```
Every house has at least one kitchen. Rather than assuming anything, I have replaced the zero values in the kitchen attribute with the mode of the column. When dealing with missing and outlier values, we can substitute the mode, median or mean. Here the mode is more suitable (the mean is generally preferred for continuous variables, the mode for discrete counts like this one), as it leaves us with a whole-number value for the attribute and avoids an unrealistic fractional count.
The statistics library in Python provides the functionality to find the mode of the necessary values.
```
#replace zero with mode
from statistics import mode
dataset['KitchenAbvGr'] = dataset['KitchenAbvGr'].replace(0,mode(dataset['KitchenAbvGr']))
bathroom_zero = dataset
bathroom_zero_points = bathroom_zero['FullBath'] == 0
bathroom_zero[bathroom_zero_points]
```
Definitions:
1. Full = tub, sink, WC
2. 3/4 = shower without tub, sink, WC
3. 1/2 = sink and WC
From the definitions, it is clear that a house can lack a half bath but will always have a full bath (not counting some modern apartments that claim to have no bath just to be cheaper!). Hence I have replaced the zero values in full baths with a value of 1.
```
dataset['FullBath'] = dataset['FullBath'].replace(0,1)
```
##### 2. A house with minimum area - 334 sq feet!
```
minimum_area_house = dataset
minimum_area_house_points = minimum_area_house['GrLivArea'] == 334
minimum_area_house[minimum_area_house_points]
```
Here, understanding the area becomes really important. I went through a few sites to understand the features of a house as small as 330 square feet. Houses of this size do exist, and their amenities (number of floors, kitchens, bedrooms and full baths) match what I found in those searches. A typical small house is expected to have few amenities and fewer, smaller rooms. I also looked for other genuinely small houses (GrLivArea < 500 sq feet) in the dataset.
```
area = minimum_area_house['GrLivArea'] < 500
minimum_area_house[area]
```
I have decided not to treat the small house areas as outliers.
##### 3. Null values in basement area and garagecars in the house
```
basement_dataset = dataset
basement_dataset[pd.isnull(basement_dataset['TotalBsmtSF'])]
year_built = basement_dataset['YearBuilt'] == 1946
remod_year = basement_dataset['YearRemodAdd'] == 1950
area = basement_dataset['GrLivArea'] < 900
basement_dataset[year_built & remod_year & area]
```
Here, replacing the null Total Basement area with the mean would not add knowledge to the data; in fact, it could add noise to the dataset. Instead, observing the houses built in 1946 and remodelled in 1950 with living area under 900 sq feet gives a clear picture of how basement area behaved in those years, taking neighborhood and living area into consideration.
It is wiser to set the basement area equal to the living area, as observed in the other matching data points.
```
dataset['TotalBsmtSF'] = dataset['TotalBsmtSF'].fillna(dataset['GrLivArea'])
garage_car = dataset
garage_car[pd.isnull(garage_car['GarageCars'])]
neighborhood = garage_car['Neighborhood'] == 'IDOTRR'
Bldg_Type = garage_car['BldgType'] == '1Fam'
HouseStyle = garage_car['HouseStyle'] == '2Story'
#the dataset
gc = garage_car[neighborhood & Bldg_Type & HouseStyle]
gc
```
Here, replacing the null GarageCars value with the mean would not add knowledge to the data; in fact, it could add noise to the dataset. Instead, observing 1-family, 2-story houses in the same neighborhood gives a picture of how many cars the garage would accommodate.
I have replaced the null values with the mode of the houses with similar characteristics.
```
from statistics import mode
dataset['GarageCars'] = dataset['GarageCars'].fillna(mode(gc['GarageCars']))
#dataset with no null or outliars
dataset.head()
```
Replacing the coded values in Neighborhood, Building Type and House Style with a human-readable format
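Long chains of single-value `replace` calls like the ones below can also be expressed as one `replace` with a mapping dict; a small sketch using a subset of the neighborhood labels:

```python
import pandas as pd

# A subset of the code-to-name mapping; the full dict would mirror every label
neighborhood_names = {
    "CollgCr": "College Creek",
    "NAmes": "North Ames",
    "OldTown": "Old Town",
}

s = pd.Series(["CollgCr", "OldTown", "NAmes", "CollgCr"])
print(s.replace(neighborhood_names).tolist())
# → ['College Creek', 'Old Town', 'North Ames', 'College Creek']
```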
```
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['CollgCr'], 'College Creek')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['Blmngtn'], 'Bloomington Heights')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['Blueste'], 'Bluestem')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['BrDale'], 'Briardale')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['BrkSide'], 'Brookside')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['ClearCr'], 'Clear Creek')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['Crawfor'], 'Crawford')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['IDOTRR'], 'Iowa DOT and Rail Road')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['MeadowV'], 'Meadow Village')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['Mitchel'], 'Mitchell')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['NAmes'], 'North Ames')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['NoRidge'], 'Northridge')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['NPkVill'], 'Northpark Villa')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['NridgHt'], 'Northridge Heights')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['NWAmes'], 'Northwest Ames')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['OldTown'], 'Old Town')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['SWISU'], 'South & West of Iowa State University')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['SawyerW'], 'Sawyer West')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['Somerst'], 'Somerset')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['StoneBr'], 'Stone Brook')
dataset['Neighborhood'] = dataset['Neighborhood'].replace(['Timber'], 'Timberland')
dataset['BldgType'] = dataset['BldgType'].replace(['1Fam'], 'Single-family Detached')
dataset['BldgType'] = dataset['BldgType'].replace(['2FmCon'], 'Two-family Conversion')
dataset['BldgType'] = dataset['BldgType'].replace(['Duplx'], 'Duplex')
dataset['BldgType'] = dataset['BldgType'].replace(['TwnhsE'], 'Townhouse End Unit')
dataset['BldgType'] = dataset['BldgType'].replace(['TwnhsI'], 'Townhouse Inside Unit')
dataset['HouseStyle'] = dataset['HouseStyle'].replace(['1Story'], 'One story')
dataset['HouseStyle'] = dataset['HouseStyle'].replace(['1.5Fin'], '1.5 story: 2nd level finished')
dataset['HouseStyle'] = dataset['HouseStyle'].replace(['1.5Unf'], '1.5 story: 2nd level unfinished')
dataset['HouseStyle'] = dataset['HouseStyle'].replace(['2Story'], 'Two story')
dataset['HouseStyle'] = dataset['HouseStyle'].replace(['2.5Fin'], '2.5 story: 2nd level finished')
dataset['HouseStyle'] = dataset['HouseStyle'].replace(['2.5Unf'], '2.5 story: 2nd level unfinished')
dataset['HouseStyle'] = dataset['HouseStyle'].replace(['SFoyer'], 'Split Foyer')
dataset['HouseStyle'] = dataset['HouseStyle'].replace(['SLvl'], 'Split Level')
#train_dataset for Relationship
train_dataset = dataset
train_dataset = train_dataset.loc[train_dataset['source']=="train"]
#Dataset for BI
dataset.to_csv("BI_train.csv")
SalePrice_data = train_dataset
Cost_per_sf = train_dataset
#Learning the relationships
SalePrice_data_distribution = pd.DataFrame(SalePrice_data[['SalePrice']])
SalePrice_data_distribution.plot(kind="density",
figsize=(10,10),
xlim=(0,900000), title='Density graph of Sales Price')
plt.xlabel('Sale Price')
plt.ylabel('Density')
plt.show()
print ('Mean of Sales:')
print (SalePrice_data_distribution.mean())
print ('Standard deviation of Sales:')
print (SalePrice_data_distribution.std())
print ('Median of Sales:')
print (SalePrice_data_distribution.median())
Cost_per_sf.loc[:,'CostPerSquareFeet'] = Cost_per_sf['SalePrice']/Cost_per_sf['GrLivArea']
print (Cost_per_sf['CostPerSquareFeet'].median())
```
The Sale Price follows a roughly normal distribution with a standard deviation of `$`79,442. It is right skewed, indicating that far fewer houses sit at the upper end of the price range. The median sale price in our analysis is `$`163,000, and it has since risen to `$`174,000 (<i>source: Zillow and Trulia</i>), with statistics indicating a 6.9% increase in the median sale price over the past year. According to Zillow, the median list price in Ames is `$`175 per sq foot today, compared with `$`120 over the 2006 - 2010 period.
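The skewness claim can be verified numerically; a sketch using `scipy.stats.skew` on synthetic right-skewed prices (with the real data you would pass `SalePrice_data['SalePrice']` instead):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
# Log-normal draws stand in for a right-skewed price distribution
prices = rng.lognormal(mean=12, sigma=0.4, size=1000)

# Positive skew confirms the long right tail; it also pushes the mean above the median
print("skew:", round(float(skew(prices)), 2))
print("mean > median:", prices.mean() > np.median(prices))
```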
```
#Relation of gr area and Sale Price
#Plot gr liv and Sale Price
fig, ax = plt.subplots()
ax.scatter(x = train_dataset['GrLivArea'], y = train_dataset['SalePrice'])
plt.ylabel('SalePrice', fontsize=13)
plt.xlabel('GrLivArea', fontsize=13)
plt.show()
#deleting points
train_dataset = train_dataset.drop(train_dataset[(train_dataset['GrLivArea']>4000) & (train_dataset['SalePrice']<300000)].index)
```
A linear relation between Sale Price and GrLivArea can be clearly seen in the plot. There are some outliers here: a few houses with extremely large areas have very low sale prices, and these have been removed to make our model more robust.
I will use this relationship further to create my first model, a linear regression between house area and sale price.
```
#Plot basement area and Sale Price
fig, ax = plt.subplots()
ax.scatter(x = train_dataset['TotalBsmtSF'], y = train_dataset['SalePrice'])
plt.ylabel('SalePrice', fontsize=13)
plt.xlabel('Basement Area', fontsize=13)
plt.show()
```
Basement area and sale price also have a linear relation, with a noticeably steeper slope, indicating that a unit change in basement area produces a larger change in sale price. We also observe an outlier here, and it is important to remove it.
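The slope comparison can be made explicit with `np.polyfit`; a sketch on synthetic data (with the real data you would pass the `GrLivArea`, `TotalBsmtSF` and `SalePrice` columns of `train_dataset`):

```python
import numpy as np

# Synthetic stand-ins for the two predictors and the price response
rng = np.random.default_rng(0)
area = rng.uniform(500, 3000, 200)
bsmt = rng.uniform(300, 2000, 200)
price_vs_area = 100 * area + rng.normal(0, 20000, 200)  # shallower slope
price_vs_bsmt = 160 * bsmt + rng.normal(0, 20000, 200)  # steeper slope

# np.polyfit(x, y, 1) returns [slope, intercept]
slope_area = np.polyfit(area, price_vs_area, 1)[0]
slope_bsmt = np.polyfit(bsmt, price_vs_bsmt, 1)[0]
print(f"price per sq ft of living area: {slope_area:.1f}, of basement: {slope_bsmt:.1f}")
```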
```
#deleting points
train_dataset = train_dataset.drop(train_dataset[(train_dataset['TotalBsmtSF']>6000) & (train_dataset['SalePrice']<300000)].index)
#Plot basement area and Sale Price
fig, ax = plt.subplots()
ax.scatter(x = train_dataset['YearBuilt'], y = train_dataset['SalePrice'])
plt.ylabel('SalePrice', fontsize=13)
plt.xlabel('Year Built', fontsize=13)
plt.show()
```
House prices show a drastic increase after 2000, and the house-buying trend also appears to have picked up compared to earlier years.
```
#all plot
sns.set()
cols = ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath','HalfBath', 'TotRmsAbvGrd' ,'YearBuilt', 'SalePrice']
sns.pairplot(train_dataset[cols], height=2.5)
plt.show()
```
We can draw the following conclusions from the above plot:
1. The sale price increases with the overall quality
2. Area and sale price have a linear relation
3. More garage capacity adds value to the sale price, but that trend reverses once the garage holds more than 3 cars.
#### Label Encoding
The dataset contains categorical variables. These variables are typically stored as text values that represent various traits. The challenge is determining how to use this data in the analysis. Many machine learning algorithms can handle categorical values without further manipulation, but many more cannot. The scikit-learn library provides several approaches for transforming categorical data into suitable numeric values. Label encoding simply converts each value in a column to a number; `get_dummies` goes further and converts a column into multiple columns containing a 1 or 0 for each value.
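To make the contrast concrete, a toy sketch of both encodings on a few `BldgType` codes:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Toy categorical column to contrast the two encodings
df = pd.DataFrame({"BldgType": ["1Fam", "Duplex", "TwnhsE", "1Fam"]})

# Label encoding: each category becomes a single integer code
# (classes are sorted, so 1Fam=0, Duplex=1, TwnhsE=2)
df["BldgType_label"] = LabelEncoder().fit_transform(df["BldgType"])
# codes: [0, 1, 2, 0]

# One hot encoding: each category becomes its own 0/1 column
dummies = pd.get_dummies(df["BldgType"], prefix="BldgType")
print(df)
print(dummies)
```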
```
#Preprocess the data-encoding using Label Encoder
from sklearn.preprocessing import LabelEncoder
labelencoder_X = LabelEncoder()
dataset.iloc[:, 1] = labelencoder_X.fit_transform(dataset.iloc[:, 1])
dataset.iloc[:, 2] = labelencoder_X.fit_transform(dataset.iloc[:, 2])
dataset.iloc[:, 3] = labelencoder_X.fit_transform(dataset.iloc[:, 3])
dataset.iloc[:, 8] = labelencoder_X.fit_transform(dataset.iloc[:, 8])
#apply one hot encoding
dataset = pd.get_dummies(dataset, columns=['Neighborhood','BldgType','HouseStyle','CentralAir'])
dataset.head()
#Divide into test and train:
train_modified = dataset.loc[dataset['source']=="train"].copy()
test_modified = dataset.loc[dataset['source']=="test"].copy()
test_modified.drop(['source'],axis=1,inplace=True)
train_modified.drop(['source'],axis=1,inplace=True)
train_modified.to_csv("modified_train.csv")
test_modified.to_csv("modified_test.csv")
```
### Link to [predictive modelling](https://github.com/hmangrola/Predicting-House-Prices-Ames-Iowa/blob/master/Predictive%20modelling.ipynb) notebook
The notebook is licensed (license found on github repository)
```
import pandas as pd
import matplotlib.pyplot as plt
pd.set_option('display.max_columns', 50)
df_pop = pd.read_csv('../../data/interim/POPULATION_ESTIMATES_2013_to_2019.csv')
df_venues = pd.read_csv('../../data/interim/CRYTO_VENUES_USA.csv')
df_pop.head(1)
df_venue_by_county = df_venues.groupby(by=['state','county']).size().reset_index(name='num_venues')
df_pop_venue = df_pop.merge(df_venue_by_county, on=['state','county'], how='left')
df_pop_venue = df_pop_venue.fillna(0)
df_pop_venue.head(10)
plt.scatter(df_pop_venue.POP_ESTIMATE_2019, df_pop_venue.num_venues)
plt.title('2019 Population vs Total Venues')
plt.xlabel('Population')
plt.ylabel('Number of Venues')
# We should also determine the venues per 10,000
df_pop_venue['VENUE_PER_10K'] = df_pop_venue.num_venues / (df_pop_venue.POP_ESTIMATE_2019/10000)
df_pop_venue.sort_values(by='VENUE_PER_10K', ascending=False)[0:25]
#pop_venue['VENUE_PER_CAPITA'].plot.box()
df_pop_venue[df_pop_venue['VENUE_PER_10K']>0]['VENUE_PER_10K'].plot.box(vert=False)
plt.xlabel('Venues Per 10k')
df_pop_venue[df_pop_venue['VENUE_PER_10K']>0]['VENUE_PER_10K'].describe()
df_pop_venue.corr()
for column in df_pop_venue.columns:
print(column)
# Compute the 2013 to 2019 average for each annual rate column
rate_prefixes = {
    'AVG_BIRTH_RATE': 'R_birth_',
    'AVG_DEATH_RATE': 'R_death_',
    'AVG_NATURAL_INC': 'R_NATURAL_INC_',
    'AVG_INTL_MIG': 'R_INTERNATIONAL_MIG_',
    'AVG_DOMESTIC_MIG': 'R_DOMESTIC_MIG_',
    'AVG_NET_MIG': 'R_NET_MIG_',
}
for avg_col, prefix in rate_prefixes.items():
    year_cols = ['{}{}'.format(prefix, year) for year in range(2013, 2020)]
    df_pop_venue[avg_col] = df_pop_venue[year_cols].mean(axis=1)
df_avg_venue = df_pop_venue[['state','county','num_venues','AVG_BIRTH_RATE','AVG_DEATH_RATE','AVG_NATURAL_INC','AVG_INTL_MIG','AVG_DOMESTIC_MIG','AVG_NET_MIG']]
corr = df_avg_venue.corr()
corr.style.background_gradient(cmap='coolwarm')
plt.scatter(df_avg_venue.AVG_INTL_MIG, df_avg_venue.num_venues)
plt.title('International Migration vs Venues')
plt.xlabel('Mean International Migration: 2013 to 2019')
plt.ylabel('Number of Venues')
df_avg_venue[df_avg_venue.num_venues>50].sort_values(by='AVG_INTL_MIG',ascending=False)
```
```
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
from bokeh.io import output_notebook, show
from bokeh.plotting import figure, output_file, show
from bokeh.models import ColumnDataSource
from bokeh.models.tools import HoverTool
from bokeh.models import Range1d
from math import atan2, pi, sqrt, pow
from scipy.stats import linregress
output_notebook()
def GEH (x,y):
# calculate GEH statistic
try:
return sqrt(2*(pow(x-y,2))/(x+y))
except ZeroDivisionError:
return 0
def combine (df, path, RDS, hour):
# prepare the input dataframe for the Bokeh graph
# remove null values from balanced count results
df = df.dropna(subset = [count])
try:
df = df[df.IS_COUNTED_VALUE != 0]
except:
pass
# correct column names in preparation for bokeh hover tools
df = df.rename(index=str, columns={'$TURN:FROMNODENO': 'FROMNODENO',
'VOLVEHPRT(AP)': 'VOLPERSPRT'})
# apply GEH statistic calculation to count and modelled volumes
df['GEH'] = df.apply(lambda x: GEH(x['VOLPERSPRT'], x[RDS + hour]), axis=1)
# calculate glyph colour based on GEH band
df['COLOUR'] = np.where(df['GEH']<5, '#a8c686', np.where(df['GEH']>10,'#e4572e','#f3a712'))
df.to_csv(path)
return df
#combine(count, att, 2, save_path)
def qreg(RDS, hour):
# plot a quick regression curve in seaborn
sns.lmplot(x=RDS + hour, y='VOLPERSPRT', data = combine(att, save_path, RDS, hour))
def geh5():
x = df[df["GEH"]>5].count()[0]
y = len(df)
z = (y-x)/y
return z
def geh10():
x = df[df["GEH"]>10].count()[0]
y = len(df)
z = (y-x)/y
return z
def rsq(RDS, hour):
slope, intercept, r_value, p_value, std_err = linregress(df[RDS + hour], df['VOLPERSPRT'])
return linregress(df[RDS + hour], df['VOLPERSPRT'])
RDS = 'RDS_2018_'
hour = '0745'
run = '47'
att_path = 'C:/Users/shafeeq.mollagee/OneDrive - Aurecon Group/GIPTN Traffic Modelling/04 - CBD Modelling/08 - Micro Model/01 - CBD Visum Model/CBD Visum Model/%s_%s.att' % (hour, run)
att = pd.read_table(att_path, sep = ";", header=32)
save_path = 'D:/001_Projects/01 - GIPTN/07 - CBD Micro Model/CBD Visum Model/Visum%s.csv' % ('Volumes')
count = RDS + hour
#qreg(RDS, hour)
df = combine(att, save_path, RDS, hour)
regression = np.polyfit(df[RDS + hour], df['VOLPERSPRT'], 1)
r_x, r_y = zip(*((i, i*regression[0] + regression[1]) for i in range(len(df))))
yDiff = r_y[len(df)-1] - r_y[0]
xDiff = r_x[len(df)-1] - r_x[0]
ang = atan2(yDiff, xDiff)
source = ColumnDataSource(df)
p = figure(width=550, height=550)
p.circle(x=RDS + hour, y='VOLPERSPRT',
source=source,
size=10, color='COLOUR', alpha=0.5)
p.title.text = 'Modelled vs Balanced Observed Counts by GEH'
p.xaxis.axis_label = 'Balanced Observed Volume'
p.yaxis.axis_label = 'Modelled Volume'
hover = HoverTool()
hover.tooltips=[
('From', '@FROMNODENO'),
('Via', '@VIANODENO'),
('To', '@TONODENO'),
('Modelled Volume', '@VOLPERSPRT'),
('Counted Volume', '@%s%s' % (RDS, hour)),
('GEH Statistic', '@GEH')
]
p.add_tools(hover)
p.line(r_x, r_y, color="#669bbc", line_width=1.25)
p.ray(x=[1, r_x[0]],
y=[1, r_y[0]],
length=0,
angle=[pi/4, ang],
color=["#29335c", "#669bbc"],
line_width=[2, 1.25])
rang = max(df['VOLPERSPRT'].max(), df[RDS + hour].max())
p.y_range = Range1d(0, rang)
p.x_range = Range1d(0, rang)
show(p)
print('GEH5 = ',geh5())
print('GEH10 = ', geh10())
print('Angle = ', ang)
slope, intercept, r_value, p_value, std_err = rsq(RDS, hour)
print('RSquare = ', float(r_value)**2)
hour = '0745'
run = '_47'
att_path = 'D:\\001_Projects\\01 - GIPTN\\07 - CBD Micro Model\\CBD Visum Model\\Paths_%s%s.att' % (hour, run)
paths = pd.read_table(att_path, sep = ";", header=11)
save_path = 'D:\\001_Projects\\01 - GIPTN\\07 - CBD Micro Model\\CBD Visum Model\\%s.csv' % ('Paths_Matrix')
paths = paths[['$PRTPATH:ORIGZONENO', 'DESTZONENO', 'VOL(AP)']]
paths = paths.groupby(['$PRTPATH:ORIGZONENO', 'DESTZONENO']).sum().reset_index()
paths = paths.pivot(index = '$PRTPATH:ORIGZONENO', columns = 'DESTZONENO', values = 'VOL(AP)')
paths.to_csv(save_path)
df[df["GEH"]>5].count()
df.to_csv('D:/AM2.csv')
```
```
from google_drive_downloader import GoogleDriveDownloader as gdd
gdd.download_file_from_google_drive(file_id='1PS3QGjJr08WMzSMxpJMnsS-BdMLVHuMw',
dest_path='training_data/flood-train-images.zip',
unzip=True)
gdd.download_file_from_google_drive(file_id='10-SPXAI9IQ_VOtSFRDIDH4Ij7sg6-cdG',
dest_path='training_data/flood-train-labels.zip',
unzip=True)
# gdd.download_file_from_google_drive(file_id='1jzBBCUndcFBVeIw_8hYxbwWDHTEL2b-K',
# dest_path='training_data/flood-training-metadata.csv',
# unzip=True)
# Note to self : Drive causes some mutation to the csv. Hence using wget for direct download.
!wget --directory-prefix=training_data "https://drivendata-prod.s3.amazonaws.com/data/81/public/flood-training-metadata.csv?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIARVBOBDCYVI2LMPSY%2F20210829%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210829T070917Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=794ee129cca2af2b9038228e2be8cbd3b62770161c39e2d0d9c04aa5a53b9d88"
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# This is where our downloaded images and metadata live locally
DATA_PATH = Path("training_data")
train_metadata = pd.read_csv(
    DATA_PATH / "flood-training-metadata.csv", parse_dates=["scene_start"], encoding="ISO-8859-1"
)
train_metadata.head()
train_metadata.shape
train_metadata.chip_id.nunique()
# Path style access for pandas -- can be installed with `pip install pandas_path`
!pip install pandas_path
from pandas_path import path
train_metadata["feature_path"] = (
str(DATA_PATH / "train_features")
/ train_metadata.image_id.path.with_suffix(".tif").path
)
train_metadata["label_path"] = (
str(DATA_PATH / "train_labels")
/ train_metadata.chip_id.path.with_suffix(".tif").path
)
!pip install rasterio
import rasterio
# Examine an arbitrary image
image_path = train_metadata.feature_path[0]
with rasterio.open(image_path) as img:
metadata = img.meta
bounds = img.bounds
data = img.read(1) # read a single band
metadata
with rasterio.open(image_path) as img:
numpy_mask = img.read(1, masked=True)
numpy_mask
import warnings
warnings.filterwarnings("ignore")
# Helper functions for visualizing Sentinel-1 images
def scale_img(matrix):
"""
Returns a scaled (H, W, D) image that is visually inspectable.
Image is linearly scaled between min_ and max_value, by channel.
Args:
matrix (np.array): (H, W, D) image to be scaled
Returns:
np.array: Image (H, W, 3) ready for visualization
"""
# Set min/max values
min_values = np.array([-23, -28, 0.2])
max_values = np.array([0, -5, 1])
# Reshape matrix
w, h, d = matrix.shape
matrix = np.reshape(matrix, [w * h, d]).astype(np.float64)
# Scale by min/max
matrix = (matrix - min_values[None, :]) / (
max_values[None, :] - min_values[None, :]
)
matrix = np.reshape(matrix, [w, h, d])
# Limit values to 0/1 interval
return matrix.clip(0, 1)
def create_false_color_composite(path_vv, path_vh):
"""
Returns a S1 false color composite for visualization.
Args:
path_vv (str): path to the VV band
path_vh (str): path to the VH band
Returns:
np.array: image (H, W, 3) ready for visualization
"""
# Read VV/VH bands
with rasterio.open(path_vv) as vv:
vv_img = vv.read(1)
with rasterio.open(path_vh) as vh:
vh_img = vh.read(1)
# Stack arrays along the last dimension
s1_img = np.stack((vv_img, vh_img), axis=-1)
# Create false color composite
img = np.zeros((512, 512, 3), dtype=np.float32)
img[:, :, :2] = s1_img.copy()
img[:, :, 2] = s1_img[:, :, 0] / s1_img[:, :, 1]
return scale_img(img)
def display_random_chip(random_state):
"""
Plots a 3-channel representation of VV/VH polarizations as a single chip (image 1).
Overlays a chip's corresponding water label (image 2).
Args:
random_state (int): random seed used to select a chip
Returns:
plot.show(): chip and labels plotted with pyplot
"""
f, ax = plt.subplots(1, 2, figsize=(9, 9))
# Select a random chip from train_metadata
random_chip = train_metadata.chip_id.sample(random_state=random_state).values[0]
chip_df = train_metadata[train_metadata.chip_id == random_chip]
# Extract paths to image files
vv_path = chip_df[chip_df.polarization == "vv"].feature_path.values[0]
vh_path = chip_df[chip_df.polarization == "vh"].feature_path.values[0]
label_path = chip_df.label_path.values[0]
# Create false color composite
s1_img = create_false_color_composite(vv_path, vh_path)
# Visualize features
ax[0].imshow(s1_img)
ax[0].set_title("S1 Chip", fontsize=14)
# Load water mask
with rasterio.open(label_path) as lp:
lp_img = lp.read(1)
# Mask missing data and 0s for visualization
label = np.ma.masked_where((lp_img == 0) | (lp_img == 255), lp_img)
# Visualize water label
ax[1].imshow(s1_img)
ax[1].imshow(label, cmap="cool", alpha=1)
ax[1].set_title("S1 Chip with Water Label", fontsize=14)
plt.tight_layout(pad=5)
plt.show()
display_random_chip(7)
import random
random.seed(9) # set a seed for reproducibility
# Sample 3 random floods for validation set
flood_ids = train_metadata.flood_id.unique().tolist()
val_flood_ids = random.sample(flood_ids, 3)
val_flood_ids
val = train_metadata[train_metadata.flood_id.isin(val_flood_ids)]
train = train_metadata[~train_metadata.flood_id.isin(val_flood_ids)]
# Helper function for pivoting out paths by chip
def get_paths_by_chip(image_level_df):
"""
Returns a chip-level dataframe with pivoted columns
for vv_path and vh_path.
Args:
image_level_df (pd.DataFrame): image-level dataframe
Returns:
chip_level_df (pd.DataFrame): chip-level dataframe
"""
paths = []
for chip, group in image_level_df.groupby("chip_id"):
vv_path = group[group.polarization == "vv"]["feature_path"].values[0]
vh_path = group[group.polarization == "vh"]["feature_path"].values[0]
paths.append([chip, vv_path, vh_path])
return pd.DataFrame(paths, columns=["chip_id", "vv_path", "vh_path"])
# Separate features from labels
val_x = get_paths_by_chip(val)
val_y = val[["chip_id", "label_path"]].drop_duplicates().reset_index(drop=True)
train_x = get_paths_by_chip(train)
train_y = train[["chip_id", "label_path"]].drop_duplicates().reset_index(drop=True)
# Confirm approx. 1/3 of images are in the validation set
len(val_x) / (len(val_x) + len(train_x)) * 100
import torch
class FloodDataset(torch.utils.data.Dataset):
"""Reads in images, transforms pixel values, and serves a
dictionary containing chip ids, image tensors, and
label masks (where available).
"""
def __init__(self, x_paths, y_paths=None, transforms=None):
self.data = x_paths
self.label = y_paths
self.transforms = transforms
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
# Loads a 2-channel image from a chip-level dataframe
img = self.data.loc[idx]
with rasterio.open(img.vv_path) as vv:
vv_path = vv.read(1)
with rasterio.open(img.vh_path) as vh:
vh_path = vh.read(1)
x_arr = np.stack([vv_path, vh_path], axis=-1)
# Min-max normalization
min_norm = -77
max_norm = 26
x_arr = np.clip(x_arr, min_norm, max_norm)
x_arr = (x_arr - min_norm) / (max_norm - min_norm)
# Apply data augmentations, if provided
if self.transforms:
x_arr = self.transforms(image=x_arr)["image"]
x_arr = np.transpose(x_arr, [2, 0, 1])
# Prepare sample dictionary
sample = {"chip_id": img.chip_id, "chip": x_arr}
# Load label if available - training only
if self.label is not None:
label_path = self.label.loc[idx].label_path
with rasterio.open(label_path) as lp:
y_arr = lp.read(1)
# Apply same data augmentations to label
if self.transforms:
y_arr = self.transforms(image=y_arr)["image"]
sample["label"] = y_arr
return sample
import albumentations
# These transformations will be passed to our model class
training_transformations = albumentations.Compose(
[
albumentations.RandomCrop(256, 256),
albumentations.RandomRotate90(),
albumentations.HorizontalFlip(),
albumentations.VerticalFlip(),
]
)
class XEDiceLoss(torch.nn.Module):
"""
Computes (0.5 * CrossEntropyLoss) + (0.5 * DiceLoss).
"""
def __init__(self):
super().__init__()
self.xe = torch.nn.CrossEntropyLoss(reduction="none")
def forward(self, pred, true):
valid_pixel_mask = true.ne(255) # valid pixel mask
# Cross-entropy loss
temp_true = torch.where((true == 255), 0, true) # cast 255 to 0 temporarily
xe_loss = self.xe(pred, temp_true)
xe_loss = xe_loss.masked_select(valid_pixel_mask).mean()
# Dice loss
pred = torch.softmax(pred, dim=1)[:, 1]
pred = pred.masked_select(valid_pixel_mask)
true = true.masked_select(valid_pixel_mask)
dice_loss = 1 - (2.0 * torch.sum(pred * true)) / (torch.sum(pred + true) + 1e-7)
return (0.5 * xe_loss) + (0.5 * dice_loss)
def intersection_and_union(pred, true):
"""
Calculates intersection and union for a batch of images.
Args:
pred (torch.Tensor): a tensor of predictions
true (torch.Tensor): a tensor of labels
Returns:
intersection (int): total intersection of pixels
union (int): total union of pixels
"""
valid_pixel_mask = true.ne(255) # valid pixel mask
true = true.masked_select(valid_pixel_mask).to("cpu")
pred = pred.masked_select(valid_pixel_mask).to("cpu")
# Intersection and union totals
intersection = np.logical_and(true, pred)
union = np.logical_or(true, pred)
return intersection.sum(), union.sum()
!pip install pytorch_lightning
!pip install segmentation_models_pytorch
import pytorch_lightning as pl
import segmentation_models_pytorch as smp
class FloodModel(pl.LightningModule):
def __init__(self, hparams):
super(FloodModel, self).__init__()
self.hparams.update(hparams)
self.save_hyperparameters()
self.backbone = self.hparams.get("backbone", "resnet34")
self.weights = self.hparams.get("weights", "imagenet")
self.learning_rate = self.hparams.get("lr", 1e-3)
self.max_epochs = self.hparams.get("max_epochs", 1000)
self.min_epochs = self.hparams.get("min_epochs", 6)
self.patience = self.hparams.get("patience", 4)
self.num_workers = self.hparams.get("num_workers", 2)
self.batch_size = self.hparams.get("batch_size", 32)
self.x_train = self.hparams.get("x_train")
self.y_train = self.hparams.get("y_train")
self.x_val = self.hparams.get("x_val")
self.y_val = self.hparams.get("y_val")
self.output_path = self.hparams.get("output_path", "model-outputs")
self.gpu = self.hparams.get("gpu", False)
self.transform = training_transformations
# Where final model will be saved
self.output_path = Path.cwd() / self.output_path
self.output_path.mkdir(exist_ok=True)
# Track validation IOU globally (reset each epoch)
self.intersection = 0
self.union = 0
# Instantiate datasets, model, and trainer params
self.train_dataset = FloodDataset(
self.x_train, self.y_train, transforms=self.transform
)
self.val_dataset = FloodDataset(self.x_val, self.y_val, transforms=None)
self.model = self._prepare_model()
self.trainer_params = self._get_trainer_params()
## Required LightningModule methods ##
def forward(self, image):
# Forward pass
return self.model(image)
def training_step(self, batch, batch_idx):
# Switch on training mode
self.model.train()
torch.set_grad_enabled(True)
# Load images and labels
x = batch["chip"]
y = batch["label"].long()
if self.gpu:
x, y = x.cuda(non_blocking=True), y.cuda(non_blocking=True)
# Forward pass
preds = self.forward(x)
# Calculate training loss
criterion = XEDiceLoss()
xe_dice_loss = criterion(preds, y)
# Log batch xe_dice_loss
self.log(
"xe_dice_loss",
xe_dice_loss,
on_step=True,
on_epoch=True,
prog_bar=True,
logger=True,
)
return xe_dice_loss
def validation_step(self, batch, batch_idx):
# Switch on validation mode
self.model.eval()
torch.set_grad_enabled(False)
# Load images and labels
x = batch["chip"]
y = batch["label"].long()
if self.gpu:
x, y = x.cuda(non_blocking=True), y.cuda(non_blocking=True)
# Forward pass & softmax
preds = self.forward(x)
preds = torch.softmax(preds, dim=1)[:, 1]
preds = (preds > 0.5) * 1
# Calculate validation IOU (global)
intersection, union = intersection_and_union(preds, y)
self.intersection += intersection
self.union += union
# Log batch IOU
batch_iou = intersection / union
self.log(
"iou", batch_iou, on_step=True, on_epoch=True, prog_bar=True, logger=True
)
return batch_iou
def train_dataloader(self):
# DataLoader class for training
return torch.utils.data.DataLoader(
self.train_dataset,
batch_size=self.batch_size,
num_workers=self.num_workers,
shuffle=True,
pin_memory=True,
)
def val_dataloader(self):
# DataLoader class for validation
return torch.utils.data.DataLoader(
self.val_dataset,
batch_size=self.batch_size,
num_workers=0,
shuffle=False,
pin_memory=True,
)
def configure_optimizers(self):
# Define optimizer
optimizer = torch.optim.Adam(self.model.parameters(), lr=self.learning_rate)
# Define scheduler
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, mode="max", factor=0.5, patience=self.patience
)
scheduler = {
"scheduler": scheduler,
"interval": "epoch",
"monitor": "val_loss",
} # logged value to monitor
return [optimizer], [scheduler]
def validation_epoch_end(self, outputs):
# Calculate IOU at end of epoch
epoch_iou = self.intersection / self.union
# Reset metrics before next epoch
self.intersection = 0
self.union = 0
# Log epoch validation IOU
self.log("val_loss", epoch_iou, on_epoch=True, prog_bar=True, logger=True)
return epoch_iou
## Convenience Methods ##
def _prepare_model(self):
unet_model = smp.Unet(
encoder_name=self.backbone,
encoder_weights=self.weights,
in_channels=2,
classes=2,
)
if self.gpu:
unet_model.cuda()
return unet_model
def _get_trainer_params(self):
# Define callback behavior
checkpoint_callback = pl.callbacks.ModelCheckpoint(
dirpath=self.output_path,
monitor="val_loss",
mode="max",
verbose=True,
)
early_stop_callback = pl.callbacks.early_stopping.EarlyStopping(
monitor="val_loss",
patience=(self.patience * 3),
mode="max",
verbose=True,
)
# Specify where TensorBoard logs will be saved
self.log_path = Path.cwd() / self.hparams.get("log_path", "tensorboard-logs")
self.log_path.mkdir(exist_ok=True)
logger = pl.loggers.TensorBoardLogger(self.log_path, name="benchmark-model")
trainer_params = {
"callbacks": [checkpoint_callback, early_stop_callback],
"max_epochs": self.max_epochs,
"min_epochs": self.min_epochs,
"default_root_dir": self.output_path,
"logger": logger,
"gpus": None if not self.gpu else 1,
"fast_dev_run": self.hparams.get("fast_dev_run", False),
"num_sanity_val_steps": self.hparams.get("val_sanity_checks", 0),
}
return trainer_params
def fit(self):
# Set up and fit Trainer object
self.trainer = pl.Trainer(**self.trainer_params)
self.trainer.fit(self)
hparams = {
# Required hparams
"x_train": train_x,
"x_val": val_x,
"y_train": train_y,
"y_val": val_y,
# Optional hparams
"backbone": "resnet34",
"weights": "imagenet",
"lr": 1e-3,
"min_epochs": 6,
"max_epochs": 1000,
"patience": 4,
"batch_size": 32,
"num_workers": 0,
"val_sanity_checks": 0,
"fast_dev_run": False,
"output_path": "model-outputs",
"log_path": "tensorboard_logs",
"gpu": torch.cuda.is_available(),
}
flood_model = FloodModel(hparams=hparams)
flood_model.fit()
flood_model.trainer_params["callbacks"][0].best_model_score
submission_path = Path("benchmark-pytorch")
submission_path.mkdir(exist_ok=True)
submission_assets_path = submission_path / "assets"
submission_assets_path.mkdir(exist_ok=True)
weight_path = submission_assets_path / "flood_model.pt"
torch.save(flood_model.state_dict(), weight_path)
%%file benchmark-pytorch/flood_model.py
import numpy as np
import pytorch_lightning as pl
import rasterio
import segmentation_models_pytorch as smp
import torch
class FloodModel(pl.LightningModule):
def __init__(self):
super().__init__()
self.model = smp.Unet(
encoder_name="resnet34",
encoder_weights=None,
in_channels=2,
classes=2,
)
def forward(self, image):
# Forward pass
return self.model(image)
def predict(self, vv_path, vh_path):
# Switch on evaluation mode
self.model.eval()
torch.set_grad_enabled(False)
# Create a 2-channel image
with rasterio.open(vv_path) as vv:
vv_img = vv.read(1)
with rasterio.open(vh_path) as vh:
vh_img = vh.read(1)
x_arr = np.stack([vv_img, vh_img], axis=-1)
# Min-max normalization
min_norm = -77
max_norm = 26
x_arr = np.clip(x_arr, min_norm, max_norm)
x_arr = (x_arr - min_norm) / (max_norm - min_norm)
# Transpose
x_arr = np.transpose(x_arr, [2, 0, 1])
x_arr = np.expand_dims(x_arr, axis=0)
# Perform inference
preds = self.forward(torch.from_numpy(x_arr))
preds = torch.softmax(preds, dim=1)[:, 1]
preds = (preds > 0.5) * 1
return preds.detach().numpy().squeeze().squeeze()
%%file benchmark-pytorch/main.py
import os
from pathlib import Path
from loguru import logger
import numpy as np
from tifffile import imwrite
from tqdm import tqdm
import torch
import typer
from flood_model import FloodModel
ROOT_DIRECTORY = Path("/codeexecution")
SUBMISSION_DIRECTORY = ROOT_DIRECTORY / "submission"
ASSETS_DIRECTORY = ROOT_DIRECTORY / "assets"
DATA_DIRECTORY = ROOT_DIRECTORY / "data"
INPUT_IMAGES_DIRECTORY = DATA_DIRECTORY / "test_features"
# Make sure the smp loader can find our torch assets because we don't have internet!
os.environ["TORCH_HOME"] = str(ASSETS_DIRECTORY / "torch")
def make_prediction(chip_id, model):
"""
Given a chip_id, read in the vv/vh bands and predict a water mask.
Args:
chip_id (str): test chip id
Returns:
output_prediction (arr): prediction as a numpy array
"""
logger.info("Starting inference.")
try:
vv_path = INPUT_IMAGES_DIRECTORY / f"{chip_id}_vv.tif"
vh_path = INPUT_IMAGES_DIRECTORY / f"{chip_id}_vh.tif"
output_prediction = model.predict(vv_path, vh_path)
except Exception as e:
logger.error(f"No bands found for {chip_id}. {e}")
raise
return output_prediction
def get_expected_chip_ids():
"""
Use the test features directory to see which images are expected.
"""
paths = INPUT_IMAGES_DIRECTORY.glob("*.tif")
# Return one chip id per two bands (VV/VH)
ids = list(sorted(set(path.stem.split("_")[0] for path in paths)))
return ids
def main():
"""
For each set of two input bands, generate an output file
using the `make_predictions` function.
"""
logger.info("Loading model")
# Explicitly set where we expect smp to load the saved resnet from just to be sure
torch.hub.set_dir(ASSETS_DIRECTORY / "torch/hub")
model = FloodModel()
model.load_state_dict(torch.load(ASSETS_DIRECTORY / "flood_model.pt"))
logger.info("Finding chip IDs")
chip_ids = get_expected_chip_ids()
if not chip_ids:
typer.echo("No input images found!")
raise typer.Exit(code=1)
logger.info(f"Found {len(chip_ids)} test chip_ids. Generating predictions.")
for chip_id in tqdm(chip_ids, miniters=25):
output_path = SUBMISSION_DIRECTORY / f"{chip_id}.tif"
output_data = make_prediction(chip_id, model).astype(np.uint8)
imwrite(output_path, output_data, dtype=np.uint8)
logger.success("Inference complete.")
if __name__ == "__main__":
typer.run(main)
!cp -R ~/.cache/torch benchmark-pytorch/assets/
!tree benchmark-pytorch/
# Remember to avoid including the inference dir itself
!cd benchmark-pytorch && zip -r ../submission.zip *
!du -h submission.zip
```
# 11 ODE integrators: Verlet
```
from importlib import reload
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.style.use('ggplot')
import integrators2
reload(integrators2)
```
## Velocity Verlet
Use expansions *forward* and *backward* (!) in time (Hamilton's equations, i.e. Newton's without friction, are time symmetric):
\begin{align}
r(t + \Delta t) &\approx r(t) + \Delta t\, v(t) + \frac{1}{2m} \Delta t^2 F(t)\\
r(t) &\approx r(t + \Delta t) - \Delta t\, v(t + \Delta t) + \frac{1}{2m} \Delta t^2 F(t+\Delta t)
\end{align}
Solve for $v$:
\begin{align}
v(t+\Delta t) &\approx v(t) + \frac{1}{2m} \Delta t \big(F(t) + F(t+\Delta t)\big)
\end{align}
The complete **Velocity Verlet** integrator consists of the first and the third equation.
In practice, the update is split into three steps (calculating the velocity at the half time step):
\begin{align}
v(t+\frac{\Delta t}{2}) &= v(t) + \frac{\Delta t}{2} \frac{F(t)}{m} \\
r(t + \Delta t) &= r(t) + \Delta t\, v(t+\frac{\Delta t}{2})\\
v(t+\Delta t) &= v(t+\frac{\Delta t}{2}) + \frac{\Delta t}{2} \frac{F(t+\Delta t)}{m}
\end{align}
When writing production-level code, remember to re-use $F(t+\Delta t)$ as the "new" starting $F(t)$ in the next iteration (and don't recompute it).
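The three-step scheme, including the force re-use, can be sketched in a few lines. The following 1D harmonic-oscillator example is illustrative only and is not part of `integrators2.py`:

```
import numpy as np

def velocity_verlet_1d(force, x0, v0, m=1.0, dt=0.01, n_steps=1000):
    """Integrate m*x'' = force(x) with velocity Verlet, re-using F(t+dt)."""
    x = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    x[0], v[0] = x0, v0
    F = force(x[0])                        # F(t) for the very first step
    for i in range(n_steps):
        vhalf = v[i] + 0.5*dt * F/m        # v(t + dt/2)
        x[i+1] = x[i] + dt * vhalf         # r(t + dt)
        F_new = force(x[i+1])              # F(t + dt), computed once ...
        v[i+1] = vhalf + 0.5*dt * F_new/m  # v(t + dt)
        F = F_new                          # ... and re-used as the next F(t)
    return x, v

# harmonic oscillator F = -x with m = k = 1: total energy should barely drift
x, v = velocity_verlet_1d(lambda x: -x, x0=1.0, v0=0.0)
energy = 0.5*v**2 + 0.5*x**2
drift = abs(energy[-1] - energy[0]) / energy[0]
print(drift)
```

The relative energy error stays at the $O(\Delta t^2)$ level instead of growing, which is exactly the long-term behaviour of Verlet integrators examined in this lesson.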
### Integration of planetary motion
Gravitational potential energy:
$$
U(r) = -\frac{GMm}{r}
$$
with $r$ the distance between the two masses $m$ and $M$.
#### Central forces
$$
U(\mathbf{r}) = f(r) = f(\sqrt{\mathbf{r}\cdot\mathbf{r}})\\
\mathbf{F} = -\nabla U(\mathbf{r}) = -\frac{\partial f(r)}{\partial r} \, \frac{\mathbf{r}}{r}
$$
#### Force of gravity
\begin{align}
\mathbf{F} &= -\frac{G m M}{r^2} \hat{\mathbf{r}}\\
\hat{\mathbf{r}} &= \frac{1}{\sqrt{x^2 + y^2}} \left(\begin{array}{c} x \\ y \end{array}\right)
\end{align}
#### Integrate simple planetary orbits
Set $M = 1$ (one solar mass) and $m = 3.003467\times 10^{-6}$ (one Earth mass in solar masses) and try initial conditions
\begin{alignat}{1}
x(0) &= 1,\quad y(0) &= 0\\
v_x(0) &= 0,\quad v_y(0) &= 6.179
\end{alignat}
Note that we use the following units:
* length in astronomical units (1 AU = 149,597,870,700 m )
* mass in solar masses ($1\, M_\odot = 1.988435\times 10^{30}$ kg)
* time in years (1 year = 365.25 days, 1 day = 86400 seconds)
In these units, the gravitational constant is $G = 4\pi^2$ (in SI units, $G = 6.674\times 10^{-11}\, \text{N}\cdot\text{m}^2\cdot\text{kg}^{-2}$).
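As a quick sanity check (my own back-of-the-envelope calculation, not part of the lecture), converting $G$ from SI units to AU, solar masses, and years does reproduce $4\pi^2$:

```
import math

G_SI = 6.674e-11           # m^3 kg^-1 s^-2
AU = 1.495978707e11        # m
M_SUN = 1.988435e30        # kg
YEAR = 365.25 * 86400.0    # s

# G expressed in AU^3 Msun^-1 yr^-2
G_units = G_SI * M_SUN * YEAR**2 / AU**3
print(G_units, 4*math.pi**2)    # both approximately 39.48
```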
```
M_earth = 3.003467e-6
M_sun = 1.0
G_grav = 4*np.pi**2
def F_gravity(r, m=M_earth, M=M_sun):
rr = np.sum(r*r)
rhat = r/np.sqrt(rr)
return -G_grav*m*M/rr * rhat
def U_gravity(r, m=M_earth, M=M_sun):
return -G_grav*m*M/np.sqrt(np.sum(r*r))
```
Let's now integrate the equations of motions under gravity with the **Velocity Verlet** algorithm:
```
def planet_orbit(r0=np.array([1, 0]), v0=np.array([0, 6.179]), mass=M_earth, dt=0.001, t_max=1):
"""2D planetary motion with velocity verlet"""
dim = len(r0)
assert len(v0) == dim
nsteps = int(t_max/dt)
r = np.zeros((nsteps, dim))
v = np.zeros_like(r)
r[0, :] = r0
v[0, :] = v0
# start force evaluation for first step
Ft = F_gravity(r[0], m=mass)
for i in range(nsteps-1):
vhalf = v[i] + 0.5*dt * Ft/mass
r[i+1, :] = r[i] + dt * vhalf
Ftdt = F_gravity(r[i+1], m=mass)
v[i+1] = vhalf + 0.5*dt * Ftdt/mass
# new force becomes old force
Ft = Ftdt
return r, v
r, v = planet_orbit(dt=0.1, t_max=10)
rx, ry = r.T
ax = plt.subplot(1,1,1)
ax.set_aspect(1)
ax.plot(rx, ry)
```
These are not closed orbits (as we would expect from a $1/r$ potential), but the result gets much better when the step size is reduced to 0.01 (just rerun the code with `dt=0.01` and replot):
```
r, v = planet_orbit(dt=0.01, t_max=10)
rx, ry = r.T
ax = plt.subplot(1,1,1)
ax.set_aspect(1)
ax.plot(rx, ry)
```
## Velocity Verlet vs RK4: Energy conservation
Assess the stability of `rk4` and `Velocity Verlet` by checking energy conservation over longer simulation times.
The file `integrators2.py` contains almost all code that you will need.
### Implement gravity force in `integrators2.py`
Add `F_gravity` to the `integrators2.py` module. Use the new function `unitvector()`.
### Planetary orbits with `integrators2.py`
```
r0 = np.array([1, 0])
v0 = np.array([0, 6.179])
import integrators2
from importlib import reload
reload(integrators2)
```
Use the new function `integrators2.integrate_newton_2d()` to integrate 2d coordinates.
#### RK4
```
trk4, yrk4 = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=30, mass=M_earth,
h=0.01,
force=integrators2.F_gravity,
integrator=integrators2.rk4)
rxrk4, ryrk4 = yrk4[:, 0, 0], yrk4[:, 0, 1]
ax = plt.subplot(1,1,1)
ax.set_aspect(1)
ax.plot(rxrk4, ryrk4)
integrators2.analyze_energies(trk4, yrk4, integrators2.U_gravity, m=M_earth)
print("Energy conservation RK4 for {} steps: {}".format(
len(trk4),
integrators2.energy_conservation(trk4, yrk4, integrators2.U_gravity, m=M_earth)))
```
#### Euler
```
te, ye = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=30, mass=M_earth,
h=0.01,
force=F_gravity,
integrator=integrators2.euler)
rex, rey = ye[:, 0].T
ax = plt.subplot(1,1,1)
ax.plot(rx, ry, label="RK4")
ax.plot(rex, rey, label="Euler")
ax.legend(loc="best")
ax.set_aspect(1)
integrators2.analyze_energies(te, ye, integrators2.U_gravity, m=M_earth)
print("Energy conservation Euler for {} steps: {}".format(
len(te),
integrators2.energy_conservation(te, ye, integrators2.U_gravity, m=M_earth)))
```
*Euler* is just awful... but we knew that already.
#### Velocity Verlet
```
tv, yv = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=30, mass=M_earth,
h=0.01,
force=F_gravity,
integrator=integrators2.velocity_verlet)
rxv, ryv = yv[:, 0].T
ax = plt.subplot(1,1,1)
ax.set_aspect(1)
ax.plot(rxv, ryv, label="velocity Verlet")
ax.plot(rxrk4, ryrk4, label="RK4")
ax.legend(loc="best")
integrators2.analyze_energies(tv, yv, integrators2.U_gravity, m=M_earth)
print("Energy conservation Velocity Verlet for {} steps: {}".format(
len(tv),
integrators2.energy_conservation(tv, yv, integrators2.U_gravity, m=M_earth)))
```
*Velocity Verlet* only has moderate accuracy, especially when compared to *RK4*.
However, let's look at energy conservation over longer times:
#### Longer time scale stability
Run RK4 and Velocity Verlet for longer.
```
tv2, yv2 = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=1000, mass=M_earth,
h=0.01,
force=F_gravity,
integrator=integrators2.velocity_verlet)
print("Energy conservation Velocity Verlet for {} steps: {}".format(
len(tv2),
integrators2.energy_conservation(tv2, yv2, integrators2.U_gravity, m=M_earth)))
t4, y4 = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=1000, mass=M_earth,
h=0.01,
force=F_gravity,
integrator=integrators2.rk4)
print("Energy conservation RK4 for {} steps: {}".format(
len(t4),
integrators2.energy_conservation(t4, y4, integrators2.U_gravity, m=M_earth)))
```
Velocity Verlet shows **good long-term stability** but relatively low short-term accuracy. RK4, on the other hand, is very accurate over short times, but its energy error accumulates steadily over long runs.
* Use a *Verlet* integrator when energy conservation is important and long term stability is required (e.g. molecular dynamics simulations). It is generally recommended to use an integrator that conserves some of the inherent symmetries and structures of the governing physical equations (e.g. for Hamilton's equations of motion, time reversal symmetry and the symplectic and area-preserving structure).
* Use *RK4* for high short-term accuracy (though it may be difficult to know in advance what counts as "short term") or when solving general differential equations.
# lhorizon example 1: where was GALEX in May 2003?
Imagine that you are examining a portion of the observational data record of the GALEX space telescope from May 2003 and you realize that there is an anomaly that might be explicable by a barycentric time offset. A SPK SPICE kernel for GALEX may exist somewhere, but you do not know where. Horizons contains detailed information about the positions of many orbital and ground instrument platforms, and `lhorizon` can help you quickly figure out where GALEX was during this period.
This is a relatively short time period at relatively coarse resolution. If you realize that you need higher resolution or if you'd like to do larger queries -- ones with more than about 70K rows -- take a look at the bulk query functions in mars_sun_angle.ipynb.
```
# run these imports if you'd like the code to function
from lhorizon import LHorizon
from lhorizon.lhorizon_utils import utc_to_tdb
# horizons code for the SSB
solar_system_barycenter = '500@0'
coordinate_origin = solar_system_barycenter
# horizons knows the name "GALEX". Its Horizons numeric id, -127783, could also be used.
galex_horizons_id = "GALEX"
target_body_id = galex_horizons_id
# Time units are not consistent across different types of Horizons queries. in particular,
# times for vectors queries are in TDB, which in this case is about 64 seconds later than UTC.
# lhorizon.lhorizon_utils provides a function to convert from UTC to TDB. it works for dates later
# than 1972. for dates earlier than 1972, use spiceypy or astropy.time.
start = '2003-05-01T00:00:00'
stop = '2003-05-15T01:00:00'
step = "5m"
start_tdb = utc_to_tdb(start).isoformat()
stop_tdb = utc_to_tdb(stop).isoformat()
# make a LHorizon with these values.
galex_icrf = LHorizon(
galex_horizons_id,
coordinate_origin,
epochs = {
'start': start_tdb,
'stop': stop_tdb,
'step': step
},
query_type='VECTORS'
)
# fetch these data and concatenate them into a pandas dataframe.
# the LHorizon.table() method grabs a selection of columns from
# the Horizons response, regularizes units to meters and
# seconds, and makes some column names clearer or more tractable.
# if you want the full, unvarnished collection of values returned by Horizons
# with no modifications other than whitespace removal,
# use the LHorizon.dataframe() method instead.
vector_table = galex_icrf.table()
# note that the coordinate system in this particular query is ICRF
# of the most conventional kind -- measured from the solar system barycenter,
# geometric states uncorrected for light time or stellar aberration.
# columns are:
# time_tdb: time -- still in the TDB scale
# x, y, z: components of position vector in m
# vx, vy, vz: components of velocity vector in m/s
# dist: distance in m
# velocity: velocity in m/s
vector_table
# since this is a pandas dataframe, it can be easily manipulated in Python. If you'd rather work with it
# in some other way, it can also be easily written to CSV.
vector_table.to_csv("vector_table " + start + " to " + stop + ".csv", index=False)
```
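The "about 64 seconds" mentioned in the comments above can be checked by hand: for dates after 1972, TDB differs from TT by under 2 ms, TT = TAI + 32.184 s by definition, and TAI was ahead of UTC by 32 cumulative leap seconds in May 2003.

```
# back-of-the-envelope TDB - UTC for May 2003
LEAP_SECONDS = 32        # cumulative TAI - UTC as of 2003-05
TT_MINUS_TAI = 32.184    # fixed by definition
tdb_minus_utc = LEAP_SECONDS + TT_MINUS_TAI
print(tdb_minus_utc)     # 64.184 -- "about 64 seconds"
```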
```
#### This notebook is for analysis performed upon the tweets.
# Note added 2018-02-16: Seems to have been a working set of tools here which may be
# adapted to our use.... Not sure
%cd twitteranalysis
from environment import *
#General tools
import sys
import locale
import json
import time
from random import shuffle
import itertools #For set operations
# from urllib2 import URLError
from datetime import time as dt_time  # aliased so it does not shadow the time module imported above
from datetime import date
import string
import shelve
#Data storage
import redis
import couchdb
from couchdb.design import ViewDefinition
#Display
from CustomDisplayTools import Table
from IPython.display import HTML
%config InlineBackend.figure_format = 'svg'
#Network x
import networkx as nx
from networkx.algorithms import bipartite as bi
# Pandas
from pandas import DataFrame, Series
import pandas as pd
pd.options.display.max_rows = 999 #let pandas dataframe listings go long
#Plotting
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
#Twitter specific
%cd twittermining
import twitter
# import twitter_text # easy_install twitter-text-py
# from TwitterLogin import login
# # from TwitterUtilities import makeTwitterRequest
# # t = login()
# import TwitterSQLService
# #%%bash couchdb
# import TwitterServiceClasses as TSC
# import TwitterSearcher as TS
# import TwitterDataProcessors as TDP
# #Data manipulation and cleaning
# from TwitterDataProcessors import Tags
# from TwitterDataProcessors import Extractors
# from TwitterDataProcessors import TagsFromSearch
%cd twitteranalysis
#SqlAlchemy
import sqlalchemy
from sqlalchemy import create_engine
import sphinxapi
from sqlalchemy import create_engine
# Create your connection.
# engine = create_engine('mysql://root:''@localhost:3306/twitter_data')
#engine = create_engine('mysql://testuser:testpass@localhost/twitter_data')
```
# Load raw data
```
#Load tweets from db
%cd twitteranalysis
%run TweetDAOs.py
loader = TweetTextGetter()
tweets = loader.load_tweets()
len(tweets)
```
#### From couchdb
```
##From couchdb
from TwitterServiceClasses import CouchService
tweetids = []
hashtags = []
tweet_tuples = []
errors = []
for tweet in CouchService('compiled').db.query("""function(doc){emit (doc.id, doc.entities.hashtags);}"""):
tags = []
if tweet.value != None:
for tag in tweet.value:
try:
#Make lowercase and convert to string
tag_cleaned = str(tag['text'])
tag_cleaned = tag_cleaned.lower()
#Make tuple of tweetid and tag
tweet_tuples.append((tweet.key, tag_cleaned))
#Keep track of hashtags and tweetids
hashtags.append(tag_cleaned)
tweetids.append(tweet.key)
except:
errors.append(tweet.key)
#Make the list of tweet ids unique
tweetids = list(set(tweetids))
#Make the list of hashtags unique
hashtags = list(set(hashtags))
print '%i errors in processing tweets' % len(errors)
print '%i tweets successfully processed' % len(tweetids)
print '%i hashtags identified' % len(hashtags)
####From mysql
##Loaded from mysql
TEST = False; LOCAL = True
class TweetTuples(TwitterSQLService.SQLService):
"""
Fetches all tuples for graph making from sql database
"""
def __init__(self, test=False, local=True):
TwitterSQLService.SQLService.__init__(self, test, local)
self.query = """SELECT tweetID, h.hashtag FROM tweetsXtags t INNER JOIN hashtags h ON t.tagID = h.tagID"""
self.val = []
self.returnAll()
self.tweet_tuples = []
self.hashtags = []
self.tweetids = []
for t in list(self.results):
#process hashtag
tag_cleaned = str(t['hashtag'])
tag_cleaned = tag_cleaned.lower()
self.hashtags.append(tag_cleaned)
#process tweetids
tweetid = t['tweetID']
self.tweetids.append(tweetid)
#process tuple
tweet_tuple = (tweetid, tag_cleaned)
self.tweet_tuples.append(tweet_tuple)
self.hashtags = list(set(self.hashtags)) #Make list of unique hashtags
self.tweetids = list(set(self.tweetids)) #Make list of unique ids
print "%s tuples loaded \n %s unique hashtags \n %s unique ids" % (len(self.tweet_tuples), len(self.hashtags), len(self.tweetids))
tt = TweetTuples(test=TEST, local=LOCAL)
#Make the graph
g = nx.Graph()
g.add_edges_from(tt.tweet_tuples)
print nx.info(g)
```
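The weighted hashtag projection built from this bipartite graph later on (`bi.weighted_projected_graph`) links two hashtags with weight equal to the number of tweets in which they co-occur. The idea can be sketched in plain Python; the toy tuples below are invented for illustration:

```
from collections import Counter, defaultdict
from itertools import combinations

tweet_tuples = [(1, 'fibro'), (1, 'migraine'),
                (2, 'fibro'), (2, 'migraine'), (2, 'pain'),
                (3, 'pain')]

# group hashtags by tweet, then count co-occurring pairs
tags_by_tweet = defaultdict(set)
for tweetid, tag in tweet_tuples:
    tags_by_tweet[tweetid].add(tag)

edge_weights = Counter()
for tags in tags_by_tweet.values():
    for a, b in combinations(sorted(tags), 2):
        edge_weights[(a, b)] += 1

# ('fibro', 'migraine') co-occurs in tweets 1 and 2, so its weight is 2
print(edge_weights[('fibro', 'migraine')])
```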
# Tweet text analysis
```
#Holds words to ignore etc
%run ConstantsAndUtilities.py
ignore = Ignore()
merge = Merge()
#Timing etc tools (e.g., @timefn)
%run OptimizationTools.py
#Make wordbag
start_time = time.time()
%run TextTools.py
bagmaker = TweetTextWordBagMaker()
bagmaker.add_to_ignorelist(ignore.get_list())
bagmaker.add_to_ignorelist(nltk.corpus.stopwords.words('english'))
bagmaker.new_process(tweets)
print "Execution time:", time.time()-start_time
```
# Network analysis
```
#s = shelve.open('test_data/wordbag')
#s['wordbag'] = bagmaker.masterbag
#s.close()
#s = shelve.open('test_data/tweet_tuples')
#s['tweet_tuples'] = bagmaker.tweet_tuples
#s.close()
#s = shelve.open('test_data/word_freqs')
#s['word_freqs'] = word_frequencies
#s.close()
%run TextStats.py
wf = WordFreq(bagmaker.masterbag)
%timeit word_frequencies = wf.compute_frequency_of_words_in_bag()
```
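`WordFreq` lives in `TextStats.py` and is not shown here; for a flat word bag, its frequency computation presumably amounts to a `collections.Counter`:

```
from collections import Counter

# a toy word bag standing in for bagmaker.masterbag
masterbag = ['pain', 'fibro', 'pain', 'migraine', 'pain', 'fibro']

word_frequencies = Counter(masterbag)
print(word_frequencies.most_common(2))   # [('pain', 3), ('fibro', 2)]
```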
#Load graph files
```
#Full graph
#fullgraph = nx.read_gpickle('all_tweets_pickle')
#Bigraph
tweetnet = nx.read_gpickle('twitter_graph_data/bigraph_full_pickle')
```
## Data cleaning tools
### Prune graph
Prune out tags which are known to be irrelevant and the result of overly broad search terms
```
%run -i GraphEditingTools.py
%run -i GraphTools.py
####Merge conditions
#Merge nodes containing abbreviations and other aliases for conditions
#Merge nodes
before = nx.info(tweetnet)
tweetnet = merge_from_list(tweetnet)
after = nx.info(tweetnet)
print "Before merge: " + before + '\n'
print "After merge: " + after
#Terms by category
sports = ['bodybuilding', 'football', 'crossfit',
'gym','fitness', 'keepmovingkeepfit', 'running', 'run', 'sport', 'sports', 'train', 'workout', 'yoga', ]
bodymod = ['piercing', 'tattoo', 'ink']
days = ['sunday', 'friday', 'monday' ]
fibromerged = merge_nodes(tweetnet, ['fibro', 'fibromyalgia'], 'Fibromyalgia')
nx.info(fibromerged)
```
## Make bi-modal networks
### Make binet projection
```
nx.info(tweetnet)
##Save original files
import TwitterGEXF as TG #custom gexf saver
today = date.today()
filename = 'twitter_graph_data/%s_tweet_bigraphFULL.gexf' % date.today()
#TG.write_gexf(tweetnet, filename)
##Make the projected graph
#tweetnet = bi.weighted_projected_graph(g, tt.hashtags, ratio=False)
```
# Graph analysis
One of the first questions is: which tags are most highly connected to others?
That is, we need to determine the *degree* of each tag, i.e., how many other distinct tags it co-occurs with at least once. NetworkX provides an algorithm to calculate this.
```
tag_degrees = tweetnet.degree()
```
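For a graph stored as an edge list, the degree of a node is just the number of edges incident on it. A stand-in for `tweetnet.degree()` and the `sorted_degree_map` helper (which is defined elsewhere and not shown) might look like this, with an invented toy edge list:

```
from collections import Counter

edges = [('fibro', 'migraine'), ('fibro', 'pain'),
         ('migraine', 'pain'), ('fibro', 'cfs')]

degrees = Counter()
for u, w in edges:
    degrees[u] += 1
    degrees[w] += 1

# sort the node -> degree mapping, highest degree first
sorted_degree = sorted(degrees.items(), key=lambda kv: kv[1], reverse=True)
print(sorted_degree[0])   # ('fibro', 3)
```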
In order to find the most popular tags, we need to sort the mapping (from nodes to degrees) which tag_degrees contains
```
#Apply the sorting function
sorted_degree = sorted_degree_map(tag_degrees)
#Top 50 hashtags by degree
sorted_degree[:50]
#plt.hist(tag_degrees.values(), 50, normed=True) #Display histogram of node degrees in 100 bins
h = plt.hist(tag_degrees.values(), 100) #Display histogram of node degrees in 100 bins
plt.loglog(h[1][1:],h[0]) #Plot same histogram in log-log space
```
This shows that the hashtags are very unevenly distributed. It will thus help to prune away the tags which are only connected to one other tag.
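`prune_below_degree` comes from the graph-tools module and is not shown; presumably it drops every node whose degree falls below a threshold. One plausible sketch over a plain adjacency dict (whether the real helper iterates until stable, as this one does, is an assumption):

```
def prune_below_degree(adjacency, min_degree):
    """Drop nodes with degree < min_degree; iterate, since removals
    can push neighbours below the threshold too."""
    adj = {n: set(nbrs) for n, nbrs in adjacency.items()}
    changed = True
    while changed:
        low = [n for n, nbrs in adj.items() if len(nbrs) < min_degree]
        changed = bool(low)
        for n in low:
            for m in adj[n]:
                adj[m].discard(n)
            del adj[n]
    return adj

g = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
print(sorted(prune_below_degree(g, 2)))   # ['a', 'b', 'c'] -- 'd' had degree 1
```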
#### Prune maingraph
By trimming we went from the original
```
nx.info(tweetnet)
```
to the more manageable
```
trimmed = prune_below_degree(tweetnet, 100)
nx.info(trimmed)
trimmed_degrees = trimmed.degree() #sort by degree
metrsorted_degree = sorted_degree_map(trimmed_degrees)
plt.hist(trimmed_degrees.values(), 100) #Display histogram of node degrees in 100 bins
```
#### Prune more radically
```
#Decorator version
def text_normalize(otherfunction):
    """Decorator that strips and capitalizes the first (string) argument."""
    def j(*args, **kwargs):
        assert isinstance(args[0], str)
        # args is a tuple (immutable), so rebuild it with the normalized string
        args = (args[0].strip().capitalize(),) + args[1:]
        return otherfunction(*args, **kwargs)
    return j
@text_normalize
def f(t):
print("%s" % t)
f('cat')
supertrimmed = prune_below_degree(tweetnet, 500)
nx.info(supertrimmed)
supertrimmed_degrees = supertrimmed.degree()
plt.hist(supertrimmed_degrees.values(), 100) #Display histogram of node degrees in 100 bins
supertrimmed_sorted_degree = sorted_degree_map(supertrimmed_degrees)
d = DataFrame(supertrimmed_sorted_degree)
d.set_index([0], inplace=True)
Table.display(d)
```
#Egograph analysis
####Make egographs
```
####Load egographs
paingraph = load('2014-06-16', 'pain_egograph') #Load pain ego graph
nx.info(paingraph)
migrainegraph = load('2014-06-16', 'migraine_egograph')
nx.info(migrainegraph)
arthritisgraph = load('2014-06-16', 'arthritis_egograph')
nx.info(arthritisgraph)
```
####Properties of egographs
```
migrainegraph = prune_below_degree(migrainegraph, 10 )
nx.info(migrainegraph)
arthritisgraph = prune_below_degree(arthritisgraph, 10)
nx.info(arthritisgraph)
#Try running stats on ego graph with ego removed
arthritis_sans_ego = arthritisgraph.copy()
arthritis_sans_ego.remove_nodes_from(['arthritis'])
nx.info(arthritis_sans_ego)
import math  # the log-scaled weights below need the math module
arthritis_sans_weights = [math.log(edata['weight']) for f,t,edata in arthritis_sans_ego.edges(data=True)]
arthritis_sans_weights = [x for x in arthritis_sans_weights if x > 2]
plt.hist(arthritis_sans_weights, bins=100)
arthritis_sans_weights[:5]
trimmed_arth_sans_weights = [x for x in arthritis_sans_weights if x > 2]
plt.hist(trimmed_arth_sans_weights, bins=100)
#arthritis_edge_weights = [math.log(edata['weight']) for f,t,edata in arthritisgraph.edges(data=True)]
arthritis_edge_weights = [edata['weight'] for f,t,edata in arthritisgraph.edges(data=True)]
print "count of arthritis edge weights: %s" % len(arthritis_edge_weights)
trimmed_arth_weights = [x for x in arthritis_edge_weights if x > 2]
plt.hist(trimmed_arth_weights, bins=100)
```
#Measurements of graph properties
```
#calc closeness centrality
%run GraphCalcTools.py
####Calculate and save measurements
#Calc clustering coefficients
calc_and_save_clustering_coefficient(arthritisgraph, 'arthritis')
calc_and_save_clustering_coefficient(migrainegraph, 'migraine')
#Calc closeness
calc_and_save_closeness_centrality(arthritisgraph, 'arthritis')
#Calc betweenness centrality
calc_and_save_betweeneness_centrality(arthritisgraph, 'arthritis')
calc_and_save_betweeneness_centrality(migrainegraph, 'migraine')
####Load saved measurements
#Load betweenness
migraine_betweenness = load_betweenness_centrality('migraine', '2014-06-16')
arthritis_betweenness = load_betweenness_centrality('arthritis', '2014-06-16')
#Make into dataframe summarizing
mterms = []; aterms = []
[mterms.append(i[0]) for i in migraine_betweenness]
[aterms.append(i[0]) for i in arthritis_betweenness]
terms = set(mterms + aterms)
#Convert to dictionaries
mdict = dict(migraine_betweenness)
adict = dict(arthritis_betweenness)
#Make dataframe
cc = []
for t in terms:
cc.append({'term' : t, 'migraine' : mdict.get(t), 'arthritis' : adict.get(t)})
betweenness = DataFrame(cc)
betweenness.set_index(['term'], inplace=True)
betweenness.dropna(how='all', inplace=True)
#Load closeness
migraine_closeness = load_closeness_centrality('migraine', '2014-06-16')
arthritis_closeness = load_closeness_centrality('arthritis', '2014-06-16')
#Make into dataframe summarizing
mterms = []; aterms = []
[mterms.append(i[0]) for i in migraine_closeness]
[aterms.append(i[0]) for i in arthritis_closeness]
terms = set(mterms + aterms)
#Convert to dictionaries
mdict = dict(migraine_closeness)
adict = dict(arthritis_closeness)
#Make dataframe
cc = []
for t in terms:
cc.append({'term' : t, 'migraine' : mdict.get(t), 'arthritis' : adict.get(t)})
closeness = DataFrame(cc)
closeness.set_index(['term'], inplace=True)
closeness.dropna(how='all', inplace=True)
closeness.hist(bins=100)
d = dict(mdict.items() + adict.items())
ba = DataFrame(betweenness.arthritis.copy())
ba.dropna(inplace=True)
ba.sort(columns=['arthritis'], ascending=False, inplace=True)
#Sort migraine on betweenness
bm = DataFrame(betweenness.migraine.copy())
bm.dropna(inplace=True)
bm.sort(columns=['migraine'], ascending=False, inplace=True)
migraine_degrees = migrainegraph.degree() #sort by degree
plt.hist(migraine_degrees.values(), 100)
#migrainemetrsorted_degree = sorted_degree_map(migraine_degrees)
###Properties of nodes and edges
trimmed.get_edge_data('fibromyalgia', 'migraine')
trimmed.get_edge_data('fibro', 'migraine')
trimmed.get_edge_data('fibro', 'headache')
nx.info(trimmed, 'tcot')
nx.info(trimmed, 'ms')
```
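As an aside, the merge-two-dictionaries-into-a-DataFrame pattern used above for betweenness and closeness can be written more compactly by letting pandas align the indices; a sketch with toy centrality values (not the real measurements):

```python
import pandas as pd

# Toy centrality dictionaries standing in for mdict / adict above.
mdict = {'headache': 0.12, 'migraine': 0.30}
adict = {'arthritis': 0.25, 'headache': 0.08}

# Passing Series aligns both columns on the union of terms automatically,
# filling missing entries with NaN.
betweenness = pd.DataFrame({'migraine': pd.Series(mdict),
                            'arthritis': pd.Series(adict)})
betweenness.index.name = 'term'
betweenness.dropna(how='all', inplace=True)
```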
# Groups of substring uses
There are a lot of tags that won't be counted if we only look for an exact tag such as 'fibro'; for instance, 'fibrolife'. What should we do about them?
```
dreload(TwitterSQLService)
QS = TwitterSQLService.QueryShell()
likeFibro = QS.runquery("""SELECT DISTINCT tweetID FROM tweetsXtags txt
INNER JOIN hashtags h ON txt.tagID = h.tagID WHERE h.hashtag LIKE '%%fibro%%'""")
likeFibro = [x['tweetID'] for x in likeFibro]
#Get tweets which are fibro or fibromyalgia
isFibro = QS.runquery("""SELECT DISTINCT tweetID FROM tweetsXtags txt
INNER JOIN hashtags h ON txt.tagID = h.tagID WHERE h.hashtag = %s OR h.hashtag = %s""", ['fibro', 'fibromyalgia'])
isFibro = [x['tweetID'] for x in isFibro]
notCotag = list(set(likeFibro) - set(isFibro))
fibro = list(set(isFibro + notCotag))
print """%s tweets have hashtags containing the substring 'fibro' \n
%s hashtags are either 'fibro' or 'fibromyalgia' \n
%s contain the substring but not the full tag""" %(len(likeFibro), len(isFibro), len(notCotag))
likeMigraine = QS.runquery("""SELECT DISTINCT tweetID FROM tweetsXtags txt
INNER JOIN hashtags h ON txt.tagID = h.tagID WHERE h.hashtag LIKE '%%migraine%%'""")
likeMigraine = [x['tweetID'] for x in likeMigraine]
#Get tweets which are migraine or migraines
isMigraine = QS.runquery("""SELECT DISTINCT tweetID FROM tweetsXtags txt
INNER JOIN hashtags h ON txt.tagID = h.tagID WHERE h.hashtag = %s OR h.hashtag = %s""", ['migraine', 'migraines'])
isMigraine = [x['tweetID'] for x in isMigraine]
notCotag = list(set(likeMigraine) - set(isMigraine))
migraine = list(set(isMigraine + notCotag))
print """%s tweets have hashtags containing the substring 'migraine' \n
%s hashtags are either 'migraine' or 'migraines' \n
%s contain the substring but not the full tag""" %(len(likeMigraine), len(isMigraine), len(notCotag))
```
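The substring-versus-exact-tag bookkeeping above is just set arithmetic; a toy check with made-up tweet IDs and hashtags (not the real query results):

```python
# Hypothetical (tweetID, hashtag) pairs standing in for the SQL query results.
rows = [(1, 'fibro'), (2, 'fibromyalgia'), (3, 'fibrolife'),
        (4, 'fibrowarrior'), (5, 'migraine')]

like_fibro = {tid for tid, tag in rows if 'fibro' in tag}                  # substring match
is_fibro = {tid for tid, tag in rows if tag in ('fibro', 'fibromyalgia')}  # exact match
not_cotag = like_fibro - is_fibro  # substring matches without the full tag
```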
# Simple hashtag frequencies
```
query = """SELECT h.hashtag, count(txt.tweetID) AS tagFreq
FROM tweetsXtags txt INNER JOIN hashtags h ON h.tagID = txt.tagID
GROUP BY hashtag ORDER BY tagFreq DESC"""
q = pd.read_sql_query(query, engine)
q
```
# Check whether tweets have been consistently gathered
```
def date_reformat(x):
s = x.split()
new = "%s-%s-%s" % (s[5], s[1], s[2])
return new
QS = TwitterSQLService.QueryShell()
tweet_times = DataFrame(QS.runquery("""SELECT created_at AS DayCollected FROM tweets"""))
tt = DataFrame(tweet_times['DayCollected'].apply(date_reformat))
ttg = tt.groupby('DayCollected').DayCollected.count()
ttg.plot(x_compat=True, figsize=(15,5), title="New tweetIDs collected by day")
```
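For reference, `date_reformat` above turns Twitter's `created_at` string into a `year-month-day` label used for the daily grouping; a standalone check with a made-up timestamp:

```python
def date_reformat(x):
    # "Mon Jun 16 12:34:56 +0000 2014" -> "2014-Jun-16"
    s = x.split()
    return "%s-%s-%s" % (s[5], s[1], s[2])

print(date_reformat("Mon Jun 16 12:34:56 +0000 2014"))
```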
<a href="https://colab.research.google.com/github/conference-submitter/jax-md/blob/master/notebooks/flocking.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright JAX MD Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
#@title Imports & Utils
# Imports
!pip install -q git+https://www.github.com/conference-submitter/jax-md
import numpy as onp
from jax.config import config ; config.update('jax_enable_x64', True)
import jax.numpy as np
from jax import random
from jax import jit
from jax import vmap
from jax import lax
from jax.experimental.vectorize import vectorize
from functools import partial
from collections import namedtuple
import base64
import IPython
from google.colab import output
import os
from jax_md import space, smap, energy, minimize, quantity, simulate, partition, util
from jax_md.util import f32
# Plotting
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='white')
dark_color = [56 / 256] * 3
light_color = [213 / 256] * 3
axis_color = 'white'
def format_plot(x='', y='', grid=True):
ax = plt.gca()
ax.spines['bottom'].set_color(axis_color)
ax.spines['top'].set_color(axis_color)
ax.spines['right'].set_color(axis_color)
ax.spines['left'].set_color(axis_color)
ax.tick_params(axis='x', colors=axis_color)
ax.tick_params(axis='y', colors=axis_color)
ax.yaxis.label.set_color(axis_color)
ax.xaxis.label.set_color(axis_color)
ax.set_facecolor(dark_color)
plt.grid(grid)
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().patch.set_facecolor(dark_color)
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
# Progress Bars
from IPython.display import HTML, display
import time
def ProgressIter(iter_fun, iter_len=0):
if not iter_len:
iter_len = len(iter_fun)
out = display(progress(0, iter_len), display_id=True)
for i, it in enumerate(iter_fun):
yield it
out.update(progress(i + 1, iter_len))
def progress(value, max):
return HTML("""
<progress
value='{value}'
max='{max}',
style='width: 45%'
>
{value}
</progress>
""".format(value=value, max=max))
normalize = lambda v: v / np.linalg.norm(v, axis=1, keepdims=True)
# Rendering
renderer_code = IPython.display.HTML('''
<canvas id="canvas"></canvas>
<script>
Rg = null;
Ng = null;
var current_scene = {
R: null,
N: null,
is_loaded: false,
frame: 0,
frame_count: 0,
boid_vertex_count: 0,
boid_buffer: [],
predator_vertex_count: 0,
predator_buffer: [],
disk_vertex_count: 0,
disk_buffer: null,
box_size: 0
};
google.colab.output.setIframeHeight(0, true, {maxHeight: 5000});
async function load_simulation() {
buffer_size = 400;
max_frame = 800;
result = await google.colab.kernel.invokeFunction(
'notebook.GetObstacles', [], {});
data = result.data['application/json'];
if(data.hasOwnProperty('Disk')) {
current_scene = put_obstacle_disk(current_scene, data.Disk);
}
for (var i = 0 ; i < max_frame ; i += buffer_size) {
console.log(i);
result = await google.colab.kernel.invokeFunction(
'notebook.GetBoidStates', [i, i + buffer_size], {});
data = result.data['application/json'];
current_scene = put_boids(current_scene, data);
}
current_scene.is_loaded = true;
result = await google.colab.kernel.invokeFunction(
'notebook.GetPredators', [], {});
data = result.data['application/json'];
if (data.hasOwnProperty('R'))
current_scene = put_predators(current_scene, data);
result = await google.colab.kernel.invokeFunction(
'notebook.GetSimulationInfo', [], {});
current_scene.box_size = result.data['application/json'].box_size;
}
function initialize_gl() {
const canvas = document.getElementById("canvas");
canvas.width = 640;
canvas.height = 640;
const gl = canvas.getContext("webgl2");
if (!gl) {
alert('Unable to initialize WebGL.');
return;
}
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
gl.clearColor(0.2, 0.2, 0.2, 1.0);
gl.enable(gl.DEPTH_TEST);
const shader_program = initialize_shader(
gl, VERTEX_SHADER_SOURCE_2D, FRAGMENT_SHADER_SOURCE_2D);
const shader = {
program: shader_program,
attribute: {
vertex_position: gl.getAttribLocation(shader_program, 'vertex_position'),
},
uniform: {
screen_position: gl.getUniformLocation(shader_program, 'screen_position'),
screen_size: gl.getUniformLocation(shader_program, 'screen_size'),
color: gl.getUniformLocation(shader_program, 'color'),
},
};
gl.useProgram(shader_program);
const half_width = 200.0;
gl.uniform2f(shader.uniform.screen_position, half_width, half_width);
gl.uniform2f(shader.uniform.screen_size, half_width, half_width);
gl.uniform4f(shader.uniform.color, 0.9, 0.9, 1.0, 1.0);
return {gl: gl, shader: shader};
}
var loops = 0;
function update_frame() {
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
if (!current_scene.is_loaded) {
window.requestAnimationFrame(update_frame);
return;
}
var half_width = current_scene.box_size / 2.;
gl.uniform2f(shader.uniform.screen_position, half_width, half_width);
gl.uniform2f(shader.uniform.screen_size, half_width, half_width);
if (current_scene.frame >= current_scene.frame_count) {
if (!current_scene.is_loaded) {
window.requestAnimationFrame(update_frame);
return;
}
loops++;
current_scene.frame = 0;
}
gl.enableVertexAttribArray(shader.attribute.vertex_position);
gl.bindBuffer(gl.ARRAY_BUFFER, current_scene.boid_buffer[current_scene.frame]);
gl.uniform4f(shader.uniform.color, 0.0, 0.35, 1.0, 1.0);
gl.vertexAttribPointer(
shader.attribute.vertex_position,
2,
gl.FLOAT,
false,
0,
0
);
gl.drawArrays(gl.TRIANGLES, 0, current_scene.boid_vertex_count);
if(current_scene.predator_buffer.length > 0) {
gl.bindBuffer(gl.ARRAY_BUFFER, current_scene.predator_buffer[current_scene.frame]);
gl.uniform4f(shader.uniform.color, 1.0, 0.35, 0.35, 1.0);
gl.vertexAttribPointer(
shader.attribute.vertex_position,
2,
gl.FLOAT,
false,
0,
0
);
gl.drawArrays(gl.TRIANGLES, 0, current_scene.predator_vertex_count);
}
if(current_scene.disk_buffer) {
gl.bindBuffer(gl.ARRAY_BUFFER, current_scene.disk_buffer);
gl.uniform4f(shader.uniform.color, 0.9, 0.9, 1.0, 1.0);
gl.vertexAttribPointer(
shader.attribute.vertex_position,
2,
gl.FLOAT,
false,
0,
0
);
gl.drawArrays(gl.TRIANGLES, 0, current_scene.disk_vertex_count);
}
current_scene.frame++;
if ((current_scene.frame_count > 1 && loops < 5) ||
(current_scene.frame_count == 1 && loops < 240))
window.requestAnimationFrame(update_frame);
if (current_scene.frame_count > 1 && loops == 5 && current_scene.frame < current_scene.frame_count - 1)
window.requestAnimationFrame(update_frame);
}
function put_boids(scene, boids) {
const R = decode(boids['R']);
const R_shape = boids['R_shape'];
const theta = decode(boids['theta']);
const theta_shape = boids['theta_shape'];
function index(i, b, xy) {
return i * R_shape[1] * R_shape[2] + b * R_shape[2] + xy;
}
var steps = R_shape[0];
var boids = R_shape[1];
var dimensions = R_shape[2];
if(dimensions != 2) {
alert('Can only deal with two-dimensional data.')
}
// First flatten the data.
var buffer_data = new Float32Array(boids * 6);
var size = 8.0;
for (var i = 0 ; i < steps ; i++) {
var buffer = gl.createBuffer();
for (var b = 0 ; b < boids ; b++) {
var xi = index(i, b, 0);
var yi = index(i, b, 1);
var ti = i * boids + b;
var Nx = size * Math.cos(theta[ti]); //N[xi];
var Ny = size * Math.sin(theta[ti]); //N[yi];
buffer_data.set([
R[xi] + Nx, R[yi] + Ny,
R[xi] - Nx - 0.5 * Ny, R[yi] - Ny + 0.5 * Nx,
R[xi] - Nx + 0.5 * Ny, R[yi] - Ny - 0.5 * Nx,
], b * 6);
}
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, buffer_data, gl.STATIC_DRAW);
scene.boid_buffer.push(buffer);
}
scene.boid_vertex_count = boids * 3;
scene.frame_count += steps;
return scene;
}
function put_predators(scene, boids) {
// TODO: Unify this with the put_boids function.
const R = decode(boids['R']);
const R_shape = boids['R_shape'];
const theta = decode(boids['theta']);
const theta_shape = boids['theta_shape'];
function index(i, b, xy) {
return i * R_shape[1] * R_shape[2] + b * R_shape[2] + xy;
}
var steps = R_shape[0];
var boids = R_shape[1];
var dimensions = R_shape[2];
if(dimensions != 2) {
alert('Can only deal with two-dimensional data.')
}
// First flatten the data.
var buffer_data = new Float32Array(boids * 6);
var size = 18.0;
for (var i = 0 ; i < steps ; i++) {
var buffer = gl.createBuffer();
for (var b = 0 ; b < boids ; b++) {
var xi = index(i, b, 0);
var yi = index(i, b, 1);
var ti = theta_shape[1] * i + b;
var Nx = size * Math.cos(theta[ti]);
var Ny = size * Math.sin(theta[ti]);
buffer_data.set([
R[xi] + Nx, R[yi] + Ny,
R[xi] - Nx - 0.5 * Ny, R[yi] - Ny + 0.5 * Nx,
R[xi] - Nx + 0.5 * Ny, R[yi] - Ny - 0.5 * Nx,
], b * 6);
}
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, buffer_data, gl.STATIC_DRAW);
scene.predator_buffer.push(buffer);
}
scene.predator_vertex_count = boids * 3;
return scene;
}
function put_obstacle_disk(scene, disk) {
const R = decode(disk.R);
const R_shape = disk.R_shape;
const radius = decode(disk.D);
const radius_shape = disk.D_shape;
const disk_count = R_shape[0];
const dimensions = R_shape[1];
if (dimensions != 2) {
alert('Can only handle two-dimensional data.');
}
if (radius_shape[0] != disk_count) {
alert('Inconsistent disk radius count found.');
}
const segments = 32;
function index(o, xy) {
return o * R_shape[1] + xy;
}
var buffer_data = new Float32Array(disk_count * segments * 6);
for (var i = 0 ; i < disk_count ; i++) {
var xi = index(i, 0);
var yi = index(i, 1);
for (var s = 0 ; s < segments ; s++) {
const th = 2 * s / segments * Math.PI;
const th_p = 2 * (s + 1) / segments * Math.PI;
const rad = radius[i] * 0.8;
buffer_data.set([
R[xi], R[yi],
R[xi] + rad * Math.cos(th), R[yi] + rad * Math.sin(th),
R[xi] + rad * Math.cos(th_p), R[yi] + rad * Math.sin(th_p),
], i * segments * 6 + s * 6);
}
}
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, buffer_data, gl.STATIC_DRAW);
scene.disk_vertex_count = disk_count * segments * 3;
scene.disk_buffer = buffer;
return scene;
}
// SHADER CODE
const VERTEX_SHADER_SOURCE_2D = `
// Vertex Shader Program.
attribute vec2 vertex_position;
uniform vec2 screen_position;
uniform vec2 screen_size;
void main() {
vec2 v = (vertex_position - screen_position) / screen_size;
gl_Position = vec4(v, 0.0, 1.0);
}
`;
const FRAGMENT_SHADER_SOURCE_2D = `
precision mediump float;
uniform vec4 color;
void main() {
gl_FragColor = color;
}
`;
function initialize_shader(
gl, vertex_shader_source, fragment_shader_source) {
const vertex_shader = compile_shader(
gl, gl.VERTEX_SHADER, vertex_shader_source);
const fragment_shader = compile_shader(
gl, gl.FRAGMENT_SHADER, fragment_shader_source);
const shader_program = gl.createProgram();
gl.attachShader(shader_program, vertex_shader);
gl.attachShader(shader_program, fragment_shader);
gl.linkProgram(shader_program);
if (!gl.getProgramParameter(shader_program, gl.LINK_STATUS)) {
alert(
'Unable to initialize shader program: ' +
gl.getProgramInfoLog(shader_program)
);
return null;
}
return shader_program;
}
function compile_shader(gl, type, source) {
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
alert('An error occurred compiling shader: ' + gl.getShaderInfoLog(shader));
gl.deleteShader(shader);
return null;
}
return shader;
}
// SERIALIZATION UTILITIES
function decode(sBase64, nBlocksSize) {
var chrs = atob(atob(sBase64));
var array = new Uint8Array(new ArrayBuffer(chrs.length));
for(var i = 0 ; i < chrs.length ; i++) {
array[i] = chrs.charCodeAt(i);
}
return new Float32Array(array.buffer);
}
// RUN CELL
load_simulation();
gl_and_shader = initialize_gl();
var gl = gl_and_shader.gl;
var shader = gl_and_shader.shader;
update_frame();
</script>
''')
def encode(R):
return base64.b64encode(onp.array(R, onp.float32).tobytes())
def render(box_size, states, obstacles=None, predators=None):
if isinstance(states, Boids):
R = np.reshape(states.R, (1,) + states.R.shape)
theta = np.reshape(states.theta, (1,) + states.theta.shape)
elif isinstance(states, list):
if all([isinstance(x, Boids) for x in states]):
R, theta = zip(*states)
R = onp.stack(R)
theta = onp.stack(theta)
if isinstance(predators, list):
R_predators, theta_predators, *_ = zip(*predators)
R_predators = onp.stack(R_predators)
theta_predators = onp.stack(theta_predators)
def get_boid_states(start, end):
R_, theta_ = R[start:end], theta[start:end]
return IPython.display.JSON(data={
"R_shape": R_.shape,
"R": encode(R_),
"theta_shape": theta_.shape,
"theta": encode(theta_)
})
output.register_callback('notebook.GetBoidStates', get_boid_states)
def get_obstacles():
if obstacles is None:
return IPython.display.JSON(data={})
else:
return IPython.display.JSON(data={
'Disk': {
'R': encode(obstacles.R),
'R_shape': obstacles.R.shape,
'D': encode(obstacles.D),
'D_shape': obstacles.D.shape
}
})
output.register_callback('notebook.GetObstacles', get_obstacles)
def get_predators():
if predators is None:
return IPython.display.JSON(data={})
else:
return IPython.display.JSON(data={
'R': encode(R_predators),
'R_shape': R_predators.shape,
'theta': encode(theta_predators),
'theta_shape': theta_predators.shape
})
output.register_callback('notebook.GetPredators', get_predators)
def get_simulation_info():
return IPython.display.JSON(data={
'frames': R.shape[0],
'box_size': box_size
})
output.register_callback('notebook.GetSimulationInfo', get_simulation_info)
return renderer_code
```
#### **Warning**: At the moment you must actually run the cells of the notebook to see the visualizations. After running the simulations in this notebook, you have to wait a moment (5 - 30 seconds) for rendering.
# Flocks, Herds, and Schools: A Distributed Behavioral Model
We will go over the paper, ["Flocks, Herds, and Schools: A Distributed Behavioral Model"](https://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=E252054B1C02D387E8C20827CB414543?doi=10.1.1.103.7187&rep=rep1&type=pdf), published by C. W. Reynolds in SIGGRAPH 1987. The paper itself is fantastic and, as far as a description of flocking is concerned, there is little we can add. Therefore, rather than walk through the paper directly, we will use [JAX](https://www.github.com/google/jax) and [JAX, MD](https://www.github.com/google/jax-md) to interactively build a simulation similar to Reynolds' in colab. To simplify our discussion, we will build a two-dimensional version of Reynolds' simulation.
In nature there are many examples in which large numbers of animals exhibit complex collective motion (schools of fish, flocks of birds, herds of horses, colonies of ants). In his seminal paper, Reynolds introduces a model of such collective behavior (henceforth referred to as "flocking") based on simple rules that each entity (referred to as a "boid") can compute locally from its environment. The paper is written in the context of computer graphics, and so Reynolds is going for biologically inspired simulations that look right rather than for accuracy in any statistical sense. Ultimately, Reynolds measures success in terms of the "delight" people find in watching the simulations; we will use a similar metric here.
Note, we recommend running this notebook in "Dark" mode.
## Boids
Reynolds is interested in simulating bird-like entities that are described by a position, $R$, and an orientation, $\theta$. This state can optionally be augmented with extra information (for example, hunger or fear). We can define a Boids type that stores data for a collection of boids as two arrays: `R` is an `ndarray` of shape `[boid_count, spatial_dimension]` and `theta` is an `ndarray` of shape `[boid_count]`. An individual boid is an index into these arrays. It will often be useful to refer to the vector orientation of the boid $N = (\cos\theta, \sin\theta)$.
```
Boids = namedtuple('Boids', ['R', 'theta'])
```
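For instance, indexing into a toy `Boids` container (made-up values for three boids) recovers one boid's position and its orientation vector $N = (\cos\theta, \sin\theta)$:

```python
import numpy as np
from collections import namedtuple

# Mirrors the definition above, with made-up values for three boids.
Boids = namedtuple('Boids', ['R', 'theta'])
boids = Boids(R=np.zeros((3, 2)), theta=np.array([0.0, np.pi / 2.0, np.pi]))

# Boid 1 is just an index into both arrays.
R_1 = boids.R[1]
N_1 = np.array([np.cos(boids.theta[1]), np.sin(boids.theta[1])])
```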
We can instantiate a collection of boids randomly in a box of side length $L$. We will use [periodic boundary conditions](https://en.wikipedia.org/wiki/Periodic_boundary_conditions) for our simulation which means that boids will be able to wrap around the sides of the box. To do this we will use the `space.periodic` command in [JAX, MD](https://github.com/google/jax-md#spaces-spacepy).
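The wrapping behind `space.periodic` can be sketched in plain NumPy (a simplified stand-in, not the actual JAX MD implementation): `shift` wraps positions back into the box, and `displacement` returns the shortest vector between two points across the periodic boundary.

```python
import numpy as np

box_size = 800.0

def shift(R, dR):
    # Move positions by dR and wrap back into [0, box_size).
    return (R + dR) % box_size

def displacement(Ra, Rb):
    # Shortest vector from Rb to Ra under periodic boundary conditions.
    dR = Ra - Rb
    return (dR + box_size / 2.0) % box_size - box_size / 2.0
```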
```
# Simulation Parameters:
box_size = 800.0 # A float specifying the side-length of the box.
boid_count = 200 # An integer specifying the number of boids.
dim = 2 # The spatial dimension in which we are simulating.
# Create RNG state to draw random numbers (see LINK).
rng = random.PRNGKey(0)
# Define periodic boundary conditions.
displacement, shift = space.periodic(box_size)
# Initialize the boids.
rng, R_rng, theta_rng = random.split(rng, 3)
boids = Boids(
R = box_size * random.uniform(R_rng, (boid_count, dim)),
theta = random.uniform(theta_rng, (boid_count,), maxval=2. * np.pi)
)
display(render(box_size, boids))
```
## Dynamics
Now that we have defined our boids, we have to imbue them with some rules governing their motion. Reynolds notes that in nature flocks do not seem to have a maximum size, but instead can keep acquiring new boids and grow without bound. He also comments that each boid cannot possibly be keeping track of the entire flock and must, instead, be focusing on its local neighborhood. Reynolds then proposes three simple, local, rules that boids might try to follow:
1. **Alignment:** Boids will try to align themselves in the direction of their neighbors.
2. **Avoidance:** Boids will avoid colliding with their neighbors.
3. **Cohesion:** Boids will try to move towards the center of mass of their neighbors.
In his exposition, Reynolds is vague about the details for each of these rules and so we will take some creative liberties. We will try to phrase this problem as an energy model, so our goal will be to write down an "energy" function (similar to a "loss") $E(R, \theta)$ such that low-energy configurations of boids satisfy each of the three rules above.
\
We will write the total energy as a sum of three terms, one for each of the rules above:
$$E(R, \theta) = E_{\text{Align}}(R, \theta) + E_{\text{Avoid}}(R, \theta) + E_{\text{Cohesion}}(R,\theta)$$
We will go through each of these rules separately below starting with alignment. Of course, any of these terms could be replaced by a learned solution.
\
Once we have an energy defined in this way, configurations of boids that move along low energy trajectories might display behavior that looks appealing. However, we still have a lot of freedom to decide how we want to define dynamics over the boids. Reynolds says he uses overdamped dynamics and so we will do something similar. In particular, we will update the position of the boids so that they try to move to minimize their energy. Simultaneously, we assume that the boids are swimming (or flying / walking). We choose a particularly simple model of this to start with and assume that the boids move at a fixed speed, $v$, along whatever direction they are pointing. We will use simple forward-Euler integration. This gives an update step,
$${R_i}' = R_i + \delta t(v\hat N_i - \nabla_{R_i}E(R, \theta))$$
where $\delta t$ is a timestep that we are allowed to choose. We will often refer to the force, $F^{R_i} = -\nabla_{R_i} E(R, \theta)$, as the negative gradient of the energy with respect to the position of the $i$'th boid.
\
We will update the orientations of the boids to turn them towards "low energy" directions. To do this we will once again use a simple forward-Euler scheme,
$$
\theta'_i = \theta_i - \delta t\nabla_{\theta_i}E(R,\theta)
$$
This is just one choice of dynamics, but there are probably many that would work equally well! Feel free to play around with it. One easy improvement that one could imagine making would be to use a more sophisticated integrator. We include a Runge-Kutta 4 integrator at the top of the notebook for an adventurous reader.
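As a toy illustration of the forward-Euler orientation update, gradient descent on a single-variable stand-in energy $E(\theta) = 1 - \cos\theta$ (not one of the flocking terms) relaxes $\theta$ to the minimum at $0$:

```python
import math

dt = 0.1       # the timestep delta-t
theta = 2.0    # initial orientation in radians

# dE/dtheta for E(theta) = 1 - cos(theta) is sin(theta), so the update
# theta' = theta - dt * dE/dtheta turns theta toward the minimum at 0.
for _ in range(500):
    theta = theta - dt * math.sin(theta)
```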
\
To see what this looks like before we define any interactions, we can run a simulation with $E(R,\theta) = 0$ by first defining an `update` function that takes a boids state to a new boids state.
```
@vmap
def normal(theta):
return np.array([np.cos(theta), np.sin(theta)])
def dynamics(energy_fn, dt, speed):
@jit
def update(_, state):
R, theta = state['boids']
dstate = quantity.force(energy_fn)(state)
dR, dtheta = dstate['boids']
n = normal(state['boids'].theta)
state['boids'] = Boids(shift(R, dt * (speed * n + dR)),
theta + dt * dtheta)
return state
return update
```
Now we can run a simulation and save the boid positions to a `boids_buffer` which will just be a list.
```
update = dynamics(energy_fn=lambda state: 0., dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
```
### Alignment
While the above simulation works and our boids are moving happily along, it is not terribly interesting. The first thing that we can add to this simulation is the alignment rule. When writing down these rules, it is often easier to express them for a single pair of boids and then use JAX's [automatic vectorization](https://github.com/google/jax#auto-vectorization-with-vmap) via `vmap` to extend them to our entire simulation.
Given a pair of boids $i$ and $j$ we would like to choose an energy function that is minimized when they are pointing in the same direction. As discussed above, one of Reynolds' requirements was locality: boids should only interact with nearby boids. To do this, we introduce a cutoff $D_{\text{Align}}$ and ignore pairs of boids such that $\|\Delta R_{ij}\| > D_{\text{Align}}$ where $\Delta R_{ij} = R_i - R_j$. To make it so boids react smoothly we will have the energy start out at zero when $\|R_i - R_j\| = D_{\text{Align}}$ and increase smoothly as they get closer. Together, these simple ideas lead us to the following proposal,
$$\epsilon_{\text{Align}}(\Delta R_{ij}, \hat N_i, \hat N_j) = \begin{cases}\frac{J_{\text{Align}}}{\alpha}\left(1 - \frac{\|\Delta R_{ij}\|}{D_{\text{Align}}}\right)^\alpha\left(1 - \hat N_i \cdot \hat N_j\right)^2 & \text{if $\|\Delta R_{ij}\| < D_{\text{Align}}$}\\ 0 & \text{otherwise}\end{cases}$$
This energy will be maximized when $\hat N_i$ and $\hat N_j$ are anti-aligned and minimized when $\hat N_i = \hat N_j$. In general, we would like our boids to turn to align themselves with their neighbors rather than shift their centers to move apart. Therefore, we'll insert a stop-gradient into the displacement.
```
def align_fn(dR, N_1, N_2, J_align, D_align, alpha):
dR = lax.stop_gradient(dR)
dr = space.distance(dR) / D_align
energy = J_align / alpha * (1. - dr) ** alpha * (1 - np.dot(N_1, N_2)) ** 2
return np.where(dr < 1.0, energy, 0.)
```
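As a quick sanity check, the same formula can be evaluated in plain NumPy (re-implemented directly, without JAX): aligned neighbors cost nothing, nearby anti-aligned neighbors cost the most, and pairs beyond the cutoff cost nothing.

```python
import numpy as np

def align_energy(dR, N1, N2, J_align=1.0, D_align=1.0, alpha=2.0):
    # Direct NumPy transcription of the alignment energy above.
    dr = np.linalg.norm(dR) / D_align
    if dr >= 1.0:
        return 0.0
    return J_align / alpha * (1.0 - dr) ** alpha * (1.0 - np.dot(N1, N2)) ** 2

N = np.array([1.0, 0.0])
```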
We can plot the energy for different alignments as well as different distances between boids. We see that the energy goes to zero for large distances and when the boids are aligned.
```
#@title Alignment Energy
N_1 = np.array([1.0, 0.0])
angles = np.linspace(0, np.pi, 60)
N_2 = vmap(lambda theta: np.array([np.cos(theta), np.sin(theta)]))(angles)
distances = np.linspace(0, 1, 5)
dRs = vmap(lambda r: np.array([r, 0.]))(distances)
fn = partial(align_fn, J_align=1., D_align=1., alpha=2.)
energy = vmap(vmap(fn, (None, None, 0)), (0, None, None))(dRs, N_1, N_2)
for d, e in zip(distances, energy):
plt.plot(angles, e, label='r = {}'.format(d), linewidth=3)
plt.xlim([0, np.pi])
format_plot('$\\theta$', '$E(r, \\theta)$')
plt.legend()
finalize_plot()
```
We can now run our simulation with the alignment energy alone.
```
def energy_fn(state):
boids = state['boids']
E_align = partial(align_fn, J_align=0.5, D_align=45., alpha=3.)
# Map the align energy over all pairs of boids. While both applications
# of vmap map over the displacement matrix, each acts on only one normal.
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
dR = space.map_product(displacement)(boids.R, boids.R)
N = normal(boids.theta)
return 0.5 * np.sum(E_align(dR, N, N))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
```
Now the boids align with one another and already the simulation is displaying interesting behavior!
### Avoidance
We can incorporate an avoidance rule that will keep the boids from bumping into one another. This will help them to form a flock with some volume rather than collapsing together. To this end, imagine a very simple model of boids that push away from one another if they get within a distance $D_{\text{Avoid}}$ and otherwise don't repel. We can use a simple energy similar to Alignment but without any angular dependence,
$$
\epsilon_{\text{Avoid}}(\Delta R_{ij}) = \begin{cases}\frac{J_{\text{Avoid}}}{\alpha}\left(1 - \frac{||\Delta R_{ij}||}{D_{\text{Avoid}}}\right)^\alpha & ||\Delta R_{ij}||<D_{\text{Avoid}} \\ 0 & \text{otherwise}\end{cases}
$$
This is implemented in the following Python function. Unlike the case of alignment, here we want boids to move away from one another and so we don't need a stop gradient on $\Delta R$.
```
def avoid_fn(dR, J_avoid, D_avoid, alpha):
dr = space.distance(dR) / D_avoid
return np.where(dr < 1.,
J_avoid / alpha * (1 - dr) ** alpha,
0.)
```
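As with alignment, a plain NumPy transcription (again a sketch, not the JAX version) makes the shape of this energy easy to verify: it is largest at full overlap and vanishes past the cutoff.

```python
import numpy as np

def avoid_energy(dR, J_avoid=1.0, D_avoid=1.0, alpha=3.0):
    # Direct NumPy transcription of the avoidance energy above.
    dr = np.linalg.norm(dR) / D_avoid
    return J_avoid / alpha * (1.0 - dr) ** alpha if dr < 1.0 else 0.0
```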
Plotting the energy we see that it is highest when boids are overlapping and then goes to zero smoothly until $||\Delta R|| = D_{\text{Avoid}}$.
```
#@title Avoidance Energy
dr = np.linspace(0, 2., 60)
dR = vmap(lambda r: np.array([0., r]))(dr)
Es = vmap(partial(avoid_fn, J_avoid=1., D_avoid=1., alpha=3.))(dR)
plt.plot(dr, Es, 'r', linewidth=3)
plt.xlim([0, 2])
format_plot('$r$', '$E$')
finalize_plot()
```
We can now run a version of our simulation with both alignment and avoidance.
```
def energy_fn(state):
boids = state['boids']
E_align = partial(align_fn, J_align=1., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
# New Avoidance Code
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
#
dR = space.map_product(displacement)(boids.R, boids.R)
N = normal(boids.theta)
return 0.5 * np.sum(E_align(dR, N, N) + E_avoid(dR))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
```
The avoidance term in the energy stops the boids from collapsing on top of one another.
### Cohesion
The final piece of Reynolds' boid model is cohesion. Notice that in the above simulation, the boids tend to move in the same direction but they also often drift apart. To make the boids behave more like schools of fish or birds, which maintain a more compact arrangement, we add a cohesion term to the energy.
The goal of the cohesion term is to align boids towards the center of mass of their neighbors. Given a boid, $i$, we can compute the center of mass position of its neighbors as,
$$
\Delta R_i = \frac 1{|\mathcal N|} \sum_{j\in\mathcal N}\Delta R_{ij}
$$
where we have let $\mathcal N$ be the set of boids such that $||\Delta R_{ij}|| < D_{\text{Cohesion}}$.
Given the center of mass displacements, we can define a reasonable cohesion energy as,
$$
\epsilon_{\text{Cohesion}}\left(\widehat{\Delta R}_i, N_i\right) = \frac12 J_{\text{Cohesion}}\left(1 - \widehat{\Delta R}_i\cdot N_i\right)^2
$$
where $\widehat{\Delta R}_i = \Delta R_i / ||\Delta R_i||$ is the normalized vector pointing in the direction of the center of mass. This function is minimized when the boid is pointing in the direction of the center of mass.
We can implement the cohesion energy in the following python function. Note that as with alignment, we will have boids control their orientation and so we will insert a stop gradient on the displacement vector.
```
def cohesion_fn(dR, N, J_cohesion, D_cohesion, eps=1e-7):
dR = lax.stop_gradient(dR)
dr = np.linalg.norm(dR, axis=-1, keepdims=True)
mask = dr < D_cohesion
N_com = np.where(mask, 1.0, 0)
dR_com = np.where(mask, dR, 0)
dR_com = np.sum(dR_com, axis=1) / (np.sum(N_com, axis=1) + eps)
dR_com = dR_com / np.linalg.norm(dR_com + eps, axis=1, keepdims=True)
return f32(0.5) * J_cohesion * (1 - np.sum(dR_com * N, axis=1)) ** 2
def energy_fn(state):
boids = state['boids']
E_align = partial(align_fn, J_align=1., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
# New Cohesion Code
E_cohesion = partial(cohesion_fn, J_cohesion=0.005, D_cohesion=40.)
#
dR = space.map_product(displacement)(boids.R, boids.R)
N = normal(boids.theta)
return (0.5 * np.sum(E_align(dR, N, N) + E_avoid(dR)) +
np.sum(E_cohesion(dR, N)))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
```
Now the boids travel in tighter, more cohesive packs. By tuning the range and strength of the cohesive interaction you can change how strongly the boids attempt to stick together. However, if we raise the strength too high it can have some undesirable consequences.
```
def energy_fn(state):
boids = state['boids']
E_align = partial(align_fn, J_align=1., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
  E_cohesion = partial(cohesion_fn, J_cohesion=0.1, D_cohesion=40.)  # Raised from 0.005 to 0.1.
dR = space.map_product(displacement)(boids.R, boids.R)
N = normal(boids.theta)
return (0.5 * np.sum(E_align(dR, N, N) + E_avoid(dR)) +
np.sum(E_cohesion(dR, N)))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
```
### Looking Ahead
When the effect of cohesion is set to a large value, the boids cluster well. However, the motion of the individual flocks becomes less smooth and adopts an almost oscillatory behavior. This is caused by boids in the front of the pack getting pulled towards boids behind them.
To improve this situation, we follow Reynolds and note that animals don't really look in all directions. The behavior of our flocks might look more realistic if we incorporated a "field of view" for the boids. To this end, in both the alignment function and the cohesion function we will ignore boids that are outside a given boid's line of sight. We adopt a particularly simple definition of line of sight by first defining $\widehat{\Delta R_{ij}} \cdot N_i = \cos\theta_{ij}$, where $\theta_{ij}$ is the angle between the orientation of the boid and the vector from the boid to its neighbor.
Since most animals that display flocking behavior have eyes on the sides of their heads, as opposed to the front, we will define $\theta_{\text{min}}$ and $\theta_{\text{max}}$ to bound the angular field of view of the boids. Since cosine is monotonically decreasing on $[0, \pi]$, each boid can then see a neighbor $j$ if $\cos\theta_{\text{max}} < \cos\theta_{ij} < \cos\theta_\text{min}$.
```
def field_of_view_mask(dR, N, theta_min, theta_max):
dr = space.distance(dR)
dR_hat = dR / dr
ctheta = np.dot(dR_hat, N)
# Cosine is monotonically decreasing on [0, pi].
return np.logical_and(ctheta > np.cos(theta_max),
ctheta < np.cos(theta_min))
```
We can then adapt the cohesion function to incorporate an arbitrary mask,
```
def cohesion_fn(dR, N, mask, # New mask parameter.
J_cohesion, D_cohesion, eps=1e-7):
dR = lax.stop_gradient(dR)
dr = space.distance(dR)
mask = np.reshape(mask, mask.shape + (1,))
dr = np.reshape(dr, dr.shape + (1,))
# Updated Masking Code
mask = np.logical_and(dr < D_cohesion, mask)
#
N_com = np.where(mask, 1.0, 0)
dR_com = np.where(mask, dR, 0)
dR_com = np.sum(dR_com, axis=1) / (np.sum(N_com, axis=1) + eps)
dR_com = dR_com / np.linalg.norm(dR_com + eps, axis=1, keepdims=True)
return f32(0.5) * J_cohesion * (1 - np.sum(dR_com * N, axis=1)) ** 2
```
And finally run a simulation incorporating the field of view.
```
def energy_fn(state):
boids = state['boids']
E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
E_cohesion = partial(cohesion_fn, J_cohesion=0.05, D_cohesion=40.)
dR = space.map_product(displacement)(boids.R, boids.R)
N = normal(boids.theta)
# New FOV code.
fov = partial(field_of_view_mask,
theta_min=0.,
theta_max=np.pi / 3.)
# As before, we have to vmap twice over the displacement matrix, but only once
# over the normal.
fov = vmap(vmap(fov, (0, None)))
mask = fov(dR, N)
#
return (0.5 * np.sum(E_align(dR, N, N) * mask + E_avoid(dR)) +
np.sum(E_cohesion(dR, N, mask)))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer))
```
## Extras
Now that the core elements of the simulation are working well enough, we can add some extras fairly easily. In particular, we'll try to add some obstacles and some predators.
### Obstacles
The first thing we'll add are obstacles that the boids and (soon) the predators will try to avoid as they wander around the simulation. For the purposes of this notebook, we'll restrict ourselves to disk-like obstacles. Each disk will be described by a center position and a radius, $D_\text{Obstacle}$.
```
Obstacle = namedtuple('Obstacle', ['R', 'D'])
```
Then we can instantiate some obstacles.
```
N_obstacle = 5
R_rng, D_rng = random.split(random.PRNGKey(5))
obstacles = Obstacle(
box_size * random.uniform(R_rng, (N_obstacle, 2)),
random.uniform(D_rng, (N_obstacle,), minval=30.0, maxval=100.0)
)
```
In a similar spirit to the energy functions above, we would like an energy function that encourages the boids to avoid obstacles. For this purpose we will pick an energy function that is similar in form to the alignment function above,
$$
\epsilon_\text{Obstacle}(\Delta R_{io}, N_i, D_o) = \begin{cases}\frac{J_\text{Obstacle}}{\alpha}\left(1 - \frac{\|\Delta R_{io}\|}{D_o}\right)^\alpha\left(1 + N_i\cdot \widehat{\Delta R_{io}}\right)^2 & \|\Delta R_{io}\| < D_o \\ 0 & \text{Otherwise}\end{cases}
$$
for $\Delta R_{io}$ the displacement vector between a boid $i$ and an obstacle $o$. This energy is zero when the boid and the obstacle are not overlapping. When they are overlapping, the energy is minimized when the boid is facing away from the obstacle.
\
We can write down this boid-obstacle energy function in python.
```
def obstacle_fn(dR, N, D, J_obstacle):
dr = space.distance(dR)
dR = dR / np.reshape(dr, dr.shape + (1,))
return np.where(dr < D,
J_obstacle * (1 - dr / D) ** 2 * (1 + np.dot(N, dR)) ** 2,
0.)
```
Now we can run a simulation that includes obstacles.
```
def energy_fn(state):
boids = state['boids']
d = space.map_product(displacement)
E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
E_cohesion = partial(cohesion_fn, J_cohesion=0.05, D_cohesion=40.)
dR = d(boids.R, boids.R)
N = normal(boids.theta)
fov = partial(field_of_view_mask,
theta_min=0.,
theta_max=np.pi / 3.)
fov = vmap(vmap(fov, (0, None)))
mask = fov(dR, N)
# New obstacle code
obstacles = state['obstacles']
dR_o = -d(boids.R, obstacles.R)
D = obstacles.D
E_obstacle = partial(obstacle_fn, J_obstacle=1000.)
E_obstacle = vmap(vmap(E_obstacle, (0, 0, None)), (0, None, 0))
#
return (0.5 * np.sum(E_align(dR, N, N) * mask + E_avoid(dR)) +
np.sum(E_cohesion(dR, N, mask)) + np.sum(E_obstacle(dR_o, N, D)))
update = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)
boids_buffer = []
state = {
'boids': boids,
'obstacles': obstacles
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
display(render(box_size, boids_buffer, obstacles))
```
The boids are now successfully navigating obstacles in their environment.
### Predators
Next we are going to introduce some predators into the environment for the boids to run away from. Much like the boids, the predators will be described by a position and an angle.
```
Predator = namedtuple('Predator', ['R', 'theta'])
predators = Predator(R=np.array([[box_size / 2., box_size /2.]]),
theta=np.array([0.0]))
```
The predators will also follow dynamics similar to the boids', swimming in whatever direction they are pointing at some speed that we can choose. Unlike in the previous versions of the simulation, predators naturally introduce some asymmetry to the system. In particular, we would like the boids to flee from the predators, but we want the predators to chase the boids. To achieve this behavior, we will consider a system reminiscent of a two-player game in which the boids move to minimize an energy,
$$
E_\text{Boid} = E_\text{Align} + E_\text{Avoid} + E_\text{Cohesion} + E_\text{Obstacle} + E_\text{Boid-Predator}.
$$
Simultaneously, the predators move in an attempt to minimize a simpler energy,
$$
E_\text{Predator} = E_\text{Predator-Boid} + E_\text{Obstacle}.
$$
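Minimizing $E_\text{Boid} + E_\text{Predator}$ with a single gradient works because the stop-gradients decouple the two players: each term only steers its own agent. Here is a minimal sketch of that idea with toy scalar positions (the names `total_energy`, `x`, and `y` are invented for illustration and are not part of the simulation above):

```python
from jax import grad, lax

def total_energy(params):
    x, y = params['boid'], params['predator']
    # Boid player: wants to increase its distance to the predator; the
    # stop_gradient prevents this term from steering the predator.
    E_boid = -(x - lax.stop_gradient(y)) ** 2
    # Predator player: wants to decrease its distance to the boid; the
    # stop_gradient prevents this term from steering the boid.
    E_predator = (lax.stop_gradient(x) - y) ** 2
    return E_boid + E_predator

# One gradient of the summed energy: the boid only feels E_boid and the
# predator only feels E_predator.
g = grad(total_energy)({'boid': 1.0, 'predator': 0.0})
print(g)
```

This is the same pattern used below when a single `quantity.force` call updates both the boids and the predators.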
To add predators to the environment we therefore need to add two rules, one that dictates the boids behavior near a predator and one for the behavior of predators near a group of boids. In both cases we will see that we can draw significant inspiration from behaviors that we've already developed.
\
We will start with the boid-predator function since it is a bit simpler. In fact, we can use an energy that is virtually identical to the obstacle avoidance energy since the desired behavior is the same.
$$
\epsilon_\text{Boid-Predator}(\Delta R_{ip}, N_i) = \frac{J_\text{Boid-Predator}}\alpha\left(1 - \frac{\|\Delta R_{ip}\|}{D_\text{Boid-Predator}}\right)^\alpha (1 + \widehat{\Delta R_{ip}}\cdot N_i)^2
$$
As before, this function is minimized when the boid is pointing away from the predators. Because we don't want the predators to experience this term we must include a stop-gradient on the predator positions.
```
def boid_predator_fn(R_boid, N_boid, R_predator, J, D, alpha):
N = N_boid
dR = displacement(lax.stop_gradient(R_predator), R_boid)
dr = np.linalg.norm(dR, keepdims=True)
dR_hat = dR / dr
  return np.where(dr < D,
                  J / alpha * (1 - dr / D) ** alpha * (1 + np.dot(dR_hat, N)) ** 2,
                  0.)
```
For the predator-boid function we can borrow the cohesion energy that we developed above to have predators that turn towards the center-of-mass of boids in their field of view.
```
def predator_boid_fn(R_predator, N_predator, R_boids, J, D, eps=1e-7):
# It is most convenient to define the predator_boid energy function
# for a single predator and a whole flock of boids. As such we expect shapes,
# R_predator : (spatial_dim,)
# N_predator : (spatial_dim,)
# R_boids : (n, spatial_dim,)
N = N_predator
# As such, we need to vectorize over the boids.
d = vmap(displacement, (0, None))
dR = d(lax.stop_gradient(R_boids), R_predator)
dr = space.distance(dR)
fov = partial(field_of_view_mask,
theta_min=0.,
theta_max=np.pi / 3.)
# Here as well.
fov = vmap(fov, (0, None))
mask = np.logical_and(dr < D, fov(dR, N))
mask = mask[:, np.newaxis]
boid_count = np.where(mask, 1.0, 0)
dR_com = np.where(mask, dR, 0)
dR_com = np.sum(dR_com, axis=0) / (np.sum(boid_count, axis=0) + eps)
dR_com = dR_com / np.linalg.norm(dR_com + eps, keepdims=True)
return f32(0.5) * J * (1 - np.dot(dR_com, N)) ** 2
```
Now we can modify our dynamics to also update predators.
```
def dynamics(energy_fn, dt, boid_speed, predator_speed):
# We extract common movement functionality into a `move` function.
def move(boids, dboids, speed):
R, theta, *_ = boids
dR, dtheta = dboids
n = normal(theta)
return (shift(R, dt * (speed * n + dR)),
theta + dt * dtheta)
@jit
def update(_, state):
dstate = quantity.force(energy_fn)(state)
state['boids'] = Boids(*move(state['boids'], dstate['boids'], boid_speed))
state['predators'] = Predator(*move(state['predators'],
dstate['predators'],
predator_speed))
return state
return update
```
Finally, we can put everything together and run the simulation.
```
def energy_fn(state):
boids = state['boids']
d = space.map_product(displacement)
E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
E_cohesion = partial(cohesion_fn, J_cohesion=0.05, D_cohesion=40.)
dR = d(boids.R, boids.R)
N = normal(boids.theta)
fov = partial(field_of_view_mask,
theta_min=0.,
theta_max=np.pi / 3.)
fov = vmap(vmap(fov, (0, None)))
mask = fov(dR, N)
obstacles = state['obstacles']
dR_bo = -d(boids.R, obstacles.R)
D = obstacles.D
E_obstacle = partial(obstacle_fn, J_obstacle=1000.)
E_obstacle = vmap(vmap(E_obstacle, (0, 0, None)), (0, None, 0))
# New predator code.
predators = state['predators']
E_boid_predator = partial(boid_predator_fn, J=256.0, D=75.0, alpha=3.)
E_boid_predator = vmap(vmap(E_boid_predator, (0, 0, None)), (None, None, 0))
N_predator = normal(predators.theta)
E_predator_boid = partial(predator_boid_fn, J=0.1, D=95.0)
E_predator_boid = vmap(E_predator_boid, (0, 0, None))
dR_po = -d(predators.R, obstacles.R)
#
E_boid = (0.5 * np.sum(E_align(dR, N, N) * mask + E_avoid(dR)) +
np.sum(E_cohesion(dR, N, mask)) + np.sum(E_obstacle(dR_bo, N, D)) +
np.sum(E_boid_predator(boids.R, N, predators.R)))
E_predator = (np.sum(E_obstacle(dR_po, N_predator, D)) +
np.sum(E_predator_boid(predators.R, N_predator, boids.R)))
return E_boid + E_predator
update = dynamics(energy_fn=energy_fn, dt=1e-1, boid_speed=1., predator_speed=.85)
boids_buffer = []
predators_buffer = []
state = {
'boids': boids,
'obstacles': obstacles,
'predators': predators
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
predators_buffer += [state['predators']]
display(render(box_size, boids_buffer, obstacles, predators_buffer))
```
We see that our predator now moves around chasing the boids.
### Internal State
Until now, all of the data describing the boids, predators, and obstacles referred to their physical location and orientation. However, we can develop more interesting behavior if we allow the agents in our simulation to have extra data describing their internal state. As an example of this, we will allow predators to accelerate to chase boids if they get close.
\
To this end, we add an extra piece of data to our predator, $t_\text{sprint}$, which is the last time the predator accelerated. If the predator gets within $D_\text{sprint}$ of a boid and at least $T_\text{sprint}$ units of time have passed since it last sprinted, it will accelerate. In practice, to accelerate the predator we will adjust its speed so that,
$$
s(t) = s_0 + s_1 e^{-(t - t_\text{sprint}) / \tau_\text{sprint}}
$$
where $s_0$ is the normal speed of the predator, $s_0 + s_1$ is the peak speed, and $\tau_\text{sprint}$ determines how long the sprint lasts. In practice, rather than storing $t_\text{sprint}$ we will record $\Delta t = t - t_\text{sprint}$, the time since the last sprint.
\
Implementing this first requires that we add the necessary data to the predators.
```
Predator = namedtuple('Predator', ['R', 'theta', 'dt'])
predators = Predator(R=np.array([[box_size / 2., box_size /2.]]),
theta=np.array([0.0]),
dt=np.array([0.]))
def dynamics(energy_fn, dt, boid_speed, predator_speed):
# We extract common movement functionality into a `move` function.
def move(boids, dboids, speed):
R, theta, *_ = boids
dR, dtheta, *_ = dboids
n = normal(theta)
return (shift(R, dt * (speed * n + dR)),
theta + dt * dtheta)
@jit
def update(_, state):
dstate = quantity.force(energy_fn)(state)
state['boids'] = Boids(*move(state['boids'], dstate['boids'], boid_speed))
# New code to accelerate the predators.
D_sprint = 65.
T_sprint = 300.
tau_sprint = 50.
sprint_speed = 2.0
# First we find the distance from each predator to the nearest boid.
d = space.map_product(space.metric(displacement))
predator = state['predators']
dr_min = np.min(d(state['boids'].R, predator.R), axis=1)
# Check whether there is a near enough boid to bother sprinting and if
# enough time has elapsed since the last sprint.
mask = np.logical_and(dr_min < D_sprint, predator.dt > T_sprint)
predator_dt = np.where(mask, 0., predator.dt + dt)
# Adjust the speed according to whether or not we're sprinting.
speed = predator_speed + sprint_speed * np.exp(-predator_dt / tau_sprint)
predator_R, predator_theta = move(state['predators'],
dstate['predators'],
speed)
state['predators'] = Predator(predator_R, predator_theta, predator_dt)
#
return state
return update
update = dynamics(energy_fn=energy_fn, dt=1e-1, boid_speed=1., predator_speed=.85)
boids_buffer = []
predators_buffer = []
state = {
'boids': boids,
'obstacles': obstacles,
'predators': predators
}
for i in ProgressIter(range(400)):
state = lax.fori_loop(0, 50, update, state)
boids_buffer += [state['boids']]
predators_buffer += [state['predators']]
display(render(box_size, boids_buffer, obstacles, predators_buffer))
```
## Scaling Up
Up to this point, we have simulated a relatively small flock of $n = 200$ boids. In part we have done this because we compute, at each step, an $n\times n$ matrix of distances, so the computational complexity of the flocking simulation scales as $\mathcal O(n^2)$. However, as Reynolds notes, we have built in a locality assumption so that no boids interact provided they are further apart than $D =\max\{D_{\text{Align}}, D_\text{Avoid}, D_\text{Cohesion}\}$. JAX MD provides tools to construct a set of candidate neighbors for each boid in about $\mathcal O(n\log n)$ time by precomputing a list of neighbors for each boid. Using neighbor lists we can scale to much larger simulations.
We create lists of all neighbors within a distance of $D + \delta$ and pack them into an array of shape $n\times n_\text{max neighbors}$. Using this technique we only need to rebuild the neighbor list if some particle has moved more than a distance of $\delta$. We estimate `max_neighbors` from typical arrangements of particles; if any boid ever has more than this number of neighbors we must rebuild the neighbor list from scratch and recompile our simulation for the device.
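The rebuild criterion can be sketched in plain NumPy (a toy check for illustration only; `needs_rebuild` is a name invented here, not part of JAX MD):

```python
import numpy as np

def needs_rebuild(R_now, R_at_build, delta):
    # Toy version of the rebuild test: rebuild once any particle has
    # drifted more than delta from its position when the list was built.
    drift = np.linalg.norm(R_now - R_at_build, axis=-1)
    return bool(np.any(drift > delta))

R0 = np.zeros((4, 2))
print(needs_rebuild(R0 + 0.1, R0, delta=0.5))  # False: all drifts ~0.14
print(needs_rebuild(R0 + 0.9, R0, delta=0.5))  # True: all drifts ~1.27
```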
To start with, we set up a much larger system of boids.
```
# Simulation Parameters.
box_size = 2400.0 # A float specifying the side-length of the box.
boid_count = 2000 # An integer specifying the number of boids.
obstacle_count = 10 # An integer specifying the number of obstacles.
predator_count = 10 # An integer specifying the number of predators.
dim = 2 # The spatial dimension in which we are simulating.
# Create RNG state to draw random numbers.
rng = random.PRNGKey(0)
# Define periodic boundary conditions.
displacement, shift = space.periodic(box_size)
# Initialize the boids.
# To generate normal vectors that are uniformly distributed on S^N note that
# one can generate a random normal vector in R^N and then normalize it.
rng, R_rng, theta_rng = random.split(rng, 3)
boids = Boids(
R = box_size * random.uniform(R_rng, (boid_count, dim)),
theta = random.uniform(theta_rng, (boid_count,), maxval=2 * np.pi)
)
rng, R_rng, D_rng = random.split(rng, 3)
obstacles = Obstacle(
R = box_size * random.uniform(R_rng, (obstacle_count, dim)),
D = random.uniform(D_rng, (obstacle_count,), minval=100, maxval=300.)
)
rng, R_rng, theta_rng = random.split(rng, 3)
predators = Predator(
R = box_size * random.uniform(R_rng, (predator_count, dim)),
theta = random.uniform(theta_rng, (predator_count,), maxval=2 * np.pi),
dt = np.zeros((predator_count,))
)
neighbor_fn = partition.neighbor_list(displacement,
box_size,
r_cutoff=45.,
dr_threshold=10.,
capacity_multiplier=3)
neighbors = neighbor_fn(boids.R)
print(neighbors.idx.shape)
```
We see that despite having 2000 boids, they each only have about 13 neighbors apiece at the start of the simulation. Of course this will grow over time and we will have to rebuild the neighbor list as it does. Next we make some minimal modifications to our energy function so that it operates on neighbors. This mostly involves changing some of the vectorization patterns with `vmap` and creating a mask of which entries in the $n\times n_\text{max neighbors}$ arrays are filled; here we use the pattern `mask = neighbors.idx < len(neighbors.idx)`, since empty slots are padded with the sentinel value $n$.
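As a small illustration of the padding convention (toy indices, not real `neighbor_fn` output):

```python
import numpy as np

n = 4  # number of particles
# Toy padded neighbor array of shape (n, max_neighbors). Slots that do not
# correspond to a real neighbor hold the sentinel value n.
idx = np.array([[1, 2, 4],
                [0, 4, 4],
                [0, 3, 4],
                [2, 4, 4]])
mask = idx < n           # True exactly at the real neighbors
print(mask.sum(axis=1))  # per-particle neighbor counts: [2 1 2 1]
```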
```
def energy_fn(state, neighbors):
boids = state['boids']
d = space.map_product(displacement)
fov = partial(field_of_view_mask,
theta_min=0.,
theta_max=np.pi / 3.)
fov = vmap(vmap(fov, (0, None)))
E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)
E_align = vmap(vmap(E_align, (0, None, 0)))
E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)
E_avoid = vmap(vmap(E_avoid))
E_cohesion = partial(cohesion_fn, J_cohesion=0.05, D_cohesion=40.)
# New code to extract displacement vector to neighbors and normals.
R_neighbors = boids.R[neighbors.idx]
dR = -vmap(vmap(displacement, (None, 0)))(boids.R, R_neighbors)
N = normal(boids.theta)
N_neighbors = N[neighbors.idx]
#
# New code to add a mask over neighbors as well as field-of-view.
neighbor_mask = neighbors.idx < dR.shape[0]
fov_mask = np.logical_and(neighbor_mask, fov(dR, N))
#
obstacles = state['obstacles']
dR_bo = -d(boids.R, obstacles.R)
D = obstacles.D
E_obstacle = partial(obstacle_fn, J_obstacle=1000.)
E_obstacle = vmap(vmap(E_obstacle, (0, 0, None)), (0, None, 0))
predators = state['predators']
E_boid_predator = partial(boid_predator_fn, J=256.0, D=75.0, alpha=3.)
E_boid_predator = vmap(vmap(E_boid_predator, (0, 0, None)), (None, None, 0))
N_predator = normal(predators.theta)
E_predator_boid = partial(predator_boid_fn, J=0.1, D=95.0)
E_predator_boid = vmap(E_predator_boid, (0, 0, None))
dR_po = -d(predators.R, obstacles.R)
E_boid = (0.5 * np.sum(E_align(dR, N, N_neighbors) * fov_mask + E_avoid(dR)) +
np.sum(E_cohesion(dR, N, fov_mask)) + np.sum(E_obstacle(dR_bo, N, D)) +
np.sum(E_boid_predator(boids.R, N, predators.R)))
E_predator = (np.sum(E_obstacle(dR_po, N_predator, D)) +
np.sum(E_predator_boid(predators.R, N_predator, boids.R)))
return E_boid + E_predator
```
Next we have to update our simulation to use and update the neighbor list.
```
def dynamics(energy_fn, dt, boid_speed, predator_speed):
# We extract common movement functionality into a `move` function.
def move(boids, dboids, speed):
R, theta, *_ = boids
dR, dtheta, *_ = dboids
n = normal(theta)
return (shift(R, dt * (speed * n + dR)),
theta + dt * dtheta)
@jit
def update(_, state_and_neighbors):
state, neighbors = state_and_neighbors
# New code to update neighbor list.
neighbors = neighbor_fn(state['boids'].R, neighbors)
dstate = quantity.force(energy_fn)(state, neighbors)
state['boids'] = Boids(*move(state['boids'], dstate['boids'], boid_speed))
# Predator acceleration.
D_sprint = 65.
T_sprint = 300.
tau_sprint = 50.
sprint_speed = 2.0
d = space.map_product(space.metric(displacement))
predator = state['predators']
dr_min = np.min(d(state['boids'].R, predator.R), axis=1)
mask = np.logical_and(dr_min < D_sprint, predator.dt > T_sprint)
predator_dt = np.where(mask, 0., predator.dt + dt)
speed = predator_speed + sprint_speed * np.exp(-predator_dt / tau_sprint)
speed = speed[:, np.newaxis]
predator_R, predator_theta = move(state['predators'],
dstate['predators'],
speed)
state['predators'] = Predator(predator_R, predator_theta, predator_dt)
#
return state, neighbors
return update
```
And now we can conduct our larger simulation.
```
update = dynamics(energy_fn=energy_fn, dt=1e-1, boid_speed=1., predator_speed=.85)
boids_buffer = []
predators_buffer = []
state = {
'boids': boids,
'obstacles': obstacles,
'predators': predators
}
for i in ProgressIter(range(800)):
new_state, neighbors = lax.fori_loop(0, 50, update, (state, neighbors))
# If the neighbor list can't fit in the allocation, rebuild it but bigger.
if neighbors.did_buffer_overflow:
print('REBUILDING')
neighbors = neighbor_fn(state['boids'].R)
state, neighbors = lax.fori_loop(0, 50, update, (state, neighbors))
assert not neighbors.did_buffer_overflow
else:
state = new_state
boids_buffer += [state['boids']]
predators_buffer += [state['predators']]
display(render(box_size, boids_buffer, obstacles, predators_buffer))
```
At the end of the simulation we can see how large our neighbor list had to be to accommodate all of the boids.
```
print(neighbors.idx.shape)
```
## Observations and Insights
1) It was clear that Capomulin was the most effective drug regimen.
2) Capomulin and Ramicane both had higher numbers of mice completing the course of treatment. This could be a result of the drugs' performance allowing the mice to continue longer with the regimen.
3) For the drug Capomulin, the correlation between mouse weight and average tumor volume was 0.84. This indicates a positive correlation between the two variables, suggesting that the weight of a mouse could influence the effectiveness of the drug regimen.
4) From the boxplots it can be seen that Capomulin and Ramicane were the two most effective regimens, as all of their mice had significantly lower final tumor volumes compared to the next two drug regimens, Infubinol and Ceftamin.
5) While most mice showed tumor volume increase under the Infubinol regimen, one mouse had a reduction in tumor volume during the study. This is a potential outlier within the study.
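For reference, a correlation like the 0.84 figure quoted in point 3 can be computed with `scipy.stats.pearsonr`; the numbers below are made-up stand-ins, not the study's data:

```python
import scipy.stats as st

# Hypothetical stand-in data, NOT values from the study.
weights = [15, 17, 19, 21, 23, 25]                    # mouse weight (g)
avg_tumor_vol = [36.0, 38.5, 40.0, 43.0, 44.5, 47.0]  # avg tumor volume (mm3)

r, p = st.pearsonr(weights, avg_tumor_vol)
print(round(r, 2))  # close to 1 for this nearly linear toy data
```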
```
! pip install scipy
! pip install autopep8
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
import autopep8
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
data_combined_df = pd.merge(
mouse_metadata, study_results, how="inner", on="Mouse ID")
# Display the data table for preview
data_combined_df.head()
# Checking the number of mice.
mice = data_combined_df['Mouse ID'].value_counts()
numberofmice = len(mice)
numberofmice
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicatemice = data_combined_df.loc[data_combined_df.duplicated(
subset=['Mouse ID', 'Timepoint', ]), 'Mouse ID'].unique()
print(duplicatemice)
# Optional: Get all the data for the duplicate mouse ID.
duplicate_g989 = data_combined_df[data_combined_df.duplicated(
['Mouse ID', 'Timepoint'])]
duplicate_g989
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = data_combined_df[data_combined_df['Mouse ID'].isin(
duplicatemice) == False]
clean_df
# Checking the number of mice in the clean DataFrame.
cleanmice = clean_df['Mouse ID'].value_counts()
numberofcleanmice = len(cleanmice)
numberofcleanmice
```
## Summary Statistics
```
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
mean = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
# print (mean)
median = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()
# print (median)
variance = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()
# print (variance)
std_dv = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()
# print (std_dv)
sem = clean_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()
# print (sem)
summary_df = pd.DataFrame({"Mean": mean, "Median": median,
"Variance": variance, "Standard Deviation": std_dv, "SEM": sem})
summary_df
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
single_group_by = clean_df.groupby('Drug Regimen')
summary_df_2 = single_group_by.agg(['mean', 'median', 'var', 'std', 'sem'])[
"Tumor Volume (mm3)"]
summary_df_2
# This method produces everything in a single groupby function
```
## Bar and Pie Charts
```
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
micepertreatment = clean_df.groupby(["Drug Regimen"]).count()["Mouse ID"]
# micepertreatment
plot_pandas = micepertreatment.plot.bar(
figsize=(10, 5), color='g', fontsize=12)
plt.xlabel("Drug Regimen", fontsize=16)
plt.ylabel("Number of Mice", fontsize=16)
plt.title("Total Number of Mice per Treatment", fontsize=20)
plt.show()
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
x_axis = summary_df.index.tolist()
# x_axis
y_axis = micepertreatment.tolist()
# y_axis
tick_locations = []
for x in x_axis:
tick_locations.append(x)
plt.xlim(-.75, len(x_axis)-.25)
plt.ylim(0, max(y_axis) + 10)
plt.xlabel("Drug Regimen", fontsize=16)
plt.ylabel("Number of Mice", fontsize=16)
plt.title("Total Number of Mice per Treatment", fontsize=18)
plt.bar(x_axis, y_axis, color='b', alpha=.75, align="center")
plt.xticks(tick_locations, x_axis, rotation=90)
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Group by gender and get the number to plot
genderpercentage = clean_df["Mouse ID"].groupby([clean_df["Sex"]]).nunique()
genderpercentage
list_sex = genderpercentage.keys()
list_sex
explode = [0.025, 0]
colors = ['green', 'blue']
genderpercentage.plot(kind='pie', y=list_sex, autopct='%1.1f%%',
explode=explode, colors=colors, startangle=50, shadow=True)
plt.title('Distribution of female versus male mice', fontsize=18)
plt.axis("equal")
plt.ylabel('Sex', fontsize=14)
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
# mouse count per sex
sex_count = clean_df.loc[(clean_df["Timepoint"] == 0), :]
sex_count
# Labels for the sections of our pie chart
labels = sex_count["Sex"].unique()
labels
# The values of each section of the pie chart
mouse_sex = sex_count["Sex"].value_counts().tolist()
mouse_sex
# plot using pyplot
plt.pie(mouse_sex, labels=labels, autopct="%1.1f%%",
explode=explode, colors=colors, startangle=50, shadow=True)
plt.title("Distribution of female versus male mice", fontsize=18)
plt.axis("equal")
plt.ylabel("Sex", fontsize=14)
plt.show()
```
## Quartiles, Outliers and Boxplots
```
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
last_timepoint = clean_df.groupby('Mouse ID').max()['Timepoint']
last_timepoint
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
Vol_last = pd.merge(last_timepoint, data_combined_df,
on=("Mouse ID", "Timepoint"), how="left")
Vol_last
# Put treatments into a list for for loop (and later for plot labels)
treatments = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
treatments
# Create empty list to fill with tumor vol data (for plotting)
tumor_vol = []
tumor_vol
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for drug in treatments:
    # Locate the rows which contain mice on each drug and get the tumor volumes
    tumorsize = Vol_last[Vol_last['Drug Regimen'] == drug]['Tumor Volume (mm3)']
    # add subset
    tumor_vol.append(tumorsize)
    # Determine outliers using upper and lower bounds
    lowerq = tumorsize.quantile(.25)
    upperq = tumorsize.quantile(.75)
    IQR = upperq - lowerq
    lower_bound = lowerq - (1.5 * IQR)
    upper_bound = upperq + (1.5 * IQR)
    print()
    print(f"{drug} IQR: {IQR}")
    print(f"Values below {lower_bound} for {drug} could be outliers.")
    print(f"Values above {upper_bound} for {drug} could be outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
fig1, ax = plt.subplots(figsize=(9, 7))
ax.set_title(
"Final tumor volume of each mouse across four regimens of interest")
ax.set_xlabel("Drug Regimen")
ax.set_ylabel("Tumor Volume (mm3)")
ax.boxplot(tumor_vol, labels=treatments, sym="gD")
plt.xticks([1, 2, 3, 4], treatments)
plt.show()
```
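The 1.5 × IQR rule applied in the loop above can be distilled into a tiny helper. This is an illustrative sketch with made-up volumes, not the study data:

```python
import numpy as np

def iqr_bounds(values):
    """Return (lower_bound, upper_bound) using the 1.5 * IQR rule."""
    lowerq, upperq = np.percentile(values, [25, 75])
    iqr = upperq - lowerq
    return lowerq - 1.5 * iqr, upperq + 1.5 * iqr

# A small synthetic sample with one obvious outlier
volumes = [40.0, 41.5, 42.0, 43.0, 44.5, 45.0, 70.0]
lo, hi = iqr_bounds(volumes)
outliers = [v for v in volumes if v < lo or v > hi]
print(lo, hi, outliers)
```

Any value outside the bounds is flagged; here only the 70.0 falls above the upper bound.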
## Line and Scatter Plots
```
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
vol_data = clean_df.loc[clean_df["Drug Regimen"] == "Capomulin"]
vol_data
capomulins185_df = vol_data.loc[vol_data["Mouse ID"] == "s185"]
capomulins185_df
x_axis1 = capomulins185_df["Timepoint"]
tumor = capomulins185_df["Tumor Volume (mm3)"]
plt.plot(x_axis1, tumor, marker="o", color="purple", )
plt.title("Time point vs. tumor volume for mouse s185 treated with Capomulin")
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.grid(True)
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
mouseavgweight = vol_data.groupby(["Mouse ID"]).mean()
plt.scatter(mouseavgweight["Weight (g)"],
mouseavgweight["Tumor Volume (mm3)"], marker="H")
plt.title(
"Mouse weight versus average tumor volume for the Capomulin regimen", fontsize=18)
plt.xlabel("Weight (g)", fontsize=12)
plt.ylabel("Average Tumor Volume (mm3)", fontsize=12)
plt.show()
```
## Correlation and Regression
```
# Calculate the correlation coefficient and linear regression model
correlation = st.pearsonr(
mouseavgweight["Weight (g)"], mouseavgweight["Tumor Volume (mm3)"])
# for mouse weight and average tumor volume for the Capomulin regimen
x_values = mouseavgweight["Weight (g)"]
y_values = mouseavgweight["Tumor Volume (mm3)"]
(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (20, 36), fontsize=15, color="red")
plt.title("Mouse weight and average tumor volume for the Capomulin")
plt.xlabel("Mouse Weight (g)")
plt.ylabel("Tumor Volume (mm3)")
print(
f"The correlation coefficient between mouse weight and average tumor volume for the Capomulin regimen is {round(correlation[0],2)}")
print(f"The r-squared is {rvalue**2}")
plt.show()
```
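For reference, the quantities returned by `st.pearsonr` and `st.linregress` can be reproduced from the covariance formulas. A minimal numpy sketch with made-up x/y values (not the mouse data):

```python
import numpy as np

def pearson_and_fit(x, y):
    """Pearson r plus least-squares slope/intercept, from first principles."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    cov = np.mean((x - xm) * (y - ym))
    r = cov / (x.std() * y.std())
    slope = cov / x.var()          # equivalently r * y.std() / x.std()
    intercept = ym - slope * xm
    return r, slope, intercept

x = [15, 17, 19, 21, 23]
y = [36, 38, 40, 42, 44]           # exactly y = x + 21
r, slope, intercept = pearson_and_fit(x, y)
print(r, slope, intercept)
```

On perfectly linear data the correlation is 1 and the fitted line recovers the generating equation exactly.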
```
import pandas
import matplotlib as mpl
import xarray as xr
import numpy as np
import datetime as dt
dir_cmc='F:/data/sst/cmc/CMC0.2deg/v2/'
dir_cmc_clim='F:/data/sst/cmc/CMC0.2deg/v2/climatology/'
def get_filename(lyr, idyjl):
    podaac_dir_v3 = 'https://podaac-opendap.jpl.nasa.gov/opendap/allData/ghrsst/data/GDS2/L4/GLOB/CMC/CMC0.1deg/v3/'
    podaac_dir_v2 = 'https://podaac-opendap.jpl.nasa.gov/opendap/allData/ghrsst/data/GDS2/L4/GLOB/CMC/CMC0.2deg/v2/'
    d = dt.date(lyr, 1, 1) + dt.timedelta(idyjl - 1)
    syr = str(d.year).zfill(4)
    smon = str(d.month).zfill(2)
    sdym = str(d.day).zfill(2)
    sjdy = str(idyjl).zfill(3)
    if lyr < 2017:
        # v2 files are on the 0.2-degree grid (pattern as in the example URL below)
        cmc_filename = podaac_dir_v2 + syr + '/' + sjdy + '/' + syr + smon + sdym + '120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc'
    else:
        # v3 files are on the 0.1-degree grid
        cmc_filename = podaac_dir_v3 + syr + '/' + sjdy + '/' + syr + smon + sdym + '120000-CMC-L4_GHRSST-SSTfnd-CMC0.1deg-GLOB-v02.0-fv03.0.nc'
    return cmc_filename
#testing 0.1 CMC
filename = 'https://podaac-opendap.jpl.nasa.gov/opendap/allData/ghrsst/data/GDS2/L4/GLOB/CMC/CMC0.2deg/v2/1994/002/19940102120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc'
ds_v2 = xr.open_dataset(filename)
new_lat = np.linspace(ds_v2.lat[0], ds_v2.lat[-1], ds_v2.dims['lat'])
new_lon = np.linspace(ds_v2.lon[0], ds_v2.lon[-1], ds_v2.dims['lon'])
ds2 = ds_v2.interp(lat=new_lat, lon=new_lon)
#for 2017 - present use 0.1 CMC interpolated onto 0.2 grid for monthly averages and run on pythonanywhere
for lyr in range(2017, 2019):
    init = 0
    for idyjl in range(1, 366):
        cmc_filename = get_filename(lyr, idyjl)
        ds = xr.open_dataset(cmc_filename, drop_variables=['analysis_error', 'sea_ice_fraction'])
        ds_masked = ds.where(ds['mask'] == 1.)
        ds.close()
        ds_masked['sq_sst'] = ds_masked.analysed_sst**2
        if init == 0:
            ds_sum = ds_masked
            init = 1
        else:
            ds_sum = xr.concat([ds_sum, ds_masked], dim='time')
        print(idyjl, ds_sum.dims)
    # collapse the daily fields into monthly averages for the year
    combined = ds_sum.resample(time='1M').mean()
    fname_tem = 'monthly_average_' + str(lyr).zfill(4) + '120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc'
    cmc_filename_out = './data/' + fname_tem
    combined.to_netcdf(cmc_filename_out)
```
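The date arithmetic inside `get_filename` can be checked in isolation. This stdlib sketch mirrors the year/month/day/julian-day strings the function builds:

```python
import datetime as dt

def date_parts(lyr, idyjl):
    """Convert (year, julian day) to zero-padded strings, as in get_filename."""
    d = dt.date(lyr, 1, 1) + dt.timedelta(idyjl - 1)
    return (str(d.year).zfill(4), str(d.month).zfill(2),
            str(d.day).zfill(2), str(idyjl).zfill(3))

# Julian day 2 of 1994 is January 2nd, matching the example URL above
syr, smon, sdym, sjdy = date_parts(1994, 2)
print(syr + smon + sdym, sjdy)
```

Note that `timedelta(idyjl - 1)` is what keeps julian day 1 mapped to January 1st.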
# Data Preprocessing for Social Anxiety Detection : Participant 4
***
## Participant Details
__Gender:__ male <br/>
__Ethnicity:__ white <br/>
__Age:__ 19 <br/>
__Anxiety score:__ 72 <br/>
__Anxiety category:__ 2 <br/>
***
## Contents
__1.Introduction <br/>__
1.1. Nature of the dataset <br/>
1.2. Description of the ML experiments <br/>
__2.Import packages <br/>__
__3.Import data <br/>__
3.1. Import HR data and resample <br/>
3.2. Import ST data <br/>
3.3. Import EDA data <br/>
__4.Combine data <br/>__
__5.Data labelling <br/>__
5.1. Labelling for experiment (1) and (3) <br/>
5.2. Labelling for experiment (2) <br/>
__6.Data visualisation and export__
***
## 1. Introduction
This notebook preprocesses the physiological data needed for the supervised machine learning (ML) experiments that investigate whether subclinical social anxiety in young adults can be detected using physiological data obtained from wearable sensors.
### 1.1. Nature of the dataset
The dataset consists of Heart Rate (HR) data, Skin Temperature (ST) data and Electrodermal Activity (EDA) data. This physiological data was collected using an E4 Empatica wearable device. Using the default sampling rates of the E4, EDA was measured in microSiemens (μS) at 4 Hz using stainless steel electrodes positioned on the inner side of the wrist. HR was measured in Beats Per Minute (BPM) at 1 Hz using data derived from a Photoplethysmography sensor. ST was measured in degrees Celsius (°C) at 4 Hz using an infrared thermopile.
### 1.2. Description of the ML experiments
__Experiment (1)__ investigates whether models can be trained to classify between baseline and socially anxious states. The data is labelled '0' during the baseline period and '1' during the anxiety period (anticipation and reactive anxiety).
__Experiment (2)__ investigates whether models can be trained to differentiate between baseline, anticipation anxiety and reactive anxiety states. The data is given one of three labels: '0' during the baseline period, '1' during the anticipation anxiety period and '2' during the reactive anxiety period.
__Experiment (3)__ investigates whether models can be trained to classify between social anxiety experienced by individuals with differing social anxiety severity. The data was segregated based on scores from the self-reported version of the Liebowitz Social Anxiety Scale (LSAS-SR): data is labelled '0' for individuals in anxiety category 1 (LSAS-SR: 50-64) or '1' for individuals in anxiety category 2 (LSAS-SR: 65-80).
***
## 2.Import packages
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
## 3.Import and combine data
### 3.1. Import HR data and upsample
HR is imported and upsampled to 4 Hz to match ST and EDA. The data is then cleaned using a moving average filter to remove noise and reduce the risk of overfitting.
```
hr = pd.read_csv("HR.csv")
hr.index = pd.date_range('2020-03-04', periods = len(hr), freq='1S')
#resampling HR to 4Hz
hr_resample = hr.resample('0.25S').ffill()
#Applying moving average filter
rolling = hr_resample.rolling(window=9)
hr_filtered = rolling.mean()
#Plotting the comparison
fig, (ax1, ax2) = plt.subplots(2, 1)
hr_resample[2300:2400].plot( ax=ax1, legend=False, color = 'indigo')
ax1.yaxis.set_label_text("HR (BPM)")
ax1.xaxis.set_label_text('Time(min)')
ax1.set_title("Resampled HR")
ax1.grid(which='both', alpha=2)
hr_filtered[2300:2400].plot( ax=ax2, legend=False, color = 'indigo')
ax2.yaxis.set_label_text("HR (BPM)")
ax2.xaxis.set_label_text('Time(min)')
ax2.set_title("Resampled HR After Filtering")
ax2.grid(which='both', alpha=2)
fig.set_size_inches(15, 5)
fig.subplots_adjust(hspace=0.7)
plt.show()
```
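Conceptually, `resample('0.25S').ffill()` just repeats each 1 Hz reading four times until the next reading arrives. A stdlib sketch of that idea (illustrative values, not participant data):

```python
def ffill_upsample(samples, factor):
    """Repeat each sample `factor` times — forward-fill upsampling (1 Hz -> 4 Hz)."""
    out = []
    for s in samples:
        out.extend([s] * factor)
    return out

hr_1hz = [72, 75, 74]
hr_4hz = ffill_upsample(hr_1hz, 4)
print(hr_4hz)   # [72, 72, 72, 72, 75, 75, 75, 75, 74, 74, 74, 74]
```

The moving-average filter afterwards then smooths over the stair-step artefacts this repetition introduces.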
### 3.2. Import ST data
The ST data is imported and then cleaned using a moving average filter to remove noise and reduce the risk of overfitting.
```
st = pd.read_csv("ST.csv")
st.index = pd.date_range('2020-03-04', periods = len(st), freq='0.25S')
#Applying moving average filter
rolling = st.rolling(window=15)
st_filtered = rolling.mean()
#Plotting the comparison
fig, (ax1, ax2) = plt.subplots(2, 1)
st[2300:2400].plot( ax=ax1, legend=False, color = 'indigo')
ax1.yaxis.set_label_text("ST (°C)")
ax1.xaxis.set_label_text('Time(min)')
ax1.set_title("Raw ST")
ax1.grid(which='both', alpha=2)
st_filtered[2300:2400].plot( ax=ax2, legend=False, color = 'indigo')
ax2.yaxis.set_label_text("ST (°C)")
ax2.xaxis.set_label_text('Time(min)')
ax2.set_title("ST After Filtering")
ax2.grid(which='both', alpha=2)
fig.set_size_inches(15, 5)
fig.subplots_adjust(hspace=0.7)
plt.show()
```
### 3.3. Import EDA data
The EDA data is imported and then cleaned using a moving average filter to remove noise and reduce the risk of overfitting. The EDA data is also range corrected to remove inter-individual differences; more details about the range correction method can be found in the paper.
```
eda = pd.read_csv("EDA.csv")
eda.index = pd.date_range('2020-03-04', periods = len(eda), freq='0.25S')
#Applying moving average filter
rolling = eda.rolling(window=15)
eda_filtered = rolling.mean()
#Range corrected EDA - value - min/max-min
eda_corrected = (eda_filtered - 0.073)/(0.152-0.073)
#Plotting the comparison
fig, (ax1, ax2, ax3) = plt.subplots(3, 1)
eda[2300:2500].plot( ax=ax1, legend=False, color = 'indigo')
ax1.yaxis.set_label_text("EDA (μS)")
ax1.xaxis.set_label_text('Time(min)')
ax1.set_title("Raw EDA")
ax1.grid(which='both', alpha=2)
eda_filtered[2300:2500].plot( ax=ax2, legend=False, color = 'indigo')
ax2.yaxis.set_label_text("EDA (μS)")
ax2.xaxis.set_label_text('Time(min)')
ax2.set_title("EDA After Filtering")
ax2.grid(which='both', alpha=2)
eda_corrected[2300:2500].plot( ax=ax3, legend=False, color = 'indigo')
ax3.yaxis.set_label_text("EDA (μS)")
ax3.xaxis.set_label_text('Time(min)')
ax3.set_title("Range corrected EDA")
ax3.grid(which='both', alpha=2)
fig.set_size_inches(15, 6)
fig.subplots_adjust(hspace=1.3)
eda_filtered=eda_corrected
plt.show()
#eda[480:5304].min()
#eda[480:5304].max()
```
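The range correction applied above is plain min-max normalisation, using this participant's baseline minimum (0.073 μS) and maximum (0.152 μS) taken from the code above. A stdlib sketch:

```python
def range_correct(values, lo, hi):
    """Min-max (range) correction: (x - min) / (max - min)."""
    span = hi - lo
    return [(v - lo) / span for v in values]

# The endpoints map to 0 and 1; the midpoint maps to 0.5
eda = [0.073, 0.1125, 0.152]
corrected = range_correct(eda, 0.073, 0.152)
print(corrected)
```

After correction all participants' EDA traces share the same [0, 1] scale, which is what removes the inter-individual offset.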
## 4.Combine data
```
df = pd.concat([hr_filtered, st_filtered, eda_filtered], ignore_index=True, axis = 1 )
df = df.T.reset_index(drop=True).T
display(df.describe())
```
## 5.Data labelling
The data was labelled for three different experiments. The label boundaries (in numbers of samples) were calculated using a spreadsheet and the timestamps recorded during the experiments.
```
#insert column specifically for labels
df.insert(3,3,0)
display(df.describe())
```
### 5.1. Labelling for experiment (1) and (3)
For experiment (1) the data was labelled '1' (allocated to the social anxiety class) from when the task was announced to when the task was finished. The first 2 minutes of the baseline period were discarded to account for acclimatisation, and the data after the task was discarded as well.
For experiment (3) only the data in the anxious period (from task announcement to task end) was extracted and labelled. This individual falls into anxiety category 2 based on their LSAS-SR score, therefore their anxious data is labelled '1'. The data is then shuffled and a fixed number of samples is taken.
```
experiment_df = df
#duration (labels) of anxiety duration (both anticipation and reactive, labelled '1')
experiment_df.iloc[2345:5304, 3] = 1
display(experiment_df[3].value_counts())
#removing the data after the task had ended
experiment_df = experiment_df.drop(experiment_df.index[5304:])
#experiment 1 - removing the first 2 mins of the baseline period to account for acclimatisation
experiment1_df = experiment_df.drop(experiment_df.index[:480])
display(experiment1_df[3].value_counts())
experiment1_df.to_csv("experiment_1.csv")
#experiment 3 - removing baseline period
experiment3_df = experiment_df.drop(experiment_df.index[:2345])
display(experiment3_df[3].value_counts())
#shuffling and extracting a set number of samples
idx = np.random.permutation(experiment3_df.index)
shuffled = experiment3_df.reindex(idx, axis=0)
shuffled = shuffled.reset_index(drop=True)
shuffled = shuffled.drop(shuffled.index[1667:])
shuffled.to_csv("experiment_3.csv")
```
### 5.2. Labelling for experiment (2)
For experiment (2) the data was labelled '1' during the anticipation anxiety stage (task announcement to task start) and '2' during the reactive anxiety stage (task start to task end). The first 2 minutes of the baseline period were discarded to account for acclimatisation, and the data after the task was discarded as well.
```
experiment2_df = df
#duration (labels) of task prep (anticipation anxiety duration, labelled '1')
experiment2_df.iloc[2345:3984, 3] = 1
#duration (labels) of task execution (reactive anxiety duration, labelled '2')
experiment2_df.iloc[3984:5304, 3] = 2
display(experiment2_df[3].value_counts())
#removing the data after the task had ended
experiment2_df = experiment2_df.drop(experiment2_df.index[5304:])
#removing the first 2 mins of the baseline period to account for acclimatisation
experiment2_df = experiment2_df.drop(experiment2_df.index[:506])
display(experiment2_df[3].value_counts())
experiment2_df.to_csv("experiment_2.csv")
```
## 6.Data visualisation
The physiological data and experiment (1) and (2) labels were plotted. Pearson correlation matrices were also formulated for the dataset used in experiment (1) and (2).
```
fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(5, 1)
ax1.set_title('Combined Physiological Data and Experiment Labels (1 & 2)', fontsize = 15)
experiment1_df[0].plot(ax=ax1, legend=False, color='indigo')
ax1.yaxis.set_label_text("HR (BPM)")
ax1.xaxis.set_label_text('Time(min)')
ax1.grid(which='both', alpha=2)
experiment1_df[1].plot(ax=ax2, legend=False, color='indigo')
ax2.yaxis.set_label_text("ST (°C)")
ax2.xaxis.set_label_text('Time(min)')
ax2.grid(which='both', alpha=2)
experiment1_df[2].plot(ax=ax3, legend=False, color='indigo')
ax3.yaxis.set_label_text("Range Corrected EDA (μS)")
ax3.xaxis.set_label_text('Time(min)')
ax3.grid(which='both', alpha=2)
experiment1_df[3].plot(ax=ax4, legend=False, color='indigo')
ax4.yaxis.set_label_text("Experiment (1) labels")
ax4.xaxis.set_label_text('Time(min)')
ax4.grid(which='both', alpha=2)
experiment2_df[3].plot(ax=ax5, legend=False, color='indigo')
ax5.yaxis.set_label_text("Experiment (2) labels")
ax5.xaxis.set_label_text('Time(min)')
ax5.grid(which='both', alpha=2)
fig.set_size_inches(15, 14)
fig.subplots_adjust(hspace=0.4)
plt.show()
#Correlation matrix with Experiment 1 (binary labels)
labeldata = ['HR', 'ST', 'EDA','Labels']
sns.heatmap(experiment1_df.corr(method = 'pearson'), vmin=0, vmax=1, annot=True, cmap="YlGnBu", yticklabels = labeldata, xticklabels =labeldata)
fig = plt.gcf()
#Correlation matrix with Experiment 2 (Mult-class labels)
sns.heatmap(experiment2_df.corr(method = 'pearson'), vmin=0, vmax=1, annot=True, cmap="YlGnBu", yticklabels = labeldata, xticklabels =labeldata)
fig = plt.gcf()
```
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
import pickle
import numpy as np
import pandas as pd
import skimage.io as io
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
from keras.applications.resnet50 import preprocess_input
from keras.models import Model
```
### In this part, we conduct the following procedure to make our data analysis-ready.
**Step 1.** For every species, we select out the **representative images**.
**Step 2.** For every species representative image, we calculate its **HSV values with regard of different parts** (body, forewing, hindwing, whole)
**Step 3.** For every species representative image, we extract its **2048-dimensional features** from the well-trained neural network model
**Step 4.** We cluster species based on either the 2-dimensional t-SNE map or 2048D features into **k assemblages through k-Means Clustering**
**Step 5.** We use **t-SNE to compress its 2048-dimensional features** into one dimension as the trait value
**Step 6.** We quantify the **assemblage-level color diversity** by calculating the average cosine distance among every pair of species in the same assemblage
### output files:
1. **all_complete_table.csv**: main result for further analysis where a row implies a **species**
2. **trait_analysis.csv**: trait value for T-statistics analysis (T stands for trait), where a row implies an **image**
3. **cluster_center.csv**: information about assemblage centers where a row implies an assemblage center
4. **in-cluser_pairwise_diversity.csv**: result of pair-wise color distance where a row implies a pair of species
```
model_dirname = '/home/put_data/moth/code/cmchang/regression/fullcrop_dp0_newaug-rmhue+old_species_keras_resnet_fold_20181121_4'
# read testing dataset and set the path to obtain every part's mask
Xtest = pd.read_csv(os.path.join(model_dirname, 'test.csv'))
Xtest['img_rmbg_path'] = Xtest.Number.apply(lambda x: '/home/put_data/moth/data/whole_crop/'+str(x)+'.png')
Xtest['img_keep_body_path'] = Xtest.img_rmbg_path.apply(lambda x: x.replace('whole_crop','KEEP_BODY'))
Xtest['img_keep_down_path'] = Xtest.img_rmbg_path.apply(lambda x: x.replace('whole_crop','KEEP_DOWN'))
Xtest['img_keep_up_path'] = Xtest.img_rmbg_path.apply(lambda x: x.replace('whole_crop','KEEP_UP'))
Xtest = Xtest.reset_index()
Xtest.drop(columns='index', inplace=True)
# get the dictionary to look up the average elevation of a species
with open(os.path.join('/home/put_data/moth/metadata/1121_Y_mean_dict.pickle'), 'rb') as handle:
Y_dict = pickle.load(handle)
Ytest = np.vstack(Xtest['Species'].apply(lambda x: Y_dict[x]))
# aggregate the testing data by Species
df_species_group = Xtest.groupby('Species').apply(
lambda g: pd.Series({
'indices': g.index.tolist(),
}))
df_species_group = df_species_group.sample(frac=1).reset_index()
display(df_species_group.head())
```
### Step 1.
```
# select out the representative image which is the closest to its average elevation
sel = list()
for k in range(df_species_group.shape[0]):
row = df_species_group.iloc[k]
i = np.argmin(np.abs(np.array(Xtest.Alt[row['indices']]) - Y_dict[row['Species']]))
sel.append(row['indices'][i])
# Xout: DataFrame only contains representative images
Xout = Xtest.iloc[sel]
Yout = Ytest[sel]
Xout = Xout.reset_index()
Xout.drop(columns='index', inplace=True)
Xout.head()
```
### Step 2.
```
# extract the HSV features for species representatives
import skimage.color as color
def img_metrics(img):
hsv = color.rgb2hsv(img)
mask = 1.0 - (np.mean(img, axis=2)==255.0) + 0.0
x,y = np.where(mask)
mean_hsv = np.mean(hsv[x,y], axis=0)
std_hsv = np.std(hsv[x,y], axis=0)
return mean_hsv, std_hsv
df_reg_list = list()
species_list = list()
filename_list = list()
for k in range(Xout.shape[0]):
print(k, end='\r')
species = Xout.iloc[k]['Species']
species_list.append(species)
body_img = io.imread(Xout.iloc[k]['img_keep_body_path'])
mask = 1.0 - (np.mean(body_img, axis=2)==255.0) + 0.0
body_img[:,:,0] = body_img[:,:,0]*mask
body_img[:,:,1] = body_img[:,:,1]*mask
body_img[:,:,2] = body_img[:,:,2]*mask
img = io.imread(Xout.iloc[k]['img_keep_up_path'])
img += body_img
alt = Y_dict[Xout.iloc[k]['Species']]
mean_hsv, std_hsv = img_metrics(img)
whole_img = io.imread(Xout.iloc[k]['img_rmbg_path'])
whole_mean_hsv, whole_std_hsv = img_metrics(whole_img)
res = np.append(whole_mean_hsv[:3], mean_hsv[:3])
res = np.append(res, [alt])
df_reg_list.append(res)
df_reg_output = pd.DataFrame(data=df_reg_list,
columns=['h.whole', 's.whole', 'v.whole',
'h.body_fore','s.body_fore', 'v.body_fore','alt'])
```
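`img_metrics` averages the HSV channels over non-background pixels via skimage's `rgb2hsv`. The same idea can be sketched with the stdlib's `colorsys` (which, like skimage, keeps H, S and V in [0, 1]); the pixel values here are made up:

```python
import colorsys

def mean_hsv(pixels):
    """Mean H, S, V over a list of (r, g, b) pixels with channels in [0, 1]."""
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]
    n = len(hsv)
    return tuple(sum(c[i] for c in hsv) / n for i in range(3))

# Two toy foreground pixels: pure red and a half-bright red
pixels = [(1.0, 0.0, 0.0), (0.5, 0.0, 0.0)]
m = mean_hsv(pixels)
print(m)
```

Both pixels share the red hue (H = 0) and full saturation, so only the value channel averages to something in between.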
### Step 3.
```
# extract 2048-dimensional features
from keras.models import load_model
model = load_model(os.path.join(model_dirname,'model.h5'))
features = model.get_layer('global_average_pooling2d_1')
extractor = Model(inputs=model.input, outputs=features.output)
TestImg = list()
for i in range(Xout.shape[0]):
img = io.imread(list(Xout['img_rmbg_path'])[i])
TestImg.append(img)
TestImg = np.stack(TestImg)
TestInput = preprocess_input(TestImg.astype(float))
Fout = extractor.predict(x=TestInput)
Yout = np.array([Y_dict[sp] for sp in Xout.Species])
np.save(file='Species_Representative_1047x2048.npy', arr=Fout)
Fout.shape
```
### Step 4.
```
# compress 2048-D features to 2-D map for visualization and clustering
from sklearn.manifold import TSNE
F_embedded = TSNE(n_components=2, perplexity=120).fit_transform(Fout)
from sklearn.cluster import KMeans
from sklearn import metrics
from time import time
def bench_k_means(estimator, name, data):
t0 = time()
estimator.fit(data)
print('%-9s\t%.2fs\t%.3f\t%.3f'
% (name, (time() - t0), estimator.inertia_,
metrics.silhouette_score(data, estimator.labels_,
metric='cosine',
sample_size=500)))
return estimator
for k in [30]:
km = KMeans(init='k-means++', n_clusters=k, n_init=20)
km = bench_k_means(km, name="k-means++", data=Fout)
from collections import Counter
Counter(km.labels_)
Xout['tsne.0'] = F_embedded[:,0]
Xout['tsne.1'] = F_embedded[:,1]
Xout['km_label'] = km.labels_
# representative image information
resout = pd.concat([Xout, df_reg_output], axis=1)
resout.to_csv(os.path.join(model_dirname, 'all_complete_table.csv'), index=False)
```
#### - If clustering based on t-SNE maps
```
# # cluster information
# stat = Xout[['km_label','Alt']].groupby('km_label').apply(np.mean)
# stat = stat.sort_values('Alt')
# stat.columns = ['km_label', 'class_alt']
# # center information
# centers = km.cluster_centers_
# myk = km.cluster_centers_.shape[0]
# centx, centy = list(), list()
# for i in range(stat.shape[0]):
# centx.append(centers[int(stat.iloc[i]['km_label']),0])
# centy.append(centers[int(stat.iloc[i]['km_label']),1])
# # add center information into clustere information
# stat['center_x'] = centx
# stat['center_y'] = centy
# stat['order'] = np.arange(myk)
# # output cluster information
# stat.to_csv(os.path.join(model_dirname,'cluster_center.csv'), index=False)
```
#### - If clustering based on 2048D features
```
from sklearn.metrics.pairwise import pairwise_distances
# cluster information
stat = Xout[['km_label','Alt']].groupby('km_label').apply(np.mean)
stat = stat.sort_values('km_label')
stat.columns = ['km_label', 'class_alt']
# center information
centers = km.cluster_centers_
myk = km.cluster_centers_.shape[0]
centx, centy = list(), list()
for i in range(myk):
center = centers[i:(i+1),:]
sel = np.where(km.labels_==i)[0]
    nearest_species = np.argmin(pairwise_distances(X=center, Y=Fout[sel], metric='cosine'))
i_nearest_species = sel[nearest_species]
centx.append(F_embedded[i_nearest_species, 0])
centy.append(F_embedded[i_nearest_species, 1])
# add center information into cluster information
stat['center_x'] = centx
stat['center_y'] = centy
stat = stat.sort_values('class_alt')
# stat.columns = ['km_label', 'class_alt']
stat['order'] = np.arange(myk)
# output cluster information
stat.to_csv(os.path.join(model_dirname,'cluster_center.csv'), index=False)
```
### Step 5.
```
# compress 2048-D features to 1-D trait for functional trait analysis
TestImg = list()
for i in range(Xtest.shape[0]):
img = io.imread(list(Xtest['img_rmbg_path'])[i])
TestImg.append(img)
TestImg = np.stack(TestImg)
TestInput = preprocess_input(TestImg.astype(float))
Ftest = extractor.predict(x=TestInput)
from sklearn.manifold import TSNE
F_trait = TSNE(n_components=1, perplexity=100).fit_transform(Ftest)
F_trait = F_trait - np.min(F_trait)
Xtest['trait'] = F_trait[:,0]
np.save(file='Species_TestingInstance_4249x2048.npy', arr=Ftest)
# image trait information table
dtrait = pd.merge(Xtest[['Species', 'trait']], resout[['Species','km_label','alt']], how='left', on='Species')
dtrait.to_csv(os.path.join(model_dirname, 'trait_analysis.csv'), index=False)
```
### Step 6.
```
# calculate in-cluster pairwise distance
from sklearn.metrics.pairwise import pairwise_distances
# just convert the cluster labels to be ordered for better visualization in the next analysis
km_label_to_order = dict()
order_to_km_label = dict()
for i in range(myk):
km_label_to_order[int(stat.iloc[i]['km_label'])] = i
order_to_km_label[i] = int(stat.iloc[i]['km_label'])
pair_diversity = np.array([])
order = np.array([])
for k in range(myk):
this_km_label = order_to_km_label[k]
sel = np.where(resout.km_label == this_km_label)[0]
if len(sel) == 1:
t = np.array([[0]])
dist_list = np.array([0])
else:
t = pairwise_distances(Fout[sel, :], metric='cosine')
dist_list = np.array([])
for i in range(t.shape[0]):
dist_list = np.append(dist_list,t[i,(i+1):])
pair_diversity = np.append(pair_diversity, dist_list)
order = np.append(order, np.repeat(k, len(dist_list)))
di = pd.DataFrame({'diversity': pair_diversity,
'order': order})
di.to_csv(os.path.join(model_dirname, 'in-cluser_pairwise_diversity.csv'), index=False)
```
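The assemblage-level diversity described in Step 6 — the average cosine distance over every unordered pair of species — can be sketched independently of the feature matrices (toy 2-D vectors here instead of 2048-D features):

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def mean_pairwise_cosine(vectors):
    """Average cosine distance over all unordered pairs (the diversity metric)."""
    dists = [cosine_distance(vectors[i], vectors[j])
             for i in range(len(vectors))
             for j in range(i + 1, len(vectors))]
    return sum(dists) / len(dists)

# Three toy "feature vectors": two identical species, one orthogonal to both
feats = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
avg = mean_pairwise_cosine(feats)
print(avg)
```

Identical vectors contribute distance 0 and orthogonal ones distance 1, so this toy assemblage averages (0 + 1 + 1) / 3.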
##### Copyright 2020 The OpenFermion Developers
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Introduction to OpenFermion
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/openfermion/tutorials/intro_to_openfermion"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/tutorials/intro_to_openfermion.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/OpenFermion/blob/master/docs/tutorials/intro_to_openfermion.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/tutorials/intro_to_openfermion.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Note: The examples below must be run sequentially within a section.
## Setup
Install the OpenFermion package:
```
try:
import openfermion
except ImportError:
!pip install git+https://github.com/quantumlib/OpenFermion.git@master#egg=openfermion
```
## Initializing the FermionOperator data structure
Fermionic systems are often treated in second quantization where arbitrary operators can be expressed using the fermionic creation and annihilation operators, $a^\dagger_k$ and $a_k$. The fermionic ladder operators play a similar role to their qubit ladder operator counterparts, $\sigma^+_k$ and $\sigma^-_k$ but are distinguished by the canonical fermionic anticommutation relations, $\{a^\dagger_i, a^\dagger_j\} = \{a_i, a_j\} = 0$ and $\{a_i, a_j^\dagger\} = \delta_{ij}$. Any weighted sums of products of these operators are represented with the FermionOperator data structure in OpenFermion. The following are examples of valid FermionOperators:
$$
\begin{align}
& a_1 \nonumber \\
& 1.7 a^\dagger_3 \nonumber \\
&-1.7 \, a^\dagger_3 a_1 \nonumber \\
&(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 \nonumber \\
&(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1 \nonumber
\end{align}
$$
The FermionOperator class is contained in $\textrm{ops/_fermion_operator.py}$. In order to support fast addition of FermionOperator instances, the class is implemented as a hash table (python dictionary). The keys of the dictionary encode the strings of ladder operators and the values store the coefficients. The strings of ladder operators are encoded as a tuple of 2-tuples which we refer to as the "terms tuple". Each ladder operator is represented by a 2-tuple. The first element of the 2-tuple is an int indicating the tensor factor on which the ladder operator acts. The second element of the 2-tuple is a Boolean: 1 represents raising and 0 represents lowering. For instance, $a^\dagger_8$ is represented as the 2-tuple $(8, 1)$. Note that indices start at 0 and the identity operator is represented by the empty tuple. Below we give some examples of operators and their terms tuple:
$$
\begin{align}
I & \mapsto () \nonumber \\
a_1 & \mapsto ((1, 0),) \nonumber \\
a^\dagger_3 & \mapsto ((3, 1),) \nonumber \\
a^\dagger_3 a_1 & \mapsto ((3, 1), (1, 0)) \nonumber \\
a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto ((4, 1), (3, 1), (9, 0), (1, 0)) \nonumber
\end{align}
$$
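The hash-table design can be illustrated with a bare dictionary keyed by terms tuples — a sketch of the idea only, not OpenFermion's actual implementation:

```python
def add_operators(op_a, op_b):
    """Add two operators stored as {terms_tuple: coefficient} dicts."""
    result = dict(op_a)
    for term, coeff in op_b.items():
        # Like terms are merged in O(1) per term via the hash lookup
        result[term] = result.get(term, 0) + coeff
    return result

# -1.7 a^dag_3 a_1  plus  (1 + 2j) a^dag_4 a^dag_3 a_9 a_1 - 0.3 a^dag_3 a_1
op_a = {((3, 1), (1, 0)): -1.7}
op_b = {((4, 1), (3, 1), (9, 0), (1, 0)): 1 + 2j, ((3, 1), (1, 0)): -0.3}
total = add_operators(op_a, op_b)
print(total)
```

Because each terms tuple hashes to a single bucket, summing two operators costs time linear in their number of terms rather than quadratic.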
Note that when initializing a single ladder operator one should be careful to add the comma after the inner pair. This is because in python ((1, 2)) evaluates to the plain tuple (1, 2), whereas ((1, 2),) is a tuple containing the tuple (1, 2). The "terms tuple" is usually convenient when one wishes to initialize a term as part of a coded routine. However, the terms tuple is not particularly intuitive. Accordingly, OpenFermion also supports the user-friendly string notation below. This representation is rendered when calling "print" on a FermionOperator.
$$
\begin{align}
I & \mapsto \textrm{""} \nonumber \\
a_1 & \mapsto \textrm{"1"} \nonumber \\
a^\dagger_3 & \mapsto \textrm{"3^"} \nonumber \\
a^\dagger_3 a_1 & \mapsto \textrm{"3^}\;\textrm{1"} \nonumber \\
a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto \textrm{"4^}\;\textrm{3^}\;\textrm{9}\;\textrm{1"} \nonumber
\end{align}
$$
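The trailing-comma caveat above is easy to verify in plain Python:

```python
single = ((3, 1))      # parentheses only group: this is just the 2-tuple (3, 1)
term = ((3, 1),)       # the trailing comma makes a 1-tuple containing the 2-tuple
print(single, term)
print(len(term))
```

Forgetting the comma therefore hands FermionOperator a bare index pair rather than a one-term sequence of ladder operators.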
Let's initialize our first term! We do it two different ways below.
```
from openfermion.ops import FermionOperator
my_term = FermionOperator(((3, 1), (1, 0)))
print(my_term)
my_term = FermionOperator('3^ 1')
print(my_term)
```
The preferred way to specify the coefficient in openfermion is to provide an optional coefficient argument. If not provided, the coefficient defaults to 1. In the code below, the first method is preferred. The multiplication in the second method actually creates a copy of the term, which introduces some additional cost. All in-place operators (such as +=) modify the operator in place, whereas binary operators such as + create copies. Important caveats are that the empty tuple FermionOperator(()) and the empty string FermionOperator('') initialize the identity, while the empty initializer FermionOperator() initializes the zero operator.
```
good_way_to_initialize = FermionOperator('3^ 1', -1.7)
print(good_way_to_initialize)
bad_way_to_initialize = -1.7 * FermionOperator('3^ 1')
print(bad_way_to_initialize)
identity = FermionOperator('')
print(identity)
zero_operator = FermionOperator()
print(zero_operator)
```
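The in-place versus copy distinction mentioned above is the standard Python one; a plain-list sketch of the same behavior:

```python
a = [1, 2]
alias = a
a += [3]          # in-place: mutates the existing object, alias sees the change
assert alias == [1, 2, 3]

b = [1, 2]
alias_b = b
b = b + [3]       # binary +: builds a brand-new object and rebinds b
assert alias_b == [1, 2]
```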
Note that FermionOperator has only one attribute: .terms. This attribute is the dictionary which stores the term tuples.
```
my_operator = FermionOperator('4^ 1^ 3 9', 1. + 2.j)
print(my_operator)
print(my_operator.terms)
```
## Manipulating the FermionOperator data structure
So far we have explained how to initialize a single FermionOperator such as $-1.7 \, a^\dagger_3 a_1$. However, in general we will want to represent sums of these operators such as $(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1$. To do this, just add together two FermionOperators! We demonstrate below.
```
from openfermion.ops import FermionOperator
term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator = term_1 + term_2
print(my_operator)
my_operator = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator += term_2
print('')
print(my_operator)
```
The print function prints each term in the operator on a separate line. Note that the line my_operator = term_1 + term_2 creates a new object, which involves copying term_1 and term_2. The second block of code uses the in-place method +=, which is more efficient; this is especially important when constructing a very large FermionOperator. FermionOperators also support a wide range of builtins including str(), repr(), ==, !=, *, *=, /, /=, +, +=, -, -=, unary - and **. Note that since FermionOperator coefficients are floats, == and != check for (in)equality only up to numerical precision. We demonstrate some of these methods below.
```
term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)
term_2 = FermionOperator('3^ 1', -1.7)
my_operator = term_1 - 33. * term_2
print(my_operator)
my_operator *= 3.17 * (term_2 + term_1) ** 2
print('')
print(my_operator)
print('')
print(term_2 ** 3)
print('')
print(term_1 == 2.*term_1 - term_1)
print(term_1 == my_operator)
```
Additionally, there are a variety of methods that act on the FermionOperator data structure. We demonstrate a small subset of those methods here.
```
from openfermion.utils import commutator, count_qubits, hermitian_conjugated
from openfermion.transforms import normal_ordered
# Get the Hermitian conjugate of a FermionOperator, count its qubits, and check if it is normal-ordered.
term_1 = FermionOperator('4^ 3 3^', 1. + 2.j)
print(hermitian_conjugated(term_1))
print(term_1.is_normal_ordered())
print(count_qubits(term_1))
# Normal order the term.
term_2 = normal_ordered(term_1)
print('')
print(term_2)
print(term_2.is_normal_ordered())
# Compute a commutator of the terms.
print('')
print(commutator(term_1, term_2))
```
## The QubitOperator data structure
The QubitOperator data structure is another essential part of openfermion. As the name suggests, QubitOperator is used to store qubit operators in almost exactly the same way that FermionOperator is used to store fermion operators. For instance $X_0 Z_3 Y_4$ is a QubitOperator. The internal representation of this as a terms tuple would be $((0, \textrm{"X"}), (3, \textrm{"Z"}), (4, \textrm{"Y"}))$. Note that one important difference between QubitOperator and FermionOperator is that the terms in QubitOperator are always sorted in order of tensor factor. In some cases, this enables faster manipulation. We initialize some QubitOperators below.
```
from openfermion.ops import QubitOperator
my_first_qubit_operator = QubitOperator('X1 Y2 Z3')
print(my_first_qubit_operator)
print(my_first_qubit_operator.terms)
operator_2 = QubitOperator('X3 Z4', 3.17)
operator_2 -= 77. * my_first_qubit_operator
print('')
print(operator_2)
```
## Jordan-Wigner and Bravyi-Kitaev
openfermion provides functions for mapping FermionOperators to QubitOperators.
```
from openfermion.ops import FermionOperator
from openfermion.transforms import jordan_wigner, bravyi_kitaev
from openfermion.utils import hermitian_conjugated
from openfermion.linalg import eigenspectrum
# Initialize an operator.
fermion_operator = FermionOperator('2^ 0', 3.17)
fermion_operator += hermitian_conjugated(fermion_operator)
print(fermion_operator)
# Transform to qubits under the Jordan-Wigner transformation and print its spectrum.
jw_operator = jordan_wigner(fermion_operator)
print('')
print(jw_operator)
jw_spectrum = eigenspectrum(jw_operator)
print(jw_spectrum)
# Transform to qubits under the Bravyi-Kitaev transformation and print its spectrum.
bk_operator = bravyi_kitaev(fermion_operator)
print('')
print(bk_operator)
bk_spectrum = eigenspectrum(bk_operator)
print(bk_spectrum)
```
We see that despite the different representation, these operators are iso-spectral. We can also apply the Jordan-Wigner transform in reverse to map arbitrary QubitOperators to FermionOperators. Note that we also demonstrate the .compress() method (a method on both FermionOperators and QubitOperators) which removes zero entries.
```
from openfermion.transforms import reverse_jordan_wigner
# Initialize QubitOperator.
my_operator = QubitOperator('X0 Y1 Z2', 88.)
my_operator += QubitOperator('Z1 Z4', 3.17)
print(my_operator)
# Map QubitOperator to a FermionOperator.
mapped_operator = reverse_jordan_wigner(my_operator)
print('')
print(mapped_operator)
# Map the operator back to qubits and make sure it is the same.
back_to_normal = jordan_wigner(mapped_operator)
back_to_normal.compress()
print('')
print(back_to_normal)
```
## Sparse matrices and the Hubbard model
Often, one would like to obtain a sparse matrix representation of an operator which can be analyzed numerically. There is code in both openfermion.transforms and openfermion.utils which facilitates this. The function get_sparse_operator converts either a FermionOperator, a QubitOperator or other more advanced classes such as InteractionOperator to a scipy.sparse.csc matrix. There are numerous functions in openfermion.utils which one can call on the sparse operators such as "get_gap", "get_hartree_fock_state", "get_ground_state", etc. We show this off by computing the ground state energy of the Hubbard model. To do that, we use code from the openfermion.hamiltonians module which constructs lattice models of fermions such as Hubbard models.
```
from openfermion.hamiltonians import fermi_hubbard
from openfermion.linalg import get_sparse_operator, get_ground_state
from openfermion.transforms import jordan_wigner
# Set model.
x_dimension = 2
y_dimension = 2
tunneling = 2.
coulomb = 1.
magnetic_field = 0.5
chemical_potential = 0.25
periodic = 1
spinless = 1
# Get fermion operator.
hubbard_model = fermi_hubbard(
x_dimension, y_dimension, tunneling, coulomb, chemical_potential,
magnetic_field, periodic, spinless)
print(hubbard_model)
# Get qubit operator under Jordan-Wigner.
jw_hamiltonian = jordan_wigner(hubbard_model)
jw_hamiltonian.compress()
print('')
print(jw_hamiltonian)
# Get scipy.sparse.csc representation.
sparse_operator = get_sparse_operator(hubbard_model)
print('')
print(sparse_operator)
print('\nEnergy of the model is {} in units of T and J.'.format(
get_ground_state(sparse_operator)[0]))
```
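Under the hood, finding the ground state of a sparse operator boils down to a sparse Hermitian eigensolve for the lowest eigenpair. A minimal scipy sketch of that computation (the small matrix here is illustrative, not OpenFermion's internals):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import eigsh

# A small Hermitian matrix standing in for a sparse Hamiltonian.
dense = np.array([[1.0, 0.5, 0.0],
                  [0.5, -2.0, 0.3],
                  [0.0, 0.3, 0.7]])
hamiltonian = csc_matrix(dense)

# 'SA' requests the smallest algebraic eigenvalue, i.e. the ground energy.
eigenvalues, eigenvectors = eigsh(hamiltonian, k=1, which='SA')
ground_energy = eigenvalues[0]
ground_state = eigenvectors[:, 0]
print(ground_energy)
```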
## Hamiltonians in the plane wave basis
A user can write plugins to openfermion which allow for the use of, e.g., third-party electronic structure packages to compute molecular orbitals, Hamiltonians, energies, reduced density matrices, coupled cluster amplitudes, etc. using Gaussian basis sets. We may provide scripts which interface between such packages and openfermion in the future but do not discuss them in this tutorial.
When using simpler basis sets such as plane waves, these packages are not needed. openfermion comes with code which computes Hamiltonians in the plane wave basis. Note that when using plane waves, one is working with the periodized Coulomb operator, best suited for condensed-phase calculations such as studying the electronic structure of a solid. To obtain these Hamiltonians one must choose whether to study the system with or without a spin degree of freedom (spinless), specify the dimension in which the calculation is performed (n_dimensions, usually 3), specify how many plane waves are in each dimension (grid_length), specify the length scale of the plane wave harmonics in each dimension (length_scale), and specify the locations and charges of the nuclei. One can generate these models with plane_wave_hamiltonian() found in openfermion.hamiltonians. For simplicity, below we compute the Hamiltonian in the case of zero external charge (corresponding to the uniform electron gas, aka jellium). We also demonstrate that one can transform the plane wave Hamiltonian using a Fourier transform without affecting the spectrum of the operator.
```
from openfermion.hamiltonians import jellium_model
from openfermion.utils import Grid
from openfermion.linalg import eigenspectrum
from openfermion.transforms import jordan_wigner, fourier_transform
# Let's look at a very small model of jellium in 1D.
grid = Grid(dimensions=1, length=3, scale=1.0)
spinless = True
# Get the momentum Hamiltonian.
momentum_hamiltonian = jellium_model(grid, spinless)
momentum_qubit_operator = jordan_wigner(momentum_hamiltonian)
momentum_qubit_operator.compress()
print(momentum_qubit_operator)
# Fourier transform the Hamiltonian to the position basis.
position_hamiltonian = fourier_transform(momentum_hamiltonian, grid, spinless)
position_qubit_operator = jordan_wigner(position_hamiltonian)
position_qubit_operator.compress()
print('')
print(position_qubit_operator)
# Check the spectra to make sure these representations are iso-spectral.
spectral_difference = eigenspectrum(momentum_qubit_operator) - eigenspectrum(position_qubit_operator)
print('')
print(spectral_difference)
```
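The iso-spectrality demonstrated above is, at bottom, invariance of eigenvalues under unitary conjugation — a Fourier transform is a unitary change of basis. A single-particle numpy analogue of the same check:

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
h = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
h = h + h.conj().T                       # Hermitian "Hamiltonian"

# Unitary discrete Fourier transform matrix.
u = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
assert np.allclose(u @ u.conj().T, np.eye(n))

# Conjugating by a unitary changes the matrix but not its spectrum.
h_rotated = u @ h @ u.conj().T
print(np.allclose(np.linalg.eigvalsh(h), np.linalg.eigvalsh(h_rotated)))
```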
## Basics of MolecularData class
Data from electronic structure calculations can be saved in an OpenFermion data structure called MolecularData, which makes it easy to access within our library. Often, one would like to analyze a chemical series or look at many different Hamiltonians and sometimes the electronic structure calculations are either expensive to compute or difficult to converge (e.g. one needs to mess around with different types of SCF routines to make things converge). Accordingly, we anticipate that users will want some way to automatically database the results of their electronic structure calculations so that important data (such as the SCF integrals) can be looked up on-the-fly if the user has computed them in the past. OpenFermion supports a data provenance strategy which saves key results of the electronic structure calculation (including pointers to files containing large amounts of data, such as the molecular integrals) in an HDF5 container.
The MolecularData class stores information about molecules. One initializes a MolecularData object by specifying parameters of a molecule such as its geometry, basis, multiplicity, charge and an optional string describing it. One can also initialize MolecularData simply by providing a string giving a filename where a previous MolecularData object was saved in an HDF5 container. One can save a MolecularData instance by calling the class's .save() method. This automatically saves the instance in a data folder specified during OpenFermion installation. The name of the file is generated automatically from the instance attributes and optionally provided description. Alternatively, a filename can also be provided as an optional input if one wishes to manually name the file.
When electronic structure calculations are run, the data files for the molecule can be automatically updated. If one wishes to later use that data they either initialize MolecularData with the instance filename or initialize the instance and then later call the .load() method.
Basis functions are provided to initialization using a string such as "6-31g". Geometries can be specified using a simple txt input file (see geometry_from_file function in molecular_data.py) or can be passed using a simple python list format demonstrated below. Atoms are specified using a string for their atomic symbol. Distances should be provided in angstrom. Below we initialize a simple instance of MolecularData without performing any electronic structure calculations.
```
from openfermion.chem import MolecularData
# Set parameters to make a simple molecule.
diatomic_bond_length = .7414
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
charge = 0
description = str(diatomic_bond_length)
# Make molecule and print out a few interesting facts about it.
molecule = MolecularData(geometry, basis, multiplicity,
charge, description)
print('Molecule has automatically generated name {}'.format(
molecule.name))
print('Information about this molecule would be saved at:\n{}\n'.format(
molecule.filename))
print('This molecule has {} atoms and {} electrons.'.format(
molecule.n_atoms, molecule.n_electrons))
for atom, atomic_number in zip(molecule.atoms, molecule.protons):
print('Contains {} atom, which has {} protons.'.format(
atom, atomic_number))
```
If we had previously computed this molecule using an electronic structure package, we can call molecule.load() to populate all sorts of interesting fields in the data structure. Though we make no assumptions about what electronic structure packages users might install, we assume that the calculations are saved in OpenFermion's MolecularData objects. Currently plugins are available for [Psi4](http://psicode.org/) [(OpenFermion-Psi4)](http://github.com/quantumlib/OpenFermion-Psi4) and [PySCF](https://github.com/sunqm/pyscf) [(OpenFermion-PySCF)](http://github.com/quantumlib/OpenFermion-PySCF), and there may be more in the future. For the purposes of this example, we will load data that ships with OpenFermion to make a plot of the energy surface of hydrogen. Note that helper functions to initialize some interesting chemical benchmarks are found in openfermion.utils.
```
# Set molecule parameters.
basis = 'sto-3g'
multiplicity = 1
bond_length_interval = 0.1
n_points = 25
# Generate molecule at different bond lengths.
hf_energies = []
fci_energies = []
bond_lengths = []
for point in range(3, n_points + 1):
bond_length = bond_length_interval * point
bond_lengths += [bond_length]
description = str(round(bond_length,2))
print(description)
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., bond_length))]
molecule = MolecularData(
geometry, basis, multiplicity, description=description)
# Load data.
molecule.load()
# Print out some results of calculation.
print('\nAt bond length of {} angstrom, molecular hydrogen has:'.format(
bond_length))
print('Hartree-Fock energy of {} Hartree.'.format(molecule.hf_energy))
print('MP2 energy of {} Hartree.'.format(molecule.mp2_energy))
print('FCI energy of {} Hartree.'.format(molecule.fci_energy))
print('Nuclear repulsion energy between protons is {} Hartree.'.format(
molecule.nuclear_repulsion))
for orbital in range(molecule.n_orbitals):
print('Spatial orbital {} has energy of {} Hartree.'.format(
orbital, molecule.orbital_energies[orbital]))
hf_energies += [molecule.hf_energy]
fci_energies += [molecule.fci_energy]
# Plot.
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(0)
plt.plot(bond_lengths, fci_energies, 'x-')
plt.plot(bond_lengths, hf_energies, 'o-')
plt.ylabel('Energy in Hartree')
plt.xlabel('Bond length in angstrom')
plt.show()
```
The geometry data needed to generate MolecularData can also be retrieved from the PubChem online database by inputting the molecule's name.
```
from openfermion.chem import geometry_from_pubchem
methane_geometry = geometry_from_pubchem('methane')
print(methane_geometry)
```
## InteractionOperator and InteractionRDM for efficient numerical representations
Fermion Hamiltonians can be expressed as $H = h_0 + \sum_{pq} h_{pq}\, a^\dagger_p a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs} \, a^\dagger_p a^\dagger_q a_r a_s$ where $h_0$ is a constant shift due to the nuclear repulsion and $h_{pq}$ and $h_{pqrs}$ are the famous molecular integrals. Since fermions interact pairwise, their energy is a unique function of the one-particle and two-particle reduced density matrices, which are expressed in second quantization as the expectation values $\rho_{pq} = \left \langle a^\dagger_p a_q \right \rangle$ and $\rho_{pqrs} = \left \langle a^\dagger_p a^\dagger_q a_r a_s \right \rangle$, respectively.
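The statement that the energy is a function of the RDMs alone can be written as a single tensor contraction. A sketch with random numpy arrays standing in for the integrals and RDMs (the values are illustrative, not from any molecule):

```python
import numpy as np

n = 4  # number of spin-orbitals (illustrative)
rng = np.random.default_rng(0)
h0 = 1.5                                       # constant (nuclear repulsion) term
h_pq = rng.standard_normal((n, n))             # one-body integrals
h_pqrs = rng.standard_normal((n, n, n, n))     # two-body integrals
rho_pq = rng.standard_normal((n, n))           # 1-RDM (random stand-in)
rho_pqrs = rng.standard_normal((n, n, n, n))   # 2-RDM (random stand-in)

# E = h0 + sum_pq h_pq rho_pq + (1/2) sum_pqrs h_pqrs rho_pqrs
energy = (h0
          + np.einsum('pq,pq->', h_pq, rho_pq)
          + 0.5 * np.einsum('pqrs,pqrs->', h_pqrs, rho_pqrs))
print(energy)
```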
Because the RDMs and molecular Hamiltonians are both compactly represented and manipulated as 2- and 4- index tensors, we can represent them in a particularly efficient form using similar data structures. The InteractionOperator data structure can be initialized for a Hamiltonian by passing the constant $h_0$ (or 0), as well as numpy arrays representing $h_{pq}$ (or $\rho_{pq}$) and $h_{pqrs}$ (or $\rho_{pqrs}$). Importantly, InteractionOperators can also be obtained by calling MolecularData.get_molecular_hamiltonian() or by calling the function get_interaction_operator() (found in openfermion.transforms) on a FermionOperator. The InteractionRDM data structure is similar but represents RDMs. For instance, one can get a molecular RDM by calling MolecularData.get_molecular_rdm(). When generating Hamiltonians from the MolecularData class, one can choose to restrict the system to an active space.
These classes inherit from the same base class, PolynomialTensor. This data structure overloads the slice operator [] so that one can get or set the key attributes of the InteractionOperator: $\textrm{.constant}$, $\textrm{.one_body_coefficients}$ and $\textrm{.two_body_coefficients}$ . For instance, InteractionOperator[(p, 1), (q, 1), (r, 0), (s, 0)] would return $h_{pqrs}$ and InteractionRDM would return $\rho_{pqrs}$. Importantly, the class supports fast basis transformations using the method PolynomialTensor.rotate_basis(rotation_matrix).
But perhaps most importantly, one can map the InteractionOperator to any of the other data structures we've described here.
Below, we load MolecularData from a saved calculation of LiH. We then obtain an InteractionOperator representation of this system in an active space. We then map that operator to qubits. We then demonstrate that one can rotate the orbital basis of the InteractionOperator using random angles to obtain a totally different operator that is still iso-spectral.
```
from openfermion.chem import MolecularData
from openfermion.transforms import get_fermion_operator, jordan_wigner
from openfermion.linalg import get_ground_state, get_sparse_operator
import numpy
import scipy
import scipy.linalg
# Load saved file for LiH.
diatomic_bond_length = 1.45
geometry = [('Li', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
# Set Hamiltonian parameters.
active_space_start = 1
active_space_stop = 3
# Generate and populate instance of MolecularData.
molecule = MolecularData(geometry, basis, multiplicity, description="1.45")
molecule.load()
# Get the Hamiltonian in an active space.
molecular_hamiltonian = molecule.get_molecular_hamiltonian(
occupied_indices=range(active_space_start),
active_indices=range(active_space_start, active_space_stop))
# Map operator to fermions and qubits.
fermion_hamiltonian = get_fermion_operator(molecular_hamiltonian)
qubit_hamiltonian = jordan_wigner(fermion_hamiltonian)
qubit_hamiltonian.compress()
print('The Jordan-Wigner Hamiltonian in canonical basis follows:\n{}'.format(qubit_hamiltonian))
# Get sparse operator and ground state energy.
sparse_hamiltonian = get_sparse_operator(qubit_hamiltonian)
energy, state = get_ground_state(sparse_hamiltonian)
print('Ground state energy before rotation is {} Hartree.\n'.format(energy))
# Randomly rotate.
n_orbitals = molecular_hamiltonian.n_qubits // 2
n_variables = int(n_orbitals * (n_orbitals - 1) / 2)
numpy.random.seed(1)
random_angles = numpy.pi * (1. - 2. * numpy.random.rand(n_variables))
kappa = numpy.zeros((n_orbitals, n_orbitals))
index = 0
for p in range(n_orbitals):
for q in range(p + 1, n_orbitals):
kappa[p, q] = random_angles[index]
kappa[q, p] = -numpy.conjugate(random_angles[index])
index += 1
# Build the unitary rotation matrix.
difference_matrix = kappa + kappa.transpose()
rotation_matrix = scipy.linalg.expm(kappa)
# Apply the unitary.
molecular_hamiltonian.rotate_basis(rotation_matrix)
# Get qubit Hamiltonian in rotated basis.
qubit_hamiltonian = jordan_wigner(molecular_hamiltonian)
qubit_hamiltonian.compress()
print('The Jordan-Wigner Hamiltonian in rotated basis follows:\n{}'.format(qubit_hamiltonian))
# Get sparse Hamiltonian and energy in rotated basis.
sparse_hamiltonian = get_sparse_operator(qubit_hamiltonian)
energy, state = get_ground_state(sparse_hamiltonian)
print('Ground state energy after rotation is {} Hartree.'.format(energy))
```
## Quadratic Hamiltonians and Slater determinants
The general electronic structure Hamiltonian
$H = h_0 + \sum_{pq} h_{pq}\, a^\dagger_p a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs} \, a^\dagger_p a^\dagger_q a_r a_s$ contains terms that act on up to 4 sites, or
is quartic in the fermionic creation and annihilation operators. However, in many situations
we may fruitfully approximate these Hamiltonians by replacing these quartic terms with
terms that act on at most 2 fermionic sites, or quadratic terms, as in mean-field approximation theory.
These Hamiltonians have a number of
special properties one can exploit for efficient simulation and manipulation of the Hamiltonian, thus
warranting a special data structure. We refer to Hamiltonians which
only contain terms that are quadratic in the fermionic creation and annihilation operators
as quadratic Hamiltonians, and include the general case of non-particle conserving terms as in
a general Bogoliubov transformation. Eigenstates of quadratic Hamiltonians can be prepared
efficiently on both a quantum and classical computer, making them amenable to initial guesses for
many more challenging problems.
A general quadratic Hamiltonian takes the form
$$H = \sum_{p, q} (M_{pq} - \mu \delta_{pq}) a^\dagger_p a_q + \frac{1}{2} \sum_{p, q} (\Delta_{pq} a^\dagger_p a^\dagger_q + \Delta_{pq}^* a_q a_p) + \text{constant},$$
where $M$ is a Hermitian matrix, $\Delta$ is an antisymmetric matrix,
$\delta_{pq}$ is the Kronecker delta symbol, and $\mu$ is a chemical
potential term which we keep separate from $M$ so that we can use it
to adjust the expectation of the total number of particles.
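In the particle-conserving case ($\Delta = 0$), the orbital energies are simply the eigenvalues of the Hermitian matrix $M - \mu I$, so a classical diagonalization already reveals the spectrum. A minimal numpy sketch with an illustrative hopping matrix (not tied to any particular model):

```python
import numpy as np

mu = 0.25
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])  # Hermitian hopping matrix (illustrative)

# With Delta = 0, the single-particle energies are the eigenvalues of M - mu*I.
orbital_energies = np.linalg.eigvalsh(M - mu * np.eye(3))
print(orbital_energies)

# Filling every negative-energy orbital gives the (grand canonical) ground energy.
ground_energy = orbital_energies[orbital_energies < 0].sum()
print(ground_energy)
```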
In OpenFermion, quadratic Hamiltonians are conveniently represented and manipulated
using the QuadraticHamiltonian class, which stores $M$, $\Delta$, $\mu$ and the constant. It is specialized to exploit the properties unique to quadratic Hamiltonians. Like InteractionOperator and InteractionRDM, it inherits from the PolynomialTensor class.
The BCS mean-field model of superconductivity is a quadratic Hamiltonian. The following code constructs an instance of this model as a FermionOperator, converts it to a QuadraticHamiltonian, and then computes its ground energy:
```
from openfermion.hamiltonians import mean_field_dwave
from openfermion.transforms import get_quadratic_hamiltonian
# Set model.
x_dimension = 2
y_dimension = 2
tunneling = 2.
sc_gap = 1.
periodic = True
# Get FermionOperator.
mean_field_model = mean_field_dwave(
x_dimension, y_dimension, tunneling, sc_gap, periodic=periodic)
# Convert to QuadraticHamiltonian
quadratic_hamiltonian = get_quadratic_hamiltonian(mean_field_model)
# Compute the ground energy
ground_energy = quadratic_hamiltonian.ground_energy()
print(ground_energy)
```
Any quadratic Hamiltonian may be rewritten in the form
$$H = \sum_p \varepsilon_p b^\dagger_p b_p + \text{constant},$$
where the $b_p$ are new annihilation operators that satisfy the fermionic anticommutation relations, and which are linear combinations of the old creation and annihilation operators. This form of $H$ makes it easy to deduce its eigenvalues; they are sums of subsets of the $\varepsilon_p$, which we call the orbital energies of $H$. The following code computes the orbital energies and the constant:
```
orbital_energies, constant = quadratic_hamiltonian.orbital_energies()
print(orbital_energies)
print()
print(constant)
```
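The claim that every eigenvalue is the constant plus a sum over some subset of the $\varepsilon_p$ can be checked by brute force for a toy set of orbital energies (pure Python, independent of the quadratic_hamiltonian above):

```python
from itertools import combinations

epsilons = [0.5, 1.25, 2.0]   # toy orbital energies
constant = -0.75              # toy constant term

# Enumerate all 2**3 = 8 occupation subsets; each gives one eigenvalue.
spectrum = sorted(
    constant + sum(subset)
    for r in range(len(epsilons) + 1)
    for subset in combinations(epsilons, r)
)
print(spectrum)
```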
Eigenstates of quadratic hamiltonians are also known as fermionic Gaussian states, and they can be prepared efficiently on a quantum computer. One can use OpenFermion to obtain circuits for preparing these states. The following code obtains the description of a circuit which prepares the ground state (operations that can be performed in parallel are grouped together), along with a description of the starting state to which the circuit should be applied:
```
from openfermion.circuits import gaussian_state_preparation_circuit
circuit_description, start_orbitals = gaussian_state_preparation_circuit(quadratic_hamiltonian)
for parallel_ops in circuit_description:
print(parallel_ops)
print('')
print(start_orbitals)
```
In the circuit description, each elementary operation is either a tuple of the form $(i, j, \theta, \varphi)$, indicating the operation $\exp[i \varphi a_j^\dagger a_j]\exp[\theta (a_i^\dagger a_j - a_j^\dagger a_i)]$, which is a Givens rotation of modes $i$ and $j$, or the string 'pht', indicating the particle-hole transformation on the last fermionic mode, which is the operator $\mathcal{B}$ such that $\mathcal{B} a_N \mathcal{B}^\dagger = a_N^\dagger$ and leaves the rest of the ladder operators unchanged. Operations that can be performed in parallel are grouped together.
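In the single-particle (mode) basis, the Givens rotation above acts as a 2×2 unitary block on modes $i$ and $j$. The sketch below uses one common sign/phase convention — the exact convention is an assumption here, not taken from OpenFermion's source:

```python
import numpy as np

def givens_single_particle(theta, phi):
    """2x2 single-particle block of exp[i*phi a_j^ a_j] exp[theta (a_i^ a_j - a_j^ a_i)]
    on modes (i, j). Sign/phase convention here is one illustrative choice."""
    rotation = np.array([[np.cos(theta), np.sin(theta)],
                         [-np.sin(theta), np.cos(theta)]])
    phase = np.diag([1.0, np.exp(1j * phi)])
    return phase @ rotation

g = givens_single_particle(0.3, 0.7)
# Whatever the convention, a Givens rotation must be unitary.
print(np.allclose(g @ g.conj().T, np.eye(2)))
```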
In the special case that a quadratic Hamiltonian conserves particle number ($\Delta = 0$), its eigenstates take the form
$$\lvert \Psi_S \rangle = b^\dagger_{1}\cdots b^\dagger_{N_f}\lvert \text{vac} \rangle,\qquad
b^\dagger_{p} = \sum_{k=1}^N Q_{pk}a^\dagger_k,$$
where $Q$ is an $N_f \times N$ matrix with orthonormal rows. These states are also known as Slater determinants. OpenFermion also provides functionality to obtain circuits for preparing Slater determinants starting with the matrix $Q$ as the input.
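A valid input $Q$ — an $N_f \times N$ matrix with orthonormal rows — can be built from a QR decomposition. The sketch below only constructs such a matrix and verifies the row-orthonormality condition; OpenFermion's circuit-construction routine would then consume it:

```python
import numpy as np

n_modes, n_fermions = 6, 3
rng = np.random.default_rng(42)

# QR of a random N x N_f complex matrix gives orthonormal columns;
# its conjugate transpose is an N_f x N matrix with orthonormal rows.
random_matrix = (rng.standard_normal((n_modes, n_fermions))
                 + 1j * rng.standard_normal((n_modes, n_fermions)))
q_columns, _ = np.linalg.qr(random_matrix)
Q = q_columns.conj().T          # shape (n_fermions, n_modes)

print(Q.shape)
print(np.allclose(Q @ Q.conj().T, np.eye(n_fermions)))
```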
# Evaluate a trained encoder
Notebook Author: Aniket Tekawade, Argonne National Laboratory, atekawade@anl.gov
This notebook will run some tests on a trained encoder-decoder model to (1) visualize the latent space clusters (2) evaluate segmentation accuracy (3) something else (4) and something else.
### Set paths and arguments
```
from tensorflow.config.experimental import *
GPU_mem_limit = 16.0
gpus = list_physical_devices('GPU')
if gpus:
try:
set_virtual_device_configuration(gpus[0], [VirtualDeviceConfiguration(memory_limit=GPU_mem_limit*1000.0)])
except RuntimeError as e:
print(e)
#what is if gpus?
# paths
model_path = "/data02/MyArchive/aisteer_3Dencoders/models"
csv_path = "/data02/MyArchive/aisteer_3Dencoders/data_TomoTwin/datalist_train.csv"
# arguments
n_samples = 5000
model_tag = "111d32_set6"
noise_level = 0.18
patch_size = tuple([64]*3)
binning = 2
latent_dims = int(model_tag.split('_')[0].split('d')[-1])
#so this is only for the 111-32?
%matplotlib inline
import sys
import os
import numpy as np
import pandas as pd
import h5py
import time
import matplotlib.pyplot as plt
import matplotlib as mpl
from tomo_encoders.img_stats import Parallelize, calc_jac_acc, pore_analysis
from tomo_encoders.data_sampling import data_generator_4D, get_data_from_flist
from tomo_encoders.porosity_encoders import custom_objects_dict
from tomo_encoders.latent_vis import *
from tomo_encoders.feature_maps_vis import view_midplanes
from tensorflow.keras.models import load_model
import pickle
figw = 8
import seaborn as sns
sns.set(font_scale = 1)
sns.set_style("whitegrid", {'axes.grid' : False})
```
### Load the trained model
```
model_names = {"segmenter" : "segmenter%s.hdf5"%model_tag, \
"encoder" : "encoder%s.hdf5"%model_tag, \
"PCA" : "PCA%s.pkl"%model_tag}
encoder = load_model(os.path.join(model_path, model_names["encoder"]), \
custom_objects = custom_objects_dict)
segmenter = load_model(os.path.join(model_path, model_names["segmenter"]), \
custom_objects = custom_objects_dict)
```
### Load the data and draw $64^3$ sized samples
```
Xs, Ys, plot_labels = get_data_from_flist(csv_path, \
normalize = True,\
data_tags = ("recon", "gt_labels"),\
group_tags = ["tomo"],\
downres = binning)
dg = data_generator_4D(Xs, Ys, \
patch_size, n_samples, \
scan_idx = True, add_noise = noise_level)
x, y, sample_labels = next(dg)
print("Shape of x: %s"%str(x.shape))
```
**Histogram of sampled patches:** How many patches are drawn from each dataset?
```
tmp_labels = [plot_label.split('train_blobs_')[-1] for plot_label in plot_labels]
sample_hist = pd.DataFrame(columns = ["label", "n_pts"])
sample_hist["label"] = tmp_labels
sample_hist = sample_hist.set_index("label")
for idx, sample_lab in enumerate(tmp_labels):
sample_hist.loc[sample_lab, "n_pts"] = np.size(np.where(sample_labels == idx))
# sample_hist
sample_hist.plot.barh()
```
### Get output from decoder and encoder
**Encoder output** Get latent vector, then apply PCA.
```
dfN = get_latent_vector(encoder, x, sample_labels, plot_labels)
pkl_filename = os.path.join(model_path, model_names["PCA"])
# Load from file
with open(pkl_filename, 'rb') as file:
pca = pickle.load(file)
ncomps = 2
df = transform_PCA(dfN, latent_dims, pca, ncomps = ncomps)
df["${||{h}||}$"] = np.linalg.norm(dfN[["$h_%i$"%i for i in range(latent_dims)]], axis = 1)
df = rescale_z(df)
```
**Decoder output** How does the segmented output look?
```
# accuracy of segmentation using intersection-over-union (IoU)
yp = segmenter.predict(x)
yp = np.round(yp)
df["IoU"] = Parallelize(list(zip(y, yp)), calc_jac_acc, procs = 48)
nplots = 3
fig, ax = plt.subplots(nplots,3, figsize = (8,3*nplots))
for ii in range(nplots):
view_midplanes(vol = x[ii,...,0], ax = ax[ii])
view_midplanes(vol = yp[ii,...,0], ax = ax[ii], cmap = "copper", alpha = 0.3)
ax[ii,0].set_ylabel(tmp_labels[sample_labels[ii]])
fig.tight_layout()
```
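The IoU (Jaccard) accuracy used above can be written in a few lines of plain numpy for binary masks. This is a stand-in sketch — the actual metric used here is calc_jac_acc from tomo_encoders, whose exact implementation may differ:

```python
import numpy as np

def iou_score(y_true, y_pred):
    """Intersection-over-union for two binary masks of the same shape."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return intersection / union if union > 0 else 1.0

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(iou_score(a, b))  # intersection 2, union 4 -> 0.5
```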
**Calculate some metadata** to further understand cluster distances (future use)
```
# ground-truth porosity values (calculated after applying connected components filter)
f = Parallelize(y, pore_analysis, procs = 48)
f = np.asarray(f)
df[["void-fraction", "npores", "pore-vol"]] = f
# df_temp = pd.DataFrame(columns = ["void-fraction", "npores", "pore-vol"], data = f)
# signal-to-noise ratio (SNR)
f = Parallelize(list(zip(x,y)), calc_SNR, procs = 48)
f = np.asarray(f)
df_temp = pd.DataFrame(columns = ["SNR"], data = f.reshape(-1,1))
df["SNR"] = f
df.head()
```
### Analyze Segmentation Accuracy
**Compare accuracy with porosity metrics:** Is accuracy sensitive to pore size?
```
modelid = model_tag.split('_')[0].split('d')
modelid = ('-').join(modelid)
bins = ["10-12", "12-14", "14-16", "16-18"]
IoU_binned = []
IoU_binned.append(np.mean(df[df["param"].between(10,11,inclusive = True)]["IoU"]))
IoU_binned.append(np.mean(df[df["param"].between(12,13,inclusive = True)]["IoU"]))
IoU_binned.append(np.mean(df[df["param"].between(14,15,inclusive = True)]["IoU"]))
IoU_binned.append(np.mean(df[df["param"].between(16,18,inclusive = True)]["IoU"]))
fig, ax = plt.subplots(1,1, figsize = (6,6), sharey = True)
ax.bar(bins, IoU_binned)
ax.set_xlabel("param $s_p$")
ax.set_title("(a) binned over $s_p$ for %s"%modelid)
ax.set_ylim([0.6,1.0])
fig.tight_layout()
```
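The manual binning above could also be expressed with a groupby over pd.cut, which keeps bin labels and edges consistent by construction. A sketch with a toy frame, since df here is model-specific:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "param": rng.uniform(10, 18, size=200),
    "IoU": rng.uniform(0.6, 1.0, size=200),
})

# pd.cut assigns each row to an interval; groupby then averages per bin.
edges = [10, 12, 14, 16, 18]
toy["param_bin"] = pd.cut(toy["param"], bins=edges)
iou_binned = toy.groupby("param_bin", observed=False)["IoU"].mean()
print(iou_binned)
```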
**Compare accuracy with SNR:** Is accuracy sensitive to the SNR in input images?
```
mean_IoUs = np.zeros(len(plot_labels))
for il, label in enumerate(plot_labels):
mean_IoUs[il] = calc_jac_acc(y[sample_labels == il],\
yp[sample_labels == il])
plt.barh(tmp_labels, mean_IoUs)
plt.xlabel("IoU")
print("Min IoU in datasets: %.2f"%mean_IoUs.min())
plt.title("Patches", fontsize = 20)
```
### Analyze the reduced latent (z) space
**Plot PCA** Plot the 2D projection of latent space
```
sns.set(font_scale = 1.2)
sns.set_style("whitegrid", {'axes.grid' : False})
fig, ax = plt.subplots(1,1,figsize = (8,8), sharex = True, sharey = True)
sns.scatterplot(data = df, x = "$z_0$", y = "$z_1$", \
hue = "param", \
palette = "viridis", ax = ax, \
legend = 'full', \
style = "measurement", )
modelid = model_tag.split('_')[0].split('d')
modelid = ('-').join(modelid)
ax.set_title("%s"%modelid)
fig.tight_layout()
```
**Compare** calculated porosity metrics with z-space. Note that $s_p$ varies logarithmically with pore size (or the number of pores per volume).
```
df["log npores"] = np.log(df["npores"])
df["log pore-vol"] = np.log(df["pore-vol"])
df["${log(z_1)}$"] = np.log(df["$z_1$"])
df["${log(z_0)}$"] = np.log(df["$z_0$"])
sns.set(font_scale = 1.2)
sns.set_style("whitegrid", {'axes.grid' : False})
fig, ax = plt.subplots(1,2,figsize = (8,4), sharex = True, sharey = True)
hues = ["log npores", "log pore-vol"] #["npores", "pore-vol"]
for idx, hue in enumerate(hues):
sns.scatterplot(data = df, x = "$z_0$", y = "$z_1$", \
hue = hue, \
palette = "viridis", ax = ax.flat[idx], \
legend = 'brief')
modelid = model_tag.split('_')[0].split('d')
modelid = ('-').join(modelid)
ax.flat[idx].set_title("%s"%modelid)
fig.tight_layout()
```
### Analyze Compute Times
```
niters = 4000
idx = 10
```
**Pore analysis** (connected components)
```
t0 = time.time()
output = [pore_analysis(y[idx]) for idx in range(niters)]
t1 = time.time()
tot_time = (t1-t0)*1000.0/niters
print(tot_time)
```
**Segmentation** (encoder+decoder) - this could vary significantly based on available GPU memory
```
x_in = x[:niters]
t0 = time.time()
yp_temp = segmenter.predict(x_in)
t1 = time.time()
tot_time = (t1-t0)*1000.0/niters
print(tot_time)
```
**Encoder**
```
x_in = x[:niters]
t0 = time.time()
yp_temp = encoder.predict(x_in)
t1 = time.time()
tot_time = (t1-t0)*1000.0/niters
print(tot_time)
```
### THE END

<h1 style="text-align:center;font-size:30px;" > Quora Question Pairs </h1>
<h1> 1. Business Problem </h1>
<h2> 1.1 Description </h2>
<p>Quora is a place to gain and share knowledge—about anything. It’s a platform to ask questions and connect with people who contribute unique insights and quality answers. This empowers people to learn from each other and to better understand the world.</p>
<p>
Over 100 million people visit Quora every month, so it's no surprise that many people ask similarly worded questions. Multiple questions with the same intent can cause seekers to spend more time finding the best answer to their question, and make writers feel they need to answer multiple versions of the same question. Quora values canonical questions because they provide a better experience to active seekers and writers, and offer more value to both of these groups in the long term.
</p>
<br>
> Credits: Kaggle
__Problem Statement__
- Identify which questions asked on Quora are duplicates of questions that have already been asked.
- This could be useful to instantly provide answers to questions that have already been answered.
- We are tasked with predicting whether a pair of questions are duplicates or not.
<h2> 1.2 Sources/Useful Links</h2>
- Source : https://www.kaggle.com/c/quora-question-pairs
<br><br>__Useful Links__
- Discussions : https://www.kaggle.com/anokas/data-analysis-xgboost-starter-0-35460-lb/comments
- Kaggle Winning Solution and other approaches: https://www.dropbox.com/sh/93968nfnrzh8bp5/AACZdtsApc1QSTQc7X0H3QZ5a?dl=0
- Blog 1 : https://engineering.quora.com/Semantic-Question-Matching-with-Deep-Learning
- Blog 2 : https://towardsdatascience.com/identifying-duplicate-questions-on-quora-top-12-on-kaggle-4c1cf93f1c30
<h2>1.3 Real world/Business Objectives and Constraints </h2>
1. The cost of a mis-classification can be very high.
2. You would want a probability of a pair of questions to be duplicates so that you can choose any threshold of choice.
3. No strict latency concerns.
4. Interpretability is partially important.
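To illustrate constraint 2, a tiny sketch of turning predicted duplicate probabilities into labels at a chosen threshold (the probability values are made up):

```python
def predictions_at_threshold(probs, threshold):
    """Turn duplicate probabilities into 0/1 labels at a chosen cutoff."""
    return [int(p >= threshold) for p in probs]

probs = [0.15, 0.45, 0.55, 0.9]              # hypothetical model outputs
print(predictions_at_threshold(probs, 0.5))  # [0, 0, 1, 1] - default cutoff
print(predictions_at_threshold(probs, 0.8))  # [0, 0, 0, 1] - stricter, fewer false positives
```

Raising the threshold trades recall for precision, which is exactly why a probability output is more useful here than a hard label.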
<h1>2. Machine Learning Problem </h1>
<h2> 2.1 Data </h2>
<h3> 2.1.1 Data Overview </h3>
<p>
- Data will be in a file Train.csv <br>
- Train.csv contains 6 columns: id, qid1, qid2, question1, question2, is_duplicate <br>
- Size of Train.csv - 60MB <br>
- Number of rows in Train.csv = 404,290
</p>
<h3> 2.1.2 Example Data point </h3>
<pre>
"id","qid1","qid2","question1","question2","is_duplicate"
"0","1","2","What is the step by step guide to invest in share market in india?","What is the step by step guide to invest in share market?","0"
"1","3","4","What is the story of Kohinoor (Koh-i-Noor) Diamond?","What would happen if the Indian government stole the Kohinoor (Koh-i-Noor) diamond back?","0"
"7","15","16","How can I be a good geologist?","What should I do to be a great geologist?","1"
"11","23","24","How do I read and find my YouTube comments?","How can I see all my Youtube comments?","1"
</pre>
<h2> 2.2 Mapping the real world problem to an ML problem </h2>
<h3> 2.2.1 Type of Machine Learning Problem </h3>
<p> It is a binary classification problem, for a given pair of questions we need to predict if they are duplicate or not. </p>
<h3> 2.2.2 Performance Metric </h3>
Source: https://www.kaggle.com/c/quora-question-pairs#evaluation
Metric(s):
* log-loss : https://www.kaggle.com/wiki/LogarithmicLoss
* Binary Confusion Matrix
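A minimal, dependency-free sketch of both metrics (in practice one would use `sklearn.metrics.log_loss` and `confusion_matrix`); the sample labels and probabilities below are made up:

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary log-loss: mean negative log-likelihood of the true labels."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip so log() never sees 0 or 1
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

def confusion_counts(y_true, y_pred):
    """Binary confusion matrix as (tn, fp, fn, tp)."""
    tn = fp = fn = tp = 0
    for y, yp in zip(y_true, y_pred):
        if y == 0 and yp == 0: tn += 1
        elif y == 0 and yp == 1: fp += 1
        elif y == 1 and yp == 0: fn += 1
        else: tp += 1
    return tn, fp, fn, tp

y_true = [0, 0, 1, 1]
y_prob = [0.1, 0.4, 0.35, 0.8]  # hypothetical duplicate probabilities
print(round(log_loss(y_true, y_prob), 4))                         # 0.4723
print(confusion_counts(y_true, [int(p >= 0.5) for p in y_prob]))  # (2, 0, 1, 1)
```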
<h2> 2.3 Train and Test Construction </h2>
<p> </p>
<p> We build train and test by randomly splitting in the ratio of 70:30 or 80:20 whatever we choose as we have sufficient points to work with. </p>
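A stdlib-only sketch of such a random split (in practice `sklearn.model_selection.train_test_split` would be used, optionally stratified on the label); the 70:30 ratio and the 404,290 row count come from the data overview above:

```python
import random

def split_indices(n_rows, test_frac=0.3, seed=42):
    """Shuffle row indices and split them into train/test index lists."""
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)
    cut = round(n_rows * (1 - test_frac))  # round() avoids float truncation
    return idx[:cut], idx[cut:]

# 404,290 question pairs, split 70:30
train_idx, test_idx = split_indices(404290, test_frac=0.3)
print(len(train_idx), len(test_idx))  # 283003 121287
```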
<h1>3. Exploratory Data Analysis </h1>
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from subprocess import check_output
%matplotlib inline
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
import os
import gc
import re
from nltk.corpus import stopwords
import distance
from nltk.stem import PorterStemmer
from bs4 import BeautifulSoup
```
<h2> 3.1 Reading data and basic stats </h2>
```
df = pd.read_csv("train.csv")
print("Number of data points:",df.shape[0])
df.head()
df.info()
```
We are given a minimal number of data fields here, consisting of:
- id: Looks like a simple rowID
- qid{1, 2}: The unique ID of each question in the pair
- question{1, 2}: The actual textual contents of the questions.
- is_duplicate: The label that we are trying to predict - whether the two questions are duplicates of each other.
<h3> 3.2.1 Distribution of data points among output classes</h3>
- Number of duplicate (similar) and non-duplicate (dissimilar) question pairs
```
df.groupby("is_duplicate")['id'].count().plot.bar()
print('~> Total number of question pairs for training:\n {}'.format(len(df)))
print('~> Question pairs are not Similar (is_duplicate = 0):\n {}%'.format(100 - round(df['is_duplicate'].mean()*100, 2)))
print('\n~> Question pairs are Similar (is_duplicate = 1):\n {}%'.format(round(df['is_duplicate'].mean()*100, 2)))
```
<h3> 3.2.2 Number of unique questions </h3>
```
qids = pd.Series(df['qid1'].tolist() + df['qid2'].tolist())
unique_qs = len(np.unique(qids))
qs_morethan_onetime = np.sum(qids.value_counts() > 1)
print ('Total number of Unique Questions are: {}\n'.format(unique_qs))
#print len(np.unique(qids))
print ('Number of unique questions that appear more than one time: {} ({}%)\n'.format(qs_morethan_onetime,qs_morethan_onetime/unique_qs*100))
print ('Max number of times a single question is repeated: {}\n'.format(max(qids.value_counts())))
q_vals=qids.value_counts()
q_vals=q_vals.values
x = ["unique_questions" , "Repeated Questions"]
y = [unique_qs , qs_morethan_onetime]
plt.figure(figsize=(10, 6))
plt.title ("Plot representing unique and repeated questions ")
sns.barplot(x,y)
plt.show()
```
<h3>3.2.3 Checking for Duplicates </h3>
```
#checking whether there are any repeated pair of questions
pair_duplicates = df[['qid1','qid2','is_duplicate']].groupby(['qid1','qid2']).count().reset_index()
print ("Number of repeated question pairs:", df.shape[0] - pair_duplicates.shape[0])
```
<h3> 3.2.4 Number of occurrences of each question </h3>
```
plt.figure(figsize=(20, 10))
plt.hist(qids.value_counts(), bins=160)
plt.yscale('log')
plt.title('Log-Histogram of question appearance counts')
plt.xlabel('Number of occurrences of question')
plt.ylabel('Number of questions')
print ('Maximum number of times a single question is repeated: {}\n'.format(max(qids.value_counts())))
```
<h3> 3.2.5 Checking for NULL values </h3>
```
#Checking whether there are any rows with null values
nan_rows = df[df.isnull().any(1)]
print (nan_rows)
```
- There are two rows with null values in question2
```
# Filling the null values with ' '
df = df.fillna('')
nan_rows = df[df.isnull().any(1)]
print (nan_rows)
```
<h2>3.3 Basic Feature Extraction (before cleaning) </h2>
Let us now construct a few features like:
- ____freq_qid1____ = Frequency of qid1's
- ____freq_qid2____ = Frequency of qid2's
- ____q1len____ = Length of q1
- ____q2len____ = Length of q2
- ____q1_n_words____ = Number of words in Question 1
- ____q2_n_words____ = Number of words in Question 2
- ____word_Common____ = (Number of common unique words in Question 1 and Question 2)
- ____word_Total____ =(Total num of words in Question 1 + Total num of words in Question 2)
- ____word_share____ = (word_common)/(word_Total)
- ____freq_q1+freq_q2____ = sum total of frequency of qid1 and qid2
- ____freq_q1-freq_q2____ = absolute difference of frequency of qid1 and qid2
```
if os.path.isfile('df_fe_without_preprocessing_train.csv'):
df = pd.read_csv("df_fe_without_preprocessing_train.csv",encoding='latin-1')
else:
df['freq_qid1'] = df.groupby('qid1')['qid1'].transform('count')
df['freq_qid2'] = df.groupby('qid2')['qid2'].transform('count')
df['q1len'] = df['question1'].str.len()
df['q2len'] = df['question2'].str.len()
df['q1_n_words'] = df['question1'].apply(lambda row: len(row.split(" ")))
df['q2_n_words'] = df['question2'].apply(lambda row: len(row.split(" ")))
def normalized_word_Common(row):
w1 = set(map(lambda word: word.lower().strip(), row['question1'].split(" ")))
w2 = set(map(lambda word: word.lower().strip(), row['question2'].split(" ")))
return 1.0 * len(w1 & w2)
df['word_Common'] = df.apply(normalized_word_Common, axis=1)
def normalized_word_Total(row):
w1 = set(map(lambda word: word.lower().strip(), row['question1'].split(" ")))
w2 = set(map(lambda word: word.lower().strip(), row['question2'].split(" ")))
return 1.0 * (len(w1) + len(w2))
df['word_Total'] = df.apply(normalized_word_Total, axis=1)
def normalized_word_share(row):
w1 = set(map(lambda word: word.lower().strip(), row['question1'].split(" ")))
w2 = set(map(lambda word: word.lower().strip(), row['question2'].split(" ")))
return 1.0 * len(w1 & w2)/(len(w1) + len(w2))
df['word_share'] = df.apply(normalized_word_share, axis=1)
df['freq_q1+q2'] = df['freq_qid1']+df['freq_qid2']
df['freq_q1-q2'] = abs(df['freq_qid1']-df['freq_qid2'])
df.to_csv("df_fe_without_preprocessing_train.csv", index=False)
df.head()
```
<h3> 3.3.1 Analysis of some of the extracted features </h3>
- Some questions contain only a single word.
```
print ("Minimum length of the questions in question1 : " , min(df['q1_n_words']))
print ("Minimum length of the questions in question2 : " , min(df['q2_n_words']))
print ("Number of Questions with minimum length [question1] :", df[df['q1_n_words']== 1].shape[0])
print ("Number of Questions with minimum length [question2] :", df[df['q2_n_words']== 1].shape[0])
```
<h4> 3.3.1.1 Feature: word_share </h4>
```
plt.figure(figsize=(12, 8))
plt.subplot(1,2,1)
sns.violinplot(x = 'is_duplicate', y = 'word_share', data = df[0:])
plt.subplot(1,2,2)
sns.distplot(df[df['is_duplicate'] == 1.0]['word_share'][0:] , label = "1", color = 'red')
sns.distplot(df[df['is_duplicate'] == 0.0]['word_share'][0:] , label = "0" , color = 'blue' )
plt.show()
```
- The distributions for normalized word_share have some overlap on the far right-hand side, i.e., there are quite a lot of questions with high word similarity
- The average word_share and the number of common words between qid1 and qid2 are higher when the pair is a duplicate (similar)
<h4> 3.3.1.2 Feature: word_Common </h4>
```
plt.figure(figsize=(12, 8))
plt.subplot(1,2,1)
sns.violinplot(x = 'is_duplicate', y = 'word_Common', data = df[0:])
plt.subplot(1,2,2)
sns.distplot(df[df['is_duplicate'] == 1.0]['word_Common'][0:] , label = "1", color = 'red')
sns.distplot(df[df['is_duplicate'] == 0.0]['word_Common'][0:] , label = "0" , color = 'blue' )
plt.show()
```
<p> The distributions of the word_Common feature in similar and non-similar questions are highly overlapping </p>
## Libraries
- numpy: package for scientific computing
- matplotlib: 2D plotting library
- tensorflow: open source software library for machine intelligence
- **learn**: Simplified interface for TensorFlow (mimicking Scikit Learn) for Deep Learning
- mse: "mean squared error" as evaluation metric
- **lstm_predictor**: our lstm class
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.contrib import learn
from sklearn.metrics import mean_squared_error, mean_absolute_error
from lstm_predictor import generate_data, lstm_model
```
## Parameter definitions
- LOG_DIR: log file
- TIMESTEPS: RNN time steps
- RNN_LAYERS: RNN layer information
- DENSE_LAYERS: Size of DNN, [10, 10]: two dense layers with 10 hidden units each
- TRAINING_STEPS
- BATCH_SIZE
- PRINT_STEPS
```
LOG_DIR = './ops_logs'
TIMESTEPS = 5
RNN_LAYERS = [{'steps': TIMESTEPS}]
DENSE_LAYERS = [10, 10]
TRAINING_STEPS = 100000
BATCH_SIZE = 100
PRINT_STEPS = TRAINING_STEPS / 100
```
## Generate waveform
- fct: function
- x: observation
- time_steps
- seperate: whether to return the targets as separate (multimodal) outputs
```
X, y = generate_data(np.sin, np.linspace(0, 100, 10000), TIMESTEPS, seperate=False)
```
## Create a regressor with TF Learn
**Parameters**:
- model_fn: regression model
- n_classes: 0 for regression
- verbose
- steps: training steps
- optimizer: ("SGD", "Adam", "Adagrad")
- learning_rate
- batch_size
```
regressor = learn.TensorFlowEstimator(model_fn=lstm_model(TIMESTEPS, RNN_LAYERS, DENSE_LAYERS),
n_classes=0,
verbose=1,
steps=TRAINING_STEPS,
optimizer='Adagrad',
learning_rate=0.03,
batch_size=BATCH_SIZE)
```
## ValidationMonitor
- x
- y
- every_n_steps
- early_stopping_rounds
```
validation_monitor = learn.monitors.ValidationMonitor(X['val'], y['val'],
every_n_steps=PRINT_STEPS,
early_stopping_rounds=1000)
```
## Train and validation
- fit: fitting using training data
```
regressor.fit(X['train'], y['train'], monitors=[validation_monitor], logdir=LOG_DIR)
```
## Evaluate using test set
Evaluate our hypothesis on the test set. The mean squared error (MSE) is used as the evaluation metric.
```
predicted = regressor.predict(X['test'])
mse = mean_squared_error(y['test'], predicted)
print ("Error: %f" % mse)
```
## Plotting
Then, plot both predicted values and original values from test set.
```
plot_predicted, = plt.plot(predicted, label='predicted')
plot_test, = plt.plot(y['test'], label='test')
plt.legend(handles=[plot_predicted, plot_test])
```
# Benchmark ML Computation Speed
In this notebook, we test the computational performance of the [digifellow](https://digifellow.swfcloud.de/hub/spawn) JupyterHub against free offerings such as *Colab* and *Kaggle*. The baseline of this comparison is an average PC *(Core i5 2.5GHz - 8GB RAM - No GPU)*.
The task of this test is classifying the MNIST dataset with different algorithms *(LR, ANN, CNN)* using different libraries *(SKLearn, Tensorflow)* and comparing the performance with and without GPU acceleration.
## Dependencies
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.datasets import mnist
from tensorflow.keras import layers
from sklearn.linear_model import LogisticRegression
readings = []
```
## Preprocessing
```
(train_images, train_labels), (_i, _l) = mnist.load_data()
train_images = train_images.reshape(-1,28*28)
train_images = train_images / 255.0
```
## Scikit-Learn - Logistic Regression
```
LG = LogisticRegression(penalty='l1', solver='saga', tol=0.1)
```
### sklearn timer
```
%%timeit -n 1 -r 10 -o
LG.fit(train_images, train_labels)
readings.append(_.all_runs)
```
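The `-o` flag makes `%%timeit` return a `TimeitResult`, which `_` (IPython's last-output variable) then exposes via `.all_runs`. Outside IPython, the stdlib equivalent of `-n 1 -r 10 -o` is `timeit.repeat`; the statement below is just a placeholder workload:

```python
import timeit

# Stdlib equivalent of `%%timeit -n 1 -r 10 -o`: repeat=10 runs,
# number=1 call per run, giving one wall-clock timing per run.
runs = timeit.repeat(stmt="sum(range(1000))", repeat=10, number=1)
print(len(runs))  # 10 timings, analogous to TimeitResult.all_runs
```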
## Tensorflow - ANN
```
annModel = keras.Sequential()
annModel.add(tf.keras.Input(shape=(28*28,)))
annModel.add(layers.Dense(128, activation='relu'))
annModel.add(layers.Dense(10, activation='softmax'))
annModel.compile('sgd','sparse_categorical_crossentropy',['accuracy'])
```
### ANN timer (CPU)
```
%%timeit -n 1 -r 10 -o
with tf.device('/CPU:0'):
annModel.fit(train_images, train_labels, epochs=5, verbose=0)
readings.append(_.all_runs)
```
### ANN timer (GPU)
```
%%timeit -n 1 -r 10 -o
with tf.device('/GPU:0'):
annModel.fit(train_images, train_labels, epochs=5, verbose=0)
readings.append(_.all_runs)
```
## Tensorflow - CNN
```
cnnModel = keras.Sequential()
cnnModel.add(tf.keras.Input(shape=(28, 28, 1)))
cnnModel.add(layers.Conv2D(filters=16,kernel_size=(3, 3),activation='relu'))
cnnModel.add(layers.BatchNormalization())
cnnModel.add(layers.MaxPooling2D())
cnnModel.add(layers.Flatten())
cnnModel.add(layers.Dense(128, activation='relu'))
cnnModel.add(layers.Dropout(0.2))
cnnModel.add(layers.Dense(10, activation='softmax'))
cnnModel.compile('sgd','sparse_categorical_crossentropy',['accuracy'])
```
### CNN timer (CPU)
```
%%timeit -n 1 -r 10 -o
with tf.device('/CPU:0'):
cnnModel.fit(train_images.reshape(-1, 28, 28, 1), train_labels, epochs=5, verbose=0)
readings.append(_.all_runs)
```
### CNN timer (GPU)
```
%%timeit -n 1 -r 10 -o
with tf.device('/GPU:0'):
cnnModel.fit(train_images.reshape(-1, 28, 28, 1), train_labels, epochs=5, verbose=0)
readings.append(_.all_runs)
```
## Storing readings
```
import csv
with open('readings', 'w') as f:
wr = csv.writer(f)
wr.writerow(readings)
```
Done :)
# Instructor Turn Activity 1 Request Intro
```
# Dependencies
import requests
import json
# URL for GET requests to retrieve vehicle data
url = "https://api.spacexdata.com/v2/launchpads"
# Print the response object to the console
print(requests.get(url))
# Retrieving data and converting it into JSON
print(requests.get(url).json())
# Pretty Print the output of the JSON
response = requests.get(url).json()
print(json.dumps(response, indent=4, sort_keys=True))
```
# Students Turn Activity 2 Space X
# Instructions
* Take a few minutes to explore the SpaceX API: https://api.spacexdata.com
* Once you understand the structure of the API and its endpoints, choose one of the endpoints that is _not_ `vehicles`, and do the following:
* Retrieve and print the JSON for _all_ of the records from your chosen endpoint.
* Retrieve and print the JSON for a _specific_ record from your chosen endpoint.
- - -
```
# Dependencies
import requests
import json
# URL for GET requests to retrieve vehicle data
url = "https://api.spacexdata.com/v2/launchpads"
# Pretty print JSON for all launchpads
#response = requests.get(url).json()
# Pretty print JSON for a specific launchpad
#response = requests.get(url + "/vafb_slc_4w").json()
#print(json.dumps(response, indent=4, sort_keys=True))
```
# Instructor Turn Activity 3 Manipulating JSON
```
# Dependencies
import requests
import json
# Performing a GET Request and saving the
# API's response within a variable
url = "https://api.spacexdata.com/v2/rockets/falcon9"
response = requests.get(url)
response_json = response.json()
print(json.dumps(response_json, indent=4, sort_keys=True))
# It is possible to grab a specific value
# from within the JSON object
print(response_json["cost_per_launch"])
# It is also possible to perform some
# analyses on values stored within the JSON object
number_payloads = len(response_json["payload_weights"])
print(f"There are {number_payloads} payloads.")
# Finally, it is possible to reference the
# values stored within sub-dictionaries and sub-lists
payload_weight = response_json["payload_weights"][0]["kg"]
print(f"The first payload weighed {payload_weight} Kilograms")
```
# Students Turn Activity 4 Far Far Away
# Star Wars API
# Far Far Away
* **Instructions:**
* Using the starter file provided, collect the following pieces of information from the Star Wars API.
* The name of the character
* The number of films they were in
* The name of their first starship
* Once the data has been collected, print it out to the console.
* **Hints:**
* It would be in the programmer's best interest to print out the JSON from the initial request before anything else. This will let them know what keys they should reference.
* The "starship" values are links to another API call. This means that the programmer will need to create a request based off of the values of a previous request.
* **Bonus:**
* Come up with a way in which to collect and print out all of the film names a character was in.
- - -
```
# Dependencies
import requests
import json
# URL for GET requests to retrieve Star Wars character data
base_url = "https://swapi.co/api/people/"
# Create a url with a specific character id
character_id = '4'
url = base_url + character_id
print(url)
# Perform a get request for this character
response = requests.get(url)
print(response.url)
# Storing the JSON response within a variable
data = response.json()
#print(json.dumps(data, indent=4, sort_keys=True))
# Collecting the name of the character collected
# YOUR CODE HERE
character_name = data["name"]
# Print the character and the number of films that they were in
# YOUR CODE HERE
film_number = len(data["films"])
# Figure out what their first starship was and print the ship
# YOUR CODE HERE
first_ship_url = data["starships"][0]
ship_response = requests.get(first_ship_url).json()
ship_response
first_ship = ship_response["name"]
# Print character name and how many films they were in
# YOUR CODE HERE
print(f"{character_name} was in {film_number} films")
# Print what their first ship was
# YOUR CODE HERE
print(f"Their first ship: {first_ship}")
# BONUS
# YOUR CODE HERE
films = []
for film in data['films']:
cur_film = requests.get(film).json()
film_title = cur_film["title"]
films.append(film_title)
print(f"{character_name} was in:")
print(films)
```
# Student Pair Turns Activity 5 Number Facts API
# Number Facts API
* **Instructions:**
* Using the [Numbers API](http://numbersapi.com), create an application that takes in a user's inputs and returns a number fact based upon it.
* **Hints:**
* The URL to make your request to must have `?json` at its end so that the data format returned is JSON. The default response is pure text.
* Make sure to read through the documentation when creating your application. Some types require more or less data than others.
- - -
```
# Dependencies
import requests
import json
# Base URL for GET requests to retrieve number/date facts
url = "http://numbersapi.com/"
# Ask the user what kind of data they would like to search for
question = ("What type of data would you like to search for? "
"[Trivia, Math, Date, or Year] ")
kind_of_search = input(question)
# Create code to return a number fact
# If the kind of search is "date" take in two numbers
if(kind_of_search.lower() == "date"):
# Collect the month to search for
month = input("What month would you like to search for? ")
# Collect the day to search for
day = input("What day would you like to search for? ")
# Make an API call to the "date" API and convert response object to JSON
response = requests.get(f"{url}{month}/{day}/{kind_of_search.lower()}?json").json()
# Print the fact stored within the response
print(response["text"])
# If the kind of search is anything but "date" then take one number
else:
# Collect the number to search for
number = input("What number would you like to search for? ")
# Make an API call to the API and convert response object to JSON
response = requests.get(url + number + "/" + kind_of_search.lower()+ "?json").json()
# Print the fact stored within the response
print(response["text"])
```
# Instructor Turn Activity 6 OMDB Request
```
import requests
import json
# New Dependency! Use this to pretty print the JSON
# https://docs.python.org/3/library/pprint.html
from pprint import pprint
# Note that the ?t= is a query param for the t-itle of the
# movie we want to search for.
url = "http://www.omdbapi.com/?t="
api_key = "&apikey=trilogy"
# Performing a GET request similar to the one we executed
# earlier
response = requests.get(url + "Aliens" + api_key)
print(response.url)
# Converting the response to JSON, and printing the result.
data = response.json()
pprint(data)
# Print a few keys from the response JSON.
print(f"Movie was directed by {data['Director']}.")
print(f"Movie was released in {data['Country']}.")
```
# Students Turn Activity 7 Explore OMDb API
# OMDb API
* **Instructions:**
* Read the OMDb documentation, and make a few API calls to
get some information about your favorite movie.
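As a starting point, a sketch of building a query URL for a hypothetical title (the `apikey` value follows the class convention used elsewhere in this lesson); the commented lines show the fields you might read off the JSON response:

```python
from urllib.parse import urlencode

# Build an OMDb title query; the movie title here is just an example.
base_url = "http://www.omdbapi.com/"
query = base_url + "?" + urlencode({"t": "Blade Runner", "apikey": "trilogy"})
print(query)  # http://www.omdbapi.com/?t=Blade+Runner&apikey=trilogy

# With the URL built, the request itself would look like:
# data = requests.get(query).json()
# print(data["Title"], data["Year"], data["Director"], data["imdbRating"])
```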
# Students Turn Activity 8 Movie Question
```
# Dependencies
import requests
url = "http://www.omdbapi.com/?apikey=trilogy&t="
# Who was the director of the movie Aliens?
#movie = requests.get(url + "Aliens").json()
#print(f'The director of Aliens was {movie["Director"]}.')
# What was the movie Gladiator rated?
#movie = requests.get(url + "Gladiator").json()
#print(f'The rating of Gladiator was {movie["Rated"]}.')
# What year was 50 First Dates released?
#movie = requests.get(url + "50 First Dates").json()
#print(f'The movie 50 First Dates was released in {movie["Year"]}.')
# Who wrote Moana?
#movie = requests.get(url + "Moana").json()
#print(f'Moana was written by {movie["Writer"]}.')
# What was the plot of the movie Sing?
#movie = requests.get(url + "Sing").json()
#print(f'The plot of Sing was: {movie["Plot"]}')
```
# Instructor Turn Activity 9 Iterative Request
```
# Dependencies
import random
import json
import requests
# Let's get the JSON for 100 posts sequentially.
url = "http://jsonplaceholder.typicode.com/posts/"
# Create an empty list to store the responses
response_json = []
# Create random indices representing
# a user's choice of posts
indices = random.sample(list(range(1, 100)), 10)
indices
# Make a request for each of the indices
for x in range(len(indices)):
print(f"Making request number: {x} for ID: {indices[x]}")
# Get one of the posts
post_response = requests.get(url + str(indices[x]))
# Save post's JSON
response_json.append(post_response.json())
# Now we have 10 post objects,
# which we got by making 10 requests to the API.
print(f"We have {len(response_json)} posts!")
response_json
```
# Student Activity 10 Turn MovieLoop
# Instructions
Consider the following list of movie titles.
```python
movies = ["Aliens", "Sing", "Moana"]
```
Make a request to the OMDb API for each movie in the list. Then:
1. Print the director of each movie
2. Save the responses in another list
- - -
```
# Dependencies
import requests
url = "http://www.omdbapi.com/?apikey=trilogy&t="
movies = ["Aliens", "Sing", "Moana"]
# Use a loop to store the movies in a list
responses = [];
for movie in movies:
movie_data = requests.get(url + movie).json()
responses.append(movie_data)
print(f'The director of {movie} is {movie_data["Director"]}')
responses
```
# Instructors Turn Activity 11 NYT Api
```
# Dependencies
import requests
from pprint import pprint
from config import api_key
url = "https://api.nytimes.com/svc/search/v2/articlesearch.json?"
# Search for articles that mention granola
query = "granola"
# Build query URL
query_url = url + "api-key=" + api_key + "&q=" + query
query_url
# Request articles
articles = requests.get(query_url).json()
# The "response" property in articles contains the actual articles
# list comprehension.
articles_list = [article for article in articles["response"]["docs"]]
pprint(articles_list)
# Print the web_url of each stored article
print("Your Reading List")
for article in articles_list:
print(article["web_url"])
```
# Students Turn Activity 12 Instructions
* Save the NYT API endpoint to a variable. Make sure you include the right query parameter to retrieve JSON data!
* Register for and save your API Key to a variable.
* Decide on a search term, and save it to a variable.
* Limit your search to articles published within a range of dates—for example, only articles published in 2014. _Hint_: Read the documentation on `end_date`.
* Build your query URL, and save it to a variable.
* Retrieve your list of articles with a GET request.
* Take a look at the documentation. How do you get ahold of the articles in the response?
* Store each article in the response inside of a list.
* Print a `snippet` from each article.
* As a bonus, try to figure out how we could get 30 results. _Hint_: Look up the `page` query parameter. If you get a message saying you've exceeded your rate limit, don't fret—you've solved the problem.
- - -
```
# Dependencies
import requests
from config import api_key
url = "https://api.nytimes.com/svc/search/v2/articlesearch.json?"
# Store a search term
query = "obama"
# Search for articles published between a begin and end date
begin_date = "20160101"
end_date = "20160130"
# Build url
query_url = f"{url}api-key={api_key}&q={query}&begin_date={begin_date}&end_date={end_date}"
print(query_url)
# Retrieve articles
articles = requests.get(query_url).json()
articles_list = [article for article in articles["response"]["docs"]]
for article in articles_list:
print(f'A snippet from the article: {article["snippet"]}')
# BONUS: How would we get 30 results?
# HINT: Look up the page query param
# Empty list for articles
articles_list = []
# loop through pages 0-2
for page in range(0, 3):
query_url = f"{url}api-key={api_key}&q=obama&begin_date=20160101&end_date=20160130"
# create query with page number
query_url = f"{query_url}&page={str(page)}"
print(query_url)
articles = requests.get(query_url).json()
# loop through the response and append each article to the list
for article in articles["response"]["docs"]:
articles_list.append(article)
```
```
import numpy as np
import tensorflow as tf
from numpy import load
import operator
import matplotlib.pyplot as plt
import matplotlib.pyplot as plt
import os,sys,inspect
sys.path.insert(1, os.path.join(sys.path[0], '..')) #this line should always stay above the next line below
from tfrbm import BBRBM, GBRBM, BBRBMTEMP
from tensorflow.examples.tutorials.mnist import input_data
#load the mnist data
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
mnist_images = mnist.test.images
mnist_images1= np.where(mnist_images > 0, 1, 0)
#helper fcts to print images
def show_digit(x,y):
plt.imshow(x)#,cmap = plt.cm.binary)
plt.colorbar(mappable=None, cax=None, ax=None)
plt.title(y)
plt.locator_params(axis="x", nbins=10)
plt.locator_params(axis="y", nbins=1)
plt.show()
#labels pixels
accu = [0]
num_avg = 10
n_data = 5 # number of images we want to reconstruct
print("accuracy",accu)
t1 = np.zeros(10) #number of labels
num_t= 1 #number of pixel to be used to construct random image
t2 = np.ones(num_t) # create an np array of num_T elements
img_zer = np.zeros(784-num_t) # create a np array of size rest of remaining image pixels
print("minist test size",mnist_images1.shape)
#create the RBM using the BBRBMTEMP class
bbrbm = BBRBMTEMP(n_visible=794, n_hidden=64, learning_rate=0.01, momentum=0.95, use_tqdm=True,t=1)
#first run
fname = ["1-3","2-3","3-3","4-3","5-3","6-3","7-3","8-3","9-3","10-3","11-3","12-3","13-3","14-3","15-3","16-3","17-3","18-3","19-3","20-3"]
#n_data = mnist_images1.shape[0]
print("number of images to be reconstructed ", n_data)
#load the saved weights (Model)
filename = 'weights_class15kep005'
name = 'bbrbm_class15kep'
bbrbm.load_weights(filename,name)
#set up the random image for testing
random_image = np.random.uniform(0,1,784)
image_rec_bin = np.greater(random_image, 0.9) #np.random.uniform(0,1,784))
random_image = image_rec_bin.astype( int)
print("t is ",bbrbm.temp)
for j in range(n_data) :
#set temperature parameters
bbrbm.temp = 15
temp_idx = 20 # N of MC to run for each temperature value
temp_dec_step = 0.01
temp_chng_step = temp_idx
range_idx = int((bbrbm.temp/temp_dec_step)*temp_idx)
print("temp @ beg of loop is ", bbrbm.temp)
#Construct the image using number of pixels picked
random_image = np.concatenate((t2, img_zer), axis=0)#image_rec_bin.astype(int)
#shuffle the pixels to get a random image
np.random.shuffle(random_image)
#show_digit(random_image.reshape(28, 28), "Original random image")
# use mnist image for reconstruction
#image = mnist_images1[j+5] #random_image #
# uncomment the line below to use a random image
image = np.random.choice(2,784,p=[0.5,0.5]) #random_image #mnist_images1[1] # # pick 0 or one randomly (with probability of 0.5 each)
##### without cropping
img = image.reshape(28,28)
# img_org = img
#Add the labels pixels to the image
img_w_labels = np.concatenate((t1, img.flatten()), axis=0)
img_org = img_w_labels
#print("shape of of org img", img_org.shape)
#imga = random_image#imga = img
show_digit(img_org[10:794].reshape(28,28),"Input image")
#reconstruct image for N-MC
#
print("range index",range_idx)
#reconstruct n times
for i in range(range_idx):
image_rec1 = bbrbm.reconstruct(img_w_labels.reshape(1,-1),bbrbm.temp)
#print("shape of of rec1",image_rec1.shape)
image_rec1 = image_rec1.reshape(794, )
#print("i am inside for-loop")
if (i == temp_idx):
b = bbrbm.temp
#print("i am inside if_stat 1")
if( bbrbm.temp > temp_dec_step):
bbrbm.temp = bbrbm.temp - temp_dec_step
temp_idx += temp_chng_step
#print("temp is ",bbrbm.temp )
if (i == range_idx-temp_chng_step):
print("temp is ", bbrbm.temp)
bbrbm.temp = 0
# print("i am inside if_stat 2")
#t1 = image_rec1[0:10]
rec_backup = image_rec1
#image_rec1 = image_rec1[10:794].reshape(28,28 )
#print("size ofa", a.size)
img_w_labels = rec_backup#img_org + np.concatenate((t1, (image_rec1 * mask_c).flatten()), axis=0)
#show_digit(image_rec1[10:794].reshape(28, 28), "returned image")
#show_digit(img[10:794].reshape(28, 28), "image to be fed")
print("temp is ",bbrbm.temp )
show_digit(rec_backup[10:794].reshape(28, 28), "reconstructed image" )
print("END OF EXPERIMENT!")
```
# Introduction to NumPy
## Copying ``ndarray`` variables
The ``ndarray`` was designed for optimized access to large amounts of data. For efficient coding, it is
therefore essential to understand the three forms of copying between variables described below: no copy,
shallow copy and deep copy. We can say that an ``ndarray`` consists of a header, holding information
about the element type, the dimensionality (``shape``) and the step to the next element (``strides``),
plus the raster data itself. The table below shows what happens to the header and to the data in each
of the three kinds of copy.
Type                   |Header: Type, Shape, Strides   | Raster data      |Example
-----------------------|-------------------------------|------------------|---------------------------
No copy, reference only|original pointer               |original pointer  |a = b
Shallow copy           |new                            |original pointer  |b = a.reshape, slicing, a.T
Deep copy              |new                            |new               |a = b.copy()
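These three cases can be checked directly from Python by inspecting ``strides``, the ``base`` attribute and ``np.may_share_memory``; a quick sketch:

```python
import numpy as np

a = np.arange(12, dtype=np.int32).reshape(3, 4).copy()  # .copy() makes a own its data
b = a[:, ::2]     # shallow copy: new header, same raster data
c = a.copy()      # deep copy: new header and new raster data

print(a.strides)             # (16, 4): bytes to step one row / one element
print(b.base is a)           # True  -> b is a view onto a's data
print(c.base is None)        # True  -> c owns its own data
print(np.may_share_memory(a, b), np.may_share_memory(a, c))  # True False
```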
## No explicit copy, reference only
In the example below, we use the plain assignment operator to assign array ``a`` to array ``b``.
Note that both the shape and the data of ``b`` are the same as those of ``a``. Everything behaves as
if ``b`` were just a pointer to ``a``: any modification to ``b`` is reflected in ``a``.
```
import numpy as np
a = np.arange(6)
b = a
print("a =\n",a)
print("b =\n",b)
b.shape = (2,3)                  # changing the shape of b
print("\na shape =",a.shape)     # also changes the shape of a
b[0,0] = -1                      # changing the contents of b
print("a =\n",a)                 # also changes the contents of a
print("\nid of a = ",id(a))      # id is a unique object identifier
print("id of b = ",id(b))        # a and b have the same id
print('np.may_share_memory(a,b):',np.may_share_memory(a,b))
```
Note that even in the return value of a function, no explicit copy may take place. See the following
example of a function that simply returns its input variable:
```
def cc(a):
return a
b = cc(a)
print("id of a = ",id(a))
print("id of b = ",id(b))
print('np.may_share_memory(a,b):',np.may_share_memory(a,b))
```
## Shallow copy
The shallow copy is very useful and used extensively. It is used when you want to index the original
array through a change of dimensionality or through re-slicing, without copying the raster data. This
allows optimized access to the n-dimensional array. A shallow copy occurs in several situations,
the main ones being:
1) ``reshape``, where the number of elements of the ``ndarray`` stays the same but its dimensionality
is changed;
2) slicing, where a subarray is indexed;
3) transposing the array;
4) linearizing the raster with ``ravel()``;
among others.
## Reshape
The following example first creates a sequential one-dimensional vector and then "views" it in
two-dimensional or three-dimensional form.
```
a = np.arange(30)
print("a =\n", a)
print('a.shape:',a.shape)
b = a.reshape( (5, 6))
print("b =\n", b)
b[:, 0] = -1
print('b=\n',b)
print("a =\n", a)
c = a.reshape( (2, 3, 5) )
print("c =\n", c)
print('c.base is a:',c.base is a)
print('np.may_share_memory(a,c):',np.may_share_memory(a,c))
print('id(a),id(c):',id(a),id(c))
```
## Slicing
The following example shows the shallow copy produced by slicing. In the example, every element in
the even rows and even columns is set to 1. CAREFUL: when assigning 1. to ``b`` it is important to
reference ``b`` as an ndarray, in the form ``b[:,:] = 1.``; otherwise, writing ``b = 1.`` creates a
new variable.
```
a = np.zeros( (5, 6))
print('a.shape:',a.shape)
b = a[::2,::2]
print('b.shape:',b.shape)
b[:,:] = 1.
print('b=\n', b)
print('a=\n', a)
print('b.base is a:',b.base is a)
print('np.may_share_memory(a,b):',np.may_share_memory(a,b))
```
This other example is an attractive way to process one column of a two-dimensional matrix,
but CAREFUL is needed: any new value must be assigned through ``b[:]``; otherwise, writing
``b = np.arange(5)`` creates a new variable, as the second half of the example shows.
```
a = np.arange(25).reshape((5,5))
print('a=\n',a)
b = a[:,0]
print('b=',b)
b[:] = np.arange(5)
b[2] = 100
print('b=',b)
print('a=\n',a)
a = np.arange(25).reshape((5,5))
print('a=\n',a)
b = np.arange(5)
print('b=',b)
print('a=\n',a)
```
## Transpose
The matrix transpose operation, which swaps rows and columns, also produces a *view*
of the array, with no copy needed:
```
a = np.arange(24).reshape((4,6))
print('a:\n',a)
at = a.T
print('at:\n',at)
print('at.shape',at.shape)
print('np.may_share_memory(a,at):',np.may_share_memory(a,at))
```
## Ravel
Applying the ``ravel()`` method to an ``ndarray`` produces a *view* of the linearized
raster (i.e. a single dimension) of the ``ndarray``, as long as the data is contiguous in memory.
```
a = np.arange(24).reshape((4,6))
print('a:\n',a)
av = a.ravel()
print('av.shape:',av.shape)
print('av:\n',av)
print('np.may_share_memory(a,av):',np.may_share_memory(a,av))
```
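Note that ``ravel()`` can only return a view when the requested order matches the memory layout; on a transposed array it silently falls back to a copy. A quick check:

```python
import numpy as np

a = np.arange(24).reshape(4, 6)
v = a.ravel()        # contiguous: a view is returned
w = a.T.ravel()      # non-contiguous: ravel must copy
print(np.may_share_memory(a, v))  # True
print(np.may_share_memory(a, w))  # False
```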
## Deep copy
A deep copy creates a complete copy of the array, its shape and its contents. The recommendation is
to use the ``copy()`` method for deep copies, although a deep copy can also be obtained
through ``np.array``.
```
b = a.copy()
c = np.array(a, copy=True)
print("id of a = ",id(a))
print("id of b = ",id(b))
print("id of c = ",id(c))
print('np.may_share_memory(a,b):',np.may_share_memory(a,b))
print('np.may_share_memory(a,c):',np.may_share_memory(a,c))
```
## Official NumPy Documentation
[Copies and Views](http://wiki.scipy.org/Tentative_NumPy_Tutorial#head-1529ae93dd5d431ffe3a1001a4ab1a394e70a5f2)
<h1> <b>Homework 2</b></h1>
<i>Alejandro J. Rojas<br>
ale@ischool.berkeley.edu<br>
W261: Machine Learning at Scale<br>
Week: 02<br>
Jan 26, 2016</i>
<h2>HW2.0. </h2>
What is a race condition in the context of parallel computation? Give an example.
What is MapReduce?
How does it differ from Hadoop?
Which programming paradigm is Hadoop based on? Explain and give a simple example in code and show the code running.
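As a minimal illustration of the race-condition question above, two threads performing an unsynchronized read-modify-write lose updates. The sketch below (not a reference solution) uses ``time.sleep(0)`` to force the interleaving:

```python
import threading, time

counter = 0

def unsafe_increment(n):
    # read-modify-write without a lock: the classic race condition
    global counter
    for _ in range(n):
        tmp = counter      # read
        time.sleep(0)      # yield so another thread can interleave
        counter = tmp + 1  # write back a possibly stale value

threads = [threading.Thread(target=unsafe_increment, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # almost always well below 4000: updates were lost
```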
<h2>HW2.1. Sort in Hadoop MapReduce</h2>
Given as input: Records of the form '<'integer, “NA”'>', where integer is any integer, and “NA” is just the empty string.
Output: sorted key value pairs of the form '<'integer, “NA”'>' in decreasing order; what happens if you have multiple reducers? Do you need additional steps? Explain.
Write code to generate N random records of the form '<'integer, “NA”'>'. Let N = 10,000.
Write the python Hadoop streaming map-reduce job to perform this sort. Display the top 10 biggest numbers. Display the 10 smallest numbers
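On the multiple-reducers question: each reducer emits its own sorted partition, so a total order needs either a range partitioner or a final merge of the per-reducer outputs. A toy sketch of the merge step (partition contents are made up):

```python
import heapq

# each list stands for one reducer's output, already sorted in decreasing order
part0 = [98, 77, 40]
part1 = [99, 65, 12]
part2 = [80, 50, 3]

merged = list(heapq.merge(part0, part1, part2, reverse=True))
print(merged)   # [99, 98, 80, 77, 65, 50, 40, 12, 3]
```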
# Data
```
import random
N = 10000 ### for a sample size of N
random.seed(0) ### pick a random seed to replicate results
input_file = open("numcount.txt", "w") # writing file
for i in range(N):
a = random.randint(0, 100) ### Select a random integer from 0 to 100
b = ''
input_file.write(str(a))
input_file.write(b)
input_file.write('\n')
input_file.close()
```
# Mapper
```
%%writefile mapper.py
#!/usr/bin/python
import sys
for line in sys.stdin: ### input comes from STDIN (standard input)
number = line.strip() ### remove leading and trailing whitespace
print ('%s\t%s' % (number, 1)) ### mapper out looks like 'number' \t 1
!chmod +x mapper.py
```
# Reducer
```
%%writefile reducer.py
#!/usr/bin/python
from operator import itemgetter
import sys
current_number = None
current_count = 0
number = None
numlist = []
# input comes from STDIN
for line in sys.stdin:
line = line.strip() ### remove leading and trailing whitespace
line = line.split('\t') ### parse the input we got from mapper.py
number = line[0] ### integer generated randomly we got from mapper.py
try:
count = line[1]
count = int(count) ### convert count (currently a string) to int
except ValueError: ### if count was not a number then silently
continue ### ignore/discard this line
if current_number == number: ### this IF-switch only works because Hadoop sorts map output
current_count += count ### by key (here: number) before it is passed to the reducer
else:
if current_number:
numlist.append((current_number,current_count)) ### store tuple in a list once totalize count per number
current_count = count ### set current count
current_number = number ### set current number
if current_number == number: ### do not forget to output the last word if needed!
numlist.append((current_number,current_count))
toplist = sorted(numlist,key=lambda record: record[1], reverse=True) ### sort list from largest count to smallest
bottomlist = sorted(numlist,key=lambda record: record[1]) ### sort list from smallest to largest
print '%25s' %'TOP 10', '%25s' % '', '%28s' %'BOTTOM 10'
print '%20s' %'Number', '%10s' %'Count', '%20s' % '', '%20s' %'Number','%10s' %'Count'
for i in range (10):
print '%20s%10s' % (toplist[i][0], toplist[i][1]),'%20s' % '', '%20s%10s' % (bottomlist[i][0], bottomlist[i][1])
!chmod +x reducer.py
!echo "10 \n 10\n 5\n 6\n 8\n 9 \n 10 \n 9 \n 12 \n 21 \n 22 \n 23 \n 24 \n 25" | python mapper.py | sort -k1,1 | python reducer.py
```
# Run numcount in Hadoop
<h2>start yarn and hdfs</h2>
```
!/usr/local/Cellar/hadoop/2.7.1/sbin/start-yarn.sh ### start up yarn
!/usr/local/Cellar/hadoop/2.7.1/sbin/start-dfs.sh ### start up dfs
```
<h2> remove files from prior runs </h2>
```
!hdfs dfs -rm -r /user/venamax ### remove prior files
```
<h2> create folder</h2>
```
!hdfs dfs -mkdir -p /user/venamax ### create hdfs folder
```
<h2> upload numcount.txt to hdfs</h2>
```
!hdfs dfs -put numcount.txt /user/venamax #### save source data file to hdfs
```
<h2> Hadoop streaming command </h2>
hadoop jar hadoopstreamingjarfile \
-D stream.num.map.output.key.fields=n \
-mapper mapperfile \
-reducer reducerfile \
-input inputfile \
-output outputfile
```
!hadoop jar hadoop-*streaming*.jar -mapper mapper.py -reducer reducer.py -input numcount.txt -output numcountOutput
```
<h2>show the results</h2>
```
!hdfs dfs -cat numcountOutput/part-00000
```
<h2>stop yarn and hdfs </h2>
```
!/usr/local/Cellar/hadoop/2.7.1/sbin/stop-yarn.sh
!/usr/local/Cellar/hadoop/2.7.1/sbin/stop-dfs.sh
```
<h2>HW2.2. WORDCOUNT</h2>
Using the Enron data from HW1 and Hadoop MapReduce streaming, write the mapper/reducer job that will determine the word count (number of occurrences) of each white-space delimitted token (assume spaces, fullstops, comma as delimiters). Examine the word “assistance” and report its word count results.
CROSSCHECK: >grep assistance enronemail_1h.txt|cut -d$'\t' -f4| grep assistance|wc -l
8
#NOTE "assistance" occurs on 8 lines but how many times does the token occur? 10 times! This is the number we are looking for!
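The token count can also be crosschecked locally with plain Python, splitting on the whitespace, full-stop and comma delimiters the assignment specifies (the demo file below is made up):

```python
import re

def count_token(path, token):
    """Count occurrences of `token`, splitting on whitespace, full stops and commas."""
    total = 0
    with open(path) as f:
        for line in f:
            total += re.split(r"[\s.,]+", line.lower()).count(token)
    return total

# tiny self-check on a throwaway file
with open("_demo.txt", "w") as f:
    f.write("need assistance. assistance, yes.\nno assistance here\n")
print(count_token("_demo.txt", "assistance"))  # 3
```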
# Mapper
```
%%writefile mapper.py
#!/usr/bin/python
## mapper.py
## Author: Alejandro J. Rojas
## Description: mapper code for HW2.2
import sys
import re
########## Collect user input ###############
filename = sys.argv[1]
findwords = re.split(" ",sys.argv[2].lower())
with open (filename, "r") as myfile:
for line in myfile.readlines():
line = line.strip()
record = re.split(r'\t+', line) ### Each email is a record with 4 components
### 1) ID 2) Spam Truth 3) Subject 4) Content
if len(record)==4: ### Take only complete records
for i in range (2,len(record)): ### Starting from Subject to the Content
                bagofwords = re.split(r"[ ,.]+", record[i]) ### collect all words in the email, splitting on spaces, commas and full stops
for word in bagofwords:
flag=0
if word in findwords:
flag=1
print '%s\t%s\t%s\t%s\t%s' % (word, 1,record[0], record[1],flag)
### output: word, 1, id, spam truth and flag
!chmod +x mapper.py
```
# Reducer
```
%%writefile reducer.py
#!/usr/bin/python
from operator import itemgetter
import sys
from itertools import groupby
current_word, word = None, None
current_count, current_spamcount, current_hamcount = 0,0,0
current_id, record_id = None, None
current_y_true, y_true = None, None
current_flag, flag = None,None
sum_records, sum_spamrecords, sum_hamrecords = 0,0,0
sum_spamwords, sum_hamwords = 0,0
flagged_words = []
emails={} #Associative array to hold email data
words={} #Associative array for word data
# input comes from STDIN
for line in sys.stdin:
line = line.strip() ### remove leading and trailing whitespace
line = line.split('\t') ### parse the input we got from mapper.py
word = line[0] ### word we get from mapper.py
try:
count = line[1]
count = int(count) ### convert count (currently a string) to int
email = line[2] ### id that identifies each email
y_true = line[3]
y_true = int(y_true) ### spam truth as an integer
flag = line[4]
flag = int(flag) ### flags if word is in the user specified list
except ValueError: ### if count was not a number then silently
continue ### ignore/discard this line
    if current_word == word:       ### this IF-switch only works because Hadoop sorts map output
        current_count += count     ### by key (here: word) before it is passed to the reducer
        if current_word not in words.keys():
            words[current_word]={'ham_count':0,'spam_count':0,'flag':flag}
        if email not in emails.keys():
            emails[email]={'y_true':y_true,'word_count':0,'words':[]}
        if y_true == 1:                 ### if the record where the word is located is spam
            current_spamcount += count  ### add to the spam count of that word
            sum_spamwords += 1
        else:
            current_hamcount += count   ### otherwise add to the ham count of that word
            sum_hamwords +=1
        emails[email]['word_count'] += 1
        emails[email]['words'].append(current_word) ### store words in email
else:
if current_word:
if flag==1 and current_word not in flagged_words:
flagged_words.append(current_word)
words[current_word]['flag'] = flag ### denote if current word is a word specified by the user list
words[current_word]['spam_count'] += current_spamcount ### update spam count for current word
words[current_word]['ham_count'] += current_hamcount ### update ham count for current word
current_count = count ### set current count
current_spamcount, current_hamcount = 0,0 ### initialize spam and ham wordcount
current_word = word ### set current number
current_email = email ### set current id of email
current_y_true = y_true ### set current spam truth
current_flag = flag ### set current flag
if current_word == word: ### do not forget to output the last word if needed!
emails[current_email]['word_count'] += 1
emails[current_email]['words'].append(current_word)### store words in email
words[current_word]['flag'] = flag ### denote if current word is a word specified by the user list
words[current_word]['spam_count'] += current_spamcount ### update spam count for current word
words[current_word]['ham_count'] += current_hamcount ### update ham count for current word
#Calculate stats for entire corpus
spam_email_count = sum(1 for e in emails.itervalues() if e['y_true'] == 1)
prior_spam = float(spam_email_count)/len(emails)
prior_ham = 1 - prior_spam
vocab_count = len(words) # number of unique words in the total vocabulary
for k,word in words.iteritems():
    #These versions calculate conditional probabilities WITH Laplace smoothing.
    #word['p_spam']=float(word['spam_count']+1)/(sum_spamwords+vocab_count)
    #word['p_ham']=float(word['ham_count']+1)/(sum_hamwords+vocab_count)
    #Compute conditional probabilities WITHOUT Laplace smoothing
    word['p_spam']=float(word['spam_count'])/sum_spamwords
    word['p_ham']=float(word['ham_count'])/sum_hamwords
#At this point the model is now trained, and we can use it to make our predictions
for j,email in emails.iteritems():
#Log versions - no longer used
#p_spam=log(prior_spam)
#p_ham=log(prior_ham)
p_spam=prior_spam
p_ham=prior_ham
for word in email['words']:
if word in flagged_words:
try:
#p_spam+=log(words[word]['p_spam']) #Log version - no longer used
p_spam*=words[word]['p_spam']
except ValueError:
pass #This means that words that do not appear in a class will use the class prior
try:
#p_ham+=log(words[word]['p_ham']) #Log version - no longer used
p_ham*=words[word]['p_ham']
except ValueError:
pass
if p_spam>p_ham:
spam_pred=1
else:
spam_pred=0
    print j+'\t'+str(email['y_true'])+'\t'+str(spam_pred)
```
<h2>HW2.2.1</h2> Using Hadoop MapReduce and your wordcount job (from HW2.2) determine the top-10 occurring tokens (most frequent tokens)
<h2>HW2.3. Multinomial NAIVE BAYES with NO Smoothing</h2>
Using the Enron data from HW1 and Hadoop MapReduce, write a mapper/reducer job(s) that
will both learn Naive Bayes classifier and classify the Enron email messages using the learnt Naive Bayes classifier. Use all white-space delimitted tokens as independent input variables (assume spaces, fullstops, commas as delimiters). Note: for multinomial Naive Bayes, the Pr(X=“assistance”|Y=SPAM) is calculated as follows:
the number of times “assistance” occurs in SPAM labeled documents / the number of words in documents labeled SPAM
E.g., “assistance” occurs 5 times in all of the documents Labeled SPAM, and the length in terms of the number of words in all documents labeled as SPAM (when concatenated) is 1,000. Then Pr(X=“assistance”|Y=SPAM) = 5/1000. Note this is a multinomial estimation of the class conditional for a Naive Bayes Classifier. No smoothing is needed in this HW. Multiplying lots of probabilities, which are between 0 and 1, can result in floating-point underflow. Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities. Please pay attention to probabilites that are zero! They will need special attention. Count up how many times you need to process a zero probabilty for each class and report.
Report the performance of your learnt classifier in terms of misclassifcation error rate of your multinomial Naive Bayes Classifier. Plot a histogram of the log posterior probabilities (i.e., Pr(Class|Doc))) for each class over the training set. Summarize what you see.
Error Rate = misclassification rate with respect to a provided set (say training set in this case). It is more formally defined here:
Let DF represent the evalution set in the following:
Err(Model, DF) = |{(X, c(X)) ∈ DF : c(X) != Model(x)}| / |DF|
Where || denotes set cardinality; c(X) denotes the class of the tuple X in DF; and Model(X) denotes the class inferred by the Model “Model”
<h2>HW2.4 Repeat HW2.3 with the following modification: use Laplace plus-one smoothing. </h2>
Compare the misclassifcation error rates for 2.3 versus 2.4 and explain the differences.
For a quick reference on the construction of the Multinomial NAIVE BAYES classifier that you will code,
please consult the "Document Classification" section of the following wikipedia page:
https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Document_classification
OR the original paper by the curators of the Enron email data:
http://www.aueb.gr/users/ion/docs/ceas2006_paper.pdf
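For reference, the plus-one smoothed class conditional replaces count/total with (count + 1)/(total + V), where V is the vocabulary size; a minimal sketch (function name and numbers are illustrative):

```python
def smoothed_cond_prob(word_count, class_word_total, vocab_size):
    # Laplace plus-one smoothing: an unseen word gets a small nonzero probability
    return (word_count + 1.0) / (class_word_total + vocab_size)

print(smoothed_cond_prob(5, 1000, 5000))   # 6/6000 = 0.001
print(smoothed_cond_prob(0, 1000, 5000))   # 1/6000, no longer zero
```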
<h2>HW2.5. Repeat HW2.4. This time when modeling and classification ignore tokens with a frequency of less than three (3) in the training set. </h2>How does it affect the misclassifcation error of learnt naive multinomial Bayesian Classifier on the training dataset:
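The frequency filter can be sketched with a single ``Counter`` pass over the training tokens (function name and toy data are illustrative):

```python
from collections import Counter

def prune_rare_tokens(docs, min_freq=3):
    # keep only tokens that appear at least min_freq times across the training set
    freq = Counter(tok for doc in docs for tok in doc)
    return [[tok for tok in doc if freq[tok] >= min_freq] for doc in docs]

docs = [["free", "money", "free"], ["free", "meeting"], ["money", "meeting"]]
print(prune_rare_tokens(docs))   # [['free', 'free'], ['free'], []]
```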
<h2>HW2.6 Benchmark your code with the Python SciKit-Learn implementation of the multinomial Naive Bayes algorithm</h2>
It always a good idea to benchmark your solutions against publicly available libraries such as SciKit-Learn, The Machine Learning toolkit available in Python. In this exercise, we benchmark ourselves against the SciKit-Learn implementation of multinomial Naive Bayes. For more information on this implementation see: http://scikit-learn.org/stable/modules/naive_bayes.html more
In this exercise, please complete the following:
— Run the Multinomial Naive Bayes algorithm (using default settings) from SciKit-Learn over the same training data used in HW2.5 and report the misclassification error (please note some data preparation might be needed to get the Multinomial Naive Bayes algorithm from SkiKit-Learn to run over this dataset)
- Prepare a table to present your results, where rows correspond to approach used (SkiKit-Learn versus your Hadoop implementation) and the column presents the training misclassification error
— Explain/justify any differences in terms of training error rates over the dataset in HW2.5 between your Multinomial Naive Bayes implementation (in Map Reduce) versus the Multinomial Naive Bayes implementation in SciKit-Learn
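A minimal scikit-learn baseline along these lines, on toy data (a real run would vectorize the Enron messages the same way):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["win money now", "meeting at noon", "free money offer", "project meeting notes"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

X = CountVectorizer().fit_transform(docs)     # token counts, the multinomial features
clf = MultinomialNB().fit(X, labels)          # default settings, as the HW asks
train_error = 1.0 - clf.score(X, labels)      # training misclassification rate
print(train_error)
```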
<h2>HHW 2.6.1 OPTIONAL (note this exercise is a stretch HW and optional)</h2>
— Run the Bernoulli Naive Bayes algorithm from SciKit-Learn (using default settings) over the same training data used in HW2.6 and report the misclassification error
- Discuss the performance differences in terms of misclassification error rates over the dataset in HW2.5 between the Multinomial Naive Bayes implementation in SciKit-Learn with the Bernoulli Naive Bayes implementation in SciKit-Learn. Why such big differences. Explain.
Which approach to Naive Bayes would you recommend for SPAM detection? Justify your selection.
<h2>HW2.7 OPTIONAL (note this exercise is a stretch HW and optional)</h2>
The Enron SPAM data in the following folder enron1-Training-Data-RAW is in raw text form (with subfolders for SPAM and HAM that contain raw email messages in the following form:
--- Line 1 contains the subject
--- The remaining lines contain the body of the email message.
In Python write a script to produce a TSV file called train-Enron-1.txt that has a similar format as the enronemail_1h.txt that you have been using so far. Please pay attend to funky characters and tabs. Check your resulting formated email data in Excel and in Python (e.g., count up the number of fields in each row; the number of SPAM mails and the number of HAM emails). Does each row correspond to an email record with four values? Note: use "NA" to denote empty field values.
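A sketch of the conversion described above; the folder layout ('ham'/'spam' subfolders) and the four-field record order (id, spam truth, subject, body) are assumptions inferred from the format of enronemail_1h.txt, and all names are illustrative:

```python
import os, glob, tempfile

def build_tsv(root, out_path):
    # root is assumed to contain 'ham/' and 'spam/' subfolders of raw email files;
    # each output record is: id \t spam truth \t subject \t body, with "NA" for empty fields
    with open(out_path, "w") as out:
        for label_dir, truth in (("ham", 0), ("spam", 1)):
            for fname in sorted(glob.glob(os.path.join(root, label_dir, "*.txt"))):
                with open(fname, errors="replace") as f:
                    lines = f.read().splitlines()
                subject = (lines[0].replace("\t", " ") if lines else "") or "NA"
                body = " ".join(lines[1:]).replace("\t", " ") or "NA"
                out.write("%s\t%d\t%s\t%s\n" % (os.path.basename(fname), truth, subject, body))

# tiny demo on a throwaway directory tree
root = tempfile.mkdtemp()
for sub in ("ham", "spam"):
    os.mkdir(os.path.join(root, sub))
    with open(os.path.join(root, sub, "0001.txt"), "w") as f:
        f.write("the subject line\nbody line one\nbody line two\n")
out_path = os.path.join(root, "train-Enron-1.txt")
build_tsv(root, out_path)
print(open(out_path).read())
```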
<h2>HW2.8 OPTIONAL</h2>
Using Hadoop Map-Reduce write job(s) to perform the following:
-- Train a multinomial Naive Bayes Classifier with Laplace plus one smoothing using the data extracted in HW2.7 (i.e., train-Enron-1.txt). Use all white-space delimitted tokens as independent input variables (assume spaces, fullstops, commas as delimiters). Drop tokens with a frequency of less than three (3).
-- Test the learnt classifier using enronemail_1h.txt and report the misclassification error rate. Remember to use all white-space delimitted tokens as independent input variables (assume spaces, fullstops, commas as delimiters). How do we treat tokens in the test set that do not appear in the training set?
<h2>HW2.8.1 OPTIONAL</h2>
— Run both the Multinomial Naive Bayes and the Bernoulli Naive Bayes algorithms from SciKit-Learn (using default settings) over the same training data used in HW2.8 and report the misclassification error on both the training set and the testing set
- Prepare a table to present your results, where rows correspond to approach used (SciKit-Learn Multinomial NB; SciKit-Learn Bernouili NB; Your Hadoop implementation) and the columns presents the training misclassification error, and the misclassification error on the test data set
- Discuss the performance differences in terms of misclassification error rates over the test and training datasets by the different implementations. Which approch (Bernouili versus Multinomial) would you recommend for SPAM detection? Justify your selection.
<h2>=====================
END OF HOMEWORK</h2>
```
import pandas as pd
import numpy as np
import glob
import os
import seaborn as sns
import matplotlib.pylab as pl
from matplotlib import colors, cm
import matplotlib
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib import cm, colors
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
L_space = np.array([1,2,4,8,16,32,64])
path ="rec_lattice_bounds/column_migration/sim_data_mig/"
rough_1_files = glob.glob(path+"rough_1_*.txt")
rough_2_files = glob.glob(path+"rough_2_*.txt")
rough_4_files = glob.glob(path+"rough_4_*.txt")
rough_8_files = glob.glob(path+"rough_8_*.txt")
rough_16_files = glob.glob(path+"rough_16_*.txt")
rough_32_files = glob.glob(path+"rough_32_*.txt")
rough_64_files = glob.glob(path+"rough_64_*.txt")
het_files = glob.glob(path+"het*.txt")
K_var = [1000000]
K_str = ["K"+str(K)+"_" for K in K_var]
B_var = [0, 2.5, 3.5, 6, 10 ]
B_str = ["B"+str(B)+"_" for B in B_var]
n_files =100
rough_data_mig=np.zeros((5,1,n_files,7,1000,3))
het_data_mig=np.zeros((5,1,n_files,1000,2))
for Kn, Ki,K in zip(range(len(K_var)),K_var,K_str):
for Bn,Bi,B in zip(range(len(B_var)),B_var,B_str):
#print('hi')
rough_sub_files_1=[]
rough_sub_files_2=[]
rough_sub_files_4=[]
rough_sub_files_8=[]
rough_sub_files_16=[]
rough_sub_files_32=[]
rough_sub_files_64=[]
het_sub_files = []
#for fr10,fr20,fr30,fr40,fr50,fr60,fr70 in zip(rough_10_files,rough_20_files,rough_30_files,rough_40_files,rough_50_files,rough_50_files,rough_50_files):
# print(fr10)
# if K in fr10 and B in fr10:
# rough_sub_files_10.append(fr10)
for fr1,fr2,fr4,fr8,fr16,fr32,fr64,fh in zip(rough_1_files,rough_2_files,rough_4_files,rough_8_files,rough_16_files,rough_32_files,rough_64_files,het_files):
#print(fr10)
if K in fr1 and B in fr1:
rough_sub_files_1.append(fr1)
if K in fr2 and B in fr2:
rough_sub_files_2.append(fr2)
if K in fr4 and B in fr4:
rough_sub_files_4.append(fr4)
if K in fr8 and B in fr8:
rough_sub_files_8.append(fr8)
if K in fr16 and B in fr16:
rough_sub_files_16.append(fr16)
if K in fr32 and B in fr32:
rough_sub_files_32.append(fr32)
if K in fr64 and B in fr64:
rough_sub_files_64.append(fr64)
if K in fh and B in fh:
het_sub_files.append(fh)
# for i, fr10,fr20,fr30,fr40,fr50,fr60,fr70,fr80,fr90,fr100,fr110,fr120,fr130,fr140,fr150,fr160,fr170,fr180 in zip(range(len(rough_sub_files_10)),rough_sub_files_10,rough_sub_files_20,rough_sub_files_30,rough_sub_files_40,
# rough_sub_files_50,rough_sub_files_60,rough_sub_files_70,rough_sub_files_80,
# rough_sub_files_90,rough_sub_files_100,rough_sub_files_110,
# rough_sub_files_120,rough_sub_files_130,rough_sub_files_140,rough_sub_files_150,rough_sub_files_160,rough_sub_files_170,rough_sub_files_180):
# rough_data[Bn,Kn,i,0]= np.loadtxt(fr10,delimiter = ', ')
# rough_data[Bn,Kn,i,1]= np.loadtxt(fr20,delimiter = ', ')
# rough_data[Bn,Kn,i,2]= np.loadtxt(fr30,delimiter = ', ')
# rough_data[Bn,Kn,i,3]= np.loadtxt(fr40,delimiter = ', ')
# rough_data[Bn,Kn,i,4]= np.loadtxt(fr50,delimiter = ', ')
# rough_data[Bn,Kn,i,5]= np.loadtxt(fr60,delimiter = ', ')
# rough_data[Bn,Kn,i,6]= np.loadtxt(fr70,delimiter = ', ')
# rough_data[Bn,Kn,i,7]= np.loadtxt(fr80,delimiter = ', ')
# rough_data[Bn,Kn,i,8]= np.loadtxt(fr90,delimiter = ', ')
# rough_data[Bn,Kn,i,9]= np.loadtxt(fr100,delimiter = ', ')
# rough_data[Bn,Kn,i,10]= np.loadtxt(fr110,delimiter = ', ')
# rough_data[Bn,Kn,i,11]= np.loadtxt(fr120,delimiter = ', ')
# rough_data[Bn,Kn,i,12]= np.loadtxt(fr130,delimiter = ', ')
# rough_data[Bn,Kn,i,13]= np.loadtxt(fr140,delimiter = ', ')
# rough_data[Bn,Kn,i,14]= np.loadtxt(fr150,delimiter = ', ')
# rough_data[Bn,Kn,i,15]= np.loadtxt(fr160,delimiter = ', ')
# rough_data[Bn,Kn,i,16]= np.loadtxt(fr170,delimiter = ', ')
# rough_data[Bn,Kn,i,17]= np.loadtxt(fr180,delimiter = ', ')
for i,fr1,fr2,fr4,fr8,fr16,fr32,fr64,fh in zip(range(n_files),rough_sub_files_1,rough_sub_files_2,rough_sub_files_4,rough_sub_files_8,rough_sub_files_16,rough_sub_files_32,rough_sub_files_64,het_sub_files):
#arr = np.loadtxt(fr,delimiter = ', ',skiprows=1)
#rough_data[Bn,Kn,i,:,0]= np.concatenate((np.repeat(np.array([[0,10,20,30,40,50]]),1000,axis=0),
# np.array([arr[:,-1]-8,arr[:,-1]-6,arr[:,-1]-4,arr[:,-1]-2,arr[:,-1]]).T),axis=1)
#rough_data[Bn,Kn, i,:,1] = arr[:,1:-1]
#print(i)
rough_data_mig[Bn,Kn,i,0]= np.loadtxt(fr1,delimiter = ', ')
rough_data_mig[Bn,Kn,i,1]= np.loadtxt(fr2,delimiter = ', ')
rough_data_mig[Bn,Kn,i,2]= np.loadtxt(fr4,delimiter = ', ')
rough_data_mig[Bn,Kn,i,3]= np.loadtxt(fr8,delimiter = ', ')
rough_data_mig[Bn,Kn,i,4]= np.loadtxt(fr16,delimiter = ', ')
rough_data_mig[Bn,Kn,i,5]= np.loadtxt(fr32,delimiter = ', ')
rough_data_mig[Bn,Kn,i,6]= np.loadtxt(fr64,delimiter = ', ')
het_data_mig[Bn,Kn,i]= np.loadtxt(fh,delimiter = ', ')
plt.plot(np.mean(rough_data_mig[-1,0,:,0,:,0],axis=0),np.mean(rough_data_mig[-1,0,:,:,:,2],axis=0).T/L_space)
path ="rec_lattice_bounds/column_migration/sim_data_nomig/"
rough_1_files = glob.glob(path+"rough_1_*.txt")
rough_2_files = glob.glob(path+"rough_2_*.txt")
rough_4_files = glob.glob(path+"rough_4_*.txt")
rough_8_files = glob.glob(path+"rough_8_*.txt")
rough_16_files = glob.glob(path+"rough_16_*.txt")
rough_32_files = glob.glob(path+"rough_32_*.txt")
rough_64_files = glob.glob(path+"rough_64_*.txt")
het_files = glob.glob(path+"het*.txt")
K_var = [1000000]
K_str = ["K"+str(K)+"_" for K in K_var]
B_var = [0, 2.5, 3.5, 6, 10 ]
B_str = ["B"+str(B)+"_" for B in B_var]
n_files =100
rough_data_nomig=np.zeros((5,1,n_files,7,1000,3))
het_data_nomig=np.zeros((5,1,n_files,1000,2))
rough_diffs=np.zeros((5,5))
def diff_fit(x, a, b):
    return 2*a*x + b

for Kn, Ki, K in zip(range(len(K_var)), K_var, K_str):
    for Bn, Bi, B in zip(range(len(B_var)), B_var, B_str):
        rough_sub_files_1 = []
        rough_sub_files_2 = []
        rough_sub_files_4 = []
        rough_sub_files_8 = []
        rough_sub_files_16 = []
        rough_sub_files_32 = []
        rough_sub_files_64 = []
        het_sub_files = []
        for fr1, fr2, fr4, fr8, fr16, fr32, fr64, fh in zip(rough_1_files, rough_2_files, rough_4_files, rough_8_files, rough_16_files, rough_32_files, rough_64_files, het_files):
            if K in fr1 and B in fr1:
                rough_sub_files_1.append(fr1)
            if K in fr2 and B in fr2:
                rough_sub_files_2.append(fr2)
            if K in fr4 and B in fr4:
                rough_sub_files_4.append(fr4)
            if K in fr8 and B in fr8:
                rough_sub_files_8.append(fr8)
            if K in fr16 and B in fr16:
                rough_sub_files_16.append(fr16)
            if K in fr32 and B in fr32:
                rough_sub_files_32.append(fr32)
            if K in fr64 and B in fr64:
                rough_sub_files_64.append(fr64)
            if K in fh and B in fh:
                het_sub_files.append(fh)
        for i, fr1, fr2, fr4, fr8, fr16, fr32, fr64, fh in zip(range(n_files), rough_sub_files_1, rough_sub_files_2, rough_sub_files_4, rough_sub_files_8, rough_sub_files_16, rough_sub_files_32, rough_sub_files_64, het_sub_files):
            rough_data_nomig[Bn,Kn,i,0] = np.loadtxt(fr1, delimiter=', ')
            rough_data_nomig[Bn,Kn,i,1] = np.loadtxt(fr2, delimiter=', ')
            rough_data_nomig[Bn,Kn,i,2] = np.loadtxt(fr4, delimiter=', ')
            rough_data_nomig[Bn,Kn,i,3] = np.loadtxt(fr8, delimiter=', ')
            rough_data_nomig[Bn,Kn,i,4] = np.loadtxt(fr16, delimiter=', ')
            rough_data_nomig[Bn,Kn,i,5] = np.loadtxt(fr32, delimiter=', ')
            rough_data_nomig[Bn,Kn,i,6] = np.loadtxt(fr64, delimiter=', ')
            het_data_nomig[Bn,Kn,i] = np.loadtxt(fh, delimiter=', ')
        for i in range(5):
            diff_data = np.mean(rough_data_nomig[Bn,Kn,:,i+1,500:,2], axis=0)/L_space[-1]
            pars, cov = curve_fit(f=diff_fit, xdata=np.mean(rough_data_nomig[Bn,Kn,:,i+1,500:,0], axis=0), ydata=diff_data, p0=[0.01, 0.01], bounds=(-np.inf, np.inf))
            rough_diffs[Bn,i] = pars[0]
rough_diffs[0,:]
diff_data =np.mean(rough_data_nomig[2,0,:,6,:,2],axis=0)/L_space[0]
plt.plot(diff_data)
L_space = np.array([1,2,4,8,16,32,64])
plt.plot(np.mean(rough_data_nomig[0,0,:,0,:,0],axis=0),np.mean(rough_data_nomig[-1,0,:,:,:,2],axis=0).T/L_space)
path ="rec_lattice_bounds/column_migration/sim_data_1D/"
vel_files = glob.glob(path+"pop_*.txt")
n_files =101
D1_Data = np.zeros((5,n_files,1000,2))
K_var = [1000000]
K_str = ["K"+str(K)+"_" for K in K_var]
D1_diffs=[]
B_var = [0, 2.5, 3.5, 6, 10 ]
B_str = ["B"+str(B)+"_" for B in B_var]
def diff_fit(x, a, b):
    return 2*a*x + b

for Bn, Bi, B in zip(range(len(B_var)), B_var, B_str):
    vel_sub_files = []
    for f in vel_files:
        if B in f:
            vel_sub_files.append(f)
    for i, f in enumerate(vel_sub_files):
        D1_Data[Bn,i] = np.loadtxt(f, delimiter=', ')
    diff_data = np.std(D1_Data[Bn,:,400:,1], axis=0)**2
    pars, cov = curve_fit(f=diff_fit, xdata=np.mean(D1_Data[Bn,:,400:,0], axis=0), ydata=diff_data, p0=[0.01, 0.01], bounds=(-np.inf, np.inf))
    print(pars[0])
    D1_diffs.append(pars[0])
rough_diffs[:-1,-1]
plt.plot(np.std(D1_Data[-1:,:,:,1],axis=1).T)
plt.plot(np.std(D1_Data[0,:,:,1],axis=0))
plt.plot(np.std(D1_Data[1,:,:,1],axis=0))
plt.plot(np.std(D1_Data[2,:,:,1],axis=0))
plt.plot(np.std(D1_Data[3,:,:,1],axis=0))
B
plt.plot(D1_diffs[:-1])
plt.plot(rough_diffs[:-1,-1])
plt.yscale('log')
D1_diffs
rough_diffs[:-1,-1]
```
# Anxiety/Confidence Analysis
Steps:
1. Using seed words for the anxiety class and confidence class, use word2vec to find the 1000 most "anxious" and 1000 most "confident" words
2. For each class of documents (either topic, province, or topic/province), calculate the proportion of anxious words used in that class relative to the proportion of anxious words used in total
3. Normalize each class's proportion by the total across classes and plot the anxiety and confidence scores side by side for comparison
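Step 2 can be sketched in a few lines of plain Python. The lexicon and documents below are toy placeholders, not the real word2vec-derived lists:

```python
# Sketch: proportion of anxiety-lexicon words in one class of documents.
# Toy lexicon and toy documents; the real pipeline uses 1000 word2vec terms.
anxious_lexicon = {"worry", "fear", "risk"}

def anxious_proportion(documents):
    # documents: list of token lists belonging to one class (topic/province)
    tokens = [tok for doc in documents for tok in doc]
    hits = sum(tok in anxious_lexicon for tok in tokens)
    return hits / len(tokens)

docs = [["no", "risk", "here"], ["fear", "and", "worry", "everywhere"]]
print(anxious_proportion(docs))  # 3 anxious tokens out of 7
```

The per-class proportions are then normalized by their sum across classes, as in the plotting code further down.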
```
from utils import DTYPE, PARSE_DATES, PROV_CONSOLIDATION, CONSOLIDATED_PROVINCES, CONVERTERS, ANCHOR_NAMES, PROVINCE_COLOR_MAP
from tqdm.auto import tqdm
import plotly.graph_objects as go
import plotly.express as px
import pandas as pd
import numpy as np
import glob
tqdm.pandas()
prov_map = lambda x : x if x not in PROV_CONSOLIDATION else PROV_CONSOLIDATION[x]
total_df = pd.read_csv("../data/processed_data/total_tweet_dataset.csv",header=0,dtype=DTYPE,converters=CONVERTERS,parse_dates=PARSE_DATES)
total_df = total_df.set_index("id").sort_values("created_at")
total_df = total_df[~total_df.index.duplicated()]
total_df["created_at"] = total_df["created_at"].dt.to_period("D").dt.to_timestamp('s')
total_df["province"] = total_df["province"].apply(prov_map)
total_df = total_df[total_df.clean_text.notnull()]
total_df = total_df[total_df["province"].isin(CONSOLIDATED_PROVINCES)]
print(len(total_df))
from kaleido.scopes.plotly import PlotlyScope
scope = PlotlyScope()
vis_args = {
"template": "simple_white",
"font":{"size": 23},
"width": 1000
}
```
## Bootstrapping
start with seed words for the anxious and confident classes
> We retain the 1,000 terms that are semantically closest, on average, to the anxiety seed words, and the 1,000 terms closest to the confidence seed words.
use word2vec and cosine similarity to build a big vocab of anxious/confident words
word embeddings are derived from: https://github.com/RaRe-Technologies/gensim-data
```
from gensim.models import KeyedVectors
from text_cleaning import clean_text
import gensim.downloader as api
model = "glove-twitter-25"
anxious_fp = f"../data/external_datasets/{model}-anxious_words.csv"
confident_fp = f"../data/external_datasets/{model}-twitter-confident_words.csv"
# word_vectors = api.load(model)
# word_vectors.save(f"../models/{model}.kv")
word_vectors = KeyedVectors.load(f"../models/{model}.kv", mmap='r')
anxious_seed = ["risk", "threat", "concerned", "doubt", "worry", "fear", "danger", "tension", "stress", "anxious", "upset", "alarming", "worry", "hazard", "uncertain", "scare", "unknown", "alarm", "tense", "anxiety", "distress", "nervous", "risky", "troubled", "threatening", "panic", "fearful", "frighten", "unrest", "doubtful"]
confident_seed = ["ease", "protect", "sure", "security", "safety", "protection", "confidence", "safe", "trust", "hope", "guarantee", "assurance", "certainty", "secure", "confident", "known", "guaranteed", "reassure", "predictable", "quiet", "hopeful", "optimistic", "bold", "convinced", "optimism", "content", "reassurance", "calm", "faithful", "comfort"]
def seed_similarity(word, seed):
    # mean cosine similarity between a word and all seed words
    return np.array([word_vectors.similarity(word, seed_word) for seed_word in seed]).mean()
vocab = pd.DataFrame({"word": list(word_vectors.vocab.keys())})
vocab["anxious_similarity"] = vocab["word"].progress_apply(lambda x : seed_similarity(x,anxious_seed))
vocab["confident_similarity"] = vocab["word"].progress_apply(lambda x : seed_similarity(x,confident_seed))
vocab["word"] = vocab["word"].progress_apply(clean_text)
vocab = vocab[vocab["word"].astype(bool)]
vocab[["word","anxious_similarity"]].sort_values("anxious_similarity",ascending=False).head(1000).to_csv(anxious_fp)
vocab[["word","confident_similarity"]].sort_values("confident_similarity",ascending=False).head(1000).to_csv(confident_fp)
anxious_values,confident_values = pd.read_csv(anxious_fp,index_col=0)["word"],pd.read_csv(confident_fp,index_col=0)["word"]
```
Likewise, we have a lexicon of words representing the inverse emotional state, confidence/security.
One of the many benefits of the lexicon expansion approach is that we account for the various ways with which people may express anxiety in natural language. Moreover, using a model trained on examples from real-life social media means that we account for the particular register of discussions taking place on the web.
To devise lexical measures of anxiety, we define the likelihood of a lexicon word appearing in a category of documents as:
$P(L|c)=\frac{\sum_{w\in L}count(w,c)}{\sum_{w\in V}count(w,c)}$
where $L$ is the lexicon, $c$ is the document category, and $V$ is the full vocabulary.
```
from utils import ANCHOR_NAMES
text_data = total_df[["clean_text","province","cluster"]]
text_data["clean_text"] = text_data["clean_text"].apply(lambda x : x.split())
def potts_score(text, seed):
    text = text.explode("clean_text")
    text = pd.DataFrame(text.value_counts()).reset_index().set_index("clean_text")
    seed_counts = text[text.index.isin(seed)]
    return float(seed_counts.sum()/text.sum())

cluster_anxiety_score, cluster_confidence_score = [], []
for clus in sorted(total_df["cluster"].unique()):
    iso = text_data[text_data["cluster"]==clus][["clean_text"]]
    anx, conf = potts_score(iso, anxious_values), potts_score(iso, confident_values)
    cluster_anxiety_score.append(anx)
    cluster_confidence_score.append(conf)
cluster_anxiety_score = np.array(cluster_anxiety_score)/sum(cluster_anxiety_score)
cluster_confidence_score = np.array(cluster_confidence_score)/sum(cluster_confidence_score)
cluster_scores = pd.DataFrame({"cluster":sorted(total_df["cluster"].unique()),
"cluster_name": ANCHOR_NAMES + [f"Overflow {i+1}" for i in range(5)],
"anxiety_score": cluster_anxiety_score,
"confidence_score": cluster_confidence_score})
cluster_scores
fig = go.Figure(data=[go.Bar(name='Anxiety Potts Score', x=cluster_scores["cluster_name"], y=cluster_scores["anxiety_score"]),
go.Bar(name='Confidence Potts Score', x=cluster_scores["cluster_name"], y=cluster_scores["confidence_score"]),
])
# Change the bar mode
fig.update_layout(barmode='group',**vis_args)
# fp = "../visualizations/sentiment_analysis/anxiety_confidence-topic"
# with open(f"{fp}.pdf", "wb") as f:
# f.write(scope.transform(fig, format="pdf"))
fig.show()
province_anxiety_score,province_confidence_score = [],[]
for prov in sorted(total_df["province"].unique()):
    iso = text_data[text_data["province"]==prov][["clean_text"]]
    anx, conf = potts_score(iso, anxious_values), potts_score(iso, confident_values)
    province_anxiety_score.append(anx)
    province_confidence_score.append(conf)
province_anxiety_score = np.array(province_anxiety_score)/sum(province_anxiety_score)
province_confidence_score = np.array(province_confidence_score)/sum(province_confidence_score)
province_scores = pd.DataFrame({"province":sorted(total_df["province"].unique()),
"anxiety_score": province_anxiety_score,
"confidence_score": province_confidence_score})
fig = go.Figure(data=[go.Bar(name='Anxiety Potts Score', x=province_scores["province"], y=province_scores["anxiety_score"]),
go.Bar(name='Confidence Potts Score', x=province_scores["province"], y=province_scores["confidence_score"]),
])
# Change the bar mode
fig.update_layout(barmode='group',**vis_args)
# fp = "../visualizations/sentiment_analysis/anxiety_confidence-province"
# with open(f"{fp}.pdf", "wb") as f:
# f.write(scope.transform(fig, format="pdf"))
fig.show()
```
```
import numpy as np
import sys
sys.path.append('../external/Transformer_modules/')
sys.path.append('../src/')
import torch, torch.nn as nn
import torch.nn.functional as F
from modules import MultiHeadAttention, PositionwiseFeedForward
import mnist
%load_ext autoreload
%autoreload 2
import torch, torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
class GlobalAveragePooling(nn.Module):
    def __init__(self, dim=-1):
        super(self.__class__, self).__init__()
        self.dim = dim

    def forward(self, x):
        return x.mean(dim=self.dim)

class Discriminator(nn.Module):
    def __init__(self, in_dim, hidden_dim=100, ffn_dim=200, n_head=8):
        super(Discriminator, self).__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        nn.init.xavier_normal_(self.fc1.weight)
        nn.init.constant_(self.fc1.bias, 0.0)
        self.mha_1 = MultiHeadAttention(n_head=n_head, d_model=hidden_dim)
        self.ffn_1 = PositionwiseFeedForward(hidden_dim, ffn_dim, use_residual=False)
        self.gl_1 = GlobalAveragePooling(dim=1)
        self.fc2 = nn.Linear(hidden_dim, 10)
        nn.init.xavier_normal_(self.fc2.weight)
        nn.init.constant_(self.fc2.bias, 0.0)

    def forward(self, x):
        h1 = F.relu(self.fc1(x))
        h2 = self.mha_1(h1)
        h3 = self.ffn_1(h2)
        score = self.fc2(self.gl_1(h3))
        return score
x_train = mnist.make_clouds(mnist.x_train,500)
y_train = mnist.y_train
x_val = mnist.make_clouds(mnist.x_val,500)
y_val = mnist.y_val
model = Discriminator(2).cuda(0)
x_test = mnist.mnist_test
def compute_loss(X_batch, y_batch):
    X_batch = Variable(torch.FloatTensor(X_batch)).cuda(0)
    y_batch = Variable(torch.LongTensor(y_batch)).cuda(0)
    logits = model(X_batch)
    return F.cross_entropy(logits, y_batch).mean()

def iterate_minibatches(X, y, batchsize):
    indices = np.random.permutation(np.arange(len(X)))
    for start in range(0, len(indices), batchsize):
        ix = indices[start: start + batchsize]
        yield X[ix], y[ix]
opt = torch.optim.Adam(model.parameters())
import time
num_epochs = 150 # total amount of full passes over training data
batch_size = 200
train_loss = []
val_accuracy = []
for epoch in range(num_epochs):
    start_time = time.time()
    model.train(True)
    for X_batch, y_batch in iterate_minibatches(x_train, y_train, batchsize=batch_size):
        # train on batch
        loss = compute_loss(X_batch, y_batch)
        loss.backward()
        opt.step()
        opt.zero_grad()
        train_loss.append(loss.cpu().detach().numpy())
        del loss
    # And a full pass over the validation data:
    model.train(False)  # disable dropout / use averages for batch_norm
    for X_batch, y_batch in iterate_minibatches(x_val, y_val, batch_size):
        logits = model(Variable(torch.FloatTensor(X_batch)).cuda())
        y_pred = logits.max(1)[1].cpu().detach().numpy()
        val_accuracy.append(np.mean(y_batch == y_pred))
        del logits
    # Then we print the results for this epoch:
    print("Epoch {} of {} took {:.3f}s".format(
        epoch + 1, num_epochs, time.time() - start_time))
    print("  training loss (in-iteration): \t{:.6f}".format(
        np.mean(train_loss[-len(x_train) // batch_size :])))
    print("  validation accuracy: \t\t\t{:.2f} %".format(
        np.mean(val_accuracy[-len(x_val) // batch_size :]) * 100))
```
# Motivation: Convolutional Neural Networks
The information we extract from sensory inputs is often determined by its context. With images, we can assume that nearby pixels are closely related and that their collective information is most relevant when taken as a unit. Conversely, we can assume that individual pixels do not convey information related to one another. For example, to recognize letters or digits we need to analyze the dependency between nearby pixels, because they determine the shape of the element. That is how we can tell the difference between, say, a 0 and a 1. The pixels of an image are organized in a two-dimensional grid, and if the image is not grayscale, we have a third dimension for the color maps. Alternatively, a magnetic resonance image (MRI) also uses three-dimensional space. You may recall that, until now, if we wanted to feed an image to a neural network, we had to reshape it from a two-dimensional array into a one-dimensional one. CNNs are designed to address this problem: how to make information coming from neurons that are closer together more relevant than information coming from neurons that are farther apart. In visual problems, this translates into making neurons process information coming from pixels that are near one another. With CNNs, we will be able to feed inputs with one, two, or three dimensions, and the network will produce an output of the same dimensionality. As we will see later, this gives us several advantages.
When we tried to classify the CIFAR-10 images using a network of fully connected layers, we had little success. One of the reasons is that such networks overfit. Consider the first hidden layer of that network, which has 1,024 neurons. The input size of the image is 32x32x3 = 3,072. Therefore, the first hidden layer had a total of 3,072 * 1,024 = 3,145,728 weights. That is not a small number! Not only is it easy to overfit such a large network, it is also memory-inefficient. Furthermore, each input neuron (or pixel) is connected to every neuron in the hidden layer. Because of this, the network cannot take advantage of the spatial proximity of the pixels, since it has no way of knowing which pixels are near one another. By contrast, CNNs have properties that provide an effective solution to these problems:
- They connect neurons that correspond only to neighboring pixels of the image. In this way, neurons are "forced" to receive input from other neurons that are spatially close. This also reduces the number of weights, since not all neurons are interconnected.
- A CNN uses parameter sharing. In other words, a limited number of weights is shared among all the neurons of a layer. This further reduces the number of weights and helps combat overfitting. It may sound confusing, but it will become clear in the next section.
The convolutional layer is the most important building block of a CNN. It consists of a set of filters (also known as kernels or feature detectors), where each filter is applied across all areas of the input data. A filter is defined by a set of learnable weights. As a nod to the topic at hand, the following image illustrates this nicely:

A two-dimensional input layer of a neural network is shown. For the sake of simplicity, we will assume this is the input layer, but it can be any layer of the network. As we have seen in previous chapters, each input neuron represents the color intensity of a pixel (we will assume a grayscale image for simplicity). First, we will apply a 3x3 filter in the top-right corner of the image. Each input neuron is associated with a single weight of the filter. It has nine weights, because of the nine input neurons, but, in general, the size is arbitrary (2x2, 4x4, 5x5, and so on). The output of the filter is a weighted sum of its inputs (the activations of the input neurons). Its purpose is to highlight a specific feature in the input, for example, an edge or a line. The group of nearby neurons that participate in the input is called the receptive field. In the context of the network, the filter output represents the activation value of a neuron in the next layer. The neuron will be active if the feature is present at this spatial location.
For each new neuron, we will slide the filter across the input image and compute its output (the weighted sum) with each new set of input neurons. In the following diagram, you can see how to compute the activations for the next two positions (one pixel to the right):

- By "sliding", we mean that the filter weights do not change across the image. In effect, we will use the same nine filter weights to compute the activations of all output neurons, each time with a different set of input neurons. We call this parameter sharing, and we do it for two reasons:
- By reducing the number of weights, we reduce the memory footprint and prevent overfitting. The filter highlights a specific feature. We can assume that this feature is useful regardless of its position in the image. By sharing weights, we guarantee that the filter will be able to locate the feature across the whole image.
So far, we have described the one-to-one slice relationship, where the output is a single slice that takes its input from another slice (or an image). This works well for grayscale, but how do we adapt it for color images (an n-to-1 relationship)? Once again, it is simple! First, we will split the image into color channels. In the case of RGB, that would be three. We can think of each color channel as a depth slice, where the values are the pixel intensities for the given color (R, G, or B), as shown in the following example:
The combination of slices is called an input volume with a depth of 3. A unique 3x3 filter is applied to each slice. The activation of an output neuron is just the weighted sum of the filters applied across all slices. In other words, we will combine the three filters into one big 3x3x3 + 1 filter with 28 weights (we added depth and a single bias). Then, we will compute the weighted sum by applying the relevant weights to each slice.
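A single output activation of the combined 3x3x3 + 1 filter can be sketched as follows; the random values stand in for real pixel intensities and learned weights:

```python
import numpy as np

rgb_patch = np.random.rand(3, 3, 3)   # (channels, height, width) receptive field
filt = np.random.rand(3, 3, 3)        # one filter: a 3x3 patch of weights per input slice
bias = 0.1

# activation = weighted sum over ALL slices, plus one bias
activation = np.sum(rgb_patch * filt) + bias

n_weights = filt.size + 1             # 3*3*3 + 1 = 28, as in the text
print(n_weights)
```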
- The input and output feature maps have different dimensions. Let's say we have an input layer with size (width, height) and a filter with dimensions (filter_w, filter_h). After applying the convolution, the dimensions of the output layer are (width - filter_w + 1, height - filter_h + 1).
As we mentioned, a filter highlights a specific feature, such as edges or lines. But, in general, many features are important and we will be interested in all of them. How do we highlight them all? As usual, it's simple. We will apply multiple filters across the set of input slices. Each filter generates a unique output slice, which highlights the feature detected by that filter (an n-to-m relationship). An output slice can receive input from:
- All input slices, which is the standard for convolutional layers. In this scenario, a single output slice is a case of the n-to-1 relationship we described before. With multiple output slices, the relationship becomes n-to-m. In other words, each input slice contributes to the output of every output slice.
- A single input slice. This operation is known as depthwise convolution. It is a kind of reversal of the previous case. In its simplest form, we apply a filter over a single input slice to produce a single output slice. This is a case of the one-to-one relationship we described in the previous section. But we can also specify a channel multiplier (an integer m), where we apply m filters over a single input slice to produce m output slices. This is a case of a 1-to-m relationship. The total number of output slices is n * m.
Let's denote the width and height of the filter with Fw and Fh, the depth of the input volume with D, and the depth of the output volume with M. Then, we can compute the total number of weights W in a convolutional layer with the following equation:
\begin{equation}
W=(D*F_w *F_h+1)*M
\end{equation}
Let's say we have three slices and we want to apply four 5x5 filters to them. The convolutional layer will then have a total of (3x5x5 + 1) * 4 = 304 weights, and four output slices (an output volume with a depth of 4), with one bias per slice. The filter for each output slice will have three 5x5 filter patches, one for each of the three input slices, and one bias, for a total of 3x5x5 + 1 = 76 weights. The combination of the output maps is called an output volume with a depth of four.
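The weight-count formula can be checked directly against these numbers:

```python
def conv_weights(depth, f_w, f_h, n_filters):
    # W = (D * Fw * Fh + 1) * M, with one bias per output slice
    return (depth * f_w * f_h + 1) * n_filters

print(conv_weights(3, 5, 5, 4))   # 304 weights in total
print(conv_weights(3, 5, 5, 1))   # 76 weights per output slice
```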
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm

def conv(image, im_filter):
    """
    :param image: grayscale image as a 2-dimensional numpy array
    :param im_filter: 2-dimensional numpy array
    """
    # input dimensions
    height = image.shape[0]
    width = image.shape[1]
    # output image with reduced dimensions
    im_c = np.zeros((height - len(im_filter) + 1, width - len(im_filter) + 1))
    # iterate over all rows and columns
    for row in range(len(im_c)):
        for col in range(len(im_c[0])):
            # apply the filter
            for i in range(len(im_filter)):
                for j in range(len(im_filter[0])):
                    im_c[row, col] += image[row + i, col + j] * im_filter[i][j]
    # fix out-of-bounds values
    im_c[im_c > 255] = 255
    im_c[im_c < 0] = 0
    # plot images for comparison
    plt.figure()
    plt.imshow(image, cmap=cm.Greys_r)
    plt.show()
    plt.imshow(im_c, cmap=cm.Greys_r)
    plt.show()
import requests
from PIL import Image
from io import BytesIO

# Load the image
url = "https://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Commander_Eileen_Collins_-_GPN-2000-001177.jpg/382px-Commander_Eileen_Collins_-_GPN-2000-001177.jpg?download"
resp = requests.get(url)
image_rgb = np.asarray(Image.open(BytesIO(resp.content)).convert("RGB"))

# Convert it to grayscale
image_grayscale = np.mean(image_rgb, axis=2).astype(np.uint8)

# Apply a blur filter
blur = np.full([10, 10], 1. / 100)
conv(image_grayscale, blur)

# Sobel edge-detection filters
sobel_x = [[-1, -2, -1],
           [0, 0, 0],
           [1, 2, 1]]
conv(image_grayscale, sobel_x)

sobel_y = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
conv(image_grayscale, sobel_y)
```
# Stride and Padding in Convolutional Layers
So far, we have assumed that sliding the filter happens one pixel at a time, but that is not always the case. We can slide the filter across multiple positions. This parameter of convolutional layers is called the stride. Usually, the stride is the same across all dimensions of the input. In the following diagram, we can see a convolutional layer with a stride of 2:

By using a stride larger than 1, we reduce the size of the output slice. In the previous section, we introduced a simple formula for the output size, which included the sizes of the input and the kernel. Now, we will extend it to also include the stride: ((width - filter_w) / stride_w + 1, (height - filter_h) / stride_h + 1).
For example, the output size of a square slice generated by a 28x28 input image, convolved with a 3x3 filter with stride 1, would be 28 - 3 + 1 = 26. But with stride 2, we get (28 - 3) / 2 + 1 = 13. The main effect of the larger stride is an increase in the receptive field of the output neurons. Let's explain this with an example. If we use stride 2, the size of the output slice will be roughly four times smaller than the input. In other words, one output neuron will "cover" an area that is four times larger compared to the input neurons. The neurons in the following layers will gradually capture input from larger regions of the input image. This is important, because it allows them to detect larger and more complex features of the input.
The convolution operations we have discussed so far produce an output smaller than the input. But, in practice, it is often desirable to control the size of the output. We can solve this by padding the edges of the input slice with rows and columns of zeros before the convolution operation. The most common way to use padding is to produce an output with the same dimensions as the input. In the following diagram, we can see a convolutional layer with padding of 1:

The white neurons represent the padding. The input and output slices have the same dimensions (dark neurons). This is the most common way to use padding. The newly padded zeros participate in the convolution operation with the slice, but they do not affect the result. The reason is that, even though the padded areas are connected with weights to the next layer, we always multiply those weights by the padded value, which is 0. We will now add padding to the output size formula. Let the size of the input slice be I = (Iw, Ih), the size of the filter F = (Fw, Fh), the stride S = (Sw, Sh), and the padding P = (Pw, Ph). Then the size O = (Ow, Oh) of the output slice is given by the following equations:
\begin{equation}
O_w=\frac{I_w+2P_w-F_w}{S_w}+1
\end{equation}
\begin{equation}
O_h=\frac{I_h+2P_h-F_h}{S_h}+1
\end{equation}
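These formulas translate directly into code (integer floor division matches how frameworks round down); the values reproduce the stride examples from the previous section:

```python
def conv_output_size(i, f, s=1, p=0):
    # O = (I + 2P - F) / S + 1, per spatial dimension
    return (i + 2 * p - f) // s + 1

print(conv_output_size(28, 3, s=1))        # 26
print(conv_output_size(28, 3, s=2))        # 13
print(conv_output_size(28, 3, s=1, p=1))   # 28: "same" padding keeps the size
```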
# Pooling Layers
In the previous section, we explained how to increase the receptive field of neurons by using a stride larger than 1. But we can also do this with the help of pooling layers. A pooling layer splits the input slice into a grid, where each grid cell represents a receptive field of several neurons (just as a convolutional layer does). Then, a pooling operation is applied over each cell of the grid. Different types of pooling layers exist. Pooling layers do not change the depth of the volume, because the pooling operation is performed independently on each slice.
- Max pooling: the most popular form of pooling. The max pooling operation takes the neuron with the highest activation value in each local receptive field (grid cell) and propagates only that value forward. In the following figure, we can see an example of max pooling with a 2x2 receptive field:

- Average pooling: another type of pooling, where the output of each receptive field is the mean value of all the activations within the field. The following is an example of average pooling:

Pooling layers are defined by two parameters:
- Stride, which is the same as with convolutional layers
- Receptive field size, which is the equivalent of the filter size in convolutional layers
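Both operations can be sketched in NumPy with a 2x2 receptive field and stride 2 (a didactic version; real frameworks implement pooling far more efficiently):

```python
import numpy as np

def pool2x2(x, op=np.max):
    # 2x2 pooling with stride 2: each grid cell collapses to one value
    h, w = x.shape
    out = np.zeros((h // 2, w // 2))
    for r in range(0, h - 1, 2):
        for c in range(0, w - 1, 2):
            out[r // 2, c // 2] = op(x[r:r+2, c:c+2])
    return out

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 1., 2., 3.],
              [1., 0., 3., 4.]])
print(pool2x2(x))              # max pooling: [[4, 8], [1, 4]]
print(pool2x2(x, op=np.mean))  # average pooling: [[2.5, 6.5], [0.5, 3.0]]
```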
# Structure of a Convolutional Neural Network

Typically, we alternate one or more convolutional layers with a pooling layer. In this way, the convolutional layers can detect features at every level of receptive field size. The aggregated receptive field size of the deeper layers is larger than that of the layers at the beginning of the network. This allows them to capture more complex features from larger input regions. Let's illustrate this with an example. Imagine that the network uses 3x3 convolutions with stride 1 and 2x2 pooling with stride 2:
- las neuronas de la primera capa convolucional recibirán información de 3x3 píxeles de la imagen.
- Un grupo de neuronas de salida 2x2 de la primera capa tendrá un tamaño de campo receptivo combinado de 4x4 (debido a la zancada).
- Después de la primera operación de agrupación, este grupo se combinará en una sola neurona de la capa de agrupación.
- The second convolution operation takes input from 3x3 pooling neurons. Since the 4x4 image regions covered by neighbouring pooling neurons overlap by two pixels, it will receive input from a square with sides of 8 (or a total of 8x8 = 64) pixels of the input image.
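The combined receptive field can be tracked with the standard recurrence rf = rf + (k - 1) * jump, where jump is the product of all strides seen so far; because neighbouring fields overlap, the result is smaller than a naive per-neuron count would suggest. A small illustrative sketch for the layer sequence above:

```python
def receptive_fields(layers):
    """Receptive field size (in input pixels) after each (kernel, stride) layer."""
    rf, jump, history = 1, 1, []
    for kernel, stride in layers:
        rf += (kernel - 1) * jump  # new input pixels reached by this layer
        jump *= stride             # input-pixel distance between adjacent outputs
        history.append(rf)
    return history

# conv 3x3 stride 1 -> pool 2x2 stride 2 -> conv 3x3 stride 1
print(receptive_fields([(3, 1), (2, 2), (3, 1)]))  # [3, 4, 8]
```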
We use the convolutional layers to extract features from the input. The features detected by the deeper layers are highly abstract, but they are also not readable by humans. To solve this problem, we usually add one or more fully connected layers after the last convolutional/pooling layer. In this example, the last fully connected (output) layer will use softmax to estimate the class probabilities of the input. You can think of the fully connected layers as translators between the network's language (which we don't understand) and ours. The deeper convolutional layers usually have more filters (hence larger volume depth) compared to the initial ones. A feature detector at the beginning of the network works on a small receptive field. It can only detect a limited number of features, such as edges or lines, shared among all classes. On the other hand, a deeper layer would detect more complex and numerous features. For example, if we have several classes, such as cars, trees, or people, each will have its own set of features, such as tires, doors, leaves, and faces. This would require more feature detectors.
```
# Import keras pieces: Sequential, Dense, Activation,
# Convolution2D, MaxPooling2D, Flatten and np_utils
import warnings
warnings.filterwarnings("ignore")

# First: load the MNIST data
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Flatten
from keras.utils import np_utils

(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 28, 28, 1)
X_test = X_test.reshape(10000, 28, 28, 1)
Y_train = np_utils.to_categorical(Y_train, 10)
Y_test = np_utils.to_categorical(Y_test, 10)

# Second: build a convolutional neural network
model = Sequential([
    Convolution2D(filters=64, kernel_size=(3, 3), input_shape=(28, 28, 1)),
    Activation('sigmoid'),
    Convolution2D(filters=32, kernel_size=(3, 3)),
    Activation('sigmoid'),
    MaxPooling2D(pool_size=(4, 4)),
    Flatten(),
    Dense(64),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adadelta')
model.fit(X_train, Y_train, batch_size=100, epochs=5, validation_split=0.1, verbose=1)
score = model.evaluate(X_test, Y_test, verbose=1)
print('Test accuracy:', score[1])
```
# Data preprocessing
So far, we have fed the network with unmodified inputs. In the case of images, these are pixel intensities in the range [0, 255]. But that's not optimal. Imagine we have an RGB image where the intensities in one of the color channels are very high compared to the other two. When we feed the image to the network, the values of this channel will become dominant, diminishing the others. This could skew the results, because in reality every channel has the same importance. To solve this, we need to prepare (or normalize) the data before feeding it to the network. In practice, we'll use two types of normalization:
- Feature scaling: this operation scales all inputs into the range [0, 1]. For example, a pixel with intensity 125 would have a scaled value of 125/255 ≈ 0.49. Feature scaling is fast and easy to implement.
- Standard score: here each input x is transformed into (x − μ) / σ, where μ and σ are the mean and standard deviation of all the training data. They are usually computed separately for each input dimension. For example, in an RGB image, we would compute μ and σ for each channel. We should note that μ and σ must be computed only on the training data and then applied to the test data.
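Both normalizations can be sketched in a few lines of NumPy (synthetic data; note that μ and σ come from the training set only):

```python
import numpy as np

rng = np.random.default_rng(0)
# fake "training set": 100 RGB images of 8x8 pixels with intensities in [0, 255]
train = rng.integers(0, 256, size=(100, 8, 8, 3)).astype("float32")

# feature scaling into [0, 1]
scaled = train / 255.0

# standard score: mean and std per channel, computed on training data only
mu = train.mean(axis=(0, 1, 2))     # shape (3,), one value per channel
sigma = train.std(axis=(0, 1, 2))   # shape (3,)
standardized = (train - mu) / sigma

print(scaled.min(), scaled.max())   # stays within [0, 1]
```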
# Dropout
Dropout is a regularization technique that can be applied to the output of some of the network's layers. Dropout randomly and periodically removes some of the neurons (along with their input and output connections) from the network. During a training mini-batch, each neuron has a probability p of being stochastically dropped. This ensures that no neuron ends up relying too much on other neurons, and that each one "learns" something useful for the network. Dropout can be applied after convolutional, pooling, or fully connected layers. In the following illustration, we can see dropout applied over fully connected layers:
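Mechanically, (inverted) dropout amounts to multiplying the activations by a random binary mask and rescaling the survivors; a minimal NumPy sketch (not how Keras implements it internally):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, p=0.5, training=True):
    """Drop each neuron with probability p during training (inverted dropout)."""
    if not training:
        return activations                      # at inference time, a no-op
    keep = rng.random(activations.shape) >= p   # True where the neuron survives
    # rescale survivors by 1/(1-p) so the expected activation is unchanged
    return activations * keep / (1.0 - p)

layer_output = np.ones((4, 8))   # fake activations of a fully connected layer
dropped = dropout(layer_output, p=0.5)
print(dropped)                   # entries are either 0.0 or 2.0
```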

# Data augmentation
One of the most efficient regularization techniques is data augmentation. If the training data is too small, the network might start overfitting. Data augmentation helps counter this by artificially increasing the size of the training set. Let's use an example. In the MNIST and CIFAR-10 examples, we trained the network over multiple epochs, so the network will "see" every sample of the dataset once per epoch. To prevent this, we can apply random augmentations to the images before using them for training. The labels will stay the same. Some of the most popular image augmentations are rotation, horizontal and vertical shift, and flipping:

```
import keras
from keras.datasets import cifar10
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Dense, Dropout, Activation, Flatten,BatchNormalization
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
batch_size = 50
(X_train, Y_train), (X_test, Y_test) = cifar10.load_data()
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
Y_train = keras.utils.to_categorical(Y_train, 10)
Y_test = keras.utils.to_categorical(Y_test, 10)
data_generator = ImageDataGenerator(rotation_range=90, width_shift_range=0.1,height_shift_range=0.1, featurewise_center=True, featurewise_std_normalization=True, horizontal_flip=True)
data_generator.fit(X_train)
# standardize the test set
for i in range(len(X_test)):
X_test[i] = data_generator.standardize(X_test[i])
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',input_shape=X_train.shape[1:]))
model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(Conv2D(32, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3)))
model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(Conv2D(128, (3, 3)))
model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
model.fit_generator(generator=data_generator.flow(x=X_train, y=Y_train, batch_size=batch_size), steps_per_epoch=len(X_train) // batch_size, epochs=100, validation_data=(X_test, Y_test), workers=4)
```
# Investigating the historical running data to evaluate my performance (Session summaries)
Many runners use third-party apps to track their running activities. These apps and their companion websites provide many visual charts with analytical metrics that help runners review their performance and set up or adjust training plans. In this notebook, we will extract this running data and analyse it locally using runpandas. We also take the opportunity to illustrate the runpandas methods for summarizing the historical workouts and getting some valuable insights.
## Looking at the data
The example data set used in this tutorial contains 68 sessions of a single female runner from the period of 2020 until 2021.
The code chunk below loads the data using the method `runpandas.read_dir_aggregate`, which reads all the tracking files of a supported format in a directory and combines them into a data frame split into sessions based on the timestamps of each activity. This means that each workout file is stored as its own group of rows in the dataframe.
```
import warnings
warnings.filterwarnings('ignore')
import runpandas
session = runpandas.read_dir_aggregate(dirname='session/')
```
In pandas we use a ``pandas.MultiIndex``, which allows the dataframe to have multiple columns as a row identifier, with each index level related to the next through a parent/child relationship. In our scenario, the start time of each activity is the first index level and the timestamps within the activity form the second index level.
```
session
session.index #MultiIndex (start, timestamp)
```
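For readers unfamiliar with ``pandas.MultiIndex``, here is a minimal, self-contained sketch of the same (start, timestamp) two-level shape, built on synthetic data rather than the runpandas frame:

```python
import pandas as pd

# two fake sessions, each with a couple of timestamped records
index = pd.MultiIndex.from_tuples(
    [
        ("2020-07-03 09:00", "00:00:00"),
        ("2020-07-03 09:00", "00:00:05"),
        ("2020-07-05 08:30", "00:00:00"),
        ("2020-07-05 08:30", "00:00:05"),
    ],
    names=["start", "timestamp"],
)
frame = pd.DataFrame({"hr": [120, 125, 118, 122]}, index=index)

print(frame.index.nlevels)            # 2 levels: start (parent), timestamp (child)
print(frame.loc["2020-07-03 09:00"])  # selects the rows of one session only
```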
Now let's see how many activities are available for analysis. For this question, we also have an accessor, ``runpandas.types.acessors.session._SessionAcessor``, that holds several methods for computing the basic running metrics across all the activities, along with some summary statistics.
```
#count the number of activities in the session
print ('Total Activities:', session.session.count())
```
We can compute the main running metrics (speed, pace, moving time, etc.) using the session methods, which mirror the ones available in ``runpandas.types.metrics.MetricsAcessor``. Under the hood, those metric methods are called for each activity separately.
```
#In this example we compute the distance and the distance per position across all workouts
session = session.session.distance()
session
#comput the speed for each activity
session = session.session.speed(from_distances=True)
#compute the pace for each activity
session = session.session.pace()
#compute the inactivity periods for each activity
session = session.session.only_moving()
session
```
With all the computation done, let's go to the next step: exploring the data and getting some descriptive statistics.
## Exploring the data
After loading the data and computing the metrics for all the activities, let's now look further into the data and get basic summaries about the sessions: time spent, total distance, mean speed, and other insightful statistics for each running activity. We can accomplish this by calling the method ``runpandas.types.session._SessionAcessor.summarize``. It returns a basic DataFrame including all the aggregated statistics per activity from the session frame.
```
summary = session.session.summarize()
summary
```
Here, some descriptive statistics:
```
summary['day_diff'] = summary.index.to_series().diff().astype('timedelta64[D]').astype('Int64')
summary['pace_moving_all_mean'] = summary.mean_moving_pace.mean()
summary['distance_all_mean'] = round(summary.total_distance.mean()/1000,2)
summary['mean_speed'] = summary['mean_speed'] * 3.6 #convert from m/s to km/h
summary['max_speed'] = summary['max_speed'] * 3.6 #convert from m/s to km/h
summary['mean_moving_speed'] = summary['mean_moving_speed'] * 3.6 #convert from m/s to km/h
print('Session Interval:', (summary.index.to_series().max() - summary.index.to_series().min()).days, 'days')
print('Total Workouts:', len(summary), 'runnings')
print('Tota KM Distance:', summary['total_distance'].sum() / 1000)
print('Running Intervals (average):' , round(summary.day_diff.mean(), 2), 'days')
print('Average Pace (all runs):', summary.mean_pace.mean())
print('Average Moving Pace (all runs):', summary.mean_moving_pace.mean())
print('Average KM Distance (all runs):', round(summary.total_distance.mean()/ 1000,2))
```
As we can see above, we analyzed a period of 366 days (one year) of running workouts. In this period, she ran 68 times, covering a total distance of 491 km! The average interval between two activities is 5 days, with an average moving pace of 6'02" per km and an average distance of 7.23 km. Great numbers for a beginner runner!
## Exploring the data with some visualizations
At this point, we have the data to start some visualization and analysis. The first question that popped up was: how did her average moving pace evolve across all the runs, compared with the overall average? Let's use the ``matplotlib`` package to illustrate a possible answer.
```
#let's convert the pace to float number in minutes
import datetime
summary['mean_moving_pace_float'] = summary['mean_moving_pace'] / datetime.timedelta(minutes=1)
summary['pace_moving_all_mean_float'] = summary['pace_moving_all_mean'] / datetime.timedelta(minutes=1)
#replace NA values with 0
summary['day_diff'].fillna(0, inplace=True)
import matplotlib.pyplot as plt
plt.subplots(figsize=(8, 5))
plt.plot(summary.index, summary.mean_moving_pace_float, color='silver')
plt.plot(summary.pace_moving_all_mean_float, color='purple', linestyle='dashed', label='average')
plt.title("Pace Evolution")
plt.xlabel("Runnings")
plt.ylabel("Pace")
plt.legend()
```
We can see some outliers in the running workouts around September 2020. Let's do some filtering, excluding workouts with "unreasonable" paces that might be mislabelled.
```
summary_without_outliers = summary[summary['mean_moving_pace_float'].between(5,10)]
plt.plot(summary_without_outliers.index, summary_without_outliers.mean_moving_pace_float, color='silver')
plt.plot(summary_without_outliers.pace_moving_all_mean_float, color='purple', linestyle='dashed', label='average')
plt.title("Pace Evolution")
plt.xlabel("Runnings")
plt.xticks(rotation=90)
plt.ylabel("Pace")
plt.legend()
```
That looks much better now. We can see clearly that her pace kept floating between 6'50" and 6'00" throughout the year. I also created a new variable in the summary dataframe called ``day_diff``, the interval in days between consecutive workouts. This variable helps us check whether her pace went up as the number of days without running increased.
```
#recompute the day_diff removing the outliers
summary_without_outliers['day_diff'] = summary_without_outliers.index.to_series().diff().astype('timedelta64[D]').astype('Int64')
summary_without_outliers['day_diff'].fillna(0, inplace=True)
fig,ax = plt.subplots(figsize=(8, 5))
ax.plot(summary_without_outliers.index, summary_without_outliers.mean_moving_pace_float, color='silver')
ax.set_xlabel('Runnings')
ax.set_ylabel('Pace',color='silver')
ax2=ax.twinx()
ax2.plot(summary_without_outliers.index, summary_without_outliers.day_diff.astype('int'),color='purple')
ax2.set_ylabel('Days Without Running',color='indigo', rotation=270)
plt.title('Pace vs Days without Running')
plt.show()
```
As we can see in the chart above, there isn't a clear linear correlation between the number of days without running and her pace; it might even be that the rest periods helped her recover better. Can we find statistical evidence?
The following code creates a regression plot of the interval in days vs. pace, using Seaborn. As we can see, there is a negative correlation between the two variables, although it is not strong.
```
import seaborn as sns
sns.set(style="ticks", context="talk")
sns.regplot(x=summary_without_outliers.day_diff.astype('int'), y=summary_without_outliers.mean_moving_pace_float).set_title("Day Diff vs Pace")
```
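To complement the visual check with a number, the correlation coefficient itself can be computed with ``numpy.corrcoef``; the values below are hypothetical stand-ins for the day_diff/pace columns:

```python
import numpy as np

# hypothetical (day_diff, pace) pairs mimicking a weak negative relationship
day_diff = np.array([1, 2, 3, 5, 7, 10, 14])
pace = np.array([6.8, 6.2, 6.5, 6.7, 6.3, 6.6, 6.4])

r = np.corrcoef(day_diff, pace)[0, 1]
print(round(r, 2))  # negative, but far from -1
```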
Let's now explore the relationship between the mean pace and the average speed. Obviously, the higher the average speed, the lower the pace: going faster means less time to cover the same distance.
```
plt.subplots(figsize=(8, 5))
plt.plot(summary_without_outliers.index, summary_without_outliers.mean_moving_pace_float, color='silver', label='Average Pace')
plt.plot(summary_without_outliers.mean_moving_speed, color='purple', label = 'Average Speed')
plt.title("Average Speed x Average Pace")
plt.xlabel("Runnings")
plt.legend()
plt.show()
```
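The inverse relationship follows directly from the unit conversion pace (min/km) = 60 / speed (km/h); a quick sketch:

```python
def pace_min_per_km(speed_kmh):
    """Convert a speed in km/h into a pace in minutes per kilometre."""
    return 60.0 / speed_kmh

for speed in (8.0, 10.0, 12.0):
    minutes = pace_min_per_km(speed)
    whole = int(minutes)
    seconds = round((minutes - whole) * 60)
    print(f'{speed} km/h -> {whole}\'{seconds:02d}" per km')
# 8.0 km/h -> 7'30"; 10.0 km/h -> 6'00"; 12.0 km/h -> 5'00"
```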
The chart below illustrates the evolution of the total distance covered in her running workouts. There is a high variance in her run distances, with the average close to 7 km. The second chart, a histogram of the run distances, shows that most of her usual weekday runs are around 4-7 km; she started running longer weekend distances above 7 km only very recently.
```
plt.subplots(figsize=(8, 5))
plt.plot(summary_without_outliers.index, summary_without_outliers.total_distance / 1000, color='silver')
plt.plot(summary_without_outliers.distance_all_mean, color='purple', linestyle='dashed', label='average')
plt.title("Distance Evolution")
plt.xlabel("Runs")
plt.ylabel("distance")
plt.legend()
plt.show()
(summary_without_outliers['total_distance'] / 1000.0).hist(bins=30)
```
Now let's see how fast she was, and whether longer and shorter runs differ in speed (slower vs. faster). Let's try plotting distance against pace to get a better idea.
```
plt.subplots(figsize=(8, 5))
plt.scatter(summary_without_outliers.total_distance/ 1000, summary_without_outliers.mean_moving_pace_float, color='purple', marker='s')
plt.title("Distance vs. pace")
plt.xlabel("Distance")
plt.ylabel("Pace")
plt.legend()
```
The distribution of her pace across the run distances is very dispersed, meaning there is no clear trend. But what if we restrict it to just this year?
```
summary_without_outliers_2021 = summary_without_outliers[summary_without_outliers.index > '2021-01-01']
summary_without_outliers_2021
plt.subplots(figsize=(8, 5))
plt.scatter(summary_without_outliers_2021.total_distance/ 1000, summary_without_outliers_2021.mean_moving_pace_float, color='purple', marker='s')
plt.title("Distance vs. pace")
plt.xlabel("Distance")
plt.ylabel("Pace")
plt.legend()
```
It still does not give us a clear trend, but it shows that she really started to achieve better performance once she began running longer distances (paces lower than 6'20" when she started running above 10 km).
Finally, let's look at her pace evolution over time for 5 km, 10 km and 15 km runs. For this analysis, I filtered the session dataframe by distance, normalizing distances of 5-5.9 km, 10-10.9 km and 15-15.9 km to 5, 10 and 15 km respectively.
```
summary_without_outliers_5km = summary_without_outliers[summary_without_outliers['total_distance'].between(5000,5900)]
summary_without_outliers_10km = summary_without_outliers[summary_without_outliers['total_distance'].between(10000,10900)]
summary_without_outliers_15km = summary_without_outliers[summary_without_outliers['total_distance'].between(15000,15900)]
fig, axs = plt.subplots(3, sharex=True, figsize=(18, 16))
fig.suptitle('Average Moving Pace over time (5km, 10km, 15km)')
axs[0].plot(summary_without_outliers_5km.index, summary_without_outliers_5km.mean_moving_pace_float, marker='*')
axs[0].set_title('5km')
axs[1].plot(summary_without_outliers_10km.index, summary_without_outliers_10km.mean_moving_pace_float, marker='*')
axs[1].set_title('10km')
axs[2].plot(summary_without_outliers_15km.index, summary_without_outliers_15km.mean_moving_pace_float, marker='*')
axs[2].set_title('15km')
plt.xlabel('Date')
plt.show()
```
Her 5 km pace improved until September 2020, but as she moved on to longer distances, it became slower. Her 10 km pace improved across 2021. For 15 km, since we only have one observation, there is no trend to extract.
## Conclusions
* So, as expected, no major insight. However, running consistently helps her improve, maintaining or lowering her pace.
* She started to run longer distances at excellent paces, for instance the 15 km run at a moving pace of 5'50".
* There is no clear trend between her run distance and pace. We believe more data needs to be collected before new insights might emerge.
In this tutorial, we showed the possibilities of using the `runpandas` python package to perform several types of running analysis, assisted by visualization and data-handling packages such as Matplotlib and pandas. With the introduction of the session feature, we can now analyse a group of activities and investigate new insights over time.
# Import necessary dependencies
```
import pandas as pd
import numpy as np
import text_normalizer as tn
import model_evaluation_utils as meu
np.set_printoptions(precision=2, linewidth=80)
```
# Load and normalize data
```
dataset = pd.read_csv(r'movie_reviews.csv')
# take a peek at the data
print(dataset.head())
reviews = np.array(dataset['review'])
sentiments = np.array(dataset['sentiment'])
# build train and test datasets
train_reviews = reviews[:35000]
train_sentiments = sentiments[:35000]
test_reviews = reviews[35000:]
test_sentiments = sentiments[35000:]
# normalize datasets
norm_train_reviews = tn.normalize_corpus(train_reviews)
norm_test_reviews = tn.normalize_corpus(test_reviews)
```
# Traditional Supervised Machine Learning Models
## Feature Engineering
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
# build BOW features on train reviews
cv = CountVectorizer(binary=False, min_df=0.0, max_df=1.0, ngram_range=(1,2))
cv_train_features = cv.fit_transform(norm_train_reviews)
# build TFIDF features on train reviews
tv = TfidfVectorizer(use_idf=True, min_df=0.0, max_df=1.0, ngram_range=(1,2),
sublinear_tf=True)
tv_train_features = tv.fit_transform(norm_train_reviews)
# transform test reviews into features
cv_test_features = cv.transform(norm_test_reviews)
tv_test_features = tv.transform(norm_test_reviews)
print('BOW model:> Train features shape:', cv_train_features.shape, ' Test features shape:', cv_test_features.shape)
print('TFIDF model:> Train features shape:', tv_train_features.shape, ' Test features shape:', tv_test_features.shape)
```
## Model Training, Prediction and Performance Evaluation
```
from sklearn.linear_model import SGDClassifier, LogisticRegression
lr = LogisticRegression(penalty='l2', max_iter=100, C=1)
svm = SGDClassifier(loss='hinge', max_iter=100)  # n_iter was renamed to max_iter in newer scikit-learn
# Logistic Regression model on BOW features
lr_bow_predictions = meu.train_predict_model(classifier=lr,
train_features=cv_train_features, train_labels=train_sentiments,
test_features=cv_test_features, test_labels=test_sentiments)
meu.display_model_performance_metrics(true_labels=test_sentiments, predicted_labels=lr_bow_predictions,
classes=['positive', 'negative'])
# Logistic Regression model on TF-IDF features
lr_tfidf_predictions = meu.train_predict_model(classifier=lr,
train_features=tv_train_features, train_labels=train_sentiments,
test_features=tv_test_features, test_labels=test_sentiments)
meu.display_model_performance_metrics(true_labels=test_sentiments, predicted_labels=lr_tfidf_predictions,
classes=['positive', 'negative'])
svm_bow_predictions = meu.train_predict_model(classifier=svm,
train_features=cv_train_features, train_labels=train_sentiments,
test_features=cv_test_features, test_labels=test_sentiments)
meu.display_model_performance_metrics(true_labels=test_sentiments, predicted_labels=svm_bow_predictions,
classes=['positive', 'negative'])
svm_tfidf_predictions = meu.train_predict_model(classifier=svm,
train_features=tv_train_features, train_labels=train_sentiments,
test_features=tv_test_features, test_labels=test_sentiments)
meu.display_model_performance_metrics(true_labels=test_sentiments, predicted_labels=svm_tfidf_predictions,
classes=['positive', 'negative'])
```
# Newer Supervised Deep Learning Models
```
import gensim
import keras
from keras.models import Sequential
from keras.layers import Dropout, Activation, Dense
from sklearn.preprocessing import LabelEncoder
```
## Prediction class label encoding
```
le = LabelEncoder()
num_classes=2
# tokenize train reviews & encode train labels
tokenized_train = [tn.tokenizer.tokenize(text)
for text in norm_train_reviews]
y_tr = le.fit_transform(train_sentiments)
y_train = keras.utils.to_categorical(y_tr, num_classes)
# tokenize test reviews & encode test labels
tokenized_test = [tn.tokenizer.tokenize(text)
for text in norm_test_reviews]
y_ts = le.fit_transform(test_sentiments)
y_test = keras.utils.to_categorical(y_ts, num_classes)
# print class label encoding map and encoded labels
print('Sentiment class label map:', dict(zip(le.classes_, le.transform(le.classes_))))
print('Sample test label transformation:\n'+'-'*35,
'\nActual Labels:', test_sentiments[:3], '\nEncoded Labels:', y_ts[:3],
'\nOne hot encoded Labels:\n', y_test[:3])
```
## Feature Engineering with word embeddings
```
# build word2vec model
w2v_num_features = 500
w2v_model = gensim.models.Word2Vec(tokenized_train, size=w2v_num_features, window=150,
min_count=10, sample=1e-3)
def averaged_word2vec_vectorizer(corpus, model, num_features):
vocabulary = set(model.wv.index2word)
def average_word_vectors(words, model, vocabulary, num_features):
feature_vector = np.zeros((num_features,), dtype="float64")
nwords = 0.
for word in words:
if word in vocabulary:
nwords = nwords + 1.
feature_vector = np.add(feature_vector, model[word])
if nwords:
feature_vector = np.divide(feature_vector, nwords)
return feature_vector
features = [average_word_vectors(tokenized_sentence, model, vocabulary, num_features)
for tokenized_sentence in corpus]
return np.array(features)
# generate averaged word vector features from word2vec model
avg_wv_train_features = averaged_word2vec_vectorizer(corpus=tokenized_train, model=w2v_model,
num_features=500)
avg_wv_test_features = averaged_word2vec_vectorizer(corpus=tokenized_test, model=w2v_model,
num_features=500)
# feature engineering with GloVe model
train_nlp = [tn.nlp(item) for item in norm_train_reviews]
train_glove_features = np.array([item.vector for item in train_nlp])
test_nlp = [tn.nlp(item) for item in norm_test_reviews]
test_glove_features = np.array([item.vector for item in test_nlp])
print('Word2Vec model:> Train features shape:', avg_wv_train_features.shape, ' Test features shape:', avg_wv_test_features.shape)
print('GloVe model:> Train features shape:', train_glove_features.shape, ' Test features shape:', test_glove_features.shape)
```
## Modeling with deep neural networks
### Building Deep neural network architecture
```
def construct_deepnn_architecture(num_input_features):
dnn_model = Sequential()
dnn_model.add(Dense(512, activation='relu', input_shape=(num_input_features,)))
dnn_model.add(Dropout(0.2))
dnn_model.add(Dense(512, activation='relu'))
dnn_model.add(Dropout(0.2))
dnn_model.add(Dense(512, activation='relu'))
dnn_model.add(Dropout(0.2))
dnn_model.add(Dense(2))
dnn_model.add(Activation('softmax'))
dnn_model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
return dnn_model
w2v_dnn = construct_deepnn_architecture(num_input_features=500)
```
### Visualize sample deep architecture
```
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(w2v_dnn, show_shapes=True, show_layer_names=False,
rankdir='TB').create(prog='dot', format='svg'))
```
### Model Training, Prediction and Performance Evaluation
```
batch_size = 100
w2v_dnn.fit(avg_wv_train_features, y_train, epochs=5, batch_size=batch_size,
shuffle=True, validation_split=0.1, verbose=1)
y_pred = w2v_dnn.predict_classes(avg_wv_test_features)
predictions = le.inverse_transform(y_pred)
meu.display_model_performance_metrics(true_labels=test_sentiments, predicted_labels=predictions,
classes=['positive', 'negative'])
glove_dnn = construct_deepnn_architecture(num_input_features=300)
batch_size = 100
glove_dnn.fit(train_glove_features, y_train, epochs=5, batch_size=batch_size,
shuffle=True, validation_split=0.1, verbose=1)
y_pred = glove_dnn.predict_classes(test_glove_features)
predictions = le.inverse_transform(y_pred)
meu.display_model_performance_metrics(true_labels=test_sentiments, predicted_labels=predictions,
classes=['positive', 'negative'])
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Template/geemap_colab.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Template/geemap_colab.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Template/geemap_colab.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install geemap
```
!pip install geemap
```
## Import geemap library
The [geemap](https://github.com/giswqs/geemap) Python package has two plotting backends: [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium). A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that Google Colab currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use `import geemap.eefolium as geemap`.
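That choice can also be made programmatically with a small environment check (a sketch using only the standard library; the returned strings are just the two import lines discussed above):

```python
import importlib.util

def geemap_import_line():
    """Return the geemap import statement suited to the current environment."""
    in_colab = importlib.util.find_spec("google.colab") is not None
    # Colab does not support ipyleaflet, so fall back to the folium backend
    return "import geemap.eefolium as geemap" if in_colab else "import geemap"

print(geemap_import_line())
```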
```
import ee
import geemap.eefolium as geemap
```
## Create an interactive map
```
Map = geemap.Map()
# Map
```
## Add Earth Engine data
```
# Add Earth Engine dataset
image = ee.Image('USGS/SRTMGL1_003')
# Set visualization parameters.
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}
# Print the elevation of Mount Everest.
xy = ee.Geometry.Point([86.9250, 27.9881])
elev = image.sample(xy, 30).first().get('elevation').getInfo()
print('Mount Everest elevation (m):', elev)
# Add Earth Engine layers to Map
Map.addLayer(image, vis_params, 'DEM')
Map.addLayer(xy, {'color': 'red'}, 'Mount Everest')
# Center the map based on an Earth Engine object or coordinates (longitude, latitude)
# Map.centerObject(xy, 4)
Map.setCenter(86.9250, 27.9881, 4)
```
## Display the map
```
Map.addLayerControl()
Map
```
```
! pip3 install -U scikit-learn scipy matplotlib
! pip3 install -U shap
! pip3 install -U lime
! pip3 install -U xgboost
! pip3 install -U eli5
! pip3 install -U seaborn
! pip3 install ipywidgets
! pip3 install lightgbm
! pip3 install mlxtend
! pip3 install -U catboost
```
```
import os
import numpy as np
import pandas as pd
import pickle
import itertools
from eli5 import show_prediction, show_weights
from eli5.sklearn import PermutationImportance
from sklearn import model_selection
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_curve, auc, confusion_matrix, classification_report
from sklearn.compose import ColumnTransformer
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
import xgboost as xgb
import lightgbm as lgb
from catboost import CatBoostClassifier, Pool
import seaborn as sns
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
from mlxtend.plotting import plot_decision_regions
from mlxtend.classifier import EnsembleVoteClassifier
import sys
sys.path.append('..')
from inxai import *
import shap
import lime
path = '../examples/'
def to_pickle(obj, name):
with open(path + name + '.pickle', 'wb') as f:
pickle.dump(obj, f)
def from_pickle(name):
return pd.read_pickle(path + name + '.pickle')
def approximateAreaUnderTheCurveAccLoss(acc_loss):
acc_los_abs = [0 if value < 0 else value for value in acc_loss]
return sum(acc_los_abs) / len(acc_los_abs) * 100
def AreaUnderTheCurve(acc_loss):
l = len(acc_loss)
return auc(np.linspace(0, 1, l), acc_loss)
df = pd.read_csv("../examples/data/compass/propublicaCompassRecividism_data_fairml.csv/propublica_data_for_fairml.csv")
print(df.shape)
display(df.columns)
df.head()
TARGET_COL = "Two_yr_Recidivism"
X = df.drop([TARGET_COL],axis=1)
y = df[TARGET_COL]
FEATURE_IDS = [str(i) for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(pd.DataFrame(X), y, test_size=0.33, random_state=42)
print("TRAIN:")
print(X_train.head())
print("\nTEST:")
print(y_test.head())
```
## Models
- svc_radial
- svc_lin
- xgbc
- lgbm
- rfc
- catboost
```
svc_radial = SVC(kernel='rbf',probability=True) # does not work with eli
svc_radial.fit(X_train, y_train)
svc_radial_preds = svc_radial.predict(X_test)
# to_pickle(svc_radial, 'model_svc_radial')
print(accuracy_score(y_test, svc_radial_preds))
print(classification_report(y_test, svc_radial_preds))
# show_prediction(svc_radial, X_train.iloc[1], feature_names = X.columns.tolist(),
# show_feature_values=True)
svc_lin = SVC(kernel='linear',probability=True)
svc_lin.fit(X_train, y_train)
svc_lin_preds = svc_lin.predict(X_test)
# to_pickle(svc_lin, 'model_svc_lin')
print(accuracy_score(y_test, svc_lin_preds))
print(classification_report(y_test, svc_lin_preds))
show_prediction(svc_lin, X_train.iloc[1], feature_names = X.columns.tolist(),
show_feature_values=True)
xgbc = xgb.XGBClassifier()
xgbc.fit(X_train, y_train)
xgbc_preds = xgbc.predict(X_test)
# to_pickle(xgbc, 'model_xgbc')
print(accuracy_score(y_test, xgbc_preds))
print(classification_report(y_test, xgbc_preds))
show_prediction(xgbc, X_train.iloc[1], feature_names = X.columns.tolist(),
show_feature_values=True)
params = {'metric' : 'auc',
'boosting_type' : 'gbdt',
'colsample_bytree' : 0.92,
# 'max_depth' : -1,
# 'n_estimators' : 200,
'min_child_samples': 4,
# 'min_child_weight': 0.1,
'subsample': 0.85,
# 'verbose' : -1,
'num_threads' : 4 }
lgbm = lgb.train(params, # does not work with eli
lgb.Dataset(X_train,label=y_train),
# 2500,
valid_sets=lgb.Dataset(X_test,label=y_test),
early_stopping_rounds= 15,
verbose_eval= 30
)
lgbm_preds = lgbm.predict(X_test)
lgbm_preds = (lgbm_preds > 0.5) * 1
# to_pickle(lgbm, 'model_lgbm')
print(accuracy_score(y_test, lgbm_preds))
print(classification_report(y_test, lgbm_preds))
# show_prediction(lgbm, X_train.iloc[1], feature_names = X.columns.tolist(), #estimator is not supported
# show_feature_values=True)
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
rfc_preds = rfc.predict(X_test)
# to_pickle(rfc, 'model_rfc')
print(accuracy_score(y_test, rfc_preds))
print(classification_report(y_test, rfc_preds))
show_prediction(rfc, X_train.iloc[1], feature_names = X.columns.tolist(),
show_feature_values=True)
ctb = CatBoostClassifier(
custom_loss=['Accuracy'],
random_seed=42,
logging_level='Silent'
)
ctb.fit(X_train, y_train)
ctb_preds = ctb.predict(X_test)
to_pickle(ctb, 'model_ctb')
print(accuracy_score(y_test, ctb_preds))
print(classification_report(y_test, ctb_preds))
# show_prediction(ctb, X_train.iloc[1], feature_names = X.columns.tolist(), #estimator is not supported
# show_feature_values=True)
```
## LIME
```
# lime_svc_radial = generate_per_instance_importances(models=[svc_radial], X=X_test, y=y_test, framework='lime') # does not work
# to_pickle(lime_svc_radial, 'lime_svc_radial')
# lime_svc_radial
# lime_svc_lin = generate_per_instance_importances(models=svc_lin, X=X_test, y=y_test, framework='lime') # does not work
# to_pickle(lime_svc_lin, 'lime_svc_lin')
# lime_svc_lin
# lime_xgbc = generate_per_instance_importances(models=xgbc, X=X_test, y=y_test, framework='lime') # does not work
# to_pickle(lime_xgbc, 'lime_xgbc')
# lime_xgbc
# lime_lgbm = generate_per_instance_importances(models=lgbm, X=X_test, y=y_test, framework='lime') # does not work
# to_pickle(lime_lgbm, 'lime_lgbm')
# lime_lgbm
# lime_rfc = generate_per_instance_importances(models=rfc, X=X_test, y=y_test, framework='lime') # does not work
# to_pickle(lime_rfc, 'lime_lgbm')
# lime_rfc
```
## SHAP's
```
gm = GlobalFeatureMetric()
# shap_svc_radial = generate_per_instance_importances(models=svc_radial, X=X_test.iloc[0:10, ], y=y_test.iloc[0:10, ], framework='kernel_shap')
# to_pickle(shap_svc_radial, 'shap_svc_radial')
# print(len(shap_svc_radial))
# shap_svc_lin = generate_per_instance_importances(models=svc_lin, X=X_test.iloc[0:10, ], y=y_test.iloc[0:10, ], framework='kernel_shap')
# to_pickle(shap_svc_lin, 'shap_svc_lin')
# print(len(shap_svc_lin))
# # shap_xgbc = generate_per_instance_importances(models=xgbc, X=X_test.iloc[:, ], y=y_test.iloc[:, ], framework='kernel_shap')
# # to_pickle(shap_xgbc, 'shap_xgbc')
# print(len(shap_xgbc))
# shap_xgbc_tree = generate_per_instance_importances(models=xgbc, X=X_test.iloc[:, ], y=y_test.iloc[:, ], framework='tree_shap')
# to_pickle(shap_xgbc_tree, 'shap_xgbc_tree')
# print(len(shap_xgbc_tree))
# # shap_lgbm = generate_per_instance_importances(models=lgbm, X=X_test.iloc[0:10, ], y=y_test.iloc[0:10, ], framework='tree_shap') # does not work
# # to_pickle(shap_lgbm, 'shap_lgbm')
# # print(len(shap_lgbm))
# shap_rfc = generate_per_instance_importances(models=rfc, X=X_test.iloc[:, ], y=y_test.iloc[:, ], framework='tree_shap')
# to_pickle(shap_rfc, 'shap_rfc')
# print(len(shap_rfc))
# shap_ctb = generate_per_instance_importances(models=ctb, X=X_test.iloc[:, ], y=y_test.iloc[:, ], framework='tree_shap')
# to_pickle(shap_ctb, 'shap_ctb')
# print(len(shap_ctb))
```
### SHAP's baseline
```
shap_svc_radial = from_pickle('shap_svc_radial')
print(len(shap_svc_radial))
shap_svc_lin = from_pickle('shap_svc_lin')
print(len(shap_svc_lin))
shap_xgbc = from_pickle('shap_xgbc')
print(len(shap_xgbc))
shap_rfc = from_pickle('shap_rfc')
print(len(shap_rfc))
shap_ctb = from_pickle('shap_ctb')
print(len(shap_ctb))
# shap_tree_xgbc_rfc_ctb = generate_per_instance_importances(models=[xgbc, rfc, ctb], X=X_test, y=y_test, framework='tree_shap')
# to_pickle(shap_tree_xgbc_rfc_ctb, 'shap_tree_xgbc_rfc_ctb')
shap_tree_xgbc_rfc_ctb = from_pickle('shap_tree_xgbc_rfc_ctb')
len(shap_tree_xgbc_rfc_ctb)
```
### Consistency per model
```
# shap_xgbc_rfc_ctb_consistency = gm.consistency([shap_xgbc, shap_rfc, shap_ctb]) # ! compare consistency between models (LIME(...) raises an error)
# to_pickle(shap_xgbc_rfc_ctb_consistency, 'shap_xgbc_rfc_ctb_consistency')
shap_xgbc_rfc_ctb_consistency = from_pickle('shap_xgbc_rfc_ctb_consistency')
len(shap_xgbc_rfc_ctb_consistency)
# shap_xgbc_rfc_ctb_consistency
shap_tree_xgbc_rfc_ctb_consistency = gm.consistency(shap_tree_xgbc_rfc_ctb)
len(shap_tree_xgbc_rfc_ctb_consistency)
sns.boxplot(x="variable", y="value", data=pd.melt(pd.DataFrame({'consistency':shap_xgbc_rfc_ctb_consistency})))
sns.boxplot(x="variable", y="value", data=pd.melt(pd.DataFrame({'consistency':shap_tree_xgbc_rfc_ctb_consistency})))
shap_xgbc_rfc_consistency = gm.consistency([shap_xgbc, shap_rfc])
shap_xgbc_ctb_consistency = gm.consistency([shap_xgbc, shap_ctb])
shap_rfc_ctb_consistency = gm.consistency([shap_rfc, shap_ctb])
sns.boxplot(x="variable", y="value", data=pd.melt(pd.DataFrame({'xgbc_rfc':shap_xgbc_rfc_consistency, 'xgbc_ctb':shap_xgbc_ctb_consistency, 'rfc_ctb':shap_rfc_ctb_consistency})))
# ctb - ???
```
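The `gm.consistency` call above comes from the INXAI package. As a rough intuition (an illustrative sketch, not INXAI's actual formula), consistency can be viewed as per-instance agreement between the models' normalized importance vectors:

```python
import numpy as np

def consistency_sketch(importances_per_model):
    """Per-instance agreement across models: 1 minus half the maximum pairwise
    L2 distance between normalized importance vectors (illustrative only;
    INXAI defines its own formula)."""
    out = []
    for vectors in zip(*importances_per_model):
        vs = [np.asarray(v) / (np.linalg.norm(v) + 1e-12) for v in vectors]
        dmax = max(np.linalg.norm(a - b) for a in vs for b in vs)
        out.append(1.0 - dmax / 2.0)
    return out

# two models that agree perfectly on instance 0 and disagree on instance 1
m1 = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
m2 = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
scores = consistency_sketch([m1, m2])
```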
### Stability
```
# shap_xgbc_stability = gm.stability(X_test, shap_xgbc ,epsilon=0.3)
# shap_rfc_stability = gm.stability(X_test, shap_rfc ,epsilon=0.3)
# shap_ctb_stability = gm.stability(X_test, shap_ctb ,epsilon=0.3)
# to_pickle(shap_xgbc_stability, 'shap_xgbc_stability')
# to_pickle(shap_rfc_stability, 'shap_rfc_stability')
# to_pickle(shap_ctb_stability, 'shap_ctb_stability')
shap_xgbc_stability = from_pickle('shap_xgbc_stability')
shap_rfc_stability = from_pickle('shap_rfc_stability')
shap_ctb_stability = from_pickle('shap_ctb_stability')
shap_xgbc
sns.boxplot(x="variable", y="value", data=pd.melt(pd.DataFrame({'xgbc':shap_xgbc_stability,'rfc':shap_rfc_stability,'ctb':shap_ctb_stability})))
sns.boxplot(x="variable", y="value", data=pd.melt(pd.DataFrame({'ctb':shap_ctb_stability})))
```
### Area under the ACCLOSS
```
xgbc_perm = PermutationImportance(xgbc, random_state=1).fit(X_test, y_test)
xgbc_perm_importances = xgbc_perm.feature_importances_
show_weights(xgbc_perm, feature_names=FEATURE_IDS)
xgbc_perm
xgbc_perm_importances
rfc_perm = PermutationImportance(rfc, random_state=1).fit(X_test, y_test)
rfc_perm_importances = rfc_perm.feature_importances_
show_weights(rfc_perm, feature_names=FEATURE_IDS)
ctb_perm = PermutationImportance(ctb, random_state=1).fit(X_test, y_test)
ctb_perm_importances = ctb_perm.feature_importances_
show_weights(ctb_perm, feature_names=FEATURE_IDS)
ct = ColumnTransformer([('_INXAI_categorical_noise_perturber', CategoricalNoisePerturber(),X_test.columns)])
shap_xgbc_acc_loss = gm.gradual_perturbation(model = xgbc, X = X_test, y = y_test, column_transformer = ct, importances_orig = xgbc_perm_importances, resolution=50, count_per_step=10)
to_pickle(shap_xgbc_acc_loss, 'shap_xgbc_acc_loss')
# shap_xgbc_acc_loss = from_pickle('shap_xgbc_acc_loss')
shap_rfc_acc_loss = gm.gradual_perturbation(model = rfc, X = X_test, y = y_test, column_transformer = ct, importances_orig = rfc_perm_importances, resolution=50, count_per_step=10)
to_pickle(shap_rfc_acc_loss, 'shap_rfc_acc_loss')
shap_ctb_acc_loss = gm.gradual_perturbation(model = ctb, X = X_test, y = y_test, column_transformer = ct, importances_orig = ctb_perm_importances, resolution=50, count_per_step=10)
to_pickle(shap_ctb_acc_loss, 'shap_ctb_acc_loss')
shap_xgbc_acc_loss = from_pickle('shap_xgbc_acc_loss')
shap_rfc_acc_loss = from_pickle('shap_rfc_acc_loss')
shap_ctb_acc_loss = from_pickle('shap_ctb_acc_loss')
print(approximateAreaUnderTheCurveAccLoss(shap_xgbc_acc_loss))
print(approximateAreaUnderTheCurveAccLoss(shap_rfc_acc_loss))
print(approximateAreaUnderTheCurveAccLoss(shap_ctb_acc_loss))
print(AreaUnderTheCurve(shap_xgbc_acc_loss))
print(AreaUnderTheCurve(shap_rfc_acc_loss))
print(AreaUnderTheCurve(shap_ctb_acc_loss))
```
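The eli5 `PermutationImportance` scores used in this section measure how much accuracy drops when a single feature column is shuffled. A minimal pure-NumPy sketch of that idea (illustrative; not eli5's implementation):

```python
import numpy as np

def permutation_importance_sketch(predict, X, y, n_repeats=5, seed=1):
    """Mean accuracy drop when each column is shuffled
    (a sketch of the idea behind eli5's PermutationImportance)."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)          # baseline accuracy
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # destroy column j only
            drops[j] += (base - np.mean(predict(Xp) == y)) / n_repeats
    return drops

# toy check: this "model" reads only column 0, so only column 0 should matter
X = np.array([[0, 1], [1, 0], [0, 0], [1, 1]] * 25, dtype=float)
y = X[:, 0].copy()
imp = permutation_importance_sketch(lambda A: A[:, 0], X, y)
```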
# SHAP's optimization
## stability X acc_loss_area : xgbc
- approximateAreaUnderTheCurveAccLoss : *_perm_importances -> 11 # gm.gradual_perturbation(model = *, X = X_test, y = y_test ....
- stability : shap -> N * 11 # gm.stability(X_test, shap_xgbc ,epsilon=0.3)
- consistency : shap -> m * N * 11 # gm.consistency([shap_xgbc, shap_rfc, shap_ctb])
```
pd.DataFrame(shap_xgbc).describe()
pd.DataFrame(xgbc_perm_importances).T
```
## Weights for SHAP
```
plt.plot(np.linspace(0, 100, 50), shap_xgbc_acc_loss)
plt.plot(np.linspace(0, 100, 50), shap_rfc_acc_loss)
plt.plot(np.linspace(0, 100, 50), shap_ctb_acc_loss)
plt.xlabel('Percentile of perturbation range', fontsize=13)
plt.ylabel('Loss of accuracy', fontsize=13)
plt.legend(['xgbc','rfc','ctb'])
w_xgbc = 1 / np.mean([np.linalg.norm(x) for x in shap_xgbc])
w_rfc = 1 / np.mean([np.linalg.norm(x) for x in shap_rfc])
w_ctb = 1 / np.mean([np.linalg.norm(x) for x in shap_ctb])
print(w_xgbc,w_rfc,w_ctb)
```
### weighted model
```
class weighted_model:
def __init__(self, w1, m1, w2 = 0.0, m2 = None, w3 = 0.0, m3 = None):
self.w1 = w1
self.w2 = w2
self.w3 = w3
self.m1 = m1
self.m2 = m2
self.m3 = m3
self.colnames = None
def predict(self, X):
weighted_proba = self.predict_proba(X)
return [0 if wp[0] > 0.5 else 1 for wp in weighted_proba]
def predict_proba(self, X):
w_ = self.w1 + self.w2 + self.w3
m2_predict_proba = 0.0 if self.w2 == 0.0 else self.m2.predict_proba(X)
m3_predict_proba = 0.0 if self.w3 == 0.0 else self.m3.predict_proba(X)
weighted_prediction = (self.w1 * self.m1.predict_proba(X) + self.w2 * m2_predict_proba + self.w3 * m3_predict_proba) / w_
return weighted_prediction
def fit(self, X, y=None):
self.colnames = X.columns
return self
weighted_model(w1=w_xgbc, m1=xgbc, w2=w_rfc, m2=rfc, w3=w_ctb, m3=ctb).predict_proba(X_test)[0]
weighted_model(w1=w_xgbc, m1=xgbc, w2=w_rfc, m2=rfc, w3=w_ctb, m3=ctb).predict(X_test)[0]
```
### weighted Permutation Importance
```
weighted_perm = PermutationImportance(weighted_model(w1=w_xgbc, m1=xgbc, w2=w_rfc, m2=rfc, w3=w_ctb, m3=ctb), random_state=42, scoring="accuracy").fit(X_test, y_test)
weighted_perm_importances = weighted_perm.feature_importances_
show_weights(weighted_perm, feature_names=FEATURE_IDS)
```
### weighted AUC for Accuracy Loss
```
ct = ColumnTransformer([('_INXAI_categorical_noise_perturber', CategoricalNoisePerturber(),X_test.columns)])
def weighted_acc_loss(w_xgbc, w_rfc, w_ctb):
return gm.gradual_perturbation(model = weighted_model(w1=w_xgbc, m1=xgbc, w2=w_rfc, m2=rfc, w3=w_ctb, m3=ctb), X = X_test, y = y_test, column_transformer = ct, importances_orig = weighted_perm_importances, resolution=50, count_per_step=10)
weighted_acc_loss = weighted_acc_loss(w_xgbc, w_rfc, w_ctb)
auc_shap_xgbc_acc_loss = AreaUnderTheCurve(shap_xgbc_acc_loss)
auc_shap_rfc_acc_loss = AreaUnderTheCurve(shap_rfc_acc_loss)
auc_shap_ctb_acc_loss = AreaUnderTheCurve(shap_ctb_acc_loss)
print(auc_shap_xgbc_acc_loss)
print(auc_shap_rfc_acc_loss)
print(auc_shap_ctb_acc_loss)
print("Naive vs from weighted model:")
print((w_xgbc * auc_shap_xgbc_acc_loss + w_rfc * auc_shap_rfc_acc_loss + w_ctb * auc_shap_ctb_acc_loss) / (w_xgbc + w_rfc + w_ctb) )
print(AreaUnderTheCurve(weighted_acc_loss))
```
## weighted shap
```
def weighted_shap_naive(w1, shap1, w2 = 0.0, shap2 = None, w3 = 0.0, shap3 = None):
w_ = w1 + w2 + w3
shapNone = [shap_i * 0.0 for shap_i in shap1]
shap1_w = [shap_i * w1 for shap_i in shap1]
shap2_w = shapNone if w2 == 0.0 else [shap_i * w2 for shap_i in shap2]
shap3_w = shapNone if w3 == 0.0 else [shap_i * w3 for shap_i in shap3]
shap_zip = zip(shap1_w, shap2_w, shap3_w)
return [(shap1_w_i + shap2_w_i + shap3_w_i) / w_ for shap1_w_i, shap2_w_i, shap3_w_i in shap_zip]
weighted_xgbc_rfc_ctb_shap_naive = weighted_shap_naive(w1=w_xgbc, shap1=shap_xgbc, w2=w_rfc, shap2=shap_rfc, w3=w_ctb, shap3=shap_ctb)
print(len(weighted_xgbc_rfc_ctb_shap_naive))
weighted_xgbc_rfc_ctb_shap = generate_per_instance_importances(models=weighted_model(w1=w_xgbc, m1=xgbc, w2=w_rfc, m2=rfc, w3=w_ctb, m3=ctb), X=X_test.iloc[:, ], y=y_test.iloc[:, ], framework='kernel_shap')
to_pickle(weighted_xgbc_rfc_ctb_shap, 'weighted_xgbc_rfc_ctb_shap')
print(len(weighted_xgbc_rfc_ctb_shap))
```
## Stability
```
weighted_xgbc_rfc_ctb_stability_naive = gm.stability(X_test, weighted_xgbc_rfc_ctb_shap_naive ,epsilon=0.3)
to_pickle(weighted_xgbc_rfc_ctb_stability_naive, 'weighted_xgbc_rfc_ctb_stability_naive')
# weighted_xgbc_rfc_ctb_stability_naive = from_pickle('weighted_xgbc_rfc_ctb_stability_naive')
weighted_xgbc_rfc_ctb_stability = gm.stability(X_test, weighted_xgbc_rfc_ctb_shap ,epsilon=0.3)
to_pickle(weighted_xgbc_rfc_ctb_stability, 'weighted_xgbc_rfc_ctb_stability')
# weighted_xgbc_rfc_ctb_stability = from_pickle('weighted_xgbc_rfc_ctb_stability')
1 + 1
```
### Consistency
```
shap_weighted_xgbc_rfc_ctb_consistency = gm.consistency(weighted_xgbc_rfc_ctb_shap)
to_pickle(shap_weighted_xgbc_rfc_ctb_consistency, 'shap_weighted_xgbc_rfc_ctb_consistency')
# shap_weighted_xgbc_rfc_ctb_consistency = from_pickle('shap_weighted_xgbc_rfc_ctb_consistency')
len(shap_weighted_xgbc_rfc_ctb_consistency)
shap_weighted_xgbc_rfc_ctb_consistency
def weighted_consistency(w_xgbc, w_rfc, w_ctb):
w_ = w_xgbc + w_rfc + w_ctb
return gm.consistency([w_xgbc * shap_xgbc / w_, w_rfc * shap_rfc / w_, w_ctb * shap_ctb / w_])
weighted_consistency(w_xgbc, w_rfc, w_ctb)
```
## SMAC
- https://automl.github.io/SMAC3/master/examples/SMAC4BO_rosenbrock.html#sphx-glr-examples-smac4bo-rosenbrock-py
- https://automl.github.io/SMAC3/master/examples/SMAC4HPO_rosenbrock.html#sphx-glr-examples-smac4hpo-rosenbrock-py
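Before reaching for SMAC, the weight optimization can be prototyped with a coarse simplex grid search. The objective below is a hypothetical stand-in; in this notebook it would combine `AreaUnderTheCurve(weighted_acc_loss)` with the stability and consistency metrics above:

```python
import itertools

def toy_objective(w1, w2, w3):
    # stand-in for the real metric; assume (hypothetically) that the
    # combined score peaks at (0.2, 0.5, 0.3)
    return -((w1 - 0.2) ** 2 + (w2 - 0.5) ** 2 + (w3 - 0.3) ** 2)

def grid_search_weights(objective, step=0.1):
    """Exhaustively score weight triples on a simplex grid (w1 + w2 + w3 = 1)."""
    best, best_score = None, float("-inf")
    grid = [round(step * i, 10) for i in range(int(1 / step) + 1)]
    for w1, w2 in itertools.product(grid, grid):
        w3 = round(1.0 - w1 - w2, 10)
        if w3 < 0:
            continue
        score = objective(w1, w2, w3)
        if score > best_score:
            best, best_score = (w1, w2, w3), score
    return best, best_score

print(grid_search_weights(toy_objective))  # best triple lands on (0.2, 0.5, 0.3)
```

SMAC (or any Bayesian optimizer) replaces the exhaustive grid with a model-guided search, which matters once each evaluation requires a full `gradual_perturbation` run.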
[Sascha Spors](https://orcid.org/0000-0001-7225-9992),
Professorship Signal Theory and Digital Signal Processing,
[Institute of Communications Engineering (INT)](https://www.int.uni-rostock.de/),
Faculty of Computer Science and Electrical Engineering (IEF),
[University of Rostock, Germany](https://www.uni-rostock.de/en/)
# Tutorial Signals and Systems (Signal- und Systemtheorie)
Summer Semester 2021 (Bachelor Course #24015)
- lecture: https://github.com/spatialaudio/signals-and-systems-lecture
- tutorial: https://github.com/spatialaudio/signals-and-systems-exercises
WIP...
The project is currently under heavy development while new material is added for the summer semester 2021.
Feel free to contact the lecturer, [frank.schultz@uni-rostock.de](https://orcid.org/0000-0002-3010-0294)
# Exercise 8: Discrete-Time Convolution
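The discrete convolution sum $y[k] = \sum_{\kappa} x[\kappa]\, h[k-\kappa]$ underlying this exercise can be checked on a tiny example before turning to the RC-circuit case below (a minimal sketch):

```python
import numpy as np

# hand evaluation of y[k] = sum_kappa x[kappa] * h[k - kappa]
x = np.array([1.0, 2.0, 3.0])   # x[0..2]
h = np.array([1.0, 1.0])        # h[0..1]

y_manual = np.array([
    x[0] * h[0],                 # y[0] = 1
    x[0] * h[1] + x[1] * h[0],   # y[1] = 3
    x[1] * h[1] + x[2] * h[0],   # y[2] = 5
    x[2] * h[1],                 # y[3] = 3
])

y = np.convolve(x, h)            # same result, length Nx + Nh - 1 = 4
print(np.allclose(y, y_manual))  # True
```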
```
import matplotlib.pyplot as plt
import numpy as np
#from matplotlib.ticker import MaxNLocator
#from scipy import signal
# we create an undersampled and windowed impulse response of an RC-circuit lowpass
TRC = 1/6 # time constant in s
wRC = 1/TRC # cutoff angular frequency in rad/s
ws = 200/3*wRC # sampling angular frequency in rad/s, this yields aliasing!!
fs = ws/(2*np.pi) # sampling frequency in Hz
Ts = 1/fs # sampling interval in s
w = np.linspace(-10*ws, ws*10, 2**11) # angular frequency in rad/s
s = 1j*w # laplace variable along im-axis in rad/s
H = 1 / (s/wRC + 1) # frequency response
k = np.arange(np.int32(np.ceil(0.5/Ts)+1)) # sample index
h = (1/TRC * np.exp(-k*Ts/TRC)) # sampled impulse response, windowed!!
# normalize to achieve h[k=0] = 1, cf. convolution_ct_example2_AF3B15E0D3.ipynb
h *= TRC
Nh = h.size
kh = 0 # start of impulse response
plt.figure(figsize=(6, 6))
plt.subplot(2, 1, 1)
for nu in np.arange(-4, 5, 1):
plt.plot(w+nu*ws, 20*np.log10(np.abs(H)), 'C1')
plt.plot(w, 20*np.log10(np.abs(H)))
plt.plot([ws/2, ws/2], [-40, 0], 'C7')
plt.xticks(ws*np.arange(-4, 5, 1))
plt.xlim(-4*ws, +4*ws)
plt.ylim(-40, 0)
plt.xlabel(r'$\omega$ / (rad/s)')
plt.ylabel(r'$20 \log_{10} |H(\omega)|$')
plt.grid(True)
plt.subplot(2, 1, 2)
plt.stem(k*Ts, h, use_line_collection=True,
linefmt='C0:', markerfmt='C0o', basefmt='C0:',
label=r'$h_d[k] = h[k T_s] \cdot T_{RC} = \mathrm{e}^{-k\cdot\frac{T_s}{T_{RC}}}$')
plt.xlabel(r'$k \cdot T_s$')
plt.legend()
plt.grid(True)
print(Ts, ws)
# signal
x = 2*np.ones(np.int32(np.ceil(2 / Ts))) # non-zero elements
Nx = x.size
kx = np.int32(np.ceil(1/Ts)) # start index for first non-zero entry
# discrete-time convolution
Ny = Nx+Nh-1
ky = kx+kh
y = np.convolve(x, h)
plt.figure(figsize=(12, 4))
k = np.arange(kx, kx+Nx)
ax = plt.subplot(1, 3, 1)
plt.stem(k*Ts, x, use_line_collection=True,
linefmt='C0:', markerfmt='C0.', basefmt='C0:',
label=r'$x[k]$')
plt.xlim(1, 3)
plt.xlabel(r'$k \cdot T_s$ / s')
plt.legend(loc='upper right')
k = np.arange(kh, kh+Nh)
ax = plt.subplot(1, 3, 2)
plt.stem(k*Ts, h, use_line_collection=True,
linefmt='C1:', markerfmt='C1.', basefmt='C1:',
label=r'$h[k]$')
plt.xlim(0, 0.5)
plt.ylim(0, 1)
plt.yticks(np.arange(0, 1.25, 0.25))
plt.xlabel(r'$k \cdot T_s$ / s')
plt.legend(loc='upper right')
plt.grid(True)
k = np.arange(ky, ky+Ny)
ax = plt.subplot(1, 3, 3)
plt.stem(k*Ts, y*Ts, use_line_collection=True,
linefmt='C2:', markerfmt='C2.', basefmt='C2:',
label=r'$y[k]\,/\,T_s = x[k]\ast h[k]$')
tmp = (1-np.exp(-3))/3
plt.plot([1, 3.5], [tmp, tmp], 'C3')
plt.xlim(1, 3.5)
plt.ylim(0, 0.4)
plt.yticks(np.arange(0, 0.5, 0.1))
plt.xlabel(r'$k \cdot T_s$ / s')
plt.legend(loc='upper right')
plt.grid(True)
plt.savefig('convolution_discrete_pt1_xhy.pdf')
plt.figure(figsize=(8, 4))
k = np.arange(ky, ky+Ny)
ax = plt.subplot(1, 2, 1)
plt.stem(k*Ts, y*Ts, use_line_collection=True,
linefmt='C2:', markerfmt='C2o', basefmt='C2:',
label=r'$y[k]\,/\,T_s = x[k]\ast h[k]$')
tmp = (1-np.exp(-3))/3
plt.plot([1, 3.5], [tmp, tmp], 'C3')
plt.xlim(1, 1.5)
plt.ylim(0, 0.4)
plt.yticks(np.arange(0, 0.5, 0.1))
plt.xlabel(r'$k \cdot T_s$ / s')
plt.legend(loc='upper right')
plt.grid(True)
ax = plt.subplot(1, 2, 2)
plt.stem(k, y*Ts, use_line_collection=True,
linefmt='C2:', markerfmt='C2o', basefmt='C2:',
label=r'$y[k]\,/\,T_s = x[k]\ast h[k]$')
tmp = (1-np.exp(-3))/3
plt.plot([1/Ts, 3.5/Ts], [tmp, tmp], 'C3')
plt.xlim(1/Ts, 1.5/Ts)
plt.ylim(0, 0.4)
plt.yticks(np.arange(0, 0.5, 0.1))
plt.xlabel(r'$k$')
plt.legend(loc='upper right')
plt.grid(True)
plt.savefig('convolution_discrete_pt1_y_over_kt_zoom.pdf')
```
## Copyright
This tutorial is provided as Open Educational Resource (OER), to be found at
https://github.com/spatialaudio/signals-and-systems-exercises
accompanying the OER lecture
https://github.com/spatialaudio/signals-and-systems-lecture.
Both are licensed under a) the Creative Commons Attribution 4.0 International
License for text and graphics and b) the MIT License for source code.
Please attribute material from the tutorial as *Frank Schultz,
Continuous- and Discrete-Time Signals and Systems - A Tutorial Featuring
Computational Examples, University of Rostock* with
``main file, github URL, commit number and/or version tag, year``.
# XGBoost trainer
This notebook function handles training and logging of xgboost models **only**, exposing both the sklearn and low-level APIs.
## steps
1. generate an xgboost model configuration by selecting one of 5 available types
2. get a sample of data from a data source (random rows, consecutive rows, the entire dataset, or a custom sample)
3. split the data into train, validation, and test sets (WIP: this will become a parametrized cross-validator)
4. train the model using xgboost in one of its flavours (dask, gpu, mpi...)
5. dump the model
6. evaluate the model
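Step 3's two-stage split can be sketched as follows (a pure-NumPy illustration of the proportions only; the notebook itself delegates to `mlutils.get_splits`):

```python
import numpy as np

def two_stage_split(n, test_size=0.25, valid_size=0.75, seed=1):
    """Split indices 0..n-1 into train/valid/test.

    First hold out `test_size` of the rows as a test set; of the remaining
    rows, the training set keeps `valid_size` (matching the parameter
    meanings documented in `train_model` below)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_test = int(round(n * test_size))
    test, rest = idx[:n_test], idx[n_test:]
    n_train = int(round(len(rest) * valid_size))
    train, valid = rest[:n_train], rest[n_train:]
    return train, valid, test

train, valid, test = two_stage_split(100)
print(len(train), len(valid), len(test))  # 56 19 25
```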
```
# nuclio: ignore
import nuclio
### mlrun 0.4.7:
%nuclio config kind = "job"
%nuclio config spec.build.baseImage = "mlrun/ml-models"
%nuclio cmd -c pip install git+https://github.com/mlrun/mlutils.git@development scikit-plot
### mlrun 0.4.8
# %nuclio config kind = "job"
# %nuclio config spec.image = "mlrun/ml-models"
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
from mlutils import (get_sample,
get_splits,
gen_sklearn_model,
create_class,
eval_class_model,
gcf_clear)
from mlrun.execution import MLClientCtx
from mlrun.datastore import DataItem
from mlrun.artifacts import PlotArtifact, TableArtifact
from cloudpickle import dumps
import pandas as pd
import os
from typing import List
```
## generate an xgb model
generate a model config using XGBoost's scikit-learn API
```
def _gen_xgb_model(model_type: str, xgb_params: dict):
"""generate an xgboost model
Multiple model types that can be estimated using
the XGBoost Scikit-Learn API.
Input can either be a predefined json model configuration or one
of the five xgboost model types: "classifier", "regressor", "ranker",
"rf_classifier", or "rf_regressor".
In either case one can pass in a params dict to modify defaults values.
Based on `mlutils.models.gen_sklearn_model`, see the function
`sklearn_classifier` in this repository.
:param model_type: one of "classifier", "regressor",
"ranker", "rf_classifier", or
"rf_regressor"
:param xgb_params: class init parameters
"""
# generate model and fit function
mtypes = {
"classifier" : "xgboost.XGBClassifier",
"regressor" : "xgboost.XGBRegressor",
"ranker" : "xgboost.XGBRanker",
"rf_classifier": "xgboost.XGBRFClassifier",
"rf_regressor" : "xgboost.XGBRFRegressor"
}
if model_type.endswith("json"):
model_config = model_type
elif model_type in mtypes.keys():
model_config = mtypes[model_type]
else:
raise Exception("unrecognized model type, see help documentation")
return gen_sklearn_model(model_config, xgb_params)
```
## train
```
def train_model(
context: MLClientCtx,
model_type: str,
dataset: DataItem,
label_column: str = "labels",
encode_cols: dict = {},
sample: int = -1,
imbal_vec = [],
test_size: float = 0.25,
valid_size: float = 0.75,
random_state: int = 1,
models_dest: str = "models",
plots_dest: str = "plots",
eval_metrics: list= ["error", "auc"],
file_ext: str = "parquet",
model_pkg_file: str = "",
) -> None:
"""train an xgboost model.
Note on imabalanced data: the `imbal_vec` parameter represents the measured
class representations in the sample and can be used as a first step in tuning
an XGBoost model. This isn't a hyperparamter, merely an estimate that should
be set as 'constant' throughout tuning process.
:param context: the function context
:param model_type: the model type to train, "classifier", "regressor"...
:param dataset: ("data") name of raw data file
:param label_column: ground-truth (y) labels
:param encode_cols: dictionary of names and prefixes for columns that are
to be hot-encoded.
:param sample: Selects the first n rows, or select a sample
starting from the first. If negative <-1, select
a random sample
:param imbal_vec: ([]) vector of class weights seen in sample
:param test_size: (0.05) test set size
:param valid_size: (0.75) Once the test set has been removed the
training set gets this proportion.
:param random_state: (1) sklearn rng seed
:param models_dest: destination subfolder for model artifacts
:param plots_dest: destination subfolder for plot artifacts
:param eval_metrics: (["error", "auc"]) learning curve metrics
:param file_ext: format for test_set_key hold out data
"""
# deprecate:
models_dest = models_dest or "models"
plots_dest = plots_dest or f"plots/{context.name}"
# get a sample from the raw data
raw, labels, header = get_sample(dataset, sample, label_column)
# hot-encode
if encode_cols:
raw = pd.get_dummies(raw,
columns=list(encode_cols.keys()),
prefix=list(encode_cols.values()),
drop_first=True)
# split the sample into train validate, test and calibration sets:
(xtrain, ytrain), (xvalid, yvalid), (xtest, ytest) = \
get_splits(raw, labels, 3, test_size, valid_size, random_state)
# save test data
context.log_dataset("test-set", df=pd.concat([xtest, ytest], axis=1), format=file_ext, index=False)
# get model config
model_config = _gen_xgb_model(model_type, context.parameters.items())
# imbalance param, binary, this needs to be checked:
if len(imbal_vec) == 2:
scale_pos_weight = imbal_vec[0]/imbal_vec[1]
model_config["CLASS"].update({"scale_pos_weight": scale_pos_weight})
# create model instance
XGBBoostClass = create_class(model_config["META"]["class"])
model = XGBBoostClass(**model_config["CLASS"])
# update the model config with training data and callbacks
model_config["FIT"].update({"X": xtrain,
"y": ytrain.values,
"eval_set":[(xtrain, ytrain), (xvalid, yvalid)],
"eval_metric": eval_metrics})
# run the fit
model.fit(**model_config["FIT"])
# evaluate model
eval_metrics = eval_class_model(context, xvalid, yvalid, model)
# just do this inside log_model?
model_plots = eval_metrics.pop("plots")
model_tables = eval_metrics.pop("tables")
for plot in model_plots:
context.log_artifact(plot, local_path=f"{plots_dest}/{plot.key}.html")
for tbl in model_tables:
context.log_artifact(tbl, local_path=f"{plots_dest}/{tbl.key}.csv")
model_bin = dumps(model) # .get_booster())
context.log_model("model", body=model_bin,
artifact_path=os.path.join(context.artifact_path, models_dest),
#model_dir=models_dest,
model_file="model.pkl",
metrics=eval_metrics)
# probably have it wrong, can't see them from log_model so try this:
context.log_results(eval_metrics)
# nuclio: end-code
```
### mlconfig
```
from mlrun import mlconf
import os
mlconf.dbpath = mlconf.dbpath or "http://mlrun-api:8080"
mlconf.artifact_path = mlconf.artifact_path or f"{os.environ['HOME']}/artifacts"
```
### save
```
from mlrun import code_to_function
# create job function object from notebook code
fn = code_to_function("xgb_trainer")
# add metadata (for templates and reuse)
fn.spec.default_handler = "train_model"
fn.spec.description = "train multiple model types using xgboost"
fn.metadata.categories = ["training", "ml", "experimental"]
fn.metadata.labels = {"author": "yjb", "framework": "xgboost"}
fn.export("function.yaml")
```
### test function
```
if "V3IO_HOME" in list(os.environ):
# mlrun on the iguazio platform
from mlrun import mount_v3io
fn.apply(mount_v3io())
else:
# mlrun is setup using the instructions at
# https://github.com/mlrun/mlrun/blob/master/hack/local/README.md
from mlrun.platforms import mount_pvc
fn.apply(mount_pvc("nfsvol", "nfsvol", "/home/jovyan/data"))
gpus = False
task_params = {
"name" : "tasks xgb cpu trainer",
"params" : {
"model_type" : "classifier",
"CLASS_tree_method" : "gpu_hist" if gpus else "hist",
"CLASS_objective" : "binary:logistic",
"CLASS_booster" : "gbtree",
"FIT_verbose" : 0,
"imbal_vec" : [],
"label_column" : "labels"}}
```
### run locally
```
DATA_URL = "https://raw.githubusercontent.com/yjb-ds/testdata/master/data/classifier-data.csv"
from mlrun import run_local, NewTask
run = run_local(
NewTask(**task_params),
handler=train_model,
inputs={"dataset" : DATA_URL},
artifact_path=mlconf.artifact_path)
```
### run remotely
```
# only for v0.4.7:
fn.deploy(skip_deployed=True, with_mlrun=False)
run = fn.run(
NewTask(**task_params),
inputs={"dataset" : DATA_URL},
artifact_path=mlconf.artifact_path)
```
```
from __future__ import print_function
import os
from skimage.transform import resize
from skimage.io import imsave, imread
import numpy as np
from keras.models import Model
from keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Conv2DTranspose, Dropout
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau, TensorBoard
from keras.utils import to_categorical
from keras import backend as K
from sklearn.model_selection import train_test_split
K.set_image_data_format('channels_last')
data_path = 'raw/'
image_rows = 256
image_cols = 256
n_classes = 8
smooth = 1.
def dice_coef(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
def dice_coef_loss(y_true, y_pred):
return -dice_coef(y_true, y_pred)
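# Sanity check of the Dice coefficient with a pure-NumPy mirror of dice_coef
# above (illustrative only; the Keras version operates on tensors):
# identical masks -> Dice == 1, disjoint masks -> Dice == smooth / (|A| + smooth)
import numpy as np
def dice_coef_np(y_true, y_pred, smooth=1.):
    y_true_f = np.ravel(y_true).astype(np.float64)
    y_pred_f = np.ravel(y_pred).astype(np.float64)
    intersection = np.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)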
train_data_path = os.path.join(data_path, 'train')
annotation_data_path = os.path.join(data_path, 'annotations')
images = os.listdir(train_data_path)
total = len(images)
imgs = np.ndarray((total, image_rows, image_cols, 3), dtype=np.float32)
imgs_mask = np.ndarray((total, image_rows, image_cols), dtype=np.byte)
i = 0
print('-'*30)
print('Creating training images...')
print('-'*30)
for image_name in images:
img = imread(os.path.join(train_data_path, image_name))
img_mask = imread(os.path.join(annotation_data_path, image_name), as_gray=True)
img = np.array([img])
img_mask = np.array([img_mask])
imgs[i] = img
imgs_mask[i] = img_mask
if i % 100 == 0:
print('Done: {0}/{1} images'.format(i, total))
i += 1
print('Loading done.')
imgs_train, imgs_test, imgs_mask_train, imgs_mask_test = train_test_split(imgs, imgs_mask, test_size=0.10, random_state=42)
np.save('imgs_train.npy', imgs_train)
np.save('imgs_mask_train.npy', imgs_mask_train)
np.save('imgs_test.npy', imgs_test)
np.save('imgs_mask_test.npy', imgs_mask_test)
del imgs_train
del imgs_mask_train
del imgs_test
del imgs_mask_test
imgs_train = np.load('imgs_train.npy')
imgs_mask_train = np.load('imgs_mask_train.npy')
mean = np.mean(imgs_train) # mean for data centering
std = np.std(imgs_train) # std for data normalization
imgs_train -= mean
imgs_train /= std
imgs_mask_train = to_categorical(imgs_mask_train, num_classes=n_classes)
def UNet(input_shape=(256, 256, 3), classes=1):
inputs = Input(shape=input_shape)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
pool1 = Dropout(0.25)(pool1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
pool2 = Dropout(0.5)(pool2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
pool3 = Dropout(0.5)(pool3)
conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool3)
conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
pool4 = Dropout(0.5)(pool4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5)
up6 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv5), conv4], axis=3)
up6 = Dropout(0.5)(up6)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(up6)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv6)
up7 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3)
up7 = Dropout(0.5)(up7)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(up7)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv7)
up8 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3)
up8 = Dropout(0.5)(up8)
conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(up8)
conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv8)
up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3)
up9 = Dropout(0.5)(up9)
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9)
conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9)
conv10 = Conv2D(classes, (1, 1), activation='softmax')(conv9)
model = Model(inputs=[inputs], outputs=[conv10])
return model
model = UNet(classes=n_classes)
model.compile(optimizer="adam", loss='categorical_crossentropy', metrics=[dice_coef])
early_stopping = EarlyStopping(patience=10, verbose=1)
model_checkpoint = ModelCheckpoint('weights.h5', monitor='val_loss', save_best_only=True)
reduce_lr = ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1)
tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0,
write_graph=True, write_images=False)
print('-'*30)
print('Fitting model...')
print('-'*30)
model.fit(imgs_train, imgs_mask_train, batch_size=12, epochs=100, verbose=1, shuffle=True,
validation_split=0.1,
callbacks=[model_checkpoint, early_stopping, reduce_lr, tensorboard])
del imgs_train
del imgs_mask_train
imgs_test = np.load('imgs_test.npy')
imgs_mask_test = np.load('imgs_mask_test.npy')
mean = np.mean(imgs_test)
std = np.std(imgs_test)
imgs_test -= mean
imgs_test /= std
imgs_mask_test = to_categorical(imgs_mask_test, num_classes=n_classes)
model.load_weights('weights.h5')
imgs_mask_predict = model.predict(imgs_test, verbose=1)
from matplotlib import pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib.cm as cm
import matplotlib.colors as mcolors
import matplotlib.patches as mpatches
from matplotlib import rc
rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
def discrete_cmap(N, base_cmap=None):
"""Create an N-bin discrete colormap from the specified input map"""
# Note that if base_cmap is a string or None, you can simply do
# return plt.cm.get_cmap(base_cmap, N)
# The following works for string, None, or a colormap instance:
base = plt.cm.get_cmap(base_cmap)
color_list = base(np.linspace(0, 1, N))
cmap_name = base.name + str(N)
return base.from_list(cmap_name, color_list, N)
pred_dir = 'preds'
if not os.path.exists(pred_dir):
os.mkdir(pred_dir)
imgs_test = np.load('imgs_test.npy')
plt.rc('text', usetex=True)
for img_id in range(len(imgs_mask_predict)):
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(ncols=2,nrows=2,figsize=(8,8))
labels = [r'FLORESTA', r'DESMATAMENTO', r'HIDROGRAFIA', r'RESIDUO', r'NUVEM', r'NAO\_FLORESTA2', r'NAO\_FLORESTA']
c = plt.get_cmap('jet', n_classes)
img_mask_test = np.argmax(imgs_mask_test[img_id], axis = 2)
img_mask_predict = np.argmax(imgs_mask_predict[img_id], axis = 2)
im1 = ax1.imshow(imgs_test[img_id].astype(np.uint32))
ax1.set_title(r'\centering\sffamily\bfseries (a) Imagem Original', x=.5, y=-.15)
im2 = ax2.imshow(img_mask_test, cmap=c).set_clim(0, n_classes - 1)
ax2.set_title(r'\centering\sffamily\bfseries (b) Label Original', x=.5, y=-.15)
im3 = ax3.imshow(imgs_test[img_id].astype(np.uint32), alpha=0.5)
ax3.imshow(np.argmax(imgs_mask_predict[img_id], axis = 2), alpha=0.7, cmap='gray') # OVERLAY
ax3.set_title(r'\centering\sffamily\bfseries (c) Imagem Original + Label Rede Neural', x=.5, y=-.15)
im4 = ax4.imshow(img_mask_predict, cmap=c).set_clim(0, n_classes - 1)
ax4.set_title(r'\centering\sffamily\bfseries (d) Label Rede Neural', x=.5, y=-.15)
colors = [c(value + 1) for value in np.arange(0, n_classes)]
patches = [ mpatches.Patch(color=colors[i], label="{l}".format(l=labels[i]) ) for i in range(len(labels)) ]
plt.draw()
lgd = f.legend(borderaxespad=0, handles=patches, loc='center')
bb = lgd.get_bbox_to_anchor().inverse_transformed(ax2.transAxes)
xOffset = 1.5
bb.x0 += xOffset
bb.x1 += xOffset
lgd.set_bbox_to_anchor(bb, transform = ax2.transAxes)
plt.tight_layout()
f.savefig('graphs/graph_{}.png'.format(img_id), format='png', bbox_extra_artists=(lgd,), bbox_inches='tight', dpi=300)
plt.close(f)
# i=0
# for image_pred, image_mask_test, image_test in zip(imgs_mask_predict, imgs_mask_test, imgs_test):
# imsave(os.path.join(pred_dir, str(i) +'.png'), image_test.astype(np.uint8))
# imsave(os.path.join(pred_dir, str(i) +'_label.png'), np.argmax(image_mask_test, axis = 2).astype(np.uint8) * (255 // n_classes))
# imsave(os.path.join(pred_dir, str(i) +'_pred.png'), np.argmax(image_pred, axis = 2) * (255// n_classes))
# i += 1
```
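The `dice_coef` metric defined near the top of this notebook can be checked with plain numpy. A minimal sketch (assuming `smooth = 1.0`, the conventional default; the Keras version operates on flattened tensors in exactly the same way):

```python
import numpy as np

def dice_coef_np(y_true, y_pred, smooth=1.0):
    # Flatten both masks and compute (2*|A∩B| + smooth) / (|A| + |B| + smooth)
    y_true_f = y_true.ravel()
    y_pred_f = y_pred.ravel()
    intersection = np.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

a = np.array([1, 1, 0, 0], dtype=np.float32)
b = np.array([1, 1, 0, 0], dtype=np.float32)
print(dice_coef_np(a, b))      # identical masks ≈ 1.0
print(dice_coef_np(a, 1 - b))  # disjoint masks ≈ 0.2 (smoothing keeps it above 0)
```

The smoothing constant keeps the ratio defined when both masks are empty, which is also why the training loss `-dice_coef` stays finite.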
# PixdosepiX-OpenKBP---2020-AAPM-Grand-Challenge-
## Introduction
The aim of the OpenKBP Challenge is to advance fair and consistent comparisons of dose prediction methods for knowledge-based planning (KBP). Participants of the challenge will use a large dataset to train, test, and compare their prediction methods, using a set of standardized metrics, with those of other participants.
## Get and prepare data
```
!wget "###REPLACE WITH LINK TO DATASET IN CODALAB###"
from google.colab import drive
drive.mount('/content/drive')
!mv /content/e25ae3d9-03e1-4d2c-8af2-f9991193f54b train.zip
!unzip train.zip
!rm "/content/train-pats/.DS_Store"
!rm "/content/validation-pats-no-dose/.DS_Store"
```
## Import libraries
```
%tensorflow_version 2.x
import shutil
import json
import pandas as pd
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import tensorflow as tf
from tensorflow.keras import *
from tensorflow.keras.layers import *
from IPython.display import clear_output
```
## Data loader and general functions
```
def create_hist(img):
h = np.squeeze(img).flatten()*100
return h
def create_out_file(in_url,out_url):
dir_main = os.listdir(in_url)
for patient in dir_main:
os.mkdir(out_url + "/" + patient)
def unravel_ct_dose(data_img):
array = np.zeros((128,128,128))
indices = tuple(map(tuple,np.unravel_index(tuple(data_img.index),(128,128,128),order="C")))
array[indices] = data_img.data.values
return array
def unravel_masks(data_img):
array = np.zeros((128,128,128))
indices = tuple(map(tuple,np.unravel_index(tuple(data_img.index),(128,128,128),order="C")))
array[indices] = 1
return array
def decode_to_CT_Dose(url_element):
array = pd.read_csv(url_element,index_col=0)
array = np.expand_dims(np.expand_dims(unravel_ct_dose(array),axis = 0),axis = 4)
return array
def decode_unique_mask(url_element):
array = pd.read_csv(url_element,index_col=0)
array = np.expand_dims(np.expand_dims(unravel_masks(array),axis = 0),axis = 4)
return array
def decode_voxel_dimensions(url_element):
array = np.loadtxt(url_element)
return array
def decode_fusion_maks(link,list_name_masks,dict_num_mask):
masks = np.zeros([1,128,128,128,10])
organs_patien = os.listdir(link)
for name in list_name_masks:
if name + ".csv" in organs_patien:
dir_mask = link + "/" + name + ".csv"
array = pd.read_csv(dir_mask,index_col=0)
array = unravel_masks(array)
masks[0,:,:,:,dict_num_mask[name]] = array
return masks
def get_patient_list(url_main):
return os.listdir(url_main)
def load_patient_train(dir_patient):
dict_images = {"ct":None,"dose":None,"masks":None}
ct = decode_to_CT_Dose(dir_patient + "/" + "ct.csv")
dose = decode_to_CT_Dose(dir_patient + "/" + "dose.csv")
list_masks = ['Brainstem',
'SpinalCord',
'RightParotid',
'LeftParotid',
'Esophagus',
'Larynx',
'Mandible',
'PTV56',
'PTV63',
'PTV70']
dict_num_mask = {"Brainstem":0,
"SpinalCord":1,
"RightParotid":2,
"LeftParotid":3,
"Esophagus":4,
"Larynx":5,
"Mandible":6,
"PTV56":7,
"PTV63":8,
"PTV70":9}
masks = decode_fusion_maks(dir_patient,list_masks,dict_num_mask)
dict_images["ct"] = ct
dict_images["dose"] = dose
dict_images["masks"] = masks
return dict_images
def load_patient(dir_patient):
dict_images = {"ct":None,"dose":None,"possible_dose_mask":None,"voxel_dimensions":None,"masks":None}
ct = decode_to_CT_Dose(dir_patient + "/" + "ct.csv")
dose = decode_to_CT_Dose(dir_patient + "/" + "dose.csv")
possible_dose_mask = decode_unique_mask(dir_patient + "/" + "possible_dose_mask.csv")
voxel_dimensions = decode_voxel_dimensions(dir_patient + "/" + "voxel_dimensions.csv")
list_masks = ['Brainstem',
'SpinalCord',
'RightParotid',
'LeftParotid',
'Esophagus',
'Larynx',
'Mandible',
'PTV56',
'PTV63',
'PTV70']
dict_num_mask = {"Brainstem":0,
"SpinalCord":1,
"RightParotid":2,
"LeftParotid":3,
"Esophagus":4,
"Larynx":5,
"Mandible":6,
"PTV56":7,
"PTV63":8,
"PTV70":9}
masks = decode_fusion_maks(dir_patient,list_masks,dict_num_mask)
dict_images["ct"] = ct
dict_images["dose"] = dose
dict_images["possible_dose_mask"] = possible_dose_mask
dict_images["voxel_dimensions"] = voxel_dimensions
dict_images["masks"] = masks
return dict_images
def load_patient_test(dir_patient):
dict_images = {"ct":None,"possible_dose_mask":None,"voxel_dimensions":None,"masks":None}
ct = decode_to_CT_Dose(dir_patient + "/" + "ct.csv")
possible_dose_mask = decode_unique_mask(dir_patient + "/" + "possible_dose_mask.csv")
voxel_dimensions = decode_voxel_dimensions(dir_patient + "/" + "voxel_dimensions.csv")
list_masks = ['Brainstem',
'SpinalCord',
'RightParotid',
'LeftParotid',
'Esophagus',
'Larynx',
'Mandible',
'PTV56',
'PTV63',
'PTV70']
dict_num_mask = {"Brainstem":0,
"SpinalCord":1,
"RightParotid":2,
"LeftParotid":3,
"Esophagus":4,
"Larynx":5,
"Mandible":6,
"PTV56":7,
"PTV63":8,
"PTV70":9}
masks = decode_fusion_maks(dir_patient,list_masks,dict_num_mask)
dict_images["ct"] = ct
dict_images["possible_dose_mask"] = possible_dose_mask
dict_images["voxel_dimensions"] = voxel_dimensions
dict_images["masks"] = masks
return dict_images
url_train = "/content/train-pats"
patients = get_patient_list(url_train)
for i,patient in enumerate(patients):
patients[i] = os.path.join(url_train,patient)
def load_images_to_net(patient_url):
images = load_patient_train(patient_url)
ct = tf.cast(np.where(images["ct"] <= 4500,images["ct"],0),dtype=tf.float32)
ct = (2*ct/4500) - 1
masks = tf.cast(images["masks"],dtype=tf.float32)
dose = tf.cast(np.where(images["dose"] <= 100,images["dose"],0),dtype=tf.float32)
dose = (2*dose/100) - 1
return ct,masks,dose
def load_images_to_net_test(patient_url):
images = load_patient_test(patient_url)
ct = tf.cast(np.where(images["ct"] <= 4500,images["ct"],0),dtype=tf.float32)
ct = (2*ct/4500) - 1
masks = tf.cast(images["masks"],dtype=tf.float32)
possible_dose_mask = tf.cast(images["possible_dose_mask"],dtype=tf.float32)
voxel_dimensions = tf.cast(images["voxel_dimensions"],dtype=tf.float32)
return ct,masks,possible_dose_mask,voxel_dimensions
```
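The `unravel_ct_dose` and `unravel_masks` helpers above rebuild dense 128×128×128 volumes from CSVs that store only the non-zero voxels as flat C-order indices. A minimal round-trip sketch of that mechanism on a small hypothetical grid (illustrative shape, not the real 128³ data):

```python
import numpy as np

shape = (4, 4, 4)
dense = np.zeros(shape)
dense[1, 2, 3] = 7.0
dense[0, 0, 1] = 2.0

# Flatten to (flat index, value) pairs in C order, as the OpenKBP CSVs do
flat = dense.ravel(order="C")
idx = np.flatnonzero(flat)
vals = flat[idx]

# Rebuild the dense volume the same way unravel_ct_dose does
rebuilt = np.zeros(shape)
rebuilt[np.unravel_index(idx, shape, order="C")] = vals

assert np.array_equal(dense, rebuilt)
```

Because only non-zero entries are stored, `unravel_masks` can simply write 1 at every listed index instead of carrying values.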
## Architecture

https://blog.paperspace.com/unpaired-image-to-image-translation-with-cyclegan/
### Create downsample and upsample functions
```
def downsample(filters, apply_batchnorm=True):
result = Sequential()
initializer = tf.random_normal_initializer(0,0.02)
# Convolutional layer
result.add(Conv3D(filters,
kernel_size = 4,
strides = 2,
padding = "same",
kernel_initializer = initializer,
use_bias = not apply_batchnorm))
# Batch normalization layer
if apply_batchnorm:
result.add(BatchNormalization())
# Activation layer (ReLU)
result.add(ReLU())
return result
def upsample(filters, apply_dropout=False):
result = Sequential()
initializer = tf.random_normal_initializer(0,0.02)
# Transposed convolutional layer
result.add(Conv3DTranspose(filters,
kernel_size = 4,
strides = 2,
padding = "same",
kernel_initializer = initializer,
use_bias = False))
# Batch normalization layer
result.add(BatchNormalization())
if apply_dropout:
result.add(Dropout(0.5))
# Activation layer (ReLU)
result.add(ReLU())
return result
```
### Create generator-net
```
def Generator():
ct_image = Input(shape=[128,128,128,1])
roi_masks = Input(shape=[128,128,128,10])
inputs = concatenate([ct_image, roi_masks])
down_stack = [
downsample(64, apply_batchnorm=False), # (64x64x64x64)
downsample(128), #32 (32x32x32x128)
downsample(256), #16 (16x16x16x256)
downsample(512), #8 (8x8x8x512)
downsample(512), #4 (4x4x4x512)
downsample(512), #2 (2x2x2x512)
downsample(512), #1 (1x1x1x512)
]
up_stack = [
upsample(512,apply_dropout=True), #2 (2x2x2x512)
upsample(512,apply_dropout=True), #4 (4x4x4x512)
upsample(512), #8 (8x8x8x512)
upsample(256), #16 (16x16x16x256)
upsample(128), #32 (32x32x32x128)
upsample(64), #64 (64x64x64x64)
]
initializer = tf.random_normal_initializer(0,0.02)
last = Conv3DTranspose(filters=1,
kernel_size = 4,
strides = 2,
padding = "same",
kernel_initializer = initializer,
activation = "tanh") #(128x128x128x1)
x = inputs
s = []
concat = Concatenate()
for down in down_stack:
x = down(x)
s.append(x)
s = reversed(s[:-1])
for up,sk in zip(up_stack,s):
x = up(x)
x = concat([x,sk])
last = last(x)
return Model(inputs = [ct_image,roi_masks], outputs = last)
generator = Generator()
```
### Run generator-net
```
ct,masks,dose = load_images_to_net("/content/train-pats/pt_150")
gen_output = generator([ct,masks],training=True)
c = (ct[0,:,:,88,0]+1)/2
d = (dose[0,:,:,88,0]+1)/2
p = (gen_output[0,:,:,88,0]+1)/2
fig=plt.figure(figsize=(16, 16))
fig.add_subplot(3,3,1)
plt.title("ct")
plt.imshow(c)
fig.add_subplot(3,3,2)
plt.title("dose")
plt.imshow(d)
fig.add_subplot(3,3,3)
plt.title("predict")
plt.imshow(p)
fig.add_subplot(3,3,4)
plt.hist(create_hist(c), density=True, bins=100,range=[1,100])
plt.ylabel('Probability')
fig.add_subplot(3,3,5)
plt.hist(create_hist(d), density=True, bins=100,range=[1,100])
plt.ylabel('Probability')
fig.add_subplot(3,3,6)
plt.hist(create_hist(p), density=True, bins=100,range=[1,100])
plt.ylabel('Probability')
fig.add_subplot(3,3,7)
plt.hist(create_hist((ct+1)/2), density=True, bins=100,range=[1,100])
plt.ylabel('Probability')
fig.add_subplot(3,3,8)
plt.hist(create_hist((dose+1)/2), density=True, bins=100,range=[1,100])
plt.ylabel('Probability')
fig.add_subplot(3,3,9)
plt.hist(create_hist((gen_output+1)/2), density=True, bins=100,range=[1,100])
plt.ylabel('Probability')
plt.show()
```
### Create discriminator-net
```
def Discriminator():
ct_dis = Input(shape=[128,128,128,1], name = "ct_dis")
ct_masks = Input(shape=[128,128,128,10], name = "ct_masks")
dose_gen = Input(shape=[128,128,128,1], name = "dose_gen")
con = concatenate([ct_dis,ct_masks,dose_gen])
initializer = tf.random_normal_initializer(0,0.02)
down1 = downsample(64, apply_batchnorm = False)(con)
down2 = downsample(128)(down1)
down3 = downsample(256)(down2)
last = tf.keras.layers.Conv3D(filters = 1,
kernel_size = 4,
strides = 1,
kernel_initializer = initializer,
padding = "same")(down3)
return tf.keras.Model(inputs = [ct_dis,ct_masks,dose_gen],outputs = last)
discriminator = Discriminator()
```
### Run discriminator-net
```
disc_out = discriminator([ct,masks,gen_output],training = True)
w=16
h=16
fig=plt.figure(figsize=(8, 8))
columns = 4
rows = 4
for i in range(0, columns*rows):
fig.add_subplot(rows, columns, i +1)
plt.imshow(disc_out[0,:,:,i,0],vmin=-1,vmax=1,cmap = "RdBu_r")
plt.colorbar()
plt.show()
fig=plt.figure(figsize=(8, 8))
columns = 4
rows = 4
for i in range(0, columns*rows):
fig.add_subplot(rows, columns, i +1)
plt.imshow(disc_out[0,:,i,:,0],vmin=-1,vmax=1,cmap = "RdBu_r")
plt.colorbar()
plt.show()
fig=plt.figure(figsize=(8, 8))
columns = 4
rows = 4
for i in range(0, columns*rows):
fig.add_subplot(rows, columns, i +1)
plt.imshow(disc_out[0,i,:,:,0],vmin=-1,vmax=1,cmap = "RdBu_r")
plt.colorbar()
plt.show()
```
### Loss functions
```
# Adversarial loss functions
loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(disc_real_output,disc_generated_output):
# Loss between the all-ones target (real) and the discriminator output for real images
real_loss = loss_object(tf.ones_like(disc_real_output),disc_real_output)
# Loss between the all-zeros target (generated) and the discriminator output for generated images
generated_loss = loss_object(tf.zeros_like(disc_generated_output),disc_generated_output)
total_disc_loss = real_loss + generated_loss
return total_disc_loss
LAMBDA = 100
def generator_loss(disc_generated_output,gen_output,target):
gen_loss = loss_object(tf.ones_like(disc_generated_output),disc_generated_output)
#mean absolute error
l1_loss = tf.reduce_mean(tf.abs(target-gen_output))
total_gen_loss = gen_loss + (LAMBDA*l1_loss)
return total_gen_loss
```
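The generator objective above is the pix2pix-style combination of an adversarial term and an L1 term weighted by `LAMBDA = 100`. A numpy sketch of the same arithmetic, assuming `BinaryCrossentropy(from_logits=True)` computes the standard numerically stable sigmoid cross-entropy (all input values below are hypothetical):

```python
import numpy as np

def bce_with_logits(labels, logits):
    # Numerically stable sigmoid cross-entropy, averaged over elements
    return np.mean(np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits))))

LAMBDA = 100
disc_generated_output = np.array([0.3, -0.2])  # hypothetical discriminator logits
gen_output = np.array([0.5, 0.7])              # hypothetical generated dose voxels
target = np.array([0.55, 0.65])                # hypothetical ground-truth dose

gen_loss = bce_with_logits(np.ones_like(disc_generated_output), disc_generated_output)
l1_loss = np.mean(np.abs(target - gen_output))
total = gen_loss + LAMBDA * l1_loss  # the L1 term dominates unless the fit is very close
```

With `LAMBDA = 100`, even a small mean absolute error swamps the adversarial term, which is what keeps the generated dose close to the target voxel-wise.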
### Configure checkpoint
```
import os
generator_optimizer = tf.keras.optimizers.Adam(2e-4,beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(2e-4,beta_1=0.5)
cpath = "/content/drive/My Drive/PAE_PYTHONQUANTIC/IA/OpenKBP/Auxiliary/checkpoint" #dir to checkpoints
checkpoint = tf.train.Checkpoint(generator_optimizer = generator_optimizer,
discriminator_optimizer = discriminator_optimizer,
generator = generator,
discriminator = discriminator)
```
## Train
### Train step
```
@tf.function
def train_step(ct,masks,dose):
with tf.GradientTape() as gen_tape, tf.GradientTape() as discr_tape:
output_image = generator([ct,masks], training=True)
output_gen_discr = discriminator([ct,masks,output_image],training = True)
output_trg_discr = discriminator([ct,masks,dose], training = True)
discr_loss = discriminator_loss(output_trg_discr,output_gen_discr)
gen_loss = generator_loss(output_gen_discr,output_image,dose)
generator_grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
discriminator_grads = discr_tape.gradient(discr_loss,discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(generator_grads,generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(discriminator_grads,discriminator.trainable_variables))
```
### Restore metrics
```
def update_metrics():
with open(cpath + '/metrics_GAN.json') as f:
metrics_GAN = json.load(f)
return metrics_GAN
metrics_GAN = {"gen_loss":[],"discr_loss":[]}
check = os.listdir(cpath)
if "metrics_GAN.json" in check:
print("update metrics")
metrics_GAN = update_metrics()
```
### Define train-loop
```
def train(epochs):
check = os.listdir(cpath)
if len(check) > 0:
if "state.txt" in check:
start = int(np.loadtxt(cpath + "/state.txt"))
print("upload checkpoint model")
checkpoint.restore(tf.train.latest_checkpoint(cpath+"/"+str(start)))
else:
start = 0
metrics_GAN["gen_loss"].append("epoch" + str(start))
metrics_GAN["discr_loss"].append("epoch" + str(start))
print("Start training in epoch",start)
for epoch in range(start,epochs):
np.random.shuffle(patients)
imgi = 0
for patient in patients:
ct,masks,dose = load_images_to_net(patient)
print("epoch " + str(epoch) + " - train: " + str(imgi) + "/" + str(len(patients)))
train_step(ct,masks,dose)
if imgi % 10 == 0:
output_image = generator([ct,masks], training=True)
output_gen_discr = discriminator([ct,masks,output_image],training = True)
output_trg_discr = discriminator([ct,masks,dose], training = True)
discr_loss = discriminator_loss(output_trg_discr,output_gen_discr)
gen_loss = generator_loss(output_gen_discr,output_image,dose)
metrics_GAN["gen_loss"].append(str(np.mean(gen_loss)))
metrics_GAN["discr_loss"].append(str(np.mean(discr_loss)))
imgi += 1
clear_output(wait=True)
imgi = 0
metrics_GAN["gen_loss"].append("epoch" + str(epoch))
metrics_GAN["discr_loss"].append("epoch" + str(epoch))
# saving (checkpoint) the model every 20 epochs
if (epoch + 1) % 20 == 0:
with open(cpath + '/metrics_GAN.json', 'w') as fp:
json.dump(metrics_GAN, fp)
state = np.array([epoch+1])
np.savetxt(cpath + "/state.txt",state)
os.mkdir(cpath+"/"+str(epoch+1))
checkpoint_prefix = os.path.join(cpath+"/"+str(epoch+1),"ckpt")
checkpoint.save(file_prefix = checkpoint_prefix)
```
### Initialize train for epochs
```
train(230)
```
## Evaluate
### Restore model
```
def upload_model():
check = os.listdir(cpath)
if len(check) > 0:
if "state.txt" in check:
start = int(np.loadtxt(cpath + "/state.txt"))
print("upload checkpoint model")
checkpoint.restore(tf.train.latest_checkpoint(cpath+"/"+str(start)))
upload_model()
```
### Loader patient to evaluate
```
ct,masks,possible_dose_mask,voxel_dimensions = load_images_to_net_test("/content/validation-pats-no-dose/pt_201")
gen_output = generator([ct,masks],training=True)
gen_mask = ((gen_output+1)/2)*possible_dose_mask
plt.imshow(gen_mask[0,:,:,75,0])
plt.imshow((ct[0,:,:,75,0]+1)/2)
plt.imshow((gen_output[0,:,:,75,0]+1)/2)
w = np.squeeze(gen_mask ).flatten()*100
plt.hist(w, density=True, bins=100,range=[1,100])
plt.ylabel('Probability')
x = (gen_output[0,:,:,:,0]+1)/2
y = masks[0,:,:,:,8]
z = x * y
w = np.squeeze(z).flatten()*100
plt.hist(w, density=True, bins=100,range=[1,100])
plt.ylabel('Probability')
```
## Export doses predictions
```
@tf.function
def predict_step(ct,masks):
output_image = generator([ct,masks], training=True)
return output_image
url_main_results = "/content/drive/My Drive/PAE_PYTHONQUANTIC/IA/OpenKBP/Auxiliary/results_v1"
url_main_validation = "/content/validation-pats-no-dose"
def export_csv(predict_dose,patient):
dictionary = {"data":np.ravel(predict_dose,order="C")}
array = pd.DataFrame(dictionary)
array = array[array["data"] != 0]
array.to_csv(url_main_results + "/" + patient + ".csv")
def validation():
patients = get_patient_list(url_main_validation)
os.mkdir(url_main_results)
inimg = 0
for patient in patients:
ct,masks,possible_dose_mask,voxel_dimensions = load_images_to_net_test(url_main_validation + "/" + patient)
output_image = predict_step(ct,masks)
#output_image = generator([ct,masks], training=True)
predict_dose = np.squeeze((((output_image+1)/2)*possible_dose_mask)*100)
export_csv(predict_dose,patient)
print(" - predict: " + str(inimg) + "/" + str(len(patients)))
inimg = inimg + 1
validation()
shutil.make_archive('/content/drive/My Drive/PAE_PYTHONQUANTIC/IA/OpenKBP/Auxiliary/submisions_v1/baseline', 'zip','/content/drive/My Drive/PAE_PYTHONQUANTIC/IA/OpenKBP/Auxiliary/results_v1')
```
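The export above maps the generator's tanh output from [-1, 1] back to dose values in [0, 100], mirroring the `(2*dose/100) - 1` normalization applied in the loader. The two mappings round-trip exactly:

```python
import numpy as np

dose_gy = np.array([0.0, 35.0, 70.0, 100.0])  # hypothetical dose values

normalized = (2 * dose_gy / 100) - 1          # loader: [0, 100] -> [-1, 1]
recovered = ((normalized + 1) / 2) * 100      # export: [-1, 1] -> [0, 100]

assert np.allclose(dose_gy, recovered)
```

Keeping the normalization and its inverse paired like this matters because the challenge scores submissions in the original dose units, not the network's tanh range.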
# <b>Document AI features demo 1</b>
The AIServiceVisionClient offers the document <b>text detection</b> feature. This notebook aims to provide overall clarity about the feature to the user in terms of requirements, usage and the output of the API.<br>
<ul>
<li>The raw output is saved as <code>response_document_demo1.json</code> file. </li>
<li>Detected text is displayed under <b>Display the lines of text detected</b> section.</li>
<li>The user can visualize the bounding boxes for the detected text under <b>View output document with bounding boxes</b> section. </li>
</ul>
### Steps to run the notebook:
<details>
<summary>Notebook session setup</summary>
<ol>
<li><font size="2">Installing the OCI Vision SDK</font></li>
<li><font size="2">Installing other dependencies</font></li>
<li><font size="2">Setup sample input documents</font></li>
<li><font size="2">Setup helper .py files</font></li>
<li><font size="2">Create output folder</font></li>
</ol>
</details>
<details>
<summary>Importing the required modules</summary>
</details>
<details>
<summary>Setting the input variables</summary>
<font size="2">The user can change the input variables, if necessary. They have been assigned default values.</font>
</details>
<details>
<summary>Running the main pipeline</summary>
<font size="2">Run all cells to get the output in the <code>output</code> directory. </font><br>
</details>
### Notebook session setup
<details>
<summary>Instructions</summary>
<ul>
<li><font size="2">This setup needs to be run only once.</font></li>
<li><font size="2">Uncomment the commented cells and run once to setup.</font></li>
<li><font size="2">Comment back the same cells to avoid running again.</font></li>
</ul>
</details>
#### Installing the OCI Vision SDK
```
# !wget "https://objectstorage.us-ashburn-1.oraclecloud.com/n/axhheqi2ofpb/b/vision-demo-notebooks/o/vision_service_python_client-0.3.45-py2.py3-none-any.whl"
# !pip install vision_service_python_client-0.3.45-py2.py3-none-any.whl
# !rm vision_service_python_client-0.3.45-py2.py3-none-any.whl
```
#### Installing other dependencies
```
# !pip install matplotlib==3.3.4
# !pip install pandas==1.1.5
```
#### Setup sample input documents
```
# !wget "https://objectstorage.us-ashburn-1.oraclecloud.com/n/axhheqi2ofpb/b/vision-demo-notebooks/o/TextDetectionOnePage.pdf"
# !wget "https://objectstorage.us-ashburn-1.oraclecloud.com/n/axhheqi2ofpb/b/vision-demo-notebooks/o/table.pdf"
# !mkdir data
# !mv TextDetectionOnePage.pdf data
# !mv table.pdf data
```
#### Setup helper .py files
```
# !wget "https://objectstorage.us-ashburn-1.oraclecloud.com/n/axhheqi2ofpb/b/vision-demo-notebooks/o/analyze_document_utils.py"
# !mkdir helper
# !mv analyze_document_utils.py helper
```
#### Create output folder
```
# !mkdir output
```
### Imports
```
import base64
import os
import io
import oci
import json
from IPython.display import IFrame
import requests
from vision_service_python_client.ai_service_vision_client import AIServiceVisionClient
from vision_service_python_client.models.analyze_document_details import AnalyzeDocumentDetails
from vision_service_python_client.models.inline_document_details import InlineDocumentDetails
from vision_service_python_client.models.document_text_detection_feature import DocumentTextDetectionFeature
from helper.analyze_document_utils import is_url, clean_output
```
### Set input variables
<details>
<summary><font size="3">input_path</font></summary>
<font size="2">The user can provide the document URL or filepath from the notebook session.</font><br>
</details>
```
input_path = "data/TextDetectionOnePage.pdf"
```
### Authorize user config
```
config = oci.config.from_file('~/.oci/config')
```
### Get input document
```
if is_url(input_path):
file_content = requests.get(input_path).content
encoded_string = base64.b64encode(file_content)
input_path = 'data/' + os.path.basename(input_path)
open(input_path, 'wb').write(file_content)
else:
with open(input_path, "rb") as document_file:
encoded_string = base64.b64encode(document_file.read())
```
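The Vision API expects the inline document as base64-encoded text, which is why the cell above encodes the file bytes and the client setup later calls `.decode('utf-8')` before assigning the payload. A quick stdlib round-trip check of that encoding step (the PDF bytes here are a stand-in, not a real document):

```python
import base64

raw = b"%PDF-1.4 minimal example bytes"  # stand-in for the real PDF contents

encoded_string = base64.b64encode(raw)   # bytes -> base64 bytes
payload = encoded_string.decode("utf-8") # the string assigned to inline_document_details.data

# Decoding the payload recovers the original bytes exactly
assert base64.b64decode(payload) == raw
```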
### View input document
```
if is_url(input_path):
display(IFrame('data/' + os.path.basename(input_path), width=600, height=500))
else:
display(IFrame(input_path, width=600, height=500))
```
### Create AI service vision client and get response object
```
ai_service_vision_client = AIServiceVisionClient(config=config)
analyze_document_details = AnalyzeDocumentDetails()
inline_document_details = InlineDocumentDetails()
text_detection_feature = DocumentTextDetectionFeature()
text_detection_feature.generate_searchable_pdf = False
features = [text_detection_feature]
inline_document_details.data = encoded_string.decode('utf-8')
analyze_document_details.document = inline_document_details
analyze_document_details.features = features
res = ai_service_vision_client.analyze_document(analyze_document_details=analyze_document_details)
```
### Clean and save the API response as json
```
res_json = json.loads(repr(res.data))
clean_res = clean_output(res_json)
with open('output/response_document_demo1.json', 'w') as fp:
json.dump(clean_res, fp)
```
### Display the lines of text detected
```
for i, page in enumerate(clean_res['pages']):
print('**************** PAGE NO.', i+1, '****************\n')
for line in page['lines']:
print(line['text'])
print('\n')
```
### View output document with bounding boxes
The user can uncomment and run the cells below to visualize the bounding boxes over the document. This visualization feature is currently supported for <b>PDF format only.</b>
#### Install dependencies
```
# !pip install fitz==0.0.1.dev2
# !pip install pymupdf==1.18.19
```
#### Imports
```
from helper.analyze_document_utils import add_text_bounding_boxes_to_pdf
import fitz
```
#### Add bounding boxes
```
doc = fitz.open(input_path)
doc = add_text_bounding_boxes_to_pdf(doc, clean_res)
output_path = 'output/' + 'output_' + os.path.basename(input_path)
doc.save(output_path)
```
#### Display output document
```
display(IFrame(output_path, width=600, height=500))
```
<style>div.container { width: 100% }</style>
<img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/holoviz-logo-unstacked.svg" />
<div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 6. Interlinked Plots</h2></div>
hvPlot lets you quickly generate a number of different types of plot
from a standard API by building [HoloViews](https://holoviews.org) objects, as discussed in the previous
notebook. These objects are rendered with Bokeh, which offers a number of
standard ways to interact with your plot, such as panning and zooming
tools.
Many other modes of interactivity are possible when building an
exploratory visualization (such as a dashboard) and these forms of
interactivity cannot be achieved using hvPlot alone.
In this notebook, we will drop down to the HoloViews level of
representation to build a visualization directly, consisting of linked plots that
update when you interactively select a particular earthquake with the
mouse. The goal is to show how more sophisticated forms of interactivity can be built when needed, in a way that's fully compatible with all the examples shown in earlier sections.
First let us load our initial imports:
```
import numpy as np
import pandas as pd
import dask.dataframe as dd
import hvplot.pandas # noqa
import datashader.geo
from holoviews.element import tiles
```
And clean the data before filtering (for magnitude `>7`) and projecting to Web Mercator as before:
```
df = dd.read_parquet('../data/earthquakes.parq').repartition(npartitions=4)
cleaned_df = df.copy()
cleaned_df['mag'] = df.mag.where(df.mag > 0)
cleaned_reindexed_df = cleaned_df.set_index(cleaned_df.time)
cleaned_reindexed_df = cleaned_reindexed_df.persist()
most_severe = cleaned_reindexed_df[cleaned_reindexed_df.mag >= 7].compute()
x, y = datashader.geo.lnglat_to_meters(most_severe.longitude, most_severe.latitude)
most_severe_projected = most_severe.join([pd.DataFrame({'easting': x}), pd.DataFrame({'northing': y})])
```
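`datashader.geo.lnglat_to_meters` applies the standard spherical Web Mercator projection (EPSG:3857). A hand-rolled sketch of what it computes, assuming the usual sphere radius of 6378137 m:

```python
import numpy as np

def lnglat_to_meters_sketch(longitude, latitude):
    # Spherical Web Mercator: easting is linear in longitude,
    # northing stretches latitudes toward the poles
    origin_shift = np.pi * 6378137
    easting = longitude * origin_shift / 180.0
    northing = np.log(np.tan((90 + latitude) * np.pi / 360.0)) * origin_shift / np.pi
    return easting, northing

x, y = lnglat_to_meters_sketch(np.array([0.0, -122.4]), np.array([0.0, 37.8]))
# (0°, 0°) maps to the origin; (-122.4°, 37.8°) lands roughly at (-1.36e7, 4.55e6) m
```

The projected `easting`/`northing` columns are what let the points overlay correctly on the ESRI map tiles below.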
Towards the end of the previous notebook, we generated a scatter plot of earthquakes
across the earth with magnitude `>7`, projected using
datashader and overlaid on top of a map tile source:
```
high_mag_quakes = most_severe_projected.hvplot.points(x='easting', y='northing', c='mag',
title='Earthquakes with magnitude >= 7')
esri = tiles.ESRI().redim(x='easting', y='northing')
esri * high_mag_quakes
```
We can confirm that this object is a HoloViews `Points` object:
```
print(high_mag_quakes)
```
This object is an example of a HoloViews *Element* which is an object that can display itself. These elements are *thin* wrappers around your data and the raw input data is always available on the `.data` attribute. For instance, we can look at the `head` of the `most_severe_projected` `DataFrame` as follows:
```
high_mag_quakes.data.head()
```
We will now learn a little more about `HoloViews` elements, including how to build them up from scratch so that we can control every aspect of them.
### An Introduction to HoloViews Elements
HoloViews elements are the atomic, visualizable components that can be
rendered by a plotting library such as Bokeh. We don't actually need to use
hvPlot to create these element objects: we can create them directly by
importing HoloViews (and loading the extension if we have not loaded
hvPlot):
```
import holoviews as hv
hv.extension("bokeh") # Optional here as we have already loaded hvplot.pandas
```
Now we can create our own example of a `Points` element. In the next
cell we plot 100 points drawn from independent normal distributions in
the `x` and `y` directions:
```
xs = np.random.randn(100)
ys = np.random.randn(100)
hv.Points((xs, ys))
```
Note that the axis labels are 'x' and 'y', the default *dimensions* for
this element type. We can use a different set of dimensions along the x- and y-axes (say
'weight' and 'height'), and we can also associate additional `fitness` information with each point if we wish:
```
xs = np.random.randn(100)
ys = np.random.randn(100)
fitness = np.random.randn(100)
height_v_weight = hv.Points((xs, ys, fitness), ['weight', 'height'], 'fitness')
height_v_weight
```
Now we can look at the printed representation of this object:
```
print(height_v_weight)
```
Here the printed representation shows the *key dimensions* that we specified in square brackets as `[weight,height]` and the additional *value dimension* `fitness` in parentheses as `(fitness)`. The *key dimensions* map to the axes and the *value dimensions* can be visually represented by other visual attributes as we shall see shortly.
For more information on HoloViews dimensions, see this [user guide](http://holoviews.org/user_guide/Annotating_Data.html).
#### Exercise
Visit the [HoloViews reference gallery](http://holoviews.org/reference/index.html) and browse
the available set of elements. Pick an element type and try running
one of the self-contained examples in the following cell.
### Setting Visual Options
The two `Points` elements above look quite different from the one
returned by hvplot showing the earthquake positions. This is because
hvplot makes use of the HoloViews *options system* to customize the
visual representation of these element objects.
Let us color the `height_v_weight` scatter by the fitness value and use a larger
point size:
```
height_v_weight.opts(color='fitness', size=8, colorbar=True, aspect='square')
```
#### Exercise
Copy the line above into the next cell and try changing the points to
'blue' or 'green' or another dimension of the data such as 'height' or 'weight'.
Are the results what you expect?
### The `help` system
You can learn more about the `.opts` method and the HoloViews options
system in the [corresponding user
guide](http://holoviews.org/user_guide/Applying_Customizations.html). To
easily learn about the available options from inside a notebook, you can
use `hv.help` and inspect the 'Style Options'.
```
# Commented as there is a lot of help output!
# hv.help(hv.Scatter)
```
At this point, we can gain some insight into the sort of HoloViews object
hvPlot is building behind the scenes for our earthquake example:
```
esri * hv.Points(most_severe_projected, ['easting', 'northing'], 'mag').opts(color='mag', size=8, aspect='equal')
```
#### Exercise
Try using `hv.help` to inspect the options available for different element types such as the `Points` element used above. Copy the line above into the cell below and pick a `Points` option that makes sense to you and try using it in the `.opts` method.
<details><summary>Hint</summary><br>
If you can't decide on an option to pick, a good choice is `marker`. For instance, try:
* `marker='+'`
* `marker='d'`.
HoloViews uses [matplotlib's conventions](https://matplotlib.org/3.1.0/api/markers_api.html) for specifying the various marker types. Try finding out which ones are supported by Bokeh.
</details>
### Custom interactivity for Elements
When rasterization of the population density data via hvplot was
introduced in the last notebook, we saw that the HoloViews object
returned was not an element but a *`DynamicMap`*.
A `DynamicMap` enables custom interactivity beyond the Bokeh defaults by
dynamically generating elements that get displayed and updated as the
plot is interacted with.
There is a counterpart to the `DynamicMap` that does not require a live
Python server to be running called the `HoloMap`. The `HoloMap`
container will not be covered in the tutorial but you can learn more
about them in the [containers user
guide](http://holoviews.org/user_guide/Dimensioned_Containers.html).
Now let us build a very simple `DynamicMap` that is driven by a *linked
stream* (specifically a `PointerXY` stream) that represents the position
of the cursor over the plot:
```
from holoviews import streams
pointer = streams.PointerXY(x=0, y=0) # x=0 and y=0 are the initialized values
def crosshair(x, y):
return hv.Ellipse(0,0,1) * hv.HLine(y) * hv.VLine(x)
hv.DynamicMap(crosshair, streams=[pointer])
```
Try moving your mouse over the plot and you should see the crosshair
follow your mouse position.
The core concepts here are:
* The plot shows an overlay built with the `*` operator introduced in
the previous notebook.
* There is a callback that returns this overlay that is built according
to the supplied `x` and `y` arguments. A DynamicMap always contains a
callback that returns a HoloViews object such as an `Element` or
`Overlay`
* These `x` and `y` arguments are supplied by the `PointerXY` stream
that reflect the position of the mouse on the plot.
#### Exercise
Look up the `Ellipse`, `HLine`, and `VLine` elements in the
[HoloViews reference guide](http://holoviews.org/reference/index.html) and see
if the definitions of these elements align with your initial intuitions.
#### Exercise (additional)
If you have time, try running one of the examples in the
'Streams' section of the [HoloViews reference guide](http://holoviews.org/reference/index.html) in the cell below. All the examples in the reference guide should be relatively short and self-contained.
### Selecting a particular earthquake with the mouse
Now we only need two more concepts before we can set up the appropriate
mechanism to select a particular earthquake on the hvPlot-generated
Scatter plot we started with.
First, we can attach a stream to an existing HoloViews element such as
the earthquake distribution generated with hvplot:
```
selection_stream = streams.Selection1D(source=high_mag_quakes)
```
Next we need to enable the 'tap' tool on our Scatter to instruct Bokeh
to enable the desired selection mechanism in the browser.
```
high_mag_quakes.opts(tools=['tap'])
```
The Bokeh default alpha of points which are unselected is going to be too low when we overlay these points on a tile source. We can use the HoloViews options system to pick a better default as follows:
```
hv.opts.defaults(hv.opts.Points(nonselection_alpha=0.4))
```
The tap tool is in the toolbar with the icon showing the concentric
circles and plus symbol. If you enable this tool, you should be able to pick individual earthquakes above by tapping on them.
Now we can make a DynamicMap that uses the stream we defined to show the index of the earthquake selected via the `hv.Text` element:
```
def labelled_callback(index):
if len(index) == 0:
return hv.Text(x=0,y=0, text='')
first_index = index[0] # Pick only the first one if multiple are selected
row = most_severe_projected.iloc[first_index]
return hv.Text(x=row.easting,y=row.northing,text='%d : %s' % (first_index, row.place)).opts(color='white')
labeller = hv.DynamicMap(labelled_callback, streams=[selection_stream])
```
This labeller receives the index argument from the Selection1D stream
which corresponds to the row of the original dataframe (`most_severe`)
that was selected. This lets us present the index and place value using
`hv.Text` which we then position at the corresponding latitude and
longitude to label the chosen earthquake.
Finally, we overlay this labeller `DynamicMap` over the original
plot. Now by using the tap tool you can see the index number of an
earthquake followed by the assigned place name:
```
(esri * high_mag_quakes.opts(tools=['tap']) * labeller).opts(hv.opts.Scatter(tools=['hover']))
```
#### Exercise
Pick an earthquake point above and using the displayed index, display the corresponding row of the `most_severe` dataframe using the `.iloc` method in the following cell.
### Building a linked earthquake visualizer
Now we will build a visualization that achieves the following:
* The user can select an earthquake with magnitude `>7` using the tap
tool in the manner illustrated in the last section.
* In addition to the existing label, we will add concentric circles to further highlight the
selected earthquake location.
* *All* earthquakes within 0.5 degrees of latitude and longitude of the
selected earthquake (~50km) will then be used to supply data for two linked
plots:
1. A histogram showing the distribution of magnitudes in the selected area.
2. A timeseries scatter plot showing the magnitudes of earthquakes over time in the selected area.
The first step is to generate a concentric-circle marker using a similar approach to the `labeller` above. We can write a function that uses `Ellipse` to mark a particular earthquake and pass it to a `DynamicMap`:
```
def mark_earthquake(index):
if len(index) == 0:
return hv.Overlay([])
first_index = index[0] # Pick only the first one if multiple are selected
row = most_severe_projected.iloc[first_index]
return ( hv.Ellipse(row.easting, row.northing, 1.5e6).opts(color='white', alpha=0.5)
* hv.Ellipse(row.easting, row.northing, 3e6).opts(color='white', alpha=0.5))
quake_marker = hv.DynamicMap(mark_earthquake, streams=[selection_stream])
```
Now we can test this component by building an overlay of the `ESRI` tile source, the `>=7` magnitude points, and `quake_marker`:
```
esri * high_mag_quakes.opts(tools=['tap']) * quake_marker
```
Note that you may need to zoom in to your selected earthquake to see the
localized, lower magnitude earthquakes around it.
### Filtering earthquakes by location
We wish to analyse the earthquakes that occur around a particular latitude and longitude. To do this we will define a function that, given a latitude and longitude, returns the rows of a suitable dataframe corresponding to earthquakes within 0.5 degrees of that position:
```
def earthquakes_around_point(df, lat, lon, degrees_dist=0.5):
half_dist = degrees_dist / 2.0
return df[((df['latitude'] - lat).abs() < half_dist)
& ((df['longitude'] - lon).abs() < half_dist)].compute()
```
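To see what this boolean-mask filter does, here is a self-contained sketch on a tiny, made-up pandas `DataFrame` (without the dask-specific `.compute()` call); the coordinates and magnitudes are invented for illustration:

```python
import pandas as pd

def earthquakes_around_point(df, lat, lon, degrees_dist=0.5):
    # Keep rows whose latitude and longitude both fall within
    # +/- degrees_dist/2 of the chosen point (a degrees_dist-wide box).
    half_dist = degrees_dist / 2.0
    return df[((df['latitude'] - lat).abs() < half_dist)
              & ((df['longitude'] - lon).abs() < half_dist)]

toy = pd.DataFrame({'latitude':  [10.0, 10.2, 12.0],
                    'longitude': [20.0, 20.1, 25.0],
                    'mag':       [5.1, 4.2, 6.3]})

nearby = earthquakes_around_point(toy, 10.0, 20.0)
print(len(nearby))  # 2 -- only the first two rows fall inside the box
```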
As it can be slow to filter our dataframes in this way, we can define the following function that can cache the result of filtering `cleaned_reindexed_df` (containing all earthquakes) based on an index pulled from the `most_severe` dataframe:
```
def index_to_selection(indices, cache={}):
if not indices:
return most_severe.iloc[[]]
index = indices[0] # Pick only the first one if multiple are selected
if index in cache: return cache[index]
row = most_severe.iloc[index]
selected_df = earthquakes_around_point(cleaned_reindexed_df, row.latitude, row.longitude)
cache[index] = selected_df
return selected_df
```
The caching will be useful as both of our planned linked plots (i.e. the histogram and the scatter over time) make use of the same earthquake selection once a particular index is supplied from a user selection. This particular caching strategy is rather awkward (and leaks memory!) but it is simple and will serve for the current example. A better approach to caching will be presented in the [Advanced Dashboards](./08_Advanced_Dashboards.ipynb) section of the tutorial.
#### Exercise
Test the `index_to_selection` function above for the index you picked in the previous exercise. Note that the stream supplies a *list* of indices and that the function above only uses the first value given in that list. Do the selected rows look correct?
#### Exercise
Convince yourself that the selected earthquakes are within 0.5$^\circ$ of each other in both latitude and longitude.
<details><summary>Hint</summary><br>
For a given `chosen` index, you can see the distance difference using the following code:
```python
chosen = 235
delta_long = index_to_selection([chosen]).longitude.max() - index_to_selection([chosen]).longitude.min()
delta_lat = index_to_selection([chosen]).latitude.max() - index_to_selection([chosen]).latitude.min()
print("Difference in longitude: %s" % delta_long)
print("Difference in latitude: %s" % delta_lat)
```
</details>
### Linked plots
So far we have overlayed the display updates on top of the existing
spatial distribution of earthquakes. However, there is no requirement
that the data is overlaid and we might want to simply attach an entirely
new, derived plot that dynamically updates to the side.
Using the same principles as we have already seen, we can define a
`DynamicMap` that returns `Histogram` distributions of earthquake
magnitude:
```
def histogram_callback(index):
title = 'Distribution of all magnitudes within half a degree of selection'
selected_df = index_to_selection(index)
return selected_df.hvplot.hist(y='mag', bin_range=(0,10), bins=20, color='red', title=title)
histogram = hv.DynamicMap(histogram_callback, streams=[selection_stream])
```
The only real difference in the approach here is that we can still use
`.hvplot` to generate our elements instead of declaring the HoloViews
elements explicitly. In this example, `.hvplot.hist` is used.
The exact same principles can be used to build the scatter callback and `temporal_distribution` `DynamicMap`:
```
def scatter_callback(index):
title = 'Temporal distribution of all magnitudes within half a degree of selection '
selected_df = index_to_selection(index)
return selected_df.hvplot.scatter('time', 'mag', color='green', title=title)
temporal_distribution = hv.DynamicMap(scatter_callback, streams=[selection_stream])
```
Lastly, let us define a `DynamicMap` that draws a `VLine` to mark the time at which the selected earthquake occurs so we can see which tremors may have been aftershocks immediately after that major earthquake occurred:
```
def vline_callback(index):
if not index:
return hv.VLine(0).opts(alpha=0)
row = most_severe.iloc[index[0]]
return hv.VLine(row.time).opts(line_width=2, color='black')
temporal_vline = hv.DynamicMap(vline_callback, streams=[selection_stream])
```
We now have all the pieces we need to build an interactive, linked visualization of earthquake data.
#### Exercise
Test the `histogram_callback` and `scatter_callback` callback functions by supplying your chosen index, remembering that these functions require a list argument in the following cell.
### Putting it together
Now we can combine the components we have already built as follows to create a dynamically updating plot together with an associated, linked histogram:
```
((esri * high_mag_quakes.opts(tools=['tap']) * labeller * quake_marker)
+ histogram + temporal_distribution * temporal_vline).cols(1)
```
We now have a custom interactive visualization that builds on the output of `hvplot` by making use of the underlying HoloViews objects that it generates.
## Conclusion
When exploring data it can be convenient to use the `.plot` API to quickly visualize a particular dataset. By calling `.plot` to generate different plots over the course of a session, it is possible to gradually build up a mental model of how a particular dataset is structured. While this works well for simple datasets, it can be more efficient to build a linked visualization with support for direct user interaction as a tool for more rapidly gaining insight.
In the workflow presented here, building such custom interaction is relatively quick and easy and does not involve throwing away prior code used to generate simpler plots. In the spirit of 'short cuts not dead ends', we can use the HoloViews output of `hvplot` that we used in our initial exploration to build rich visualizations with custom interaction to explore our data at a deeper level.
These interactive visualizations not only allow for custom interactions beyond the scope of `hvplot` alone, but they can display visual annotations not offered by the `.plot` API. In particular, we can overlay our data on top of tile sources, generate interactive textual annotations, draw shapes such as circles, mark horizontal and vertical marker lines, and much more. Using HoloViews you can build visualizations that allow you to directly interact with your data in a useful and intuitive manner.
In this notebook, the earthquakes plotted were either filtered early on by magnitude (`>=7`) or dynamically to analyse only the earthquakes within a small geographic distance. This allowed us to use Bokeh directly without any special handling and without having to worry about the performance issues that would occur if we were to try to render the whole dataset at once.
In the next section we will see how such large datasets can be visualized directly using Datashader.
# Mathematical Functions, Strings, and Objects
## This chapter introduces Python functions for performing common mathematical operations
- A function is a group of statements that accomplishes a particular task; you can think of a function as one small unit of functionality. In practice, a function body is best kept to no more than one screen in length
- Python's built-in functions can be used without any import
<img src="../Photo/15.png"></img>
## Try out Python's built-in functions
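For instance (a small illustrative sketch), built-ins such as `abs`, `pow`, `round`, `max`, and `min` are available immediately, with no import:

```python
# Built-in functions work without importing anything
print(abs(-10))         # 10
print(pow(2, 5))        # 32
print(round(3.456, 1))  # 3.5
print(max(3, 7, 5))     # 7
print(min(3, 7, 5))     # 3
```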
## Python's `math` module provides many mathematical functions
<img src="../Photo/16.png"></img>
<img src="../Photo/17.png"></img>
```
import math
%time
a = 10
res = math.fabs(a)
print(res)
%time
a = 10
res = abs(a)
print(res)
x = eval(input('given a number'))
res = math.fabs(2*x)
print(res)
b = -2
res = math.ceil(b)
print(res)
b = 3.5
res = math.floor(b)
print(res)
```
## The two mathematical constants pi and e are available as `math.pi` and `math.e`
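A small illustrative sketch of the two constants in use:

```python
import math

print(math.pi)                  # 3.141592653589793
print(math.e)                   # 2.718281828459045
print(math.cos(math.pi))        # -1.0
print(math.degrees(math.pi/2))  # 90 degrees
```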
## EP:
- Using the math library, write a program that reads three vertices (x, y) from the user and returns the triangle's three angles
- Note: Python computes angles in radians, which must be converted to degrees
<img src="../Photo/18.png">
```
a = 1
b = 1
c = math.sqrt(2)
A = math.acos((math.pow(a,2)-math.pow(b,2)-math.pow(c,2))/(-2*b*c))
B = math.acos((math.pow(b,2)-math.pow(a,2)-math.pow(c,2))/(-2*a*c))
C = math.acos((math.pow(c,2)-math.pow(b,2)-math.pow(a,2))/(-2*b*a))
A = math.degrees(A)
B = math.degrees(B)
C = math.degrees(C)
print(A)
print(B)
print(C)
# To read the vertices from the user instead of hard-coding the sides:
# x1, y1 = eval(input('enter vertex 1 as x, y: '))
```
## Strings and characters
- In Python, a string must be written inside single or double quotes; a string spanning multiple lines can be written with triple quotes (""")
- When a triple-quoted literal is assigned to a variable it becomes a string; otherwise it acts as a multi-line comment
```
a = 'joker'
print(type(a))
# A plain double-quoted string cannot span multiple lines;
# triple quotes are needed for that:
b = """joker
is
a
good
man"""
# A backslash escapes the quote so it becomes part of the string:
c = 'joker \''
```
## ASCII and Unicode
- <img src="../Photo/19.png"></img>
- <img src="../Photo/20.png"></img>
- <img src="../Photo/21.png"></img>
## The functions `ord` and `chr`
- `ord` returns a character's code point (its ASCII value for ASCII characters)
- `chr` returns the character for a given code point
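A quick sketch of the two functions:

```python
print(ord('A'))           # 65
print(ord('a'))           # 97
print(chr(66))            # B
print(chr(ord('A') + 1))  # B -- shifting a character by one code point
```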
## EP:
- Use `ord` and `chr` to implement simple email encryption
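One possible sketch of such an encryption scheme (the shift value `key` and the sample address are arbitrary choices for illustration): shift each character's code point up by a fixed offset to encrypt, and shift back down to decrypt:

```python
def encrypt(text, key=3):
    # Shift every character's code point up by `key`
    return ''.join(chr(ord(ch) + key) for ch in text)

def decrypt(text, key=3):
    # Shift back down by `key` to recover the original text
    return ''.join(chr(ord(ch) - key) for ch in text)

secret = encrypt('joker@mail.com')
print(secret)           # unreadable shifted text
print(decrypt(secret))  # joker@mail.com
```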
## Escape sequences with `\`
- a = "He said, \"John's program is easy to read\""
- The backslash strips the character of its usual meaning
- In general, escaping is only needed when a character clashes with the syntax's default interpretation
## Advanced `print`
- The `end` parameter sets how the printed output is terminated
- By default, each `print` ends with a newline
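A minimal sketch of the `end` parameter (the words used here are arbitrary):

```python
# By default every print ends with a newline; end=' ' keeps the
# next print on the same line instead.
print('joker', end=' ')
print('is', end=' ')
print('here')  # output: joker is here

# The equivalent single string, built explicitly for comparison:
line = ' '.join(['joker', 'is', 'here'])
print(line)
```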
## The function `str`
- Coerces a value into the string type
- Other types will be covered later (list, set, tuple, ...)
## String concatenation
- Use "+" directly
- The `join()` function
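A small sketch of both approaches (note that `+` requires both operands to be strings, so numbers must be converted with `str` first):

```python
first = 'good' + ' ' + 'man'         # "+" joins strings directly
print(first)   # good man

second = ', '.join(['a', 'b', 'c'])  # join() glues a list of strings
print(second)  # a, b, c

mixed = str(100) + ' times'          # convert the number with str() first
print(mixed)   # 100 times
```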
## EP:
- Concatenate "Welcome", "to", and "Python"
- Concatenate the int 100 with "joker is a bad man"
- Read a string from the console
> Read in a name and print a compliment for that person
## Case study: minimum number of coins
- Develop a program that asks the user for a total monetary amount as a floating-point value in dollars and cents, and reports the number of dollars, quarters, dimes, nickels, and pennies
<img src="../Photo/22.png"></img>
- Handling floating-point values is a weak spot of Python; for numeric data processing the NumPy types are generally used instead
<img src="../Photo/23.png"></img>
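A possible sketch of this case study (an illustration, not the book's reference solution): converting the amount to an integer number of pennies first sidesteps the floating-point issues noted above. The hard-coded `amount` stands in for user input:

```python
amount = 11.56  # in place of: amount = eval(input('Enter an amount, e.g. 11.56: '))

total_cents = int(round(amount * 100))  # work in whole pennies

# divmod(a, b) returns (a // b, a % b) in one step
dollars, remainder = divmod(total_cents, 100)
quarters, remainder = divmod(remainder, 25)
dimes, remainder = divmod(remainder, 10)
nickels, pennies = divmod(remainder, 5)

print(dollars, 'dollars')    # 11
print(quarters, 'quarters')  # 2
print(dimes, 'dimes')        # 0
print(nickels, 'nickels')    # 1
print(pennies, 'pennies')    # 1
```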
## id and type
- `id` shows an object's memory address; it will come up later in comparison statements
- `type` shows an element's type
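For example:

```python
a = 'joker'
b = a                  # b is bound to the same object as a
print(type(a))         # <class 'str'>
print(id(a) == id(b))  # True: same object, same address
print(type(3.5))       # <class 'float'>
```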
## See the book for other formatting statements
# Homework
- 1
<img src="../Photo/24.png"></img>
<img src="../Photo/25.png"></img>
```
import math
r = eval(input('enter the distance from the center to a vertex: '))
s = 2 * r * (math.sin(math.pi/5))
A1 = 5 * s * s
A2 = 4 * math.tan(math.pi/5)
Area = A1/A2
print(Area)
```
- 2
<img src="../Photo/26.png"></img>
```
import math

x1, y1 = eval(input('enter point 1 as latitude, longitude: '))
x2, y2 = eval(input('enter point 2 as latitude, longitude: '))
d1 = math.sin(math.radians(x1)) * math.sin(math.radians(x2))
d2 = math.cos(math.radians(x1)) * math.cos(math.radians(x2)) * math.cos(math.radians(y1-y2))
d3 = math.acos(d1 + d2)
d = 6371.0 * d3
print(d,"km")
```
- 3
<img src="../Photo/27.png"></img>
```
import math

s = eval(input('enter the side length: '))
A1 = 5 * s ** 2
A2 = 4 * math.tan(math.pi/5)
Area = A1/A2
print('The area of the pentagon is', Area)
```
- 5
<img src="../Photo/29.png"></img>
<img src="../Photo/30.png"></img>
```
a = eval(input('enter an integer: '))
print(chr(a))
```
- 6
<img src="../Photo/31.png"></img>
```
name = input('employee name: ')   # employee's name
hours = eval(input('hours: '))    # hours worked
rate = eval(input('rate: '))      # pay per hour
fed = eval(input('fed: '))        # federal tax withholding rate
state = eval(input('state: '))    # state tax withholding rate
gross = hours * rate              # gross pay (before deductions)
fed_1 = fed * gross               # federal withholding deduction
state_1 = state * gross           # state withholding deduction
net = gross - fed_1 - state_1     # net pay after deductions
tot = fed_1 + state_1             # total deductions
print(name)
print(hours)
print(rate)
print(fed)
print(state)
print(gross)
print('Deductions:')
print('Federal Withholding (%.1f%%):' % (fed * 100), fed_1)
print('State Withholding (%.1f%%):' % (state * 100), state_1)
print('Total Deduction:',tot)
print(net)
```
- 7
<img src="../Photo/32.png"></img>
- 8 (advanced):
> Encrypt a piece of text, then write the decrypted file to local storage
# Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
```
import os
os.chdir(os.getcwd() + '/..')
# Run some setup code for this notebook
import random
import numpy as np
import matplotlib.pyplot as plt
from utils.data_utils import get_CIFAR10_data
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data('datasets/cifar-10-batches-py', subtract_mean=True)
for k, v in data.items():
print('%s: ' % k, v.shape)
from utils.metrics_utils import rel_error
```
# Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file `layers/layers.py`, implement the forward pass for the convolution layer in the function `conv_forward_naive`.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
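As a reference for what the function is expected to compute, here is one possible (deliberately slow) sketch of a naive forward pass over input `x` of shape `(N, C, H, W)` and filters `w` of shape `(F, C, HH, WW)`; your own implementation in `layers/layers.py` may of course be structured differently:

```python
import numpy as np

def conv_forward_naive_sketch(x, w, b, conv_param):
    # x: (N, C, H, W) inputs; w: (F, C, HH, WW) filters; b: (F,) biases
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    # Zero-pad only the two spatial dimensions
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                 # each input image
        for f in range(F):             # each filter
            for i in range(H_out):     # each output row
                for j in range(W_out): # each output column
                    h0, w0 = i * stride, j * stride
                    window = x_pad[n, :, h0:h0 + HH, w0:w0 + WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out

# Tiny sanity check: a 3x3 all-ones input convolved with a 3x3 all-ones
# filter (no padding, stride 1) just sums the 9 entries.
x = np.ones((1, 1, 3, 3))
w = np.ones((1, 1, 3, 3))
b = np.zeros(1)
print(conv_forward_naive_sketch(x, w, b, {'stride': 1, 'pad': 0}))  # [[[[9.]]]]
```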
You can test your implementation by running the following:
```
from layers.layers import conv_forward_naive
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]])
# Compare your output to ours; difference should be around 2e-8
print('Testing conv_forward_naive')
print('difference: ', rel_error(out, correct_out))
```
# Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
```
from scipy.misc import imread, imresize
kitten, puppy = imread('test/kitten.jpg'), imread('test/puppy.jpg')
# kitten is wide, and puppy is already square
print(kitten.shape, puppy.shape)
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]
print(kitten_cropped.shape, puppy.shape)
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
""" Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
```
# Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function `conv_backward_naive` in the file `layers/layers.py`. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
```
from layers.layers import conv_backward_naive
from utils.gradient_check import eval_numerical_gradient_array
np.random.seed(231)
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-8'
print('Testing conv_backward_naive function')
print('dx error: ', rel_error(dx, dx_num))
print('dw error: ', rel_error(dw, dw_num))
print('db error: ', rel_error(db, db_num))
```
# Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function `max_pool_forward_naive` in the file `layers/layers.py`. Again, don't worry too much about computational efficiency.
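As with convolution, here is a straightforward (and slow) sketch of what max pooling computes; your version in `layers/layers.py` need not follow this structure:

```python
import numpy as np

def max_pool_forward_naive_sketch(x, pool_param):
    # x: (N, C, H, W); each output entry is the max over one pooling window
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H, W = x.shape
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            h0, w0 = i * stride, j * stride
            # Reduce over the two spatial axes of the window
            out[:, :, i, j] = x[:, :, h0:h0 + ph, w0:w0 + pw].max(axis=(2, 3))
    return out

# 2x2 pooling with stride 2 keeps the largest value in each 2x2 block
x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
print(max_pool_forward_naive_sketch(x, {'pool_height': 2, 'pool_width': 2, 'stride': 2}))
# [[[[ 5.  7.]
#    [13. 15.]]]]
```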
Check your implementation by running the following:
```
from layers.layers import max_pool_forward_naive
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print('Testing max_pool_forward_naive function:')
print('difference: ', rel_error(out, correct_out))
```
# Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function `max_pool_backward_naive` in the file `layers/layers.py`. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
```
from layers.layers import max_pool_backward_naive
np.random.seed(231)
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print('Testing max_pool_backward_naive function:')
print('dx error: ', rel_error(dx, dx_num))
```
# Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file `layers/fast_conv_layers.py`.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the `layers` directory:
```bash
python setup.py build_ext --inplace
```
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
**NOTE:** The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
```
from layers.fast_conv_layers import conv_forward_fast, conv_backward_fast
from time import time
np.random.seed(231)
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print('Testing conv_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Difference: ', rel_error(out_naive, out_fast))
print()
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting conv_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
print('dw difference: ', rel_error(dw_naive, dw_fast))
print('db difference: ', rel_error(db_naive, db_fast))
from layers.fast_conv_layers import max_pool_forward_fast, max_pool_backward_fast
np.random.seed(231)
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print('Testing pool_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting pool_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
```
# Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file `layers/layer_utils.py`, you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
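The sandwich pattern is just function composition with the intermediate caches bundled together so the backward pass can unpack them in reverse order. A self-contained toy in the same spirit — it uses an affine layer instead of a convolution for brevity, and these helper names are illustrative, not the ones in `layers/layer_utils.py`:

```python
import numpy as np

def affine_forward(x, w, b):
    out = x @ w + b
    return out, (x, w)                       # cache what backward needs

def relu_forward(a):
    return np.maximum(0, a), a               # cache the pre-activation

def affine_relu_forward(x, w, b):
    a, fc_cache = affine_forward(x, w, b)
    out, relu_cache = relu_forward(a)
    return out, (fc_cache, relu_cache)       # bundle both caches

def affine_relu_backward(dout, cache):
    fc_cache, relu_cache = cache             # unpack in reverse order
    da = dout * (relu_cache > 0)             # ReLU backward
    x, w = fc_cache
    return da @ w.T, x.T @ da, da.sum(axis=0)  # dx, dw, db

x, w, b = np.random.randn(4, 3), np.random.randn(3, 5), np.zeros(5)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(np.ones_like(out), cache)
```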
```
from layers.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
np.random.seed(231)
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print('Testing conv_relu_pool')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
from layers.layer_utils import conv_relu_forward, conv_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print('Testing conv_relu:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
```
# Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file `classifiers/cnn.py` and complete the implementation of the `ThreeLayerConvNet` class. Run the following cells to help you debug:
## Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about `log(C)` for `C` classes. When we add regularization this should go up.
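The `log(C)` figure comes from softmax on uniform scores: with small random weights each class gets probability roughly `1/C`, so the expected loss is `-log(1/C) = log(C)`. A quick numeric check of that expected value (assuming the standard softmax cross-entropy loss):

```python
import numpy as np

C = 10
scores = np.zeros(C)                      # random init => roughly uniform scores
p = np.exp(scores) / np.exp(scores).sum()
loss = -np.log(p[0])                      # any class works; p is uniform
print(loss, np.log(C))                    # both are about 2.3026
```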
```
from classifiers.cnn import ThreeLayerConvNet
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss)
model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss)
print(np.log(10))
```
## Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to 1e-2.
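These checks all rely on a `rel_error` helper that is assumed to be defined elsewhere in the assignment; the usual definition (maximum relative error, with a floor in the denominator so near-zero entries don't blow the ratio up) looks roughly like this — your `utils` module may differ slightly:

```python
import numpy as np

def rel_error(x, y):
    """Max relative error, floored so near-zero entries stay stable."""
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
```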
```
from utils.gradient_check import eval_numerical_gradient
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
np.random.seed(231)
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
    f = lambda _: model.loss(X, y)[0]
    param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
    e = rel_error(param_grad_num, grads[param_name])
    print('%s max relative error: %e' % (param_name, e))
```
## Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
```
from base.solver import Solver
np.random.seed(231)
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=15, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
```
Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
```
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
```
## Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
```
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
```
## Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
```
from utils.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
```
# Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape `(N, D)` and produces outputs of shape `(N, D)`, where we normalize across the minibatch dimension `N`. For data coming from convolutional layers, batch normalization needs to accept inputs of shape `(N, C, H, W)` and produce outputs of shape `(N, C, H, W)` where the `N` dimension gives the minibatch size and the `(H, W)` dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and at different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the `C` feature channels by computing statistics over both the minibatch dimension `N` and the spatial dimensions `H` and `W`.
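In other words, the training-time normalization is just a per-channel mean/variance over axes `(0, 2, 3)`. A minimal forward-pass sketch, ignoring the running averages and the cache that the real `spatial_batchnorm_forward` must also maintain:

```python
import numpy as np

def spatial_bn_forward_sketch(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W); statistics are shared across N, H, W per channel
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    C = x.shape[1]
    return gamma.reshape(1, C, 1, 1) * x_hat + beta.reshape(1, C, 1, 1)

x = 4 * np.random.randn(2, 3, 4, 5) + 10
out = spatial_bn_forward_sketch(x, np.ones(3), np.zeros(3))
# per-channel means are now close to 0 and stds close to 1
```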
## Spatial batch normalization: forward
In the file `layers/layers.py`, implement the forward pass for spatial batch normalization in the function `spatial_batchnorm_forward`. Check your implementation by running the following:
```
from layers.layers import spatial_batchnorm_forward, spatial_batchnorm_backward
np.random.seed(231)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print('Before spatial batch normalization:')
print(' Shape: ', x.shape)
print(' Means: ', x.mean(axis=(0, 2, 3)))
print(' Stds: ', x.std(axis=(0, 2, 3)))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization:')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization (nontrivial gamma, beta):')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
np.random.seed(231)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in range(50):
    x = 2.3 * np.random.randn(N, C, H, W) + 13
    spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After spatial batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=(0, 2, 3)))
print(' stds: ', a_norm.std(axis=(0, 2, 3)))
```
## Spatial batch normalization: backward
In the file `layers/layers.py`, implement the backward pass for spatial batch normalization in the function `spatial_batchnorm_backward`. Run the following to check your implementation using a numeric gradient check:
```
np.random.seed(231)
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
```
```
import collections
import itertools
import math
import numpy as np
import pandas as pd
from mesa import Model, Agent
from mesa.time import RandomActivation
class TrafficModel(Model):
    def __init__(self, n_agents, seed=None):
        super().__init__(seed=seed)
        self.n_agents = n_agents
        self.scheduler = RandomActivation(self)
        self.agents = []
        # initial set of positions, evenly spaced around the circle
        positions = np.linspace(0, math.tau, n_agents, endpoint=False)
        # for each position, create an agent
        for unique_id, pos in enumerate(positions):
            car = Car(pos, unique_id, self)
            self.scheduler.add(car)
            self.agents.append(car)
        # set the car in front (the list wraps around)
        for agent_a, agent_b in zip(self.agents,
                                    self.agents[1:] + [self.agents[0]]):
            agent_a.car_in_front = agent_b

    def step(self):
        self.scheduler.step()
class Car(Agent):
    acceleration = 0.05
    deceleration = 0.075
    # max_speed = 0.1

    def __init__(self, pos, unique_id, model):
        super().__init__(unique_id, model)
        self.pos = pos
        self.car_in_front = None
        self.max_speed = 0.2 + self.random.normalvariate(0, 0.0125)
        self.speed = self.max_speed
        self.lookahead = self.max_speed * 2

    def determine_distance(self):
        distance = self.car_in_front.pos - self.pos
        # distance to the front car can be negative for the last
        # created agent relative to the first created agent
        if distance < 0:
            distance += math.tau
        return distance

    def step(self):
        distance = self.determine_distance()
        # do we need to brake?
        if distance < self.lookahead:
            self.speed = max(self.speed - self.deceleration, 0)
        elif self.speed < self.max_speed:
            self.speed = min(self.speed + self.acceleration, self.max_speed)
        self.pos += self.speed
def visualize_model(model):
    fig, ax = plt.subplots()
    positions = [agent.pos for agent in model.agents]
    x_position = np.cos(positions)
    y_position = np.sin(positions)
    points = ax.scatter(x_position, y_position)
    ax.set_aspect('equal')
    ax.set_xticks([])
    ax.set_yticks([])
    ax.set_xlim(xmin=-1.5, xmax=1.5)
    ax.set_ylim(ymin=-1.5, ymax=1.5)
    return fig, points
model = TrafficModel(20)
visualize_model(model)
model.step()
plt.show()
from matplotlib.animation import FuncAnimation
from matplotlib import animation, rc, collections
from matplotlib import pyplot as plt
from IPython.display import HTML
number_of_cars = 20
model = TrafficModel(number_of_cars)
fig, points = visualize_model(model)
ax = plt.gca()
def update(frame):
    model.step()
    positions = [agent.pos for agent in model.agents]
    x_position = np.cos(positions)
    y_position = np.sin(positions)
    pos = np.asarray([x_position, y_position]).T
    points.set_offsets(pos)
    ax.set(title=str(number_of_cars) + " cars on the road, t= " + str(model.scheduler.steps))
    return positions
anim = FuncAnimation(fig, update, interval=100, frames=300);
HTML(anim.to_html5_video())
#writervideo = animation.FFMpegWriter(fps=12,bitrate=900)
#anim.save("cars.mp4", writer=writervideo,dpi=200)
```
Wayne Nixalo - 2017-Jun-12 17:27
Code-Along of Lesson 5 JNB.
Lesson 5 NB: https://github.com/fastai/courses/blob/master/deeplearning1/nbs/lesson5.ipynb
[Lecture](https://www.youtube.com/watch?v=qvRL74L81lg)
```
import theano
%matplotlib inline
import sys, os
sys.path.insert(1, os.path.join('utils'))
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
model_path = 'data/imdb/models/'
%mkdir -p $model_path # -p : make intermediate directories as needed
```
## Setup data
We're going to look at the IMDB dataset, which contains movie reviews from IMDB, along with their sentiment. Keras comes with some helpers for this dataset.
```
from keras.datasets import imdb
idx = imdb.get_word_index()
```
This is the word list:
```
idx_arr = sorted(idx, key=idx.get)
idx_arr[:10]
```
...and this is the mapping from id to word:
```
idx2word = {v: k for k, v in idx.iteritems()}
```
We download the reviews using code copied from keras.datasets:
```
# getting the dataset directly because keras's version makes some changes
path = get_file('imdb_full.pkl',
origin='https://s3.amazonaws.com/text-datasets/imdb_full.pkl',
md5_hash='d091312047c43cf9e4e38fef92437263')
f = open(path, 'rb')
(x_train, labels_train), (x_test, labels_test) = pickle.load(f)
# apparently cpickle can be x1000 faster than pickle? hmm
len(x_train)
```
Here's the 1st review. As you see, the words have been replaced by ids. The ids can be looked up in idx2word.
```
', '.join(map(str, x_train[0]))
```
The first word of the first review is 23022. Let's see what that is.
```
idx2word[23022]
x_train[0]
```
Here's the whole review, mapped from ids to words.
```
' '.join([idx2word[o] for o in x_train[0]])
```
The labels are 1 for positive, 0 for negative
```
labels_train[:10]
```
Reduce vocabulary size by setting rare words to max index.
```
vocab_size = 5000
trn = [np.array([i if i < vocab_size-1 else vocab_size-1 for i in s]) for s in x_train]
test = [np.array([i if i < vocab_size-1 else vocab_size-1 for i in s]) for s in x_test]
```
Look at distribution of lengths of sentences
```
lens = np.array(map(len, trn))
(lens.max(), lens.min(), lens.mean())
```
Pad (with zero) or truncate each sentence to make consistent length.
```
seq_len = 500
# keras.preprocessing.sequence
trn = sequence.pad_sequences(trn, maxlen=seq_len, value=0)
test = sequence.pad_sequences(test, maxlen=seq_len, value=0)
```
This results in nice rectangular matrices that can be passed to ML algorithms. Reviews shorter than 500 words are prepadded with zeros, those greater are truncated.
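The default Keras behaviour can be sketched in pure Python: sequences shorter than `maxlen` are pre-padded with the fill value, and longer ones keep only their last `maxlen` ids. A simplified stand-in for what `sequence.pad_sequences` does to a single sequence:

```python
def pad_or_truncate(seq, maxlen, value=0):
    if len(seq) >= maxlen:
        return seq[-maxlen:]                         # keep the tail (pre-truncation)
    return [value] * (maxlen - len(seq)) + seq       # pre-pad with the fill value

print(pad_or_truncate([7, 8, 9], 5))           # [0, 0, 7, 8, 9]
print(pad_or_truncate([1, 2, 3, 4, 5, 6], 5))  # [2, 3, 4, 5, 6]
```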
```
trn.shape
trn[0]
```
## Create simple models
### Single hidden layer NN
The simplest model that tends to give reasonable results is a single hidden layer net. So let's try that. Note that we can't expect to get any useful results by feeding word ids directly into a neural net - so instead we use an embedding to replace them with a vector of 32 (initially random) floats for each word in the vocab.
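Under the hood, the embedding layer is nothing but a lookup table: a `(vocab_size, 32)` matrix indexed by word id. A numpy illustration with toy numbers (not the trained weights):

```python
import numpy as np

vocab_size, emb_dim = 5000, 32
E = np.random.randn(vocab_size, emb_dim) * 0.05   # one 32-d vector per word id
ids = np.array([23, 4, 999, 4999])                # a (very short) review as word ids
vectors = E[ids]                                  # fancy indexing = the lookup
print(vectors.shape)                              # (4, 32)
```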
```
model = Sequential([
Embedding(vocab_size, 32, input_length=seq_len),
Flatten(),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.summary()
# model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
# redoing on Linux
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
```
The [Stanford paper](http://ai.stanford.edu/~amaas/papers/wvSent_acl2011.pdf) that this dataset is from cites a state-of-the-art accuracy (without unlabelled data) of 0.883. So we're short of that, but on the right track.
### Single Conv layer with Max Pooling
A CNN is likely to work better, since it's designed to take advantage of ordered data. We'll need to use a 1D CNN, since a sequence of words is 1D.
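A 1D convolution over word vectors slides a window of `filter_size` consecutive positions and takes a dot product with each filter across the full embedding depth. A naive numpy version with 'same' zero padding — a sketch of the operation, not Keras's implementation:

```python
import numpy as np

def conv1d_same(x, filters):
    # x: (seq_len, emb_dim); filters: (n_filters, fsz, emb_dim)
    n_filters, fsz, _ = filters.shape
    pad = fsz // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))          # zero-pad the sequence ends
    out = np.zeros((x.shape[0], n_filters))
    for t in range(x.shape[0]):
        window = xp[t:t + fsz]                    # (fsz, emb_dim) slice
        out[t] = (filters * window).sum(axis=(1, 2))
    return out

x = np.random.randn(10, 32)
out = conv1d_same(x, np.random.randn(64, 5, 32))
print(out.shape)  # (10, 64)
```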
```
# the embedding layer is always the first step in every NLP model
# --> after that layer, you don't have words anymore: vectors
conv1 = Sequential([
Embedding(vocab_size, 32, input_length=seq_len, dropout=0.2),
Dropout(0.2),
Convolution1D(64, 5, border_mode='same', activation='relu'),
Dropout(0.2),
MaxPooling1D(),
Flatten(),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')])
conv1.summary()
conv1.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
# conv1.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)
# redoing on Linux w/ GPU
conv1.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)
```
That's well past the Stanford paper's accuracy - another win for CNNs!
*Heh, the above take a lot longer than 4s on my Mac*
```
conv1.save_weights(model_path + 'conv1.h5')
# conv1.load_weights(model_path + 'conv1.h5')
```
## Pre-trained Vectors
You may want to look at wordvectors.ipynb before moving on.
In this section, we replicate the previous CNN, but using pre-trained embeddings.
```
def get_glove_dataset(dataset):
    """Download the requested glove dataset from files.fast.ai
    and return a location that can be passed to load_vectors.
    """
    # see wordvectors.ipynb for info on how these files were
    # generated from the original glove data.
    md5sums = {'6B.50d':  '8e1557d1228decbda7db6dfd81cd9909',
               '6B.100d': 'c92dbbeacde2b0384a43014885a60b2c',
               '6B.200d': 'af271b46c04b0b2e41a84d8cd806178d',
               '6B.300d': '30290210376887dcc6d0a5a6374d8255'}
    glove_path = os.path.abspath('data/glove.6B/results')
    %mkdir -p $glove_path
    return get_file(dataset,
                    'https://files.fast.ai/models/glove/' + dataset + '.tgz',
                    cache_subdir=glove_path,
                    md5_hash=md5sums.get(dataset, None),
                    untar=True)
# not able to download from above, so using code from wordvectors_CodeAlong.ipynb to load
def get_glove(name):
    with open(path + 'glove.' + name + '.txt', 'r') as f:
        lines = [line.split() for line in f]
    words = [d[0] for d in lines]
    vecs = np.stack(np.array(d[1:], dtype=np.float32) for d in lines)
    wordidx = {o: i for i, o in enumerate(words)}
    save_array(res_path + name + '.dat', vecs)
    pickle.dump(words, open(res_path + name + '_words.pkl', 'wb'))
    pickle.dump(wordidx, open(res_path + name + '_idx.pkl', 'wb'))
    # # adding return filename
    # return res_path + name + '.dat'
def load_glove(loc):
    return (load_array(loc + '.dat'),
            pickle.load(open(loc + '_words.pkl', 'rb')),
            pickle.load(open(loc + '_idx.pkl', 'rb')))

def load_vectors(loc):
    return (load_array(loc + '.dat'),
            pickle.load(open(loc + '_words.pkl', 'rb')),
            pickle.load(open(loc + '_idx.pkl', 'rb')))
# apparently pickle is a `bit-serializer` or smth like that?
# this isn't working, so instead..
vecs, words, wordidx = load_vectors(get_glove_dataset('6B.50d'))
# trying to load the glove data I downloaded directly, before:
vecs, words, wordix = load_vectors('data/glove.6B/' + 'glove.' + '6B.50d' + '.txt')
# vecs, words, wordix = load_vectors('data/glove.6B/' + 'glove.' + '6B.50d' + '.tgz')
# not successful. get_file(..) returns filepath as '.tar' ? as .tgz doesn't work.
# ??get_file # keras.utils.data_utils.get_file(..)
# that doesn't work either, but method from wordvectors JNB worked so:
path = 'data/glove.6B/'
# res_path = path + 'results/'
res_path = 'data/imdb/results/'
%mkdir -p $res_path
# this way not working; so will pull vecs,words,wordidx manually:
# vecs, words, wordidx = load_vectors(get_glove('6B.50d'))
get_glove('6B.50d')
vecs, words, wordidx = load_glove(res_path + '6B.50d')
# NOTE: yay it worked..!..
def create_emb():
    n_fact = vecs.shape[1]
    emb = np.zeros((vocab_size, n_fact))
    for i in xrange(1, len(emb)):
        word = idx2word[i]
        if word and re.match(r"^[a-zA-Z0-9\-]*$", word):
            src_idx = wordidx[word]
            emb[i] = vecs[src_idx]
        else:
            # If we can't find the word in glove, randomly initialize
            emb[i] = normal(scale=0.6, size=(n_fact,))
    # This is our "rare word" id - we want to randomly initialize
    emb[-1] = normal(scale=0.6, size=(n_fact,))
    emb /= 3
    return emb
emb = create_emb()
# this embedding matrix is now the glove word vectors, indexed according to
# the imdb dataset.
```
We pass our embedding matrix to the Embedding constructor, and set it to non-trainable.
```
model = Sequential([
Embedding(vocab_size, 50, input_length=seq_len, dropout=0.2,
weights=[emb], trainable=False),
Dropout(0.25),
Convolution1D(64, 5, border_mode='same', activation='relu'),
Dropout(0.25),
MaxPooling1D(),
Flatten(),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')])
# this is copy-pasted of the previous code, with the addition of the
# weights being the pre-trained embeddings.
# We figure the weights are pretty good, so we'll initially set
# trainable to False. Will finetune due to some words missing or etc..
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
# running on GPU
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
```
We've already beaten our previous model! But let's fine-tune the embedding weights - especially since the words we couldn't find in glove just have random embeddings.
```
model.layers[0].trainable=True
model.optimizer.lr=1e-4
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
# running on GPU
model.optimizer.lr=1e-4
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
# the above was supposed to be 3 total epochs but I did 4 by mistake
model.save_weights(model_path+'glove50.h5')
```
## Multi-size CNN
This is an implementation of a multi-size CNN as shown in Ben Bowles' [blog post](https://quid.com/feed/how-quid-uses-deep-learning-with-small-data).
```
from keras.layers import Merge
```
We use the functional API to create multiple conv layers of different sizes, and then concatenate them.
```
graph_in = Input((vocab_size, 50))
convs = [ ]
for fsz in xrange(3, 6):
    x = Convolution1D(64, fsz, border_mode='same', activation='relu')(graph_in)
    x = MaxPooling1D()(x)
    x = Flatten()(x)
    convs.append(x)
out = Merge(mode='concat')(convs)
graph = Model(graph_in, out)
emb = create_emb()
```
We then replace the conv/max-pool layer in our original CNN with the concatenated conv layers.
```
model = Sequential ([
Embedding(vocab_size, 50, input_length=seq_len, dropout=0.2, weights=[emb]),
Dropout(0.2),
graph,
Dropout(0.5),
Dense(100, activation='relu'),
Dropout(0.7),
Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
# on GPU
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
```
Interestingly, I found that in this case I got best results when I started the embedding layer as being trainable, and then set it to non-trainable after a couple of epochs. I have no idea why! *hmmm*
```
model.layers[0].trainable=False
model.optimizer.lr=1e-5
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
# on gpu
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
conv1.save_weights(model_path + 'conv1_1.h5')
# conv1.load_weights(model_path + 'conv1.h5')
```
This more complex architecture has given us another boost in accuracy.
## LSTM
We haven't covered this bit yet!
```
model = Sequential([
Embedding(vocab_size, 32, input_length=seq_len, mask_zero=True,
W_regularizer=l2(1e-6), dropout=0.2),
LSTM(100, consume_less='gpu'),
Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=5, batch_size=64)
# NOTE: if this took 100s/epoch using TitanX's or Tesla K80s ... use the Linux machine for this
conv1.save_weights(model_path + 'LSTM_1.h5')
```
```
# Dependencies:
import os
import pandas as pd
import datetime
import time
import pymongo
from bs4 import BeautifulSoup
from splinter import Browser
import requests
from time import sleep
```
# News Article
```
# URL of page to be scraped
url = 'https://mars.nasa.gov/news/'
# Retrieve page with the requests module
html = requests.get(url).text
# Parse HTML with Beautiful Soup
title_soup = BeautifulSoup(html, 'html.parser')
# Extract the latest headline and its teaser paragraph
news_title = title_soup.find('div', class_="content_title").text
news_par = title_soup.find('div', class_='rollover_description_inner').text
print(news_title)
print(news_par)
```
# Mars Image
```
# Initialize Splinter
browser = Browser('chrome', headless=False)
# Directing splinter to webpage
mars_image_url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(mars_image_url)
# Move to a second page
target1 = "a[class='button fancybox']"
browser.find_by_tag(target1).click()
# Move to next page
browser.find_by_text('more info ').click()
# Move to next page with image url
target2 = "figure[class='lede']"
browser.find_by_tag(target2).click()
# Create BeautifulSoup object
image_soup = BeautifulSoup(browser.html, 'html.parser')
# Identify and return url source
image_link = image_soup.find('img', src = True)
# Extract url
featured_image_url = image_link['src']
print(featured_image_url)
# Close splinter browser
browser.quit()
```
# Mars Weather
```
# URL of page to be scraped
mars_weather_url = 'https://twitter.com/MarsWxReport?lang=en'
# Directing splinter to webpage
browser = Browser('chrome', headless=False)
browser.visit(mars_weather_url)
time.sleep(5)
weather_soup = BeautifulSoup(browser.html, 'html.parser')
mars_weather = weather_soup.find("div", class_="css-901oao r-hkyrab r-1qd0xha r-a023e6 r-16dba41 r-ad9z0x r-bcqeeo r-bnwqim r-qvutc0").text
print(mars_weather)
browser.quit()
```
# Mars Facts
```
# URL of page to be scraped
mars_facts_url = 'https://space-facts.com/mars/'
# Convert the html table into a dataframe
mars_facts_table = pd.read_html(mars_facts_url)
mars_facts_table
mars_facts = mars_facts_table[2]
mars_facts.columns = ["Description", "Value"]
mars_facts
mars_html_table = mars_facts.to_html()
mars_html_table
mars_html_table.replace('\n', '')
print(mars_html_table)
```
# Mars Hemisphere
```
# Mars hemisphere name and image to be scraped
usgs_url = 'https://astrogeology.usgs.gov'
hemispheres_url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser = Browser('chrome', headless=False)
browser.visit(hemispheres_url)
time.sleep(5)
hemispheres_html = browser.html
pics_soup = BeautifulSoup(hemispheres_html, 'html.parser')
# Mars hemispheres products data
all_mars_hemispheres = pics_soup.find('div', class_='collapsible results')
mars_hemispheres = all_mars_hemispheres.find_all('div', class_='item')
hemisphere_image_urls = []
# Iterate through each hemisphere data
for i in mars_hemispheres:
    # Collect title
    hemisphere = i.find('div', class_="description")
    title = hemisphere.h3.text
    # Collect image link by browsing to hemisphere page
    hemisphere_link = hemisphere.a["href"]
    browser.visit(usgs_url + hemisphere_link)
    image_html = browser.html
    image_soup = BeautifulSoup(image_html, 'html.parser')
    image_link = image_soup.find('div', class_='downloads')
    image_url = image_link.find('li').a['href']
    # Create dictionary to store title and url info
    image_dict = {}
    image_dict['title'] = title
    image_dict['img_url'] = image_url
    hemisphere_image_urls.append(image_dict)
print(hemisphere_image_urls)
# Close the browser after scraping
browser.quit()
mars_dict = {
"news_title": news_title,
"news_par": news_par,
"featured_image_url": featured_image_url,
"mars_weather": mars_weather,
"fact_table": str(mars_html_table),
"hemisphere_images": hemisphere_image_urls
}
mars_dict
```
```
import networkx as nx
import networkx.algorithms as algos
from networkx.algorithms import approximation
from timeUtils import clock, elapsed
class networkApproximationAlgorithms():
    def __init__(self, g, debug=False):
        self.debug = debug
        self.g = g
        self.isDirected = self.g.is_directed()

    def connectivity(self):
        key = "Connectivity"
        if self.debug:
            start, cmt = clock("Computing {0}".format(key))
        network_connectivity = approximation.all_pairs_node_connectivity(self.g)
        pair_connectivity = approximation.node_connectivity(self.g)
        if self.debug:
            elapsed(start, cmt)
        retval = {"Key": key, "Value": {"Network": network_connectivity, "Pair": pair_connectivity}}
        return retval

    def kcomponents(self):
        key = "KComponents"
        if self.debug:
            start, cmt = clock("Computing {0}".format(key))
        if self.isDirected is False:
            components = approximation.k_components(self.g)
        else:
            components = None
        if self.debug:
            elapsed(start, cmt)
        retval = {"Key": key, "Value": {"Components": components}}
        return retval

    def clique(self):
        key = "Clique"
        if self.debug:
            start, cmt = clock("Computing {0}".format(key))
        maxclique = approximation.max_clique(self.g)
        if self.debug:
            elapsed(start, cmt)
        retval = {"Key": key, "Value": {"Max": maxclique}}
        return retval

    def clustering(self):
        key = "Clustering"
        if self.debug:
            start, cmt = clock("Computing {0}".format(key))
        if self.isDirected is False:
            coeff = approximation.average_clustering(self.g)
        else:
            coeff = None
        if self.debug:
            elapsed(start, cmt)
        retval = {"Key": key, "Value": {"Coefficient": coeff}}
        return retval

    def dominatingset(self):
        key = "DominatingSet"
        if self.debug:
            start, cmt = clock("Computing {0}".format(key))
        if self.isDirected is False:
            minweighted = approximation.min_weighted_dominating_set(self.g)
        else:
            minweighted = None
        minedge = approximation.min_edge_dominating_set(self.g)
        if self.debug:
            elapsed(start, cmt)
        retval = {"Key": key, "Value": {"MinWeightedSet": minweighted, "MinEdgeSet": minedge}}
        return retval

    def independentset(self):
        key = "IndependentSet"
        if self.debug:
            start, cmt = clock("Computing {0}".format(key))
        try:
            maxindep = approximation.maximum_independent_set(self.g)
        except Exception:
            maxindep = None
        if self.debug:
            elapsed(start, cmt)
        retval = {"Key": key, "Value": {"MaxSet": maxindep}}
        return retval

    def graphmatching(self):
        key = "GraphMatching"
        if self.debug:
            start, cmt = clock("Computing {0}".format(key))
        minmaxmatch = approximation.min_maximal_matching(self.g)
        if self.debug:
            elapsed(start, cmt)
        retval = {"Key": key, "Value": {"Matching": minmaxmatch}}
        return retval

    def ramsey(self):
        key = "Ramsey"
        if self.debug:
            start, cmt = clock("Computing {0}".format(key))
        try:
            ramsey = approximation.ramsey_R2(self.g)
        except Exception:
            ramsey = None
        if self.debug:
            elapsed(start, cmt)
        retval = {"Key": key, "Value": {"RamseyR2": ramsey}}
        return retval

    def vertexcover(self):
        key = "VertexCover"
        if self.debug:
            start, cmt = clock("Computing {0}".format(key))
        if self.isDirected is False:
            minweighted = approximation.min_weighted_vertex_cover(self.g)
        else:
            minweighted = None
        if self.debug:
            elapsed(start, cmt)
        retval = {"Key": key, "Value": {"MinWeighted": minweighted}}
        return retval
_, _ = clock("Last Run")
mynet = network()
mynet.addEdge(['A', 'B'], {"weight": 3})
mynet.addEdge(['A', 'C'], {"weight": 2})
mynet.addEdge(['A', 'D'], {"weight": 6})
mynet.addEdge(['B', 'C'], {"weight": 1})
mynet.addEdge(['C', 'C'], {"weight": 1})
mynet.addEdge(['C', 'D'], {"weight": 1})
mynet.addEdge(['D', 'A'], {"weight": 1})
mynet.setDebug(True)
nf = networkApproximationAlgorithms(g)
nf.connectivity()
nf.kcomponents()
nf.clique()
nf.clustering()
nf.dominatingset()
nf.independentset()
nf.graphmatching()
nf.ramsey()
nf.vertexcover()
for k, v in g.adj.items():
    print(k, v)
G.add_edge('spam', 'spamX', weight=1)  # the original passed an undefined name `we`; a weight attribute is assumed here
G.edges()
edgelist = [(0, 1), (1, 2), (2, 3)]
H = nx.Graph(edgelist)
G = nx.Graph()
attrs = {'weight': 7, 'capacity': 5, 'length': 342.7}
G.add_edge("spam", "me", **attrs)  # add_edge takes the two nodes first, then attributes as keywords
G.edges()
for k, v in G.adj.items():
    print(k, v)
DG = nx.DiGraph()
DG.add_weighted_edges_from([(1, 2, 0.5), (3, 1, 0.75)])
DG.out_degree(1, weight='weight')
DG.degree(1, weight='weight')
list(DG.successors(1))
list(DG.neighbors(1))
H = nx.Graph(DG)
DG.edges()
#H.edges()
for k, v in DG.adj.items():
    print(k, v)
for k, v in H.adj.items():
    print(k, v)
help(G.add_edge)
```
# Barycenters of persistence diagrams
Theo Lacombe
https://tlacombe.github.io/
## A statistical descriptor in the persistence diagram space
This tutorial presents the concept of barycenter, or __Fréchet mean__, of a family of persistence diagrams. Fréchet means, in the context of persistence diagrams, were initially introduced in the seminal papers:
- Probability measures on the space of persistence diagrams, by Mileyko, Mukherjee, and Harer. https://math.hawaii.edu/~yury/papers/probpers.pdf ,
- Fréchet means for distributions of persistence diagrams, by Turner, Mileyko, Mukherjee and Harer, https://arxiv.org/pdf/1206.2790.pdf
and later studied in https://arxiv.org/pdf/1901.03048.pdf (theoretical viewpoint) and https://arxiv.org/pdf/1805.08331.pdf (computational viewpoint).
## Motivation and mathematical formulation
Recall that given an object $X$, say a point cloud embedded in the Euclidean space $\mathbb{R}^d$, one can compute its persistence diagram $\mathrm{Dgm}(X)$ which is a point cloud supported on a half-plane $\Omega \subset \mathbb{R}^2$ (see this tutorial https://github.com/GUDHI/TDA-tutorial/blob/master/Tuto-GUDHI-persistence-diagrams.ipynb for an introduction to persistence diagrams).
Now, consider that instead of building one diagram $\mathrm{Dgm}(X)$ from one object $X$, you observe a collection of objects $X_1 \dots X_n$ and compute their respective diagrams, let's call them $\mu_1 \dots \mu_n$. How can you build a statistical summary of this information?
Fréchet means is one way to do so. It mimics the notion of arithmetic mean in metric spaces. First, recall that the space of persistence diagrams, equipped with either the bottleneck (https://gudhi.inria.fr/python/latest/bottleneck_distance_user.html) or the Wasserstein (https://gudhi.inria.fr/python/latest/wasserstein_distance_user.html) metrics is **not** a linear space. Therefore, the notion of arithmetic mean cannot be faithfully transposed to the context of persistence diagrams.
To overcome this limitation, one relies on _Fréchet means_. In Euclidean spaces, one characterization of the arithmetic mean
$$ \overline{x} = \frac{1}{n} \sum_{i=1}^n x_i $$
of a sample $x_1 \dots x_n \in \mathbb{R}^d$ is that it minimizes the _variance_ of the sample, that is, the map
$$\mathcal{E} : x \mapsto \sum_{i=1}^n \|x - x_i \|_2^2 $$
has a unique minimizer, which turns out to be $\overline{x}$.
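This characterization is easy to check numerically. The sketch below (plain NumPy, illustration only) compares the energy of the arithmetic mean of a small random sample against perturbed candidates:

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 2))  # a small sample x_1, ..., x_5 in R^2

def energy(x):
    # the variance-style map E(x) = sum_i ||x - x_i||_2^2
    return float(np.sum(np.linalg.norm(xs - x, axis=1) ** 2))

xbar = xs.mean(axis=0)
# any perturbation of the arithmetic mean strictly increases the energy,
# since E(x) = n * ||x - xbar||^2 + constant
for shift in ([0.1, 0.0], [0.0, -0.3], [0.5, 0.5]):
    assert energy(xbar) < energy(xbar + np.array(shift))
```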
Although the former formula does not make sense in general metric spaces, the map $\mathcal{E}$ can still be defined, in particular in the context of persistence diagrams. Therefore, a _Fréchet mean_ of $\mu_1 \dots \mu_n$ is any minimizer, should it exist, of the map
$$ \mathcal{E} : \mu \mapsto \sum_{i=1}^n d_2(\mu, \mu_i)^2, $$
where $d_2$ denotes the so-called Wasserstein-2 distance between persistence diagrams.
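To make $d_2$ concrete, here is a toy brute-force sketch (a hypothetical helper, not GUDHI's implementation — in practice one would call `gudhi.wasserstein.wasserstein_distance`). Each diagram is padded with diagonal slots, so any point may be matched either to a point of the other diagram or to the diagonal:

```python
import itertools

def _cost(p, q):
    # squared matching cost between two augmented diagram entries;
    # None stands for the diagonal {(t, t)}
    if p is None and q is None:
        return 0.0
    if p is None:
        return (q[1] - q[0]) ** 2 / 2.0  # squared distance of q to the diagonal
    if q is None:
        return (p[1] - p[0]) ** 2 / 2.0
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def wasserstein2_sq(A, B):
    """Brute-force squared 2-Wasserstein distance between two tiny
    persistence diagrams (lists of (birth, death) pairs)."""
    augA = list(A) + [None] * len(B)
    augB = list(B) + [None] * len(A)
    return min(
        sum(_cost(p, augB[j]) for p, j in zip(augA, perm))
        for perm in itertools.permutations(range(len(augB)))
    )

print(wasserstein2_sq([(0.0, 2.0)], []))            # 2.0: the lone point pays its way to the diagonal
print(wasserstein2_sq([(0.0, 1.0)], [(0.0, 1.0)]))  # 0.0: identical diagrams
```

This enumeration is exponential and only meant for diagrams with a handful of points; real implementations solve an optimal-transport problem instead.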
It has been proved that Fréchet means of persistence diagrams always exist in the context of averaging finitely many diagrams. Their computation remains however challenging.
## A Lagrangian algorithm
We showcase here one of the algorithms used to _estimate_ barycenters of a (finite) family of persistence diagrams (note that their exact computation is intractable in general). This algorithm was introduced by Turner et al. (https://arxiv.org/pdf/1206.2790.pdf) and adopts a _Lagrangian_ perspective. Roughly speaking (see details in their paper), this algorithm consists in iterating the following:
- Let $\mu$ be a current estimation of the barycenter of $\mu_1 \dots \mu_n$.
- (1) Compute $\sigma_i$ ($1 \leq i \leq n$) the optimal (partial) matching between $\mu$ and $\mu_i$.
- (2) For each point $x$ of the diagram $\mu$, apply $x \mapsto \mathrm{mean}((\sigma_i(x))_i)$, where $\mathrm{mean}$ is the arithmetic mean in $\mathbb{R}^2$.
- (3) If $\mu$ didn't change, return $\mu$. Otherwise, go back to (1).
This algorithm is proved to converge ($\mathcal{E}$ decreases at each iteration) to a _local_ minimum of the map $\mathcal{E}$. Indeed, the map $\mathcal{E}$ is **not convex**, which can unfortunately lead to arbitrarily bad local minima. Furthermore, its combinatorial aspect (one must compute $n$ optimal partial matchings at each iteration step) makes it too computationally expensive when dealing with a large number of large diagrams. It is however a fairly decent attempt when dealing with few diagrams with few points.
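Step (2) above can be sketched in a few lines of NumPy. The `matchings` arrays below stand for hypothetical pre-computed optimal matchings (index `-1` meaning "matched to the diagonal"); this illustrates the update rule only, not GUDHI's actual internals:

```python
import numpy as np

def proj_on_diag(p):
    # orthogonal projection of a point onto the diagonal {x == y}
    m = (p[0] + p[1]) / 2.0
    return np.array([m, m])

def update_step(bary, diags, matchings):
    """One Lagrangian update: move each barycenter point to the arithmetic
    mean of its matched partners across all diagrams.

    matchings[i][j] is the index in diags[i] matched to barycenter point j,
    or -1 if point j is matched to the diagonal."""
    new = np.empty_like(bary)
    for j, y in enumerate(bary):
        partners = []
        for i, diag in enumerate(diags):
            k = matchings[i][j]
            partners.append(diag[k] if k >= 0 else proj_on_diag(y))
        new[j] = np.mean(partners, axis=0)
    return new

bary = np.array([[0.0, 2.0]])
diags = [np.array([[0.0, 2.0]]), np.array([[0.0, 4.0]])]
matchings = [[0], [0]]  # the single barycenter point is matched to point 0 in each diagram
print(update_step(bary, diags, matchings))  # -> [[0. 3.]]
```

With these example values, the barycenter point moves to $(0, 3)$, the mean of its two partners $(0, 2)$ and $(0, 4)$.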
The solution $\mu^*$ returned by the algorithm is a persistence diagram with the following property:
each point $x \in \mu^*$ is the arithmetic mean of its matched point (or diagonal projection) $\sigma_i(x)$ in each of the $\mu_i$'s. These matchings are called _groupings_.
**Note:** This algorithm is said to be based on a _Lagrangian_ approach, as opposed to an _Eulerian_ one, following the fluid dynamics formalism (https://en.wikipedia.org/wiki/Lagrangian_and_Eulerian_specification_of_the_flow_field). Roughly speaking, Lagrangian models track the position of each particle individually (here, the points in the barycenter estimate), while Eulerian models instead measure the quantity of mass present at each location of the space. We will present an Eulerian approach to solve (approximately) this problem in a future version of this tutorial.
## Illustration
### Imports and preliminary tests
```
import gudhi
print("Current gudhi version:", gudhi.__version__)
print("Version >= 3.2.0 is required for this tutorial")
# Note: %matplotlib notebook allows for interactive 3D plots.
#%matplotlib notebook
%matplotlib inline
from gudhi.wasserstein.barycenter import lagrangian_barycenter as bary
from gudhi.persistence_graphical_tools import plot_persistence_diagram
import numpy as np
import matplotlib.pyplot as plt
```
### Example
Let us consider three persistence diagrams.
```
diag1 = np.array([[0., 1.], [0, 2], [1, 2], [1.32, 1.87], [0.7, 1.2]])
diag2 = np.array([[0, 1.5], [0.5, 2], [1.2, 2], [1.3, 1.8], [0.4, 0.8]])
diag3 = np.array([[0.2, 1.1], [0.1, 2.2], [1.3, 2.1], [0.5, 0.9], [0.6, 1.1]])
diags = [diag1, diag2, diag3]
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
colors=['r', 'b', 'g']
for diag, c in zip(diags, colors):
    plot_persistence_diagram(diag, axes=ax, colormap=c)
ax.set_title("Set of 3 persistence diagrams", fontsize=22)
```
Now, let us compute (more precisely, estimate) a barycenter of `diags`.
Using the verbose option, we can get access to a `log` (dictionary) that contains complementary information.
```
b, log = bary(diags,
              init=0,  # we initialize our estimation on the first diagram (the red one)
              verbose=True)
print("Energy reached by this estimation of the barycenter: E=%.2f." %log['energy'])
print("Convergence reached after %s steps." %log['nb_iter'])
```
Using the `groupings` provided in the log, we get better visibility into what is happening.
```
G = log["groupings"]
def proj_on_diag(x):
    return ((x[1] + x[0]) / 2, (x[1] + x[0]) / 2)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
colors = ['r', 'b', 'g']
for diag, c in zip(diags, colors):
    plot_persistence_diagram(diag, axes=ax, colormap=c)
def plot_bary(b, diags, groupings, axes):
    for i in range(len(diags)):
        indices = groupings[i]
        for (y_j, x_i_j) in indices:
            y = b[y_j]
            if y[0] != y[1]:
                if x_i_j >= 0:  # y_j is matched to a point of diags[i]
                    x = diags[i][x_i_j]
                else:           # y_j is matched to the diagonal
                    x = proj_on_diag(y)
                axes.plot([y[0], x[0]], [y[1], x[1]], c='black',
                          linestyle="dashed")
    axes.scatter(b[:, 0], b[:, 1], color='purple', marker='d', label="barycenter (estim)")
    axes.legend()
    axes.set_title("Set of diagrams and their barycenter", fontsize=22)

plot_bary(b, diags, G, axes=ax)
```
Note that, as the problem is not convex, the output (and its quality, i.e. its energy) might depend on the initialization. Lower energy is better.
```
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
colors = ['r', 'b', 'g']
for i, ax in enumerate(axs):
    for diag, c in zip(diags, colors):
        plot_persistence_diagram(diag, axes=ax, colormap=c)
    b, log = bary(diags, init=i, verbose=True)
    e = log["energy"]
    G = log["groupings"]
    plot_bary(b, diags, groupings=G, axes=ax)
    ax.set_title("Barycenter estim with init=%s. Energy: %.2f" %(i, e), fontsize=14)
```